
Sam Thielman

Sam Thielman is an investigative reporter for Talking Points Memo based in Manhattan. He has worked as a reporter and critic for the Guardian, Variety, Adweek and Newsday, where he covered stories from the hacking attacks on US and international targets by Russian GRU and FSB security services to the struggle to bring broadband internet to the Navajo nation. He lives in Brooklyn with his wife and son and too many comic books.

Articles by Sam

The National Review’s Jay Nordlinger wrote Sunday that one of Vladimir Putin’s fiercest critics had been barred from entering the U.S.—and the outcry was immediate. Had the State Department revoked Bill Browder’s visa? Was Russia trying to get him arrested?

Rep. Eliot Engel (D-NY), ranking member of the House Foreign Affairs Committee, issued a statement decrying “the Department of State’s baffling decision to revoke Bill Browder’s visa” and calling on Rex Tillerson to personally reinstate him. A State Department spokesperson referred TPM’s questions to the Department of Homeland Security: like most Britons, Browder never held a formal visa, the spokesperson said, and he was welcome to apply for one.

Browder himself was irate when he spoke to TPM on Monday afternoon: “I’m pretty sure that this is just an automatic thing. So the question is, will they lift it or not?”

“There’s no way that I can get any information about any of this stuff,” he added. “When I called [DHS’] help line, after waiting an hour and a half they said ‘I can’t tell you anything about why this has happened, you’ll have to write a FOIA.'”

Hours later, the Department of Homeland Security provided an answer: Customs and Border Protection had to manually approve Browder’s travel authorization after the Interpol notice went out—but says it did so on Wednesday. Browder contests this—he said he still couldn’t fly on Thursday, when he got the email notice. TPM has asked DHS for clarification and will update this story when and if it comes.

So what exactly happened here?

Russian authorities had issued a “diffusion” through international police service Interpol calling for his arrest last Tuesday, Browder told TPM (the country’s government had already tried to force Interpol to issue a “red notice,” much like putting Browder on an American “Most Wanted” list, but Interpol repeatedly refused). Exploiting that bureaucratic loophole, Russian authorities appear to have succeeded in automatically rescinding permission for Browder to visit the U.S., albeit briefly.

The problem stemmed from Browder traveling not on a visa issued by the State Department, but on the less formal Electronic System for Travel Authorization (ESTA). ESTA is supposed to ease diplomatic restrictions on travel for foreigners who are unlikely to overstay the 90-day limit on their visas, Chicago-based immigration lawyer Richard Hanus noted. The automated nature of the system makes it convenient for most travelers, but that same speed becomes a liability when Russia can game the system to cause trouble for its critics at the American border.

“The only times we see things like this is when there’s an irregularity,” Hanus told TPM: “previous U.S. immigration violations—when somebody stays more than the 90 day—or when there’s a criminal matter.”

The New Jersey-born, Chicago-raised Browder, who became a British citizen in 1998, said that he’d gotten a form email on Thursday telling him to check his “Global Entry” status, which is like TSA PreCheck for non-citizens.

“And so I logged into Global Entry and it said ‘your status has been revoked,’ and so I said, ‘Well, I wonder if my visa has been revoked,’ so I tried to check into a flight and I couldn’t,” he said.

Yet a DHS spokesperson told TPM that Browder was supposed to be good to go by that point. The agency’s statement reads:

“As the agency charged with preventing the entry of terrorists and other criminal actors from entering the United States, U.S. Customs and Border Protection regularly screens law enforcement systems in order to determine if any travelers present a security or law enforcement risk. This vetting is done on a recurrent basis and decisions on travel are made on the latest information available. The decision to approve or deny an ESTA application is made on a case-by-case basis on the totality of the circumstances. When possible matches to derogatory information are found, applications will be vetted through normal CBP procedures which include a manual review by a CBP analyst and a supervisor prior to a determination being made. Applications being manually reviewed may temporarily be placed in a pending status until a final determination is made. William Browder’s ESTA remains valid for travel to the United States. His ESTA was manually approved by CBP on Oct. 18—clearing him for travel to the United States.”

Russia had given Browder a similar headache in August, shortly after a Council of Europe report condemned that country for misusing anti-crime protocols for political ends in his case. Turkey had recently been scolded for trying to use Interpol to arrest a Spanish journalist critical of President Recep Tayyip Erdoğan, too.

Four days after publication, Interpol responded to TPM’s request for clarification on the topic of the diffusion calling for Browder’s arrest. No member nation, Interpol said, is required to honor its Red Notices or diffusions beyond its own laws. On the topic of Russia’s specific diffusion related to Browder, the agency said:

All notices and diffusions must meet Interpol’s rules and regulations, and prior to publication all Red Notice requests are checked by a dedicated task force to ensure they are compliant. Diffusions are circulated without prior approval from the General Secretariat. However, the dedicated task force also checks diffusions for wanted persons, even though they are already circulated, to ensure that they are compliant.

When a Red Notice or diffusion is cancelled, for whatever reason, a message is sent to all member countries informing them of the decision and they are requested to remove any related information from their national databases and not to use Interpol’s channels in relation to the case.

A diffusion recently circulated in relation to Mr Browder was found to be non-compliant following a review by the General Secretariat. All information in relation to this request has been deleted from Interpol’s databases and all Interpol member countries informed accordingly.

Browder tweeted Thursday that Interpol had changed its policies to prevent Russia from abusing the diffusion system.

As Canadian reporter Daniele Hamamdjian also pointed out on Twitter, the law enforcement agency did the same thing in 2013.

This post has been updated.


In one of the stranger aspects of the Russian influence campaign reported to date, the Federal News Agency (FAN) troll farm funded activism and social programs in black communities as recently as May. The Russian operation set up a news site that interviewed prominent thinkers like Occupy Wall Street’s Micah White and former Black Panther Party leading member Ericka Huggins. It also sponsored self-defense programs around the country, including one in Queens that specializes in de-escalating conflict between black people and police officers. None of the American activists involved knew at the time that they were dealing with Russian agents.

White (pictured above) has a theory for why those operatives were supporting black activism in the U.S. He has written on the tactical use of social movements to wage war—more than a year before the 2016 election—and he says that while Russia “can be pursuing multiple objectives simultaneously,” subverting the American status quo may be a goal shared by a hostile Russian operation and U.S. protest movements striving for more equality and justice. The challenge, he argues, is to use that signal boost for noble ends.

“I think there aren’t many examples of social change that isn’t created by outside forces,” White told TPM. “Lenin was allowed back into Russia on German railroads while Russia and Germany were at war.”

As White observes in one essay, Russian state news couldn’t get enough of Occupy Wall Street: he claims RT flew Occupy organizers to London to be interviewed by Julian Assange for his TV show on the Kremlin-backed network.

One thing White said discourages him about the Russian propaganda efforts is how successful they were in terms of pure reach. As a rule, activists operate on a shoestring. The funds from the Russian trolls paid for things activists are normally hard-pressed to pull off, like those classes in de-escalation.

“I do think it’s a watershed moment for American activism where American activists have to say, ‘Why is Russia able to create fake Facebook pages that get more likes than we do?’ I think it’s another sign that protest is broken,” he told TPM.

Another person contacted by the troll farm said he was surprised when the person who reached out wanting to facilitate political action wasn’t especially interested in talking politics.

“Their idea was they wanted to address police brutality, maybe do know-your-rights training,” said Omowale Adewale, a trainer in New York City who was asked to lead self-defense classes in Brooklyn and Queens by a troll-run group called BlackFist. “I was doing street harassment self-defense classes for women, so they caught me really at a time when I was already kind of engaged in a lot of this work.”

Adewale told TPM that while he was skeptical of the person who contacted him, he never thought a foreign government was recruiting him. He just thought the whole thing was probably a setup for a scam that would end up stealing from him.

“There never was any politics, which was just nuts,” he said.

But then the people Adewale thought might be scammers sent money to him. The prospect of offering something good to his community, especially bankrolled from the outside, thrilled him—but he was still curious about where the money was coming from. His thoughts, though, were primarily with a black community living in fear.

“I don’t know if you can fathom in the community the way people feel really targeted by police brutality,” Adewale told TPM. “I’m a fighter myself. Sometimes I jog and I’m running and cops are around. You can’t just run past them! White folks can just keep jogging, but if I’m in jogging gear, my jogging gear might include a hoodie! That’s problematic on a huge level, that somebody might be nervous and I might get shot, or at least get stopped and harassed. That’s the kind of thing that happens to me. A lot of things have to take place before you physically get somebody’s hands off of you. [It’s about] de-escalation and knowing your rights. You really don’t want to die.”

That fear was a good litmus test for Adewale when it came to the intentions of “Taylor,” as well. “Taylor” didn’t seem to feel it, for himself or for anyone else.

“The lack of any kind of caring,” Adewale told TPM, “gave me insight.”

This post has been updated.


Alan Yuhas contributed English-language translation

As many as 100 unwitting activists were recruited to help organize events in the United States both before and after the election by the same St. Petersburg-based Russian troll farm behind scores of fake social media accounts that purchased ads to sow discord during the 2016 campaign.

The revelation comes from a report in the Russian business magazine RBC published on Tuesday morning.

The events included an October 2016 rally in Charlotte, North Carolina to protest police violence mere weeks after a protester was fatally shot at a Black Lives Matter protest there. The organizers of the October protest were not with BLM, though, according to RBC’s report. They were with BlackMattersUS, the organization outed as a Russian front last week by Casey Michel at ThinkProgress.

The Charlotte rally was one of ten BlackMattersUS events catalogued by RBC journalists Polina Rusyaeva and Andrey Zakharov. The two reporters interviewed numerous former employees at the Federal News Agency (FAN), the troll farm formerly known as the Internet Research Agency, and reviewed chats on encrypted messaging app Telegram from senior personnel.

The report also found that from January-May 2017, the troll farm contacted martial arts instructors through a puppet group called BlackFist. In places as disparate as New York City, Los Angeles, Lansing, Michigan and Tampa, Florida, BlackFist offered to pay the instructors to provide free self-defense courses for “anyone who wanted them.” Those instructors told RBC that they had indeed received sponsorship for free classes, although it was abruptly withdrawn.

“Up to 100 American citizens helped to organize the events for the ‘Trolls factory,’ not knowing who’s really behind all these groups,” Zakharov told TPM.

A source familiar with the troll farm’s activities told RBC that it spent about $80,000 total—just $20,000 less than Facebook said was spent promoting divisive ads on its platform—on “paying for these local organizers’ work (flights, printing costs, technical equipment),” according to a translation of the report commissioned by TPM.

RBC found that the troll farm was carrying out dry runs for political protests in the U.S. as early as 2015. That spring, the organization used publicly accessible webcams in Times Square to see if people would follow instructions on Facebook to show up at a designated place and time for a free hot dog. They did, and didn’t even get a promised hot dog for their trouble.

FAN considered that show of hungry Facebook users a huge success, according to the translation of RBC’s report:

The action was meant to test a hypothesis: could events in American cities be organized remotely? “Simply a test of possibilities, an experiment. And it succeeded,” recalled one of the “factory” workers, not concealing their pleasure. From that day forward, almost a year and a half before the U.S. presidential election, the trolls’ work in American communities began in earnest.

In March 2015, listings appeared on the job portal SuperJob for “internet operators (night),” with a salary of 40,000-50,000 rubles and a work schedule of 9 p.m. to 9 a.m., in an office in the Primorsky district; job duties included writing materials “on designated themes” and “news information and analysis.” The list of requirements for the position included “natural English,” a “confident command” of written language, and creativity.

Russian reporter Alexey Kovalev told TPM last month that a troll he took to task for praising Putin in the comments of one of his articles made him a similar offer for work.

The RBC report also identified the head of FAN’s American division, Jayhoon (also spelled Dzheikhun) Aslanov, 27, who studied abroad in the U.S. in 2009 and graduated with a degree in economics from Russian State Hydrometeorological University in 2012. Three sources confirmed Aslanov’s role at the troll farm to RBC, including one who showed the reporters messages from Aslanov on Telegram; Aslanov himself denied it to the news outlet.

FAN’s American unit spent $2.3 million between June 2015 and August 2017 and employed 90 people at its peak, according to the report; it is still active and today employs 50 people. During the period RBC studied, the troll farm’s budget for promotion on social media was $5,000 a month, fully half of which was devoted to “posts touching on race issues.”

But Trump himself factored into that material far less than his opponent, Hillary Clinton, RBC found. From the translated report:

An RBC analysis of hundreds of posts showed that Clinton figured in troll posts far more frequently than Trump.
“Share if you believe that Muslims did not do 9/11,” (United Muslims of America, 11 September 2016), “Clinton insists ‘We have not lost a single American in Libya’ Four coffins, covered in flags, were not empty, Hillary.” (Being Patriotic, in a post about Clinton’s relation to the tragedy, from 8 September 2016). In a statement, Facebook said that for the most part the blocked ads “range across the ideological spectrum,” touching on issues like LGBT rights, race, immigrants and firearms.

RBC’s investigation uncovered more than 100 community pages and associated accounts on Facebook, Instagram, Twitter, and other platforms active through August 2017 that it believes were run by the troll farm. It confirmed those accounts’ authenticity using screenshots of posts and by consulting “a source close to the factory’s leadership.” The report estimates about 70 million people a week saw something posted by those accounts.

Zakharov told TPM that he believes there are accounts run by FAN with a total following around 1 million that remain active to this day.

This post has been updated.


Reporters and analysts have long suspected and, over the past several weeks, confirmed that Russian cyberactors were running propaganda campaigns under the noses of three major tech companies—Facebook, Twitter and Google—during the 2016 elections. Even Microsoft’s Bing network reportedly sold ads to the Russians.

Those interlocking propaganda campaigns didn’t consist of merely stumping for Donald Trump or deriding Hillary Clinton. Instead, most of the ads unearthed thus far appear to have been devoted to reinforcing the American electorate’s own prejudices; that gambit appears terribly obvious and unsubtle in hindsight, as the contents of the ads continue to trickle out in the press. But no one spotted it at the time.

For example, YouTube videos recently uncovered by the Daily Beast feature two black men with African accents calling Clinton an “evildoer” next to a Black Lives Matter logo. One meme posted on a Russian troll-operated Facebook account read—with a dropped article worthy of Boris Badenov—“Why do I have a gun? Because it’s easier for my family to get me out of jail than out of cemetery.”

Facebook has said the Russian-bought ads were probably viewed 10 million times; Columbia University professor Jonathan Albright has suggested that, when all traffic to Russian-run accounts—not just the ads—is combined, that number increases to hundreds of millions, and possibly billions, of times. It’s not known whether all the propaganda itself was as hamfisted as the ads the public has seen, but even the amateurish material was unlikely to raise eyebrows, because the function of social media is to affirm its users, said Gordon Borrell, CEO of ad industry analytics firm Borrell Associates.

On Facebook, as opposed to a medium like television, “you’re able to hone in on someone who will likely vote Republican or will likely vote Democrat and hold on to them a bit more,” Borrell told TPM. “You don’t see a lot of crossover. They’ll hold onto you as a voter—at least that’s what [social media] campaigns appear to do.”

It’s certainly possible that some people who saw the laughable YouTube videos, crummy Facebook memes and broken-English tweets were suspicious of them; it’s also possible that the low-quality Russian ads were the exception. But it’s also generally true that when people hear what they want to hear, they’re unlikely to question who’s talking.

Thread 1: Smearing Black Lives Matter

Facebook, Twitter and Google have flattened the media ecosystem to such a degree that traditional news outlets like the Washington Post and the New York Times effectively compete with whitewashed demagoguery masquerading as information on sites like InfoWars and Breitbart. The Google News ranking algorithm gives those sites equal footing, and until very recently treated digital troll hive 4Chan as a news source. Partisan Facebook pages like @BeingConservative rack up millions of followers.

Against that backdrop, the now-defunct “conservative news” Twitter account @tpartynews amassed tens of thousands of followers before it was deactivated in August. @tpartynews frequently trashed Black Lives Matter, the decentralized black activist movement that protests systemic racism and police killings of black people in particular. And as TPM previously reported, @tpartynews’ followers were lapping up state-sponsored Russian propaganda: the feed was run by the now-notorious Russian troll farm, the Federal News Agency (previously known as the Internet Research Agency), which purchased $100,000 worth of Facebook ads.

“Williams and Kalvin,” the black YouTube personalities who the Daily Beast reported were part of Russia’s propaganda effort, used a Black Lives Matter logo and invoked the Black Panthers in poorly-produced videos of their own. The content is barely pro-Trump, but as Justin Hendrix, executive director of the NYC Media Lab, pointed out, that didn’t really matter—some of the Russian propaganda was even pro-Bernie Sanders.

“One of the reasons people are dismissing this stuff is they’ll look at one particular instance of this stuff and say, ‘That looks like it might be vaguely anti-Trump,'” Hendrix told TPM. “And you’ll dig under it and see that while it may initially appear anti-Trump it has a subtler purpose, to discourage people from being engaged or to suggest that all politics are so corrupt that there’s an equivalence between the candidates.”

That equivalence boosted Trump’s electoral prospects even as a score of women accused him of grotesque sexual misconduct. The Trump campaign didn’t need conservatives who didn’t dig Trump as a candidate to like him—they just needed those holdouts to believe he was better than Clinton, and the image of a black person supporting him, or at least deriding her as a “racist bitch,” might do the trick.

“A lot of it does seem to really prey on identity politics,” Hendrix said.

Those identity politics were already surging in reaction to the presence of a black president: conservative pundits were quick to attribute any unrest following episodes of police brutality to Black Lives Matter, wielding #bluelivesmatter and #alllivesmatter hashtags on social media, and to tie all Black Lives Matter positions to Obama, whose Justice Department had taken first steps toward police reform. Russian-operated accounts gleefully exploited that festering sore spot: the @tpartynews Twitter account pushed out the message that “Crimminals [sic] commit less crime after they have been shot! That’s why I say #BlueLivesMatter.”

The Russian campaign played the other side of the issue as well. According to CNNMoney, Facebook ads were targeted around Ferguson, Missouri and Baltimore, two areas with reputations for police brutality and vicious clashes with the protestors who objected to it. A Facebook account called “Blacktivist” posted ostensibly pro-black liberation rhetoric that was filled with dogwhistles designed to play on the worst right-wing fears: “Our race is under attack, but remember, we are strong in numbers,” one post uncovered by CNN proclaimed. “Black people should wake up as soon as possible,” said another.

The Daily Beast reported that former NFL quarterback Colin Kaepernick, who introduced kneeling during the national anthem to the league as a form of protest against systemic racism, was a frequent target of the Kremlin-backed propaganda campaign as well.

Thread 2: Exploiting anti-immigrant and anti-Muslim sentiment

People who fear disloyalty don’t just fear activists like BLM. Trump’s resoundingly anti-immigrant campaign, with its cornerstone of a border wall he may or may not ever build, and the nativist grievances that anchor his base dovetail with the Putin government’s desire to see less military and diplomatic cooperation across the West.

The @tpartynews account was quick to tie together everything the right fears about undocumented people: “Illegal Immigrants today.. Democrat on welfare tomorrow!” Russian-linked Facebook pages went a step further: “Due to the town of Twin falls, Idaho, becoming a center of refugee resettlement, which led to the huge upsurge of violence towards American citizens, it is crucial to draw society’s attention to this problem,” read a post on the SecuredBorders page, according to the Daily Beast. That page went so far as to promote an anti-refugee rally in Twin Falls, Idaho, although it’s not clear that anyone actually showed up.

Another Russian-linked group called Heart of Texas, with about 225,000 followers, successfully organized anti-immigrant rallies protesting “higher taxes to feed undocumented aliens” and warned of the scourge of “mosques,” according to Business Insider. CNN reported that the group, which proposed “no mosques in America,” also succeeded in organizing one rally that was captured on video.

Thread 3: Amplifying gun rights issues

Undergirding both the anti-immigrant and anti-black sentiment the Russian propaganda campaign capitalized on is a fear of violence. It’s something the NRA exploited throughout the tenure of the United States’ first black president to great effect, and it was easy for Russian trolls to exploit too.

The New York Times listed “gun rights” among the topics covered by the divisive Russian Facebook ads turned over to special counsel Robert Mueller’s investigators. @tpartynews tweeted several pro-gun messages, and the “out of cemetery” meme referenced above appeared on a Russian-linked Facebook page called “Defend the 2nd.”

Tying it all together

Looking at the ads—though scant few have been made public, as tech companies have declined to release them—it’s clear that the issue of race is paramount. The ads that have surfaced play relentlessly on prejudices against black people, immigrants and Muslims, and Trump’s campaign was a symphony of insults maligning all three groups.

Advertising from the Trump campaign was notable for the brazenness of its racialized invective; the Russian propaganda campaign followed suit with a microtargeted series of ads explicitly playing up racism and bigotry, rather than trying to sanitize it with coded phrases and winks. The results were inexpert and scattershot—the improbably named “Williams and Kalvin” seem to be looking at cue cards occasionally in their videos—but Facebook, Twitter and their peers had honed the delivery mechanism so carefully that the relative sophistication of the Russian propaganda may not have mattered.

“It doesn’t take a Ph.D. in computer science to use Facebook’s targeting tools,” Hendrix said. “These are tools that were built for anybody to be able to target messages and ads to any constituency. They’re designed for the lowest common denominator—to be as simple as possible and to work at scale.”

This post has been updated.


It appears Russia didn’t attempt to disrupt the 2016 election through ads only on social media.

Nearly lost amid the deluge of reports about Kremlin-run Facebook and Twitter campaigns designed to influence the American electorate, the Department of Homeland Security last week messily notified 21 states, including Wisconsin, that Russia had targeted their election systems. The Wisconsin Elections Commission (WEC) then quietly issued a press release describing an unsuccessful August 2016 cyberattack that took the form of neither a targeted phishing attack nor an attempt to crack a password, but an ad.

The elections commission said that the state IT division’s protective measures had “blocked an advertisement embedded in a publicly available website from being displayed on a WEC computer.” When the state Department of Enterprise Technology provided the IP address it had blocked to DHS, the agency identified that address as “connected to Russian government cyber actors,” according to the release.

Steve Michels, the DET’s communications director, told TPM that the ad his department’s firewall identified and blocked was consistent with run-of-the-mill website advertising. Such filtering “commonly occurs in conjunction with an internet advertising pop-up or banner,” he said. While Michels said he couldn’t confirm what site the ad originated with, neither Facebook nor Twitter serves pop-up or banner ads.

“This attempt was blocked by our web content filtering tool and no data was exfiltrated,” Michels told TPM. “This blocked content request came on an elections commission network, likely a desktop computer.”

Toni Gidwani, director of research operations for respected cybersecurity firm ThreatConnect, told TPM that such “malvertising”—malicious advertising—“is a pretty common attack vector.” Gidwani, who cautioned that ThreatConnect couldn’t independently verify DHS’s claim of Russian targeting of Wisconsin without the actual IP address, which Michels declined to disclose, said the tactic is often used on general-interest sites where advertisers don’t exercise broad control over their audiences.

“If the website was something really specific to elections and/or something that WEC workers specifically would navigate to more consistently than other targets, that would be notable,” she told TPM. “If the website was something really general, then it might be hard to make the case that the activity was targeting the WEC.”

The first kind of attack to which Gidwani referred is sometimes called a “watering hole,” a trap set for a particular set of users at a website they seem likely to visit; it’s not clear that the WEC employee who set off the ad was targeted that way. But it does appear that Russian cyberactors were able to participate in the broader digital ad ecosystem, with its self-applied regulations and well-documented vulnerability to malicious activity, in addition to their use of Facebook and Twitter ads.

It’s not clear what the 2016 attack was intended to accomplish, but tools designed for ad fraud—usually used to inflate the records of successfully completed ads, which determine how much an advertiser pays—have been repurposed in the service of Russian propaganda efforts before. In 2015, someone used a network of bots designed for malvertising to redirect users to pro-Russian videos on Dailymotion.

Jonathan Albright, research director at the Tow Center for Digital Journalism, who was mapping ecosystems of online disinformation as far back as November 2016, told TPM that many of the websites spreading that disinformation contained malicious code.

“There were definitely suspicious resources (i.e., content and code) in the batch of propaganda/disinfo/hoax sites I looked at back in November 2016,” Albright said. “If I remember, I believe 3 of the 116 sites were pre-emptively blocked by my browser as I scraped the ad tech. Lots of redirects to weird IPs, external insecure image/graphics loading, etc.”

Targeting a body like the Wisconsin Elections Commission would not be a particularly difficult or sophisticated operation, Albright said.

“It’s absolutely possible to target a business, and based on what I’ve seen, even more likely that an individual government office or bureau/department would be targeted,” he told TPM. “I think a directed story or topical hoax piece could be written to bring in a specific audience and then used as a vector to compromise individual computers and/or ranges of IP addresses.”

Wisconsin blocks tens of thousands of attempts to game its web applications and more than half a million attempts to crack passwords annually, Michels said. He was emphatic that the ad served to the WEC computer was one small attack in a sea of similar attempts, and that it was thwarted.

Regardless, experts already suspected that Russian government operators had used malvertising elsewhere; now DHS has confirmed they used it in the 2016 elections, too.


A report out Tuesday connects Rep. Dana Rohrabacher (R-CA), the ardently pro-Russia congressman, to yet another player in the ever-widening Trump-Russia scandal: Natalia Veselnitskaya, the Kremlin-linked lawyer who promised to bring damaging information about Hillary Clinton to a June 2016 meeting with Donald Trump, Jr. and other Trump campaign officials.

According to a Russian-language interview with Veselnitskaya surfaced by Foreign Policy, Veselnitskaya said she met with Rohrabacher in Moscow during an April 2016 trip that also took him to Berlin, where it was previously reported that he’d met with Washington, D.C. lobbyist and former Soviet operative Rinat Akhmetshin.

“We just asked to listen to us, just to listen to the alternative version,” Veselnitskaya said, according to FP’s translation of the nearly 27-minute-long interview—the “alternative version” being her explanation of the events involving one of her clients that led to Obama-era sanctions on Russia. She recalled telling Rohrabacher “do not let yourself be used by scammers,” in reference to proponents of the Magnitsky Act. The interview was conducted by New Front, a pro-Russian news outlet based in Crimea.

Both Veselnitskaya and Akhmetshin, who also attended the June 2016 meeting with Trump Jr., have been crusading against the Magnitsky Act for years.

Veselnitskaya had been retained to represent Denis Katsyv, owner of the Prevezon Group and a central figure in one of the largest money-laundering scandals in history. Katsyv was the defendant in a case against the company prosecuted by Preet Bharara, then the U.S. Attorney in Manhattan, and had bankrolled a major media campaign, both to rehabilitate his own image and to discredit a key witness in Bharara’s case, William Browder. Browder, through his own company Hermitage Capital, had caught wind of alleged money-laundering via an investigation by an attorney working for him named Sergei Magnitsky, who died in Russian custody after he accused a number of high-level Russian officials of participating in the scheme.

Veselnitskaya’s characterization of Browder, who championed the Magnitsky Act, as a “scammer” in conversation with Rohrabacher is consistent with the way she sought to portray him on Katsyv’s behalf. With the aid of Akhmetshin, Katsyv even commissioned the screening of an anti-Browder documentary at the Newseum in Washington, D.C., followed by a talkback moderated by veteran investigative journalist Seymour Hersh.

A Rohrabacher spokesman confirmed the meeting to FP and said that Rohrabacher “was not focused on [Veselnitskaya’s] identity;” she was “among many people” he met while abroad, he said.


Tierney Sneed contributed reporting.

In an unattributed statement on the company blog published Thursday afternoon, Twitter disclosed that it had identified nearly 200 accounts associated with the same Facebook pages that were part of a Russian troll farm’s $100,000 ad buy on that platform.

Twitter said it had searched for accounts associated with the “roughly 450” Facebook pages shared as part of that company’s review. Twitter found 22 accounts that directly corresponded to the Russian Facebook accounts, and then found 179 others associated with those Twitter users.

Facebook shared those accounts directly with Twitter, TPM has learned; Congress still does not have the ads Facebook promised to share with elected officials, though those ads are expected by Monday.

“Neither the original accounts shared by Facebook, nor the additional related accounts we identified, were registered as advertisers on Twitter,” the statement read. “However, we continue to investigate these issues, and will take action on anything that violates our Terms of Service.”

The statement came after Twitter representatives met investigators from the Senate and House Intelligence Committees. Alongside Facebook, Twitter is at the center of both congressional and federal inquiries into the Trump campaign’s role, if any, in Russian interference in the 2016 election.

The social media company also said in the statement that Kremlin-backed news outlet RT purchased $274,000 in advertisements in 2016. It’s unclear how that figure compares to RT’s spending on Twitter in other years, or how it compares to Twitter ad budgets at news organizations of similar size.

“In [2016], the @RT_com, @RT_America, and @ActualidadRT accounts promoted 1,823 Tweets that definitely or potentially targeted the U.S. market,” the Twitter statement’s authors wrote. “These campaigns were directed at followers of mainstream media and primarily promoted RT Tweets regarding news stories.”

RT undertook a major expansion into the United States in 2013, and since the election it has been the subject of intensifying scrutiny, first from an official assessment by the US intelligence community and more recently directly from the DOJ, which has asked the organization to register as a foreign agent.

Executives from the company presented their findings to Senate Intelligence Committee staff on Thursday afternoon. Intelligence Vice-chair Mark Warner (D-VA) said he was unimpressed that Twitter’s research was “based on accounts that Facebook had identified” rather than a proactive review of their user base.

Other figures Twitter provided in the statement painted a picture of a social media platform under siege: The microblogging service said it blocks some 130,000 attempts to artificially promote hashtags to its “trending topics” category each day, in addition to 450,000 other suspicious logins daily. The company also said it had identified and suspended 117,000 programs that were abusing its proprietary interface to send “low-quality tweets.” Those programs had already tweeted 1.5 billion times in 2017.

This post has been updated.


In recent weeks, Facebook has received the lion’s share of attention when it comes to the social media component of Russia’s interference in the U.S. election. But the service the President so frequently and famously uses hasn’t received quite the same level of scrutiny yet—perhaps because it’s much harder to nail down exactly what happened on Twitter during the 2016 campaign.

Much of the activity on Twitter is a morass of bot traffic, spam accounts mobbing hashtags and plain old harassment, so teasing out the Twitter component of a coordinated influence campaign that spanned multiple platforms is a seriously tall order. Sens. Mark Warner (D-VA) and Amy Klobuchar (D-MN) have proposed some of the first regulations that would specifically affect Twitter and Facebook; a Twitter spokesman told TPM that, regarding regulation, “we are open to discussing this with the FEC and Congress.”

There are a few facts about Russian-linked activity on Twitter during the 2016 campaign we already know thanks to published reports, but there’s much more that remains unclear. Answers to some of those unanswered questions could emerge from Twitter’s closed-door meeting with the Senate Intelligence Committee on Wednesday.

What We Know

Guccifer 2.0 and DCLeaks spread propaganda on Twitter

The primary arms of the Russian disinformation campaign operated on Twitter—in fact, you can still visit the Twitter pages for DCLeaks and Guccifer 2.0, two of the outlets for emails stolen from Democratic organizations and operatives.

Twitter has a laissez-faire attitude toward who can and can’t use its network; short of distributing something illegal or advocating violence—and sometimes even then—users can do pretty much whatever they want with impunity. In this case, it appears to have given useful platforms to what the U.S. intelligence community says were fronts for a Russian intelligence service.

The Guccifer 2.0 and DCLeaks accounts haven’t tweeted since January 2017 and December 2016, respectively.

Groups of synchronized, automated accounts promoted Trump in the interest of Russians

Russian intelligence also used networks of automated accounts, or social botnets, on Twitter, although it’s hard to tell which were actually harnessed by the GRU and which were simply a function of Russia’s burgeoning cybercrime industry. Much of the work that has been done tracking bot accounts is inductive, which has made the task of labeling bot accounts a perilous one. Plenty of amateur Trump-Russia sleuths have managed to look foolish for accusing run-of-the-mill conservative Twitter users of being Russian bots.

But some of the reasoning is convincing and comes from reliable sources. Cybersecurity researcher Brian Krebs, formerly a reporter for the Washington Post, noted that any time he criticized Putin, his criticism mysteriously generated defensive tweets about Trump. He also observed that the service’s like and retweet buttons were being used as part of a strategic offense.

Russian-linked accounts promoted fake news stories

Russian social botnets appear to have been used to promote a lot of far-right news hashtags, according to Hamilton 68, a program that tracks probable bots of Russian origin. This is in itself not especially unusual. Twitter charges to promote tweets and tags on its service, so an underhanded advertiser may feel the need to promote its work through a network of linked accounts that will get it the requisite number of likes and retweets.

But a January report from the Office of the Director of National Intelligence (DNI) noted that Russian state-affiliated bloggers had prepared such a campaign for Clinton’s victory. “Before the election, Russian diplomats had publicly denounced the US electoral process and were prepared to publicly call into question the validity of the results,” the report’s authors wrote. “Pro-Kremlin bloggers had prepared a Twitter campaign, #DemocracyRIP, on election night in anticipation of Secretary Clinton’s victory, judging from their social media activity.”

At other moments, Russian Twitter users glommed onto the far-right news of the day, including the conspiracy theory that murdered Democratic National Committee staffer Seth Rich had something to do with the stolen emails.

Russians ran at least one pro-Trump news account

The @tpartynews account had some 22,000 followers and regularly insulted Black Lives Matter activists. The account was followed by former Trump advisor Sebastian Gorka, who himself has been linked to far-right racist and anti-Semitic groups in Hungary.

What We Don’t Know

How much bot traffic was actually directed by Russian intelligence?

Bot traffic on Twitter is vast. While it accounted for 33 percent of pro-Trump tweets during the run-up to the 2016 election, it also accounted for 22 percent of pro-Clinton tweets. It’s very difficult to tell which tweets are of Russian origin and which Russian tweets are part of a Kremlin influence campaign. Much of this simply speaks to a vulnerability on the platform that activists have been complaining about for years: Twitter’s sign-up process is very simple and open to abuse by anyone who, for whatever reason, wants to promote a malicious agenda or harass other users.

To what extent did Russia use Twitter’s ad technology?

We now know Russian operators used Facebook to run ad campaigns around divisive social issues. They made use of the company’s microtargeting capabilities, which are especially effective at locating people who may be sympathetic to the deluge of anti-Clinton, pro-Trump news that the GRU had already seeded through WikiLeaks, Guccifer 2.0 and DCLeaks. Twitter hasn’t yet answered the question of whether Russian intelligence was able to operate to its satisfaction merely using botnets and sock-puppet accounts like @tpartynews, or whether it needed to buy promoted tweets or hashtags; so far there’s no evidence that it did.

Why are some Russian accounts dormant while others are still active?

One group tracking Russian bots notes that many of them haven’t stopped tweeting. In fact, they tweeted in support of alt-right groups in the aftermath of the slaying of Heather Heyer at a white supremacist rally in Charlottesville, Virginia. Again, some of this is inductive reasoning: ProPublica identified one account as a bot by noting it used a stolen photo, sent five tweets in a single minute that all used a URL shortener, and that the account’s tweets “were reported to use similar language from Russian government–backed outlets Sputnik and RT.” Of course, all this could be true of a human account, too.

What does Twitter plan to do about any of this?

Twitter is due on Capitol Hill Wednesday and Thursday. The company has thus far been tight-lipped about its strategy for dealing with malicious foreign governments trying to tamper in each other’s elections—similar influence campaigns in France and Germany have taken place since the American election. The company may come up with some kind of internal proposal for enhancing its ability to detect and root out activity like the GRU influence campaign in much the same way it, along with Facebook, has agreed to help the U.S. deal with social media accounts run by the Islamic State.


Buried in a Washington Post story out Sunday night is a surprising new development: Facebook’s cybersecurity team told the FBI in June 2016 that it believed the Russian hacking team APT 28, also known as “Fancy Bear” and believed to be a proxy for the Russian state security service GRU, was active on the platform.

From the Post:

Soon thereafter, Facebook’s cyber experts found evidence that members of APT28 were setting up a series of shadowy accounts — including a persona known as Guccifer 2.0 and a Facebook page called DCLeaks — to promote stolen emails and other documents during the presidential race. Facebook officials once again contacted the FBI to share what they had seen.

As cybersecurity analyst Marcy Wheeler observes, this is quite an admission—anonymously sourced—from the company, which said in April that it wasn’t in a position to attribute the unusual activity to anybody in particular.

The company has consistently downplayed the effect of false information on its users and the significance of what now appear to be a great many dummy accounts on its platform run by Russian trolls.

“Facebook conducted research into overall civic engagement during this time on the platform, and determined that the reach of the content shared by false amplifiers was marginal compared to the overall volume of civic content shared during the US election,” the company’s threat analysts wrote in that April report.

At first, Facebook’s own review “did not find clear evidence of Russian disinformation or ad purchases by Russian-linked accounts,” the Post reported, but that public assessment changed radically on Sept. 6 when the company announced it had found $100,000 worth of advertisements purchased by the Internet Research Agency, a troll farm with Kremlin ties.

Yet according to the timeline laid out in the Post’s report, Facebook was concerned enough to raise the alarm to law enforcement in June 2016, just as the Russian disinformation campaign began in earnest. The first word that the federal government was investigating Russia’s influence in the campaign came in July 2016, a few weeks after Guccifer 2.0’s first post—in which he tries to claim sole credit for hacking the Democratic National Committee. Less than a week later, a Motherboard reporter who interviewed someone claiming to be Guccifer 2.0 appeared to be the first to suggest that the hacker might be Russian rather than Romanian, as he claimed.

Special counsel Robert Mueller’s team of investigators recently have focused more on Facebook itself, rather than on hyperpartisan “fake news.” That suggests Americans were unable to see how they were being manipulated on the platform, despite the tactics appearing obvious in hindsight: While a number of stories in the conservative news media were sourced to dumps of emails hacked by the Russians, the news outlets themselves weren’t exactly breaking with tradition by reporting that information in a disingenuous and credulous way. Russians didn’t make the American news ecosystem on social media so toxic—that was already true—they just used it to amplify stories that might serve their specific interests.

Fancy Bear, too, was hardly a secret. The Russian hacking collective had been a topic of much discussion among American cybersecurity researchers for more than a year before it breached the DNC, making news in May 2015 for a brazen attempt to hack American banks. The group also breached the World Anti-Doping Agency and distributed strategically falsified information alongside material from that hack in August 2016.

But it’s safe to say that no one in the mainstream press immediately understood the primary role social networks like Facebook and Twitter would come to play in Fancy Bear’s operations. News organizations questioned the origins of emails stolen from the DNC, Democratic Congressional Campaign Committee, and Clinton campaign chairman John Podesta—as far as the public knew, those were primarily distributed through DCLeaks and Guccifer 2.0, both WordPress sites, and later by WikiLeaks—but until recently, personal social media accounts weren’t considered any more suspicious than the news articles they shared.

At the moment, it’s also unclear how much of the U.S. government’s investigation into Russian hacking attacks explored Facebook, and what it may have found. The FBI announced only that it had investigated “malicious cyber activity” in a brief joint report with the Department of Homeland Security issued in December 2016. The Joint Analysis Report (JAR) contains nary a mention of Facebook, although it does warn readers generally about suspicious social media interactions.

One could now read between the lines in Facebook’s April report and see the suggestion that the Russian government was directly stumping for Trump on Facebook, although the authors did not write the word “Russia” once in its pages. Months after the U.S. government formally accused the Kremlin of that malicious activity, Facebook defined “influence operations” as “Actions taken by governments or organized non-state actors to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome.” The title of the report is Influence Operations and Facebook.

It’s also now possible to read comments from people who might have known more about the attacks a little more closely: James Clapper, the former director of national intelligence, suggested in January that disinformation masquerading as news had been a part of the Russian campaign on Facebook.

We know there appears to have been a concerted propaganda effort across at least $100,000 worth of Facebook advertisements, many of them promoted by accounts made with stolen user photos and some used to organize rallies on U.S. soil. The company’s CEO, Mark Zuckerberg, has come forward to say Facebook will try to make it “much harder” for foreign operators to interfere with the American political process. But the Post story raises another question: Who else knew about the unusual activity Facebook detected on its platform and when—and what did they do to try to stop it?


Facebook initially withheld from Congress the thousands of ads it says were purchased by Kremlin-affiliated trolls because some of them contain photos stolen from other Facebook users, a congressional staffer briefed on the content of the ads told TPM on Friday.

The staffer said some of the ads include images of people who are essentially innocent bystanders to the propaganda war Russia waged across social media platforms during the 2016 campaign. The staffer suggested it may be possible for Congress to redact the ads to maintain the privacy of any users whose photos were stolen, in order to give the public access to some of the material Russian operators deployed to try to illegally influence voters.

A Facebook spokesman declined to comment to TPM.

In a reversal, the company announced Thursday that it had “reached out to congressional leadership to agree on a process and schedule to provide the content of these ads, along with related information, to congressional investigators.” Facebook already had handed over details of the ad buys, including copies of the ads themselves, to special counsel Robert Mueller.

On the company’s blog, general counsel Colin Stretch essentially handed the public disclosure matter over to Congress: “We believe Congress is best placed to use the information we and others provide to inform the public comprehensively and completely.”

The staffer TPM spoke with speculated that the stolen photos may have been used to build fake Facebook profiles—under the site’s default settings, every user’s profile photos, past and present, are not merely visible but also available for download by any other user.

There’s already some evidence that Russian operators built fake Facebook accounts using stolen photos: The New York Times found that pictures belonging to Charles David Costacurta, a Brazilian man, had been used to build a profile under the name “Melvin Redick” that was used to disseminate Russian propaganda.

In Congress, there is a growing sense that the public should know how, specifically, it may have been affected by foreign interference on social media platforms.

Sen. Mark Warner (D-VA), vice chair of the Senate Intelligence Committee, went further: “An American can still figure out what content is being used on TV advertising,” he told CNN. “But in social media there’s no such requirement.”

Warner suggested the need for “a reform process” that would enable Americans “to know if there is foreign-sponsored content coming into their electoral process.” The senator is writing a bill that would require online media companies to publish disclosures similar to those mandated of broadcast television stations, which are individually licensed by the Federal Communications Commission.

Zuckerberg said publicly that he didn’t “think society should want us to” pre-screen political ads. Requiring the company to pre-screen any category of advertisement would necessitate a major increase in human staff, since, by Zuckerberg’s admission, most ad buying on Facebook is automated.
