Tag Archives: The Intercept

Japan’s Directorate for Signals Intelligence

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/japans_director.html

The Intercept has a long article on Japan’s equivalent of the NSA: the Directorate for Signals Intelligence. Interesting, but nothing really surprising.

The directorate has a history that dates back to the 1950s; its role is to eavesdrop on communications. But its operations remain so highly classified that the Japanese government has disclosed little about its work – even the location of its headquarters. Most Japanese officials, except for a select few of the prime minister’s inner circle, are kept in the dark about the directorate’s activities, which are regulated by a limited legal framework and not subject to any independent oversight.

Now, a new investigation by the Japanese broadcaster NHK — produced in collaboration with The Intercept — reveals for the first time details about the inner workings of Japan’s opaque spy community. Based on classified documents and interviews with current and former officials familiar with the agency’s intelligence work, the investigation shines light on a previously undisclosed internet surveillance program and a spy hub in the south of Japan that is used to monitor phone calls and emails passing across communications satellites.

The article includes some new documents from the Snowden archive.

DeleteFacebook

Post Syndicated from Йовко Ламбрев original https://yovko.net/deletefacebook/

When I started my column „Аз, киборгът“ (“I, the Cyborg”) in Тоест, I had two ideas in mind. One was to explain, in plain and non-technical language, important things from the world of technology; the other was to gradually describe the possible abuses of the data we so recklessly scatter across the so-called social networks, and above all Facebook.

Meanwhile, journalists from The Guardian and The Observer, together with The New York Times and Channel 4, spent an entire year on an investigation that confirms every fear: the data Facebook amassed with the voluntary complicity of its users has been used, without a shred of scruple, for dirty and underhanded manipulation by Cambridge Analytica, a company that turned this into its business.

In fairness, it should be noted that the topic itself is not news. A year ago, an investigation by The Intercept had already brought the scale of the problem to the surface: the data of 30 million Facebook users was used for election manipulation in favor of Donald Trump. Now, however, we have the testimony of a whistleblower (a former key employee of Cambridge Analytica), who describes in detail how it all happened, along with a pile of self-incriminating admissions and juicy details straight from the company’s management, recorded with a hidden camera while they were courting a supposed prospective client.


By extracting data from a selected sample of Facebook users and the people connected to them, Cambridge Analytica built an enormous network of influence, exploiting people’s fears and weaknesses. In this way it manipulated the public environment and public opinion in favor of its clients. It carefully assessed people’s psychological profiles, looking for their weak points and taking advantage of their susceptibility to influence. It evaluated personal profiles with a close focus on the three traits that psychology calls the “dark triad”: Machiavellianism, psychopathy, and narcissism.

They also offered their services through intermediary firms, so that they could not be tied directly to the political campaigns they worked for. And about Eastern Europe they let slip an interesting boast: they did their work so covertly that nobody even noticed…

Beyond the Cambridge Analytica scandal, though, it is important to realize that they are not the only villain in this story. All of this happens because this is simply how Facebook works. What we call social media today are in fact data-collection tools. And there should be no debate about whose data this is and to whom it belongs. It is ours! Do not accept any other interpretation! In the times we live in, our data is a projection of ourselves. Our data is us! By letting someone else control it, we allow scandals like this one to catch up with us, while some form of digital feudalism or digital slavery becomes a threat that is no joke at all.

Another detail matters too. All of this came to light thanks to the free press. That is why it is so important for democracy. As for the social networks, they are done for! Or they should be! At least in their current form.

Cambridge Analytica is the villain that took advantage of what an even scarier villain, namely Facebook, had already done, and, not least, of the naivety of all of us who still have not closed our accounts there. Facebook must bear all the consequences and the full responsibility, because they made all of this possible. And they deserve no mercy!

#DeleteFacebook

P.S. In a European context, it is worth noting that it is wiser to delete your Facebook account after May 25, 2018. After that date the GDPR will be in full force and Facebook will be obliged to comply with it; the regulation requires that if a user asks for their profile to be erased, it really is erased, and not merely frozen, as has been the case so far.

Some notes about the Kaspersky affair

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/10/some-notes-about-kaspersky-affair.html

I thought I’d write up some notes about Kaspersky, the Russian anti-virus vendor that many believe has ties to Russian intelligence.

There are two angles to this story. One is whether the accusations are true. The second is the poor way the press has handled the story, with mainstream outlets like the New York Times more intent on pushing government propaganda than informing us of what’s going on.

The press

Before we address Kaspersky, we need to talk about how the press covers this.
The mainstream media’s stories have been pure government propaganda, like this one from the New York Times. It garbles the facts of what happened, and relies primarily on anonymous government sources that cannot be held accountable. It’s so messed up that we can’t easily challenge it because we aren’t even sure exactly what it’s claiming.
The Society of Professional Journalists has a name for this abuse of anonymous sources, the “Washington Game”. Journalists can identify this as bad journalism, but big newspapers like The New York Times continue to do it anyway, because how dare anybody criticize them?
For all that I hate the anti-American bias of The Intercept, at least they’ve had stories that de-garble what’s going on, that explain things so that we can challenge them.

Our Government

Our government can’t tell us everything, of course. But at the same time, they need to tell us something, to at least be clear about what their accusations are. These vague insinuations through the media hurt their credibility rather than help it. The obvious craptitude is making us in the cybersecurity community come to Kaspersky’s defense, which is not the government’s aim at all.
There are lots of issues involved here, but let’s consider the major one insinuated by the NYTimes story, that Kaspersky was getting “data” files along with copies of suspected malware. This is troublesome if true.
But, as Kaspersky claims today, it’s because they had detected malware within a zip file, and uploaded the entire zip — including the data files within the zip.
This is reasonable. This is indeed how anti-virus generally works. It completely defeats the NYTimes insinuations.
This isn’t to say Kaspersky is telling the truth, of course, but that’s not the point. The point is that we are getting vague propaganda from the government further garbled by the press, making Kaspersky’s clear defense the credible party in the affair.
It’s certainly possible for Kaspersky to write signatures that look for strings like “TS//SI/OC/REL TO USA” that appear in secret US documents, then upload the matching files to Russia. If that’s what our government believes is happening, they need to come out and be explicit about it. They can easily set up honeypots, in the way described in today’s story, to confirm it. However, it seems the government’s description of the honeypots is that Kaspersky only uploaded files that were clearly viruses, not data.
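
To make the insinuation concrete: such a “signature” is, at its core, just a content match against files the product already inspects. Below is a minimal Java sketch of what scanning files for a classification marking amounts to; it is purely illustrative, the directory path is hypothetical, and it is not anything Kaspersky is known to run.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class MarkingScanner {
    // a classification marking used as a content "signature"
    private static final String MARKING = "TS//SI/OC/REL TO USA";

    public static void main(String[] args) throws IOException {
        Path root = Paths.get("/path/to/collected/files"); // hypothetical scan root
        try (Stream<Path> files = Files.walk(root)) {
            files.filter(Files::isRegularFile)
                 .filter(MarkingScanner::containsMarking)
                 .forEach(p -> System.out.println("flagged: " + p)); // a real product would upload the file here
        }
    }

    private static boolean containsMarking(Path file) {
        try {
            // read the file as raw bytes and look for the marking string
            String content = new String(Files.readAllBytes(file), StandardCharsets.ISO_8859_1);
            return content.contains(MARKING);
        } catch (IOException e) {
            return false;
        }
    }
}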

Kaspersky

I believe Kaspersky is guilty: that the company, and Eugene Kaspersky himself, works directly with Russian intelligence.
That’s because on a personal basis, people in government have given me specific, credible stories — the sort of thing they should be making public. And these stories are wholly unrelated to stories that have been made public so far.
You shouldn’t believe me, of course, because I won’t go into details you can challenge. I’m not trying to convince you, I’m just disclosing my point of view.
But there are some public reasons to doubt Kaspersky. For example, when trying to sell to our government, they’ve claimed they can help us against terrorists. The translation of this is that they could help our intelligence services. Well, if they are willing to help our intelligence services against customers who are terrorists, then why wouldn’t they likewise help Russian intelligence services against their adversaries?
Then there is how Russia works. It’s a violent country. Most of the people mentioned in that “Steele Dossier” have died. In the hacker community, hackers are often coerced to help the government. Many have simply gone missing.
Being rich doesn’t make Kaspersky immune from this — it makes him more of a target. Russian intelligence knows he’s getting all sorts of good intelligence, such as malware written by foreign intelligence services. It’s unbelievable they wouldn’t put the screws on him to get this sort of thing.
Russia is our adversary. It’d be foolish of our government to buy anti-virus from Russian companies. Likewise, the Russian government won’t buy such products from American companies.

Conclusion

I have enormous disrespect for mainstream outlets like The New York Times and the way they’ve handled the story. It makes me want to come to Kaspersky’s defense.

I have enormous respect for Kaspersky technology. They do good work.

But I hear stories. I don’t think our government should be trusting Kaspersky at all. For that matter, our government shouldn’t trust any cybersecurity products from Russia, China, Iran, etc.

NSA Spied on Early File-Sharing Networks, Including BitTorrent

Post Syndicated from Andy original https://torrentfreak.com/nsa-spied-on-early-file-sharing-networks-including-bittorrent-170914/

In the early 2000s, when peer-to-peer (P2P) file-sharing was in its infancy, the majority of users had no idea that their activities could be monitored by outsiders. The reality was very different, however.

Few as they were, all of the major networks were completely open, with most operating a ‘shared folder’ type of system that allowed any network participant to see exactly what another user was sharing. Nevertheless, with little to no oversight, file-sharing at least felt like a somewhat private affair.

As user volumes began to swell, software such as KaZaA (which utilized the FastTrack network) and eDonkey2000 (eD2k network) attracted attention from record labels, who were desperate to stop the unlicensed sharing of copyrighted content. The same held true for the BitTorrent networks that arrived on the scene a couple of years later.

Through the rise of lawsuits against consumers, the general public began to learn that their activities on P2P networks were not secret and they were being watched for some, if not all, of the time by copyright holders. Little did they know, however, that a much bigger player was also keeping a watchful eye.

According to a fascinating document just released by The Intercept as part of the Edward Snowden leaks, the National Security Agency (NSA) showed a keen interest in trying to penetrate early P2P networks.

Initially published by internal NSA news site SIDToday in June 2005, the document lays out the aims of a program called FAVA – File-Sharing Analysis and Vulnerability Assessment.

“One question that naturally arises after identifying file-sharing traffic is whether or not there is anything of intelligence value in this traffic,” the NSA document begins.

“By searching our collection databases, it is clear that many targets are using popular file sharing applications; but if they are merely sharing the latest release of their favorite pop star, this traffic is of dubious value (no offense to Britney Spears intended).”

Indeed, the vast majority of users of these early networks were only interested in sharing relatively small music files, which were somewhat easy to manage given the bandwidth limitations of the day. However, the NSA still wanted to know what was happening on a broader scale, so that meant decoding their somewhat limited encryption.

“As many of the applications, such as KaZaA for example, encrypt their traffic, we first had to decrypt the traffic before we could begin to parse the messages. We have developed the capability to decrypt and decode both KaZaA and eDonkey traffic to determine which files are being shared, and what queries are being performed,” the NSA document reveals.

Most progress appears to have been made against KaZaA, with the NSA revealing the use of tools to parse out registry entries on users’ hard drives. This information gave up users’ email addresses, country codes, user names, the location of their stored files, plus a list of recent searches.

This gave the NSA the ability to look deeper into user behavior, which revealed some P2P users going beyond searches for basic run-of-the-mill multimedia content.

“[We] have discovered that our targets are using P2P systems to search for and share files which are at the very least somewhat surprising — not simply harmless music and movie files. With more widespread adoption, these tools will allow us to regularly assimilate data which previously had been passed over; giving us a more complete picture of our targets and their activities,” the document adds.

Today, more than 12 years later, with KaZaA long dead and eDonkey barely alive, scanning early pirate activities might seem a distant act. However, there’s little doubt that similar programs remain active today. Even in 2005, the FAVA program had lofty ambitions, targeting other networks and protocols including DirectConnect, Freenet, Gnutella, Gnutella2, JoltID, MSN Messenger, Windows Messenger and… BitTorrent.

“If you have a target using any of these applications or using some other application which might fall into the P2P category, please contact us,” the NSA document urges staff. “We would be more than happy to help.”

Confirming the continued interest in BitTorrent, The Intercept has published a couple of further documents which deal with the protocol directly.

The first details an NSA program called GRIMPLATE, which aimed to study how Department of Defense employees were using BitTorrent and whether that constituted a risk.

The second relates to P2P research carried out by Britain’s GCHQ spy agency. It details DIRTY RAT, a web application which gave the government “the capability to identify users sharing/downloading files of interest on the eMule (Kademlia) and BitTorrent networks.”

The SIDToday document detailing the FAVA program can be viewed here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Entire Kim Dotcom Spying Operation Was Illegal, High Court Rules

Post Syndicated from Andy original https://torrentfreak.com/entire-kim-dotcom-spying-operation-was-illegal-high-court-rules-170825/

In the months that preceded the January 2012 raid on file-storage site Megaupload, authorities in New Zealand used the Government Communications Security Bureau (GCSB) spy agency to monitor Kim and Mona Dotcom, plus Megaupload co-defendant Bram van der Kolk.

When this fact was revealed it developed into a crisis. The GCSB was forbidden by law from conducting surveillance on its own citizens or permanent residents in the country, which led to former Prime Minister John Key later apologizing for the error.

With Dotcom determined to uncover the truth, the entrepreneur launched legal action to obtain the information illegally gathered by the GCSB and to seek compensation. In July, the High Court determined that Dotcom wouldn’t get access to the information, but it also revealed that the spying went on for much longer than previously admitted, a fact later confirmed by the police.

This raised the specter that not only did the GCSB continue to spy on Dotcom after it knew it was acting illegally, but that an earlier affidavit from a GCSB staff member was suspect.

With the saga continuing to drag on, revelations published in New Zealand this morning indicate that not only was the spying on Dotcom illegal, the entire spying operation – which included his Megaupload co-defendants – was too.

The reports are based on documents released by lawyer Peter Spring, who is acting for Bram van der Kolk and Mathias Ortmann. Spring says that the High Court decision, which dates back to December but has only just been made available, shows that “the whole surveillance operation fell outside the authorization of the GCSB legislation as it was at the relevant time”.

Since Dotcom is a permanent resident of New Zealand, it’s long been established that the GCSB acted illegally when it spied on him. As foreigners, however, Megaupload co-defendants Finn Batato and Mathias Ortmann were previously considered valid surveillance targets.

It now transpires that the GCSB wasn’t prepared to mount a defense or reveal its methods concerning their surveillance, something which boosted the case against it.

“The circumstances of the interceptions of Messrs Ortmann and Batato’s communications are Top Secret and it has not proved possible to plead to the allegations the plaintiffs have made without revealing information which would jeopardize the national security of New Zealand,” the Court documents read.

“As a result the GCSB is deemed to have admitted the allegations in the statement of claim which relate to the manner in which the interceptions were effected.”

Speaking with RadioNZ, Grant Illingworth, a lawyer representing Ortmann and van der Kolk, said the decision calls the entire GCSB operation into doubt.

“The GCSB has now admitted that the unlawfulness was not just dependent upon residency issues, it went further. The reason it went further was because it didn’t have authorization to carry out the kind of surveillance that it was carrying out under the legislation, as it was at that time,” Illingworth said.

In comments to NZHerald, Illingworth added that the decision meant that the damages case for Ortmann and van der Kolk had come to an end. He refused to respond to questions of whether damages had been paid or a settlement reached.

He did indicate, however, that there could be implications for the battle underway to have Dotcom, Batato, Ortmann and van der Kolk extradited to the United States.

“If there was illegality in the arrest and search phase and that illegality has not previously been made known in the extradition context then it could be relevant to the extradition,” Illingworth said.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Basic API Rate-Limiting

Post Syndicated from Bozho original https://techblog.bozho.net/basic-api-rate-limiting/

It is likely that you are developing some form of (web/RESTful) API, and if it is publicly facing (or even when it’s internal), you normally want to rate-limit it somehow. That is, to limit the number of requests performed over a period of time, in order to save resources and protect from abuse.

This can probably be achieved at the web-server/load-balancer level with some clever configuration, but usually you want the rate limiter to be client-specific (i.e. each client of your API should have a separate rate limit), and the way the client is identified varies. It’s probably still possible to do it on the load balancer, but I think it makes sense to have it at the application level.

I’ll use spring-mvc for the example, but any web framework has a good way to plug in an interceptor.

So here’s an example of a spring-mvc interceptor:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

import javax.annotation.PreDestroy;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Component;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

@Component
public class RateLimitingInterceptor extends HandlerInterceptorAdapter {

    private static final Logger logger = LoggerFactory.getLogger(RateLimitingInterceptor.class);

    @Value("${rate.limit.enabled}")
    private boolean enabled;

    @Value("${rate.limit.hourly.limit}")
    private int hourlyLimit;

    // one rate limiter per API client, created lazily
    private final Map<String, SimpleRateLimiter> limiters = new ConcurrentHashMap<>();

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler)
            throws Exception {
        if (!enabled) {
            return true;
        }
        String clientId = request.getHeader("Client-Id");
        // let non-API requests pass
        if (clientId == null) {
            return true;
        }
        SimpleRateLimiter rateLimiter = getRateLimiter(clientId);
        boolean allowRequest = rateLimiter.tryAcquire();

        if (!allowRequest) {
            response.setStatus(HttpStatus.TOO_MANY_REQUESTS.value());
        }
        response.addHeader("X-RateLimit-Limit", String.valueOf(hourlyLimit));
        return allowRequest;
    }

    private SimpleRateLimiter getRateLimiter(String clientId) {
        // create an hourly limiter the first time a client is seen
        return limiters.computeIfAbsent(clientId,
                id -> SimpleRateLimiter.create(hourlyLimit, TimeUnit.HOURS));
    }

    @PreDestroy
    public void destroy() {
        // stop the replenishment scheduler of every limiter
        limiters.values().forEach(SimpleRateLimiter::stop);
    }
}

This initializes rate limiters per client on demand. Alternatively, on startup you could loop through all registered API clients and create a rate limiter for each. If the rate limiter doesn’t allow more requests (tryAcquire() returns false), return “Too many requests” and abort the execution of the request (return false from the interceptor).
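
For completeness, the interceptor also has to be registered with spring-mvc. Here is a minimal sketch of that wiring; the configuration class and the “/api/**” path pattern are my assumptions, not something prescribed by the interceptor itself.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
public class WebConfig extends WebMvcConfigurerAdapter {

    @Autowired
    private RateLimitingInterceptor rateLimitingInterceptor;

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // apply the rate limiter to API endpoints only
        registry.addInterceptor(rateLimitingInterceptor).addPathPatterns("/api/**");
    }
}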

This sounds simple, but there are a few catches. You may wonder where the SimpleRateLimiter above is defined. We’ll get there, but first let’s see what options we have for rate limiter implementations.

The most recommended one seems to be the Guava RateLimiter. It has a straightforward factory method that gives you a rate limiter for a specified rate (permits per second). However, it doesn’t accommodate web APIs very well, as you can’t initialize the RateLimiter with a pre-existing number of permits. That means a period of time has to elapse before the limiter will allow any requests. There’s another issue: if you have less than one permit per second (e.g. if your desired rate limit is “200 requests per hour”), you can pass a fraction (hourlyLimit / secondsInHour), but it still won’t work the way you expect it to, as internally there’s a “maxPermits” field that caps the number of permits at much less than you want. The rate limiter also doesn’t allow bursts: you get exactly X permits per second, but you cannot spread them over a longer period, e.g. make 5 requests in one second and then no requests for the next few seconds. In fact, all of the above can be solved, but sadly only through hidden fields that you don’t have access to. Feature requests have existed for years now, but Guava just doesn’t update the rate limiter, making it much less applicable to API rate-limiting.
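
For reference, the straightforward Guava usage looks roughly like this; note how a “200 requests per hour” limit has to be expressed as the fractional per-second rate discussed above (a sketch of the approach being argued against, not a recommendation):

import com.google.common.util.concurrent.RateLimiter;

public class GuavaLimiterExample {
    public static void main(String[] args) {
        // 200 requests per hour expressed as permits per second
        double permitsPerSecond = 200.0 / 3600; // roughly 0.055 permits/second
        RateLimiter limiter = RateLimiter.create(permitsPerSecond);

        // non-blocking check; with such a low rate, immediate retries will mostly fail
        boolean allowed = limiter.tryAcquire();
        System.out.println("allowed: " + allowed);
    }
}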

Using reflection, you can tweak the parameters and make the limiter work. However, it’s ugly, and it’s not guaranteed to work as expected. I have shown here how to initialize a Guava rate limiter with X permits per hour, with burstability and full initial permits. Just when I thought that would do, I saw that tryAcquire() has a synchronized(..) block. Does that mean all requests will wait for each other when simply checking whether they are allowed to make a request? That would be horrible.

So the Guava RateLimiter is really not meant for (web) API rate-limiting. Maybe keeping it feature-poor is Guava’s way of discouraging people from misusing it?

That’s why I decided to implement something simple myself, based on a Java Semaphore. Here’s the naive implementation:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class SimpleRateLimiter {
    private final Semaphore semaphore;
    private final int maxPermits;
    private final TimeUnit timePeriod;
    private ScheduledExecutorService scheduler;

    public static SimpleRateLimiter create(int permits, TimeUnit timePeriod) {
        SimpleRateLimiter limiter = new SimpleRateLimiter(permits, timePeriod);
        limiter.schedulePermitReplenishment();
        return limiter;
    }

    private SimpleRateLimiter(int permits, TimeUnit timePeriod) {
        this.semaphore = new Semaphore(permits);
        this.maxPermits = permits;
        this.timePeriod = timePeriod;
    }

    // non-blocking: returns false immediately if no permits are left
    public boolean tryAcquire() {
        return semaphore.tryAcquire();
    }

    public void stop() {
        scheduler.shutdownNow();
    }

    public void schedulePermitReplenishment() {
        scheduler = Executors.newScheduledThreadPool(1);
        // once per time period, top the semaphore back up to maxPermits
        scheduler.scheduleAtFixedRate(() -> {
            semaphore.release(maxPermits - semaphore.availablePermits());
        }, 1, 1, timePeriod);
    }
}

It takes a number of permits (the allowed number of requests) and a time period. The time period is “1 X”, where X can be a second, minute, hour or day, depending on how you want your limit to be configured: per second, per minute, hourly or daily. Every 1 X a scheduler replenishes the acquired permits (in the example above there’s one scheduler per client, which may be inefficient with a large number of clients; you can pass in a shared scheduler pool instead). There is no control for bursts (a client can spend all permits in a rapid succession of requests), there is no warm-up functionality, and there is no gradual replenishment. Depending on what you want, this may not be ideal, but it’s a basic rate limiter that is thread-safe and doesn’t do any blocking. I wrote a unit test to confirm that the limiter behaves properly, and also ran performance tests against a local application to make sure the limit is obeyed. So far it seems to be working.
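
As a usage sketch, a per-client check with this class boils down to something like the following (the limit value is arbitrary):

import java.util.concurrent.TimeUnit;

public class SimpleRateLimiterExample {
    public static void main(String[] args) {
        // 100 requests per hour for a single client
        SimpleRateLimiter limiter = SimpleRateLimiter.create(100, TimeUnit.HOURS);

        if (limiter.tryAcquire()) {
            System.out.println("request allowed");
        } else {
            System.out.println("reject with 429 Too Many Requests");
        }

        limiter.stop(); // shut down the replenishment scheduler when done
    }
}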

Are there alternatives? Well, yes – there are libraries like RateLimitJ that use Redis to implement rate-limiting. That would mean, however, that you need to set up and run Redis, which seems like a lot of overhead for “simply” having rate-limiting. (Note: it also seems to have an in-memory version.)

On the other hand, how would rate-limiting work properly in a cluster of application nodes? Do the nodes need some database or gossip protocol to share data about the per-client permits (requests) remaining? Not necessarily. A very simple approach would be to assume that the load balancer distributes the load equally among your nodes. That way you would just have to set the limit on each node to the total limit divided by the number of nodes. It won’t be exact, but you rarely need it to be – allowing 5-10 more requests won’t kill your application, and allowing 5-10 fewer won’t be dramatic for the users.

That, however, would mean that you have to know the number of application nodes. If you employ auto-scaling (e.g. in AWS), the number of nodes may change depending on the load. In that case, instead of configuring a hard-coded number of permits, the replenishing scheduled job can calculate maxPermits on the fly, by calling an AWS (or other cloud-provider) API to obtain the number of nodes in the current auto-scaling group. That would still be simpler than supporting a Redis deployment just for that.
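
Here is a sketch of that on-the-fly calculation with the AWS SDK for Java; the auto-scaling group name and the overall limit are assumptions on my part, and any cloud-provider API that reports the group size would do:

import com.amazonaws.services.autoscaling.AmazonAutoScaling;
import com.amazonaws.services.autoscaling.AmazonAutoScalingClientBuilder;
import com.amazonaws.services.autoscaling.model.DescribeAutoScalingGroupsRequest;
import com.amazonaws.services.autoscaling.model.DescribeAutoScalingGroupsResult;

public class DynamicPermitCalculator {

    private final AmazonAutoScaling autoScaling = AmazonAutoScalingClientBuilder.defaultClient();

    // called from the replenishing scheduled job to recompute maxPermits
    public int maxPermitsPerNode(int totalHourlyLimit) {
        DescribeAutoScalingGroupsResult result = autoScaling.describeAutoScalingGroups(
                new DescribeAutoScalingGroupsRequest()
                        .withAutoScalingGroupNames("api-nodes")); // hypothetical group name

        int nodeCount = result.getAutoScalingGroups().get(0).getInstances().size();
        // assume the load balancer spreads requests evenly across the nodes
        return Math.max(1, totalHourlyLimit / nodeCount);
    }
}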

Overall, I’m surprised there isn’t a “canonical” way to implement rate-limiting (in Java). Maybe the need for rate-limiting is not as common as it may seem. Or it’s implemented manually, by temporarily banning API clients that use “too many resources”.

Update: someone pointed out the bucket4j project, which seems nice and worth taking a look at.

The post Basic API Rate-Limiting appeared first on Bozho's tech blog.

NSA Document Outlining Russian Attempts to Hack Voter Rolls

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/nsa_document_ou.html

This week brought new public evidence about Russian interference in the 2016 election. On Monday, the Intercept published a top-secret National Security Agency document describing Russian hacking attempts against the US election system. While the attacks seem more exploratory than operational – and there’s no evidence that they had any actual effect – they further illustrate the real threats and vulnerabilities facing our elections, and they point to solutions.

The document describes how the GRU, Russia’s military intelligence agency, attacked a company called VR Systems that, according to its website, provides software to manage voter rolls in eight states. The August 2016 attack was successful, and the attackers used the information they stole from the company’s network to launch targeted attacks against 122 local election officials on October 27, 12 days before the election.

That is where the NSA’s analysis ends. We don’t know whether those 122 targeted attacks were successful, or what their effects were if so. We don’t know whether other election software companies besides VR Systems were targeted, or what the GRU’s overall plan was — if it had one. Certainly, there are ways to disrupt voting by interfering with the voter registration process or voter rolls. But there was no indication on Election Day that people found their names removed from the system, or their address changed, or anything else that would have had an effect — anywhere in the country, let alone in the eight states where VR Systems is deployed. (There were Election Day problems with the voting rolls in Durham, NC – one of the states that VR Systems supports – but they seem like conventional errors and not malicious action.)

And 12 days before the election (with early voting already well underway in many jurisdictions) seems far too late to start an operation like that. That is why these attacks feel exploratory to me, rather than part of an operational attack. The Russians were seeing how far they could get, and keeping those accesses in their pocket for potential future use.

Presumably, this document was intended for the Justice Department, including the FBI, which would be the proper agency to continue looking into these hacks. We don’t know what happened next, if anything. VR Systems isn’t commenting, and the names of the local election officials targeted did not appear in the NSA document.

So while this document isn’t much of a smoking gun, it’s yet more evidence of widespread Russian attempts to interfere last year.

The document was, allegedly, sent to the Intercept anonymously. An NSA contractor, Reality Leigh Winner, was arrested Saturday and charged with mishandling classified information. The speed with which the government identified her serves as a caution to anyone wanting to leak official US secrets.

The Intercept sent a scan of the document to another source during its reporting. That scan showed a crease in the original document, which implied that someone had printed the document and then carried it out of some secure location. The second source, according to the FBI’s affidavit against Winner, passed it on to the NSA. From there, NSA investigators were able to look at their records and determine that only six people had printed out the document. (The government may also have been able to track the printout through secret dots that identified the printer.) Winner was the only one of those six who had been in e-mail contact with the Intercept. It is unclear whether the e-mail evidence was from Winner’s NSA account or her personal account, but in either case, it’s incredibly sloppy tradecraft.

With President Trump’s election, the issue of Russian interference in last year’s campaign has become highly politicized. Reports like the one from the Office of the Director of National Intelligence in January have been criticized by partisan supporters of the White House. It’s interesting that this document was reported by the Intercept, which has been historically skeptical about claims of Russian interference. (I was quoted in their story, and they showed me a copy of the NSA document before it was published.) The leaker was even praised by WikiLeaks founder Julian Assange, who up until now has been traditionally critical of allegations of Russian election interference.

This demonstrates the power of source documents. It’s easy to discount a Justice Department official or a summary report. A detailed NSA document is much more convincing. Right now, there’s a federal suit to force the ODNI to release the entire January report, not just the unclassified summary. These efforts are vital.

This hack will certainly come up at the Senate hearing where former FBI director James B. Comey is scheduled to testify Thursday. Last year, there were several stories about voter databases being targeted by Russia. Last August, the FBI confirmed that the Russians successfully hacked voter databases in Illinois and Arizona. And a month later, an unnamed Department of Homeland Security official said that the Russians targeted voter databases in 20 states. Again, we don’t know of anything that came of these hacks, but expect Comey to be asked about them. Unfortunately, any details he does know are almost certainly classified, and won’t be revealed in open testimony.

But more important than any of this, we need to better secure our election systems going forward. We have significant vulnerabilities in our voting machines, our voter rolls and registration process, and the vote tabulation systems after the polls close. In January, DHS designated our voting systems as critical national infrastructure, but so far that has been entirely for show. In the United States, we don’t have a single integrated election. We have 50-plus individual elections, each with its own rules and its own regulatory authorities. Federal standards that mandate voter-verified paper ballots and post-election auditing would go a long way to secure our voting system. These attacks demonstrate that we need to secure the voter rolls, as well.

Democratic elections serve two purposes. The first is to elect the winner. But the second is to convince the loser. After the votes are all counted, everyone needs to trust that the election was fair and the results accurate. Attacks against our election system, even if they are ultimately ineffective, undermine that trust and – by extension – our democracy. Yes, fixing this will be expensive. Yes, it will require federal action in what’s historically been state-run systems. But as a country, we have no other option.

This essay previously appeared in the Washington Post.

What about other leaked printed documents?

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/what-about-other-leaked-printed.html

So nat-sec pundit/expert Marcy Wheeler (@emptywheel) asks about those DIOG docs leaked last year. They were leaked in printed form, then scanned in and published by The Intercept. Did they have these nasty yellow dots that track the source? If not, why not?

The answer is that the scanned images of the DIOG doc don’t have dots. I don’t know why. One reason might be that the scanner didn’t pick them up, as it’s of much lower quality than the scanner used for the Russian hacking docs. Another reason is that the printer used may not have printed them — while most printers do print such dots, some printers don’t. A third possibility is that somebody used a tool to strip the dots from the scanned images. I don’t think such a tool exists, but it wouldn’t be hard to write.

Scanner quality

The printed docs are here. They are full of whitespace where it should be easy to see these dots, but they appear not to be there. If we reverse the image, we see something like the following from the first page of the DIOG doc:

Compare this to the first page of the Russian hacking doc which shows the blue dots:

What we see in the difference is that the scan of the Russian doc is much better. You can see that in the background, which is much noisier and picks up small things like the blue dots. In contrast, the DIOG scan is worse; we don’t see much detail in the background.

Looking closer, we can see the lack of detail. We also see banding, which indicates other defects of the scanner.

Thus, one theory is that the scanner just didn’t pick up the dots from the page.

Not all printers

The EFF has a page where they document which printers produce these dots. Samsung and Okidata don’t; virtually all the other printers do.

The person who printed these might’ve gotten lucky. Or, they may have carefully chosen a printer that does not produce these dots.

The reason Reality Winner exfiltrated these documents by printing them is that the NSA had probably clamped down on USB thumb drives in secure facilities. Walking through the metal detector with a chip hidden in a Rubik’s Cube (as shown in the Snowden movie) will not work anymore.

But, presumably, the FBI is not so strict, and a person would be able to exfiltrate the digital docs from FBI facilities, and print elsewhere.

Conclusion

Going by the odds, those DIOG docs should’ve had visible tracking dots. Either the person leaking the docs knew about this and avoided it, or they got lucky.

How The Intercept Outed Reality Winner

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/how-intercept-outed-reality-winner.html

Today, The Intercept released documents on election tampering from an NSA leaker. Later, the arrest warrant request for an NSA contractor named “Reality Winner” was published, showing how they tracked her down because she had printed out the documents and sent them to The Intercept. The document posted by The Intercept isn’t the original PDF file, but a PDF containing pictures of the printed version, which was later scanned in.

As the warrant says, she confessed while interviewed by the FBI. Had she not confessed, the documents still contained enough evidence to convict her: the printed document was digitally watermarked.

The problem is that most new printers print nearly invisible yellow dots that record exactly when and where any document was printed. Because the NSA logs all printing jobs on its printers, it can use this to match up precisely who printed the document.

In this post, I show how.

You can download the document from the original article here. You can then open it in a PDF viewer, such as the normal “Preview” app on macOS. Zoom into some whitespace on the document, and take a screenshot of it. On macOS, hit [Command-Shift-3] to take a screenshot of the screen. There are yellow dots in this image, but you can barely see them, especially if your screen is dirty.

We need to highlight the yellow dots. Open the screenshot in an image editor, such as the free “Paintbrush” program for macOS. Now use the “Invert Colors” option on the image, to get something like this. You should see a roughly rectangular checkerboard pattern in the whitespace.

It’s upside down, so we need to rotate it 180 degrees, or flip-horizontal and flip-vertical:

Now we go to the EFF page and manually click on the pattern so that their tool can decode the meaning:

This produces the following result:

The document leaked by the Intercept was from a printer with model number 54, serial number 29535218. The document was printed on May 9, 2017 at 6:20. The NSA almost certainly has a record of who used the printer at that time.
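
If you prefer to script the image manipulation, the invert-and-rotate steps above can also be done with the standard Java imaging classes. A minimal sketch (file names are hypothetical):

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;

public class DotHighlighter {
    public static void main(String[] args) throws Exception {
        // "scan.png" is a screenshot of some whitespace from the published document
        BufferedImage img = ImageIO.read(new File("scan.png"));
        int w = img.getWidth(), h = img.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);
                // invert the color channels so the faint yellow dots become visible blue
                int inverted = rgb ^ 0x00FFFFFF;
                // write the pixel into the position rotated by 180 degrees
                out.setRGB(w - 1 - x, h - 1 - y, inverted);
            }
        }
        ImageIO.write(out, "png", new File("inverted-rotated.png"));
    }
}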

The situation is similar to how Vice outed the location of John McAfee, by publishing JPEG photographs of him with the EXIF GPS coordinates still hidden in the file. Or how PDFs are often redacted by adding a black bar on top of the image, leaving the underlying contents still in the file for people to read, such as in this NYTimes accident with a Snowden document. Or how opening a Microsoft Office document, then accidentally saving it, leaves behind fingerprints identifying you, as repeatedly happened with the Wikileaks election leaks. These sorts of failures are common with leaks. To fix this yellow-dot problem, use a black-and-white printer, a black-and-white scanner, or convert to black-and-white with an image editor.

Copiers/printers have two features put in there by the government to be evil to you. The first is that scanners/copiers (when using the scanner feature) recognize a barely visible pattern on currency, so that they can’t be used to counterfeit money, as shown on this $20 below:

The second is that when they print things out, they include these invisible dots, so documents can be tracked. In other words, those dots on bills prevent them from being scanned in, and the dots produced by printers help the government track what was printed out.

Yes, this code the government forces into our printers is a violation of our 3rd Amendment rights.


While I was writing up this post, these tweets appeared first:


Comments:
https://news.ycombinator.com/item?id=14494818