Tag Archives: Privacy

Amazon Cognito Your User Pools is Now Generally Available

Post Syndicated from Vikram Madan original https://blogs.aws.amazon.com/security/post/Tx13NVD4AWG9QK9/Amazon-Cognito-Your-User-Pools-is-Now-Generally-Available

Amazon Cognito makes it easy for developers to add sign-up, sign-in, and enhanced security functionality to mobile and web apps. With Amazon Cognito Your User Pools, you get a simple, fully managed service for creating and maintaining your own user directory that can scale to hundreds of millions of users.

With today’s launch, user pools add:

  • Device remembering – Amazon Cognito can remember the devices from which each user signs in.
  • User search – Search for users in a user pool based on an attribute.
  • Customizable email addresses – Customize the "from" email address of emails you send to users in a user pool.
  • Attribute permissions – Set fine-grained permissions for each user attribute.
  • Custom authentication flow – Use new APIs and AWS Lambda triggers to customize the sign-in flow.
  • Admin sign-in – Your app can now sign in users from back-end servers or Lambda functions. 
  • Global sign-out – Allow a user to sign out from all signed-in devices or browsers.
  • Custom expiration period – Set an expiration period for refresh tokens.
  • Amazon API Gateway integration – Allow user pool authentications to authorize Amazon API Gateway requests.

You benefit from the security and privacy best practices of AWS, and retain full control of your user data.

Amazon Cognito is now also available in the US West (Oregon) Region in addition to the US East (N. Virginia), Asia Pacific (Tokyo), and EU (Ireland) Regions. To begin using these new features of Amazon Cognito, see the Amazon Cognito page.

To learn more, see the AWS Blog and the related documentation.

– Vikram 

Real-World Security and the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/real-world_secu.html

Disaster stories involving the Internet of Things are all the rage. They feature cars (both driven and driverless), the power grid, dams, and tunnel ventilation systems. A particularly vivid and realistic one, near-future fiction published last month in New York Magazine, described a cyberattack on New York that involved hacking of cars, the water system, hospitals, elevators, and the power grid. In these stories, thousands of people die. Chaos ensues. While some of these scenarios overhype the mass destruction, the individual risks are all real. And traditional computer and network security isn’t prepared to deal with them.

Classic information security is a triad: confidentiality, integrity, and availability. You’ll see it called “CIA,” which admittedly is confusing in the context of national security. But basically, the three things I can do with your data are steal it (confidentiality), modify it (integrity), or prevent you from getting it (availability).

So far, Internet threats have largely been about confidentiality. These can be expensive; one survey estimated that data breaches cost an average of $3.8 million each. They can be embarrassing, as in the theft of celebrity photos from Apple’s iCloud in 2014 or the Ashley Madison breach in 2015. They can be damaging, as when the government of North Korea stole tens of thousands of internal documents from Sony or when hackers stole data about 83 million customer accounts from JPMorgan Chase, both in 2014. They can even affect national security, as in the case of the Office of Personnel Management data breach by — presumably — China in 2015.

On the Internet of Things, integrity and availability threats are much worse than confidentiality threats. It’s one thing if your smart door lock can be eavesdropped upon to know who is home. It’s another thing entirely if it can be hacked to allow a burglar to open the door — or prevent you from opening your door. A hacker who can deny you control of your car, or take over control, is much more dangerous than one who can eavesdrop on your conversations or track your car’s location.

With the advent of the Internet of Things and cyber-physical systems in general, we’ve given the Internet hands and feet: the ability to directly affect the physical world. What used to be attacks against data and information have become attacks against flesh, steel, and concrete.

Today’s threats include hackers crashing airplanes by hacking into computer networks, and remotely disabling cars, either when they’re turned off and parked or while they’re speeding down the highway. We’re worried about manipulated counts from electronic voting machines, frozen water pipes through hacked thermostats, and remote murder through hacked medical devices. The possibilities are literally endless. The Internet of Things will allow for attacks we can’t even imagine.

The increased risks come from three things: software control of systems, interconnections between systems, and automatic or autonomous systems. Let’s look at them in turn:

Software Control. The Internet of Things is a result of everything turning into a computer. This gives us enormous power and flexibility, but it brings insecurities with it as well. As more things come under software control, they become vulnerable to all the attacks we’ve seen against computers. But because many of these things are both inexpensive and long-lasting, many of the patch and update systems that work with computers and smartphones won’t work. Right now, the only way to patch most home routers is to throw them away and buy new ones. And the security that comes from replacing your computer and phone every few years won’t work with your refrigerator and thermostat: on the average, you replace the former every 15 years, and the latter approximately never. A recent Princeton survey found 500,000 insecure devices on the Internet. That number is about to explode.

Interconnections. As these systems become interconnected, vulnerabilities in one lead to attacks against others. Already we’ve seen Gmail accounts compromised through vulnerabilities in Samsung smart refrigerators, hospital IT networks compromised through vulnerabilities in medical devices, and Target Corporation hacked through a vulnerability in its HVAC system. Systems are filled with externalities that affect other systems in unforeseen and potentially harmful ways. What might seem benign to the designers of a particular system becomes harmful when it’s combined with some other system. Vulnerabilities on one system cascade into other systems, and the result is a vulnerability that no one saw coming and no one bears responsibility for fixing. The Internet of Things will make exploitable vulnerabilities much more common. It’s simple mathematics. If 100 systems are all interacting with each other, that’s about 5,000 interactions and 5,000 potential vulnerabilities resulting from those interactions. If 300 systems are all interacting with each other, that’s about 45,000 interactions. 1,000 systems: roughly 500,000 interactions. Most of them will be benign or uninteresting, but some of them will be very damaging.
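
The counts above are just the number of unordered pairs among n mutually interacting systems, n(n−1)/2; a quick check:

```python
from math import comb

def interactions(n: int) -> int:
    # Number of pairwise interactions among n mutually connected systems.
    return comb(n, 2)  # equivalent to n * (n - 1) // 2

print(interactions(100))   # 4950   (~5,000)
print(interactions(300))   # 44850  (~45,000)
print(interactions(1000))  # 499500 (~500,000)
```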

Autonomy. Increasingly, our computer systems are autonomous. They buy and sell stocks, turn the furnace on and off, regulate electricity flow through the grid, and — in the case of driverless cars — automatically pilot multi-ton vehicles to their destinations. Autonomy is great for all sorts of reasons, but from a security perspective it means that attacks can take effect immediately, automatically, and ubiquitously. The more we remove humans from the loop, the faster attacks can do their damage and the more we lose our ability to rely on actual smarts to notice something is wrong before it’s too late.

We’re building systems that are increasingly powerful, and increasingly useful. The necessary side effect is that they are increasingly dangerous. A single vulnerability forced Chrysler to recall 1.4 million vehicles in 2015. We’re used to computers being attacked at scale — think of the large-scale virus infections from the last decade — but we’re not prepared for this happening to everything else in our world.

Governments are taking notice. Last year, both Director of National Intelligence James Clapper and NSA Director Mike Rogers testified before Congress, warning of these threats. They both believe we’re vulnerable.

This is how it was phrased in the DNI’s 2015 Worldwide Threat Assessment: “Most of the public discussion regarding cyber threats has focused on the confidentiality and availability of information; cyber espionage undermines confidentiality, whereas denial-of-service operations and data-deletion attacks undermine availability. In the future, however, we might also see more cyber operations that will change or manipulate electronic information in order to compromise its integrity (i.e. accuracy and reliability) instead of deleting it or disrupting access to it. Decision-making by senior government officials (civilian and military), corporate executives, investors, or others will be impaired if they cannot trust the information they are receiving.”

The DNI 2016 threat assessment included something similar: “Future cyber operations will almost certainly include an increased emphasis on changing or manipulating data to compromise its integrity (i.e., accuracy and reliability) to affect decision making, reduce trust in systems, or cause adverse physical effects. Broader adoption of IoT devices and AI — in settings such as public utilities and healthcare — will only exacerbate these potential effects.”

Security engineers are working on technologies that can mitigate much of this risk, but many solutions won’t be deployed without government involvement. This is not something that the market can solve. Like data privacy, the risks and solutions are too technical for most people and organizations to understand; companies are motivated to hide the insecurity of their own systems from their customers, their users, and the public; the interconnections can make it impossible to connect data breaches with resultant harms; and the interests of the companies often don’t match the interests of the people.

Governments need to play a larger role: setting standards, policing compliance, and implementing solutions across companies and networks. And while the White House Cybersecurity National Action Plan says some of the right things, it doesn’t nearly go far enough, because so many of us are phobic of any government-led solution to anything.

The next president will probably be forced to deal with a large-scale Internet disaster that kills multiple people. I hope he or she responds with both the recognition of what government can do that industry can’t, and the political will to make it happen.

This essay previously appeared on Vice Motherboard.

BoingBoing post.

The NSA and "Intelligence Legalism"

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/the_nsa_and_int.html

Interesting law journal paper: “Intelligence Legalism and the National Security Agency’s Civil Liberties Gap,” by Margo Schlanger:

Abstract: This paper examines the National Security Agency, its compliance with legal constraints and its respect for civil liberties. But even if perfect compliance could be achieved, it is too paltry a goal. A good oversight system needs its institutions not just to support and enforce compliance but also to design good rules. Yet as will become evident, the offices that make up the NSA’s compliance system are nearly entirely compliance offices, not policy offices; they work to improve compliance with existing rules, but not to consider the pros and cons of more individually-protective rules and try to increase privacy or civil liberties where the cost of doing so is acceptable. The NSA and the administration in which it sits have thought of civil liberties and privacy only in compliance terms. That is, they have asked only “Can we (legally) do X?” and not “Should we do X?” This preference for the can question over the should question is part and parcel, I argue, of a phenomenon I label “intelligence legalism,” whose three crucial and simultaneous features are imposition of substantive rules given the status of law rather than policy; some limited court enforcement of those rules; and empowerment of lawyers. Intelligence legalism has been a useful corrective to the lawlessness that characterized surveillance prior to intelligence reform, in the late 1970s. But I argue that it gives systematically insufficient weight to individual liberty, and that its relentless focus on rights, and compliance, and law has obscured the absence of what should be an additional focus on interests, or balancing, or policy. More is needed; additional attention should be directed both within the NSA and by its overseers to surveillance policy, weighing the security gains from surveillance against the privacy and civil liberties risks and costs. That attention will not be a panacea, but it can play a useful role in filling the civil liberties gap intelligence legalism creates.

This is similar to what I wrote in Data and Goliath:

There are two levels of oversight. The first is strategic: are the rules we’re imposing the correct ones? For example, the NSA can implement its own procedures to ensure that it’s following the rules, but it should not get to decide what rules it should follow….

The other kind of oversight is tactical: are the rules being followed? Mechanisms for this kind of oversight include procedures, audits, approvals, troubleshooting protocols, and so on. The NSA, for example, trains its analysts in the regulations governing their work, audits systems to ensure that those regulations are actually followed, and has instituted reporting and disciplinary procedures for occasions when they’re not.

It’s not enough that the NSA makes sure there is a colorable legal interpretation that authorizes what they do. We need to make sure that their understanding of the law is shared with the outside world, and that what they’re doing is a good idea.

isoHunt Founder Settles with Music Industry for $66 Million

Post Syndicated from Ernesto original https://torrentfreak.com/isohunt-founder-settles-cria-66-million/

After years of legal battles, isoHunt and its founder Gary Fung are free at last.

Today, Fung announced that he has settled the last remaining lawsuit with Music Canada, formerly known as the Canadian Recording Industry Association (CRIA).

“After 10 long years, I’m happy to announce the end of isoHunt’s and my lawsuits,” Fung says, noting that he now owes the Canadian music group $66 million.

The multi-million dollar agreement follows an earlier settlement with the MPAA for $110 million, on paper at least. While most site owners would be devastated, Fung long ago moved past that phase and responded rather sarcastically.

“And I want to congratulate both Hollywood and CRIA on their victories, in letting me off with fines of $110m and $66m, respectively. Thank you!” he notes, adding that he’s “free at last”.

The consent order (pdf) signed by the Supreme Court of British Columbia prohibits isoHunt’s founder from operating any file-sharing site in the future.

It further requires Fung to pay damages of $55 million and another $10 million in aggravated punitive damages. The final million dollars covers the costs of the lawsuit.

Although isoHunt shut down in 2013, it took more than two years for the last case to be finalized. The dispute began in the previous decade, when the Canadian music industry went after several prominent torrent sites.

In May 2008, isoHunt received a cease-and-desist letter from the CRIA demanding that founder Gary Fung take the site offline. If Fung didn’t comply, the CRIA said, it would pursue legal action and demand $20,000 for each sound recording the site had infringed.

A similar tactic had worked against Demonoid, but the isoHunt founder didn’t back down so easily. Instead, he filed a lawsuit of his own against the CRIA, asking the court to declare the site legal.

That didn’t work out as isoHunt’s founder had planned, and several years later the tables turned entirely, with the defeat now final.

While the outcome won’t change anything about isoHunt’s demise, Fung is proud that he was always able to shield its users from the various copyright groups attacking it. No identifiable user data was shared at any point.

Fung is also happy for the support the site’s users have given him over the years.

“I can proudly conclude that I’ve kept my word regarding users’ privacy above. To isoHunt’s avid users, it’s worth repeating since I shutdown isoHunt in 2013, that you have my sincerest thanks for your continued support,” Fung notes.

“Me and my staff could not have done it for more than 10 years without you, and that’s an eternity in internet time. It was an interesting and challenging journey for me to say the least, and the most profound business learning experience I could not expect.”

The Canadian entrepreneur can now close the isoHunt book for good and move on to new ventures. One of the projects he just announced is a mobile search tool called “App to Automate Googling” (AAG), for which he is inviting alpha testers.

The original isoHunt site now redirects to the MPAA’s “legal” search engine WhereToWatch. However, the name and design live on via the clone site IsoHunt.to, which still draws millions of visitors per month – much to the frustration of the MPAA and Music Canada.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

A Recipe for Cookies

Post Syndicated from Илия Горанов original http://9ini.babailiica.com/cookies/

This is not about sugary confections but about information technology. The Bulgarian term „бисквитки“ (biscuits) is a direct loan translation of the English word cookies (even though the literal translation would be „курабийки“ rather than „бисквитки“).

So what are cookies for? Cookies are portions of structured information containing various parameters. They are created by the servers that serve web pages and are transmitted in the service part of the HTTP protocol (the so-called HTTP headers) — the transfer protocol browsers use to exchange information with servers. Cookies appeared in the mid-1990s, in the era of HTTP/1.0 (they were standardized later, in RFC 2109). The Internet was far less developed then than it is today, which explains some of HTTP’s peculiarities. The protocol is based on requests (from the client) and responses (from the server), and each request/response pair takes place over a separate connection (socket) between client and server. This scheme is very convenient, because it does not require a permanent, stable Internet connection: the connection is actually used only for a brief moment. Unfortunately, because of this design HTTP is often called a stateless protocol: the server has no way of knowing that a series of consecutive requests came from one and the same client. This is unlike IRC, SMTP, FTP and other protocols created in the same years, which open a single connection and exchange data in both directions. In such protocols the communication starts with a handshake or authentication between the parties, after which both sides know that, for as long as the connection stays open, they are talking to a specific peer.

To overcome this shortcoming of the protocol, the cookie mechanism was introduced after version 0.9 (the first version to see real-world use). A popular story ties the name cookies to the Brothers Grimm tale of Hansel and Gretel, who mark their way through the dark forest by scattering crumbs. Most Bulgarian translations of the tale speak of bread crumbs, a term that has found another home in IT and the web, though in reality it is a German folk tale with many different versions over the years. The comparison is obvious: cookies make it possible to trace the user’s path through the dark forest of the stateless HTTP protocol. (The more commonly cited etymology, it should be said, derives the name from the older „magic cookie“ of Unix programming.)

How do cookies work? Cookies are created by servers and sent to users. With every subsequent request to the same server, the user must send back a copy of the cookies received from it. This gives the server a mechanism for tracing the user’s path — that is, for knowing when one and the same user has made a series of otherwise unrelated requests.
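
The exchange can be seen directly in the headers: the server’s response carries a Set-Cookie line, and the client repeats the name/value pair in a Cookie line on every later request. A minimal sketch using Python’s standard http.cookies module (the name session_id is purely illustrative):

```python
from http.cookies import SimpleCookie

# Server side: build the Set-Cookie header for the response.
server = SimpleCookie()
server["session_id"] = "abc123"
server["session_id"]["path"] = "/"
set_cookie_header = server["session_id"].OutputString()
print("Set-Cookie:", set_cookie_header)   # Set-Cookie: session_id=abc123; Path=/

# Client side: parse the header, then echo the cookie back
# with every subsequent request to the same server.
client = SimpleCookie()
client.load(set_cookie_header)
cookie_header = "; ".join(f"{name}={m.value}" for name, m in client.items())
print("Cookie:", cookie_header)           # Cookie: session_id=abc123
```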

Imagine that you send a first request to the server and it replies that it wants you to authenticate — to supply a username and password. You send them, and the server verifies that it really is you. But because of how HTTP works, once you have sent the name and password, the connection between you and the server is closed. Later you send a new request. The server has no way of knowing that it is you again, because the new request arrives over a new connection.

Now imagine the same scheme with cookie technology added. You send the request; the server replies that it wants a name and password. You send them, and the server returns a cookie recording that the holder of this cookie has already supplied a name and password. The connection closes. Later you send a new request, and since it goes to the same server, the client must send back a copy of the cookies received from it. The new request therefore carries the cookie saying that you supplied a valid name and password earlier.

So cookies are created by servers: the data is sent by the web server together with the response to a request for a resource (for example a web page, a piece of text, an image or another file). The cookie travels as part of the protocol’s service information. When a client (the client software — the browser) receives a response containing a cookie, it has to process it: if it already holds such a cookie, it updates the stored information; if not, it creates the cookie; if the cookie has expired, it destroys it; and so on.

Cookies are often described as small text files stored on the user’s computer. That is not always true: cookies were separate text files in early versions of some browsers (Internet Explorer and Netscape Navigator, for example), but most modern browsers store them differently. Current versions of Mozilla Firefox and Google Chrome keep all cookies in a single file that is an SQLite database. The database approach is a good fit, since cookies are structured information that is convenient to keep in a database, and access to the data is considerably more efficient. Microsoft Edge, however, still stores cookies as text files in the AppData directory.

What parameters do the cookies sent by servers contain? Each cookie may carry: a name, a value, a domain, an address (path), an expiration date, and some security parameters (whether it is valid only over HTTPS, and whether it is accessible only to the transfer protocol — that is, hidden from JavaScript and other application layers on the client). The name and the value are mandatory for every cookie: they give the name of the variable in which the information will be stored and the corresponding value — the stored information itself.
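
All of these attributes can be set on a single cookie. A sketch with Python’s http.cookies (the cookie name, value and domain are illustrative):

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c["prefs"] = "dark"
c["prefs"]["domain"] = ".example.com"   # the domain and all its subdomains
c["prefs"]["path"] = "/"                # the whole site
c["prefs"]["max-age"] = 3600            # lifetime in seconds (one hour)
c["prefs"]["secure"] = True             # sent only over HTTPS
c["prefs"]["httponly"] = True           # invisible to JavaScript on the client

header = c.output()
print(header)
# Set-Cookie: prefs=dark; Domain=.example.com; HttpOnly; Max-Age=3600; Path=/; Secure
```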

The domain is the main part of the address of the website sending the cookie — the part that identifies the machine (the server) on the network. According to the technical specifications, the domain of a cookie must match the domain of the server that sends it, but there are a few exceptions. The first is that when several levels of domains are in use, cookies can be created for different parts of those levels. For example, the response to a request to the domain example.com may set a cookie valid only for example.com, or one valid for that domain and all of its subdomains. A cookie valid for all subdomains is written with a dot before the domain name: .example.com. The second exception applies when the cookie is created not by the server but by an application layer on the client (for example by JavaScript). A js file may be loaded from one domain into an HTML page from another domain: the server sending a cookie can send it on behalf of the domain where the js file lives, while the script itself, running inside a page from the other domain, can create (and read) cookies for the domain of the HTML page.

The address (or path) is the remaining part of the URL. By default cookies are valid for the path / (the root of the website), which means the cookie is valid for every possible address on the server. It is possible, however, to state explicitly a specific address for which a cookie is valid. By convention, addresses represent a path in a hierarchical file structure on the server (although they do not have to), so cookie paths describe the place of their creation in a hierarchical file system. For example, if a cookie’s path is /dir/, it is valid in the directory named dir and in all of its subdirectories.

For a slightly more realistic example: if we use cookies to store authentication information for the administration panel of a website located in the /admin/ directory, we can declare them valid only for the address /admin/. That way the cookies created by the server for the needs of the administration panel will not be sent with requests for other resources on the same server.
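
The matching rule can be sketched as a small function — a simplified version of the path-match algorithm from RFC 6265, written here purely for illustration:

```python
def path_matches(cookie_path: str, request_path: str) -> bool:
    """Simplified RFC 6265 path-match: should a cookie with this
    Path attribute accompany a request for request_path?"""
    if request_path == cookie_path:
        return True
    if request_path.startswith(cookie_path):
        # /admin/ matches /admin/users, but /ad must not match /admin:
        # the prefix must end at a "/" boundary.
        return cookie_path.endswith("/") or request_path[len(cookie_path)] == "/"
    return False

print(path_matches("/admin/", "/admin/users"))  # True
print(path_matches("/admin/", "/blog/index"))   # False
print(path_matches("/", "/anything"))           # True
```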

The expiration date determines how long the user should keep the cookie and return it with every subsequent request to the same server and address (path). When the server wants to delete one of the user’s cookies, it sends that cookie with an expiration date in the past, causing it to expire and be deleted automatically on the user’s side.
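
Deletion is therefore just another Set-Cookie, only with a date in the past (a sketch with Python’s http.cookies; the cookie name is illustrative):

```python
from http.cookies import SimpleCookie

# To delete a cookie, the server re-sends it with an expiration
# date in the past; the client sees it has expired and discards it.
c = SimpleCookie()
c["session_id"] = "deleted"
c["session_id"]["expires"] = "Thu, 01 Jan 1970 00:00:00 GMT"
header = c.output()
print(header)
```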

Cookies also carry parameters whose job is to secure the transmitted data. These are two boolean parameters: one determines whether the cookie is accessible (for both reading and writing) only to the HTTP protocol, or also to the application layer on the client (to JavaScript, for example); the other determines whether the cookie is transmitted over all protocols or only over HTTPS (secure HTTP).

As you might guess, a single exchange may involve more than one cookie. Just as the server can create several cookies at once, the client can return several cookies to the server. That is why, in addition to domains and addresses, cookies also have names.

Cookies are also subject to a number of limits. Most browsers do not allow more than 20 simultaneously valid cookies for one and the same domain; in Mozilla Firefox the limit is 50 and in Opera 30. The size of each individual cookie is limited too: no more than 4 KB (4096 bytes). The cookie specification RFC 2109 from 1997 states that a client may store up to 300 cookies, up to 20 per domain, each up to 4 KB in size. The later specification RFC 6265 from 2011 raises the limits to 3000 cookies in total and 50 per domain. Keep in mind, though, that every cookie travels from the client with every subsequent request to the server: if we hit the ceiling with 50 cookies of 4 KB each, nearly 200 KB will accompany every request in cookies alone, which can be a serious load on traffic even with the technical capabilities of modern Internet access.

Of course, the earlier example, in which we store the user’s successful authentication in a cookie, has many caveats related to guaranteeing security. First of all, it is a bad idea to save the username and password in a cookie, because the cookie is stored on the user’s computer. That means that at any time, if a malicious person gains access to that computer, they can read the username and password from the cookies saved there. On the other hand, if we store only the username without the password, we have no protection against forged cookies: any malicious person can craft a fake cookie containing an arbitrary (someone else’s) username and present themselves to the server under a false identity.

That is why the most commonly used mechanism is this: on every authentication with name and password, once the server has verified them, it creates a temporarily valid identifier and sends it as a cookie. Different technologies call this identifier different things: one-time password (OTP), token, session and so on. Under this scheme the server stores information about the user for a limited time (the lifetime of the session). Each such record (often called a session) gets an identification number, which is sent to the user as a cookie. Since the user returns the identifier with every subsequent request, the server can restore the stored information and make it available on each request. At the same time, the information lives on the server, not on the client, so a malicious user cannot modify or forge it. Moreover, the identifier is valid for a limited period (for example, 30 minutes): even if the cookie with the identifier remains on the user’s computer, the identifier recorded in it will be useless after half an hour. Last but not least, pressing the logout button deletes the user data stored on the server even before the 30 minutes have elapsed. That is why it is important always to use the logout button when leaving online systems.
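
The scheme described above can be sketched in a few lines — a toy in-memory session store, not production code, with illustrative names:

```python
import secrets
import time

SESSION_TTL = 30 * 60          # session lifetime: 30 minutes
sessions = {}                  # session id -> (username, expiry time)

def log_in(username: str) -> str:
    """After the password has been verified, create a session and
    return the random identifier that will be sent as a cookie."""
    sid = secrets.token_hex(16)
    sessions[sid] = (username, time.time() + SESSION_TTL)
    return sid

def current_user(sid: str):
    """Restore the user for an identifier arriving in a cookie."""
    record = sessions.get(sid)
    if record is None:
        return None
    username, expires = record
    if time.time() > expires:
        del sessions[sid]      # expired: the identifier is useless now
        return None
    return username

def log_out(sid: str) -> None:
    """The logout button: discard the server-side state immediately."""
    sessions.pop(sid, None)

sid = log_in("alice")
print(current_user(sid))       # alice
log_out(sid)
print(current_user(sid))       # None
```

Note that the username never travels in the cookie — only the random identifier does, and the server holds the actual state.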

What else is stored in cookies? Practically everything! Cookies are very often used to store individual user settings. When the user changes a setting, the server sends a cookie with that setting and a long expiration date. On every subsequent visit by the same user to the same site, the saved setting is sent along with the request; the server knows about the preferred setting and applies it when producing the response. An example of such a setting is the number of records shown per page: once you change that number, the chosen value can be saved in a cookie, and on every subsequent request the server will know about the setting and return the correct response.

Cookies store other information as well — for example, the goods placed in the shopping cart of an online store, or statistics about when you last visited a given site, how many times you have visited it, which pages you viewed, and how long you stayed on each of them.

Does the European Union ban cookies? The Cookie Monster from Sesame Street would be rather upset to learn that the EU wants to restrict the use of cookies. The truth is quite different, but the story is widely misinterpreted. Let us leave the technological sphere and step into the legal one. First, which are the relevant legal documents? The one usually cited is the European Directive 2009/136/EC of 25 November 2009. In truth, that directive does not deal with cookies directly: it amends earlier directives, among them Directive 2002/22/EC of 7 March 2002 on universal service (of which Internet access is part) and users’ rights, and Directive 2002/58/EC on privacy and electronic communications, where the cookie provision actually lives. The 2009 amendment reads as follows (Article 5(3); p. 30):

5. Article 5(3) is replaced by the following:

"3. Member States shall ensure that the storing of information, or the gaining of access to information already stored, in the terminal equipment of a subscriber or user is only allowed on condition that the subscriber or user concerned has given his or her consent, having been provided with clear and comprehensive information, in accordance with Directive 95/46/EC, inter alia, about the purposes of the processing. This shall not prevent any technical storage or access for the sole purpose of carrying out the transmission of a communication over an electronic communications network, or as strictly necessary in order for the provider of an information society service explicitly requested by the subscriber or user to provide the service."

The preamble of the same directive also notes:

(66) Third parties may wish to store information on the equipment of a user, or gain access to information already stored, for a number of purposes, ranging from the legitimate (such as certain types of cookies) to those involving unwarranted intrusion into the private sphere (such as spyware or viruses). It is therefore of paramount importance that users be provided with clear and comprehensive information when engaging in any activity which could result in such storage or gaining of access. The methods of providing information and offering the right to refuse should be as user-friendly as possible. Exceptions to the obligation to provide information and offer the right to refuse should be limited to those situations where the technical storage or access is strictly necessary for the legitimate purpose of enabling the use of a specific service explicitly requested by the subscriber or user. Where it is technically possible and effective, in accordance with the relevant provisions of Directive 95/46/EC, the user's consent to processing may be expressed by using the appropriate settings of a browser or other application. The enforcement of these requirements should be made more effective by way of enhanced powers granted to the relevant national authorities.

Both quoted passages draw our attention to something else as well: they cite Directive 95/46/EC of 24 October 1995, which governs the protection of individuals with regard to the processing of personal data. Of course, it is hard to claim that a directive from the mid-1990s directly addresses the functioning of the Internet, which at that time was still rather sparsely used, while cookie technology, having appeared only a few years earlier, was still extremely new and rarely employed.

Before we continue with the analysis, we should note that, as far as protecting users from cookies is concerned, none of the three cited directives has yet been transposed into Bulgarian national legislation (at least not in the Electronic Governance Act, the Electronic Communications Act, or the Personal Data Protection Act). Thankfully, the mechanism of EU directives is designed to make them binding on all member states, regardless of whether national legislation exists for the given area.

So why, then, does the EU want to restrict the use of cookies? Beyond the technology's many applications listed so far (storing authentication state, storing settings, tracking user behavior, collecting per-user statistics, marketing analysis, storing shopping-cart data, and so on; some of these overlap or complement each other), cookies can also be used for serious intrusion into users' private space, through various forms of tracking and analysis of users' consumption and behavior on the Internet, for the purpose of serving targeted advertising or for other, even unlawful, ends.

Let us look at a more realistic example based on the technology already discussed. We have a simple informational site about cars, with no specific functionality; it provides no services or anything else. The site, however, uses Google Analytics (a free visitor-statistics tool provided by Google), and its owner, in order to monetize the site's content at least to some minimal degree, has also enabled Google AdWords (a banner-advertising service provided by Google). We also have a user who searches Google for information on repairing a flat tire. The user finds the aforementioned site in Google, clicks the link, and lands on the site. The same user also has an email account in Gmail (a free email service provided by Google). As you can see, throughout the whole journey there has been one constant common denominator: Google. Google is far from the only big player in this market; it is simply the example most accessible to the general public. In effect, Google simultaneously has access to the messages the user has sent and received, to what the user has searched for, to which site he entered, to what he read there (exactly which pages), to how much time he spent on that site, and also to information about every other site the same user has visited in the past, whether from the same computer or from another, as long as all those sites use Google Analytics for their statistics. If the user has a work computer and a home computer and has signed in to his Gmail account from both, Google Analytics can link the sites visited on the two machines, which otherwise have no connection whatsoever, as visits by one and the same user. The user should then not be surprised if he goes to a third place, unconnected in any way with the other two (the home and work computers), signs in to his email, later visits some random site, and sees an advertisement there for new car tires.

Tracking everything described so far is possible precisely through the mechanisms of cookies. It is no accident that they are called a state-keeping mechanism and were created for "tracking the users of the protocol". Tracking, of course, in the positive, purely technological sense of the term, but tracking nonetheless, which in the hands of ill-intentioned parties can take on entirely different dimensions and serve entirely different goals.

All of this can be (and is) combined with other data collected about users: IP geolocation, information about the Internet connection, the device used, display size, operating system, installed software, and many other pieces of data that are supplied automatically as we browse the web.

Again, the cookie technology itself includes protective mechanisms. For example, cookies from one website cannot be read by another site. But the whole scheme breaks down because cookies are accessible to the client's application layer (JavaScript), while at the same time millions of sites around the world take advantage of the otherwise free visitor-statistics service of one and the same provider. That provider is the connecting link in the whole scheme. Each individual site and each individual user holds little information of particular value on their own, but the connecting link holds all of it, and given the will (and believe me, for all these services to be free, the will has long since turned into a need), this accumulated information can be analyzed, processed, and used for any purpose whatsoever.

And since these practices can often reach deep into people's private lives, the European Union has taken measures to oblige providers of Internet services (website owners) to inform users about what data is stored about them on their end devices (that is, on the users' own computers) and for what purposes that data is used.

Several aspects are very important to clarify here. First, neither the use of cookies nor tracking as a process is banned; users are simply required to be informed about what is being done and for what purpose, and the user must explicitly consent to it. The other important detail is that cookies are not the only mechanism for storing information on end devices, and the European legislation is not limited to cookies. Local Storage is a modern alternative to cookies: although it works differently and offers quite different capabilities, it too stores information on end devices, can likewise be used to track users, and can affect their rights regarding the processing of their personal data. In this sense the European directives cover every form of storing information on users' devices, not just cookies. The directives also treat the storage of data for tracking users separately from the cookies needed for the technological functioning of systems on the Internet. A distinction is further drawn between third-party cookies and cookies set by the site owners themselves; in the example with the cookies left by Google Analytics, Google is the third party.

When the data (cookies or otherwise) is stored by third parties (parties other than the service provider and the end user), this does not remove the service provider's obligation to inform users and ask for their consent. In short, if I run a site that uses Google Analytics, the obligation to inform the site's users and request their consent remains mine (that is, the site owner's, as the party providing the service), not the third party's (that is, it is not Google's obligation).

Another interesting fact is that the directive considers it acceptable for the user's consent to be expressed through a setting of the browser or another application. Here we can return to the technology. Some time ago there was P3P (Platform for Privacy Preferences), a technology that started out promisingly but ended up being implemented only by Internet Explorer; development of the specification was eventually discontinued by the W3C. One of the cited reasons is that the planned technology was relatively complex to implement. Today most browsers support Do Not Track (DNT), which is simply an HTTP header named DNT: if it is present in the client's request with a value of 1, it indicates that the user does not consent to being tracked. Of course, the tracking addressed by DNT and the storing and accessing of information on users' end devices addressed by the European directives are not the same thing. You can store and read information on the client without tracking them, which would still fall under the European directives, and you can track the client without storing or reading any data locally (for example via browser fingerprinting, with the data kept entirely on the server).
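Honoring the DNT signal on the server side amounts to one header check. A minimal sketch, assuming a WSGI-style request dictionary where the header arrives as `HTTP_DNT` (the function name and the `analytics_enabled` flag are illustrative):

```python
def wants_tracking_opt_out(environ: dict) -> bool:
    """True when the client sent the Do Not Track header with value 1."""
    return environ.get("HTTP_DNT") == "1"

# Example request: a client that has enabled Do Not Track in the browser.
request = {"HTTP_DNT": "1", "PATH_INFO": "/"}

# A site that respects the signal would skip loading third-party
# analytics for this visitor.
analytics_enabled = not wants_tracking_opt_out(request)
```

Note that honoring DNT is voluntary on the server's part; the header expresses the user's preference but does not technically prevent tracking.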

Finally, let us summarize:

  1. The European Union does not ban cookies;
  2. The European Union provides measures for the protection of personal data, imposing rules that require users' permission whenever data is recorded on or accessed from their end devices;
  3. The informed-consent requirement is not limited to cookies; it covers all existing and any future technologies that allow recording and accessing information on users' end devices;
  4. Users must be informed, and their consent must be requested, regardless of whether the data is recorded or accessed directly by the service provider or through the services of a third party. Put simply, if we use Google Analytics, it is we who must warn the user and request consent, not Google. In this case the service provider (the site itself) acts as a personal data controller (not within the meaning of the Bulgarian Personal Data Protection Act, but within the meaning of the European directives), while Google is a third party authorized by the controller to manage that data on its behalf and at its expense; ergo, the responsibility lies with the controller;
  5. Informing the user and requesting consent is not necessary when the data recorded and read is used for technological purposes tied to providing a service the user has explicitly requested. I would say that even in that case I would personally choose to inform the user about what is recorded and read and why, without, however, requesting consent;
  6. Under the European directives, consent may also be expressed by users through specific technologies created for that purpose, but using those technologies does not remove the need to inform the user about what data is stored and for what purposes it is processed;
  7. The European rules in this area appear not to have been transposed into Bulgaria's national legislation, which does not mean they may be ignored.

VPN Provider PIA Exits Russia After Server Seizures

Post Syndicated from Andy original https://torrentfreak.com/vpn-provider-pia-exits-russia-server-seizures-160712/

In a digital world where surveillance and privacy invasions are becoming more commonplace, increasing numbers of Internet users are improving their online security.

As a result, in recent years there has been an explosion in people deploying privacy-enhancing tools such as VPNs, which enable anyone to add an extra layer of protection against online snoops.

One of the most successful companies in this field is London Trust Media, the makers of the popular Private Internet Access (PIA) service. The company prides itself on its dedication to security and is possibly the only operator to have its strict no-logging claims tested in public.

But while a no-logging policy is an essential requirement for thousands of VPN customers, authorities in some regions see them as a threat. This morning, PIA is reporting a development in Russia which has left it with no other option than to leave the country.

In an email sent out to its users, PIA explains that due to the passing of a new law last year which requires Internet providers to hold logs of Internet traffic for up to a year, it has become a target for Russian authorities.

“We believe that due to the enforcement regime surrounding this new law, some of our Russian Servers (RU) were recently seized by Russian Authorities, without notice or any type of due process. We think it’s because we are the most outspoken and only verified no-log VPN provider,” PIA announced.

The law to which PIA refers was passed by Russia’s State Duma in July 2014 and enacted September 2015. It requires that all web services store the user data of Russians within the country. This means that international companies could be forced to have a physical local presence, to which Russian authorities potentially have access.

While the deadline for compliance is technically September 2016, Private Internet Access says that given the server seizure and future privacy implications, it will no longer be doing business in the region.

“Upon learning of the [seizures], we immediately discontinued our Russian gateways and will no longer be doing business in the region,” the company says.

“Luckily, since we do not log any traffic or session data, period, no data has been compromised. Our users are, and will always be, private and secure.”

Even though PIA has assured its users that there is nothing to fear, some remain concerned over the seizures. To those individuals, PIA is offering additional assurances that it’s going the extra mile to ensure total security.

“To make it clear, the privacy and security of our users is our number one priority,” the company says.

“For preventative reasons, we are rotating all of our certificates. Furthermore, we’re updating our client applications with improved security measures to mitigate circumstances like this in the future, on top of what is already in place.”

If they haven’t already done so, users should update their PIA desktop clients and Android apps to get the new upgrades.

In response to the Russian incident, PIA says it will take the opportunity to evaluate other countries and their policies.

“In any event, we are aware that there may be times that notice and due process are forgone. However, we do not log and are default secure against seizure,” the company concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Serial Swatter, Stalker and Doxer Mir Islam Gets Just 1 Year in Jail

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/07/serial-swatter-stalker-and-doxer-mir-islam-gets-just-1-year-in-jail/

Mir Islam, a 21-year-old Brooklyn man who pleaded guilty to an impressive array of cybercrimes including cyberstalking, “doxing” and “swatting” celebrities and public officials (as well as this author), was sentenced in federal court today to two years in prison. Unfortunately, thanks to time served in this and other cases, Islam will only see a year of jail time in connection with some fairly heinous assaults that are becoming all too common.

While Islam’s sentence fell well short of the government’s request for punishment, the case raises novel legal issues as to how federal investigators intend to prosecute ongoing cases involving swatting — an extremely dangerous prank in which police are tricked into responding with deadly force to a phony hostage crisis or bomb scare at a residence or business.

Mir Islam, at his sentencing hearing today. Sketches copyright by Hennessy / CourtroomArt.com. Yours Truly is pictured in the blue shirt behind Islam.

On March 14, 2014, Islam and a group of as-yet-unnamed co-conspirators used a text-to-speech (TTY) service for the deaf to relay a message to our local police department stating that there was an active hostage situation going on at our modest town home in Annandale, Va. Nearly a dozen heavily-armed officers responded to the call, forcing me out of my home at gunpoint and putting me in handcuffs before the officer in charge realized it was all a hoax.

At the time, Islam and his pals were operating a Web site called Exposed[dot]su, which sought to “dox” public officials and celebrities by listing the name, birthday, address, previous address, phone number and Social Security number of at least 50 public figures and celebrities, including First Lady Michelle Obama, then-FBI director Robert Mueller, and then Central Intelligence Agency Director John Brennan.

Exposed.su also documented which of these celebrities and public figures had been swatted, including a raft of California celebrities and public figures, such as former California Governor Arnold Schwarzenegger, actor Ashton Kutcher, and performer Jay Z.

Exposed[dot]su was built with the help of identity information obtained and/or stolen from ssndob[dot]ru.

At the time, most media outlets covering the sheer amount of celebrity exposure at Exposed[dot]su focused on the apparently startling revelation that "if they can get this sensitive information on these people, they can get it on anyone." But for my part, I was more interested in how they were obtaining this data in the first place.

On March 13, 2013, KrebsOnSecurity featured a story, Credit Reports Sold for Cheap in the Underweb, which sought to explain how the proprietors of Exposed[dot]su had obtained the records for the public officials and celebrities from a Russian online identity theft service called ssndob[dot]ru.

I noted in that story that sources close to the investigation said the assailants were using data gleaned from the ssndob[dot]ru ID theft service to gather enough information so that they could pull credit reports on targets directly from annualcreditreport.com, a site mandated by Congress to provide consumers a free copy of their credit report annually from each of the three major credit bureaus.

Peeved that I’d outed his methods for doxing public officials, Islam helped orchestrate my swatting the very next day. Within the span of 45 minutes, KrebsOnSecurity.com came under a sustained denial-of-service attack which briefly knocked my site offline.

At the same time, my hosting provider received a phony letter from the FBI stating my site was hosting illegal content and needed to be taken offline. And, then there was the swatting which occurred minutes after that phony communique was sent.

All told, the government alleges that Islam swatted at least 19 other people, although only seven of the victims (or their representatives) showed up in court today to tell similarly harrowing stories (I was asked to but did not testify).

Security camera footage of Fairfax County police officers responding to my 2013 swatting incident.

Going into today’s sentencing hearing, the court advised that under the government’s sentencing guidelines Islam was facing between 37 and 46 months in prison for the crimes to which he’d pleaded guilty. But U.S. District Court Judge Randolph Moss seemed especially curious about the government’s rationale for charging Islam with conspiracy to transmit a threat to kidnap or harm using a deadly weapon.

Judge Moss said the claim raises a somewhat novel legal question: Can the government allege the use of deadly force when the perpetrator of a swatting incident did not actually possess a weapon?

Corbin Weiss, an assistant US attorney and a cybercrime coordinator with the U.S. Department of Justice, argued that in most of the swatting attacks Islam perpetrated he expressed to emergency responders that any responding officers would be shot or blown up. Thus, the government argued, Islam was using police officers as a proxy for assault with a deadly weapon by ensuring that responding officers would be primed to expect a suspect who was armed and openly hostile to police.

Islam’s lawyer argued that his client suffered from multiple psychological disorders, and that he and his co-conspirators orchestrated the swattings and the creation of exposed[dot]su out of a sense of “anarchic libertarianism,” bent on exposing government overreach on consumer privacy and use of force issues.

As if to illustrate his point, a swatting victim identified by the court only as Victim #4 was represented by Fairfax, Va. lawyer Mark Dycio. That particular victim did not wish to be named or show up in court, but follow-up interviews confirmed that Dycio was representing Wayne LaPierre, the executive vice president of the National Rifle Association.

According to Dycio, police responded to reports of a hostage situation at the NRA boss’s home just days after my swatting in March 2013. Impersonating LaPierre, Islam told police he had killed his wife and that he would shoot any officers responding to the scene. Dycio said police initially had difficulty identifying the object in LaPierre’s hand when he answered the door. It turned out to be a cell phone, but Dycio said police assumed it was a weapon and stripped the cell phone from his hands when entering his residence. The police could have easily mistaken the mobile phone for a weapon, Dycio said.

Another victim who spoke at today's hearing was Stephen P. Heymann, an assistant U.S. attorney in Boston. Heymann was swatted because he helped prosecute the much-maligned case against the late Aaron Swartz, a computer programmer who committed suicide after the government, by most estimations, overstepped its bounds by charging him with hacking for figuring out an automated way to download academic journals from the Massachusetts Institute of Technology (MIT).

Heymann, whose disability requires him to walk with a cane, recounted the early morning hours of April 1, 2013, when police officers surrounded his home in response to a swatting attack launched by Islam on his residence. Heymann recalled worrying that officers responding to the phony claim might confuse his cane with a deadly weapon.

One of the victims represented by a proxy witness in today’s hearings was the wife of a SWAT team member in Arizona who recounted several tense hours hunkered down at the University of Arizona, while her husband joined a group of heavily-armed police officers who were responding to a phony threat about a shooter on the campus.

Not everyone had nightmare swatting stories that aligned neatly with Islam's claims. A woman representing an anonymous "Victim #3" appeared in lieu of a cheerleader at the University of Arizona whom Islam admitted to cyberstalking for several months. When the victim stopped responding to Islam's overtures, he phoned in an active shooter threat to the local police claiming that a crazed gunman was on the loose at the University of Arizona campus.

According to Robert Sommerfeld, police commander for the University of Arizona, that 2013 swatting incident involved 54 responding officers, all of whom were prevented from responding to a real emergency as they moved from building to building and room to room at the university, searching for a fictitious assailant. Sommerfeld estimates that Islam’s stunt cost local responders almost $40,000, and virtually brought the business district surrounding the university to a standstill for the better part of the day.

Toward the end of today’s sentencing hearing, Islam — bearded, dressed in a blue jumpsuit and admittedly 75 pounds lighter than at the time of his arrest — addressed the court. Those in attendance who were hoping for an apology or some show of remorse from the accused were left wanting as the defendant proceeded to blame his crimes on multiple psychological disorders which he claimed were not being adequately addressed by the U.S. prison system. Not once did Islam offer an apology to his victims, nor did he express remorse for his actions.

“I didn’t expect to go as far as I did, but because of these disorders I felt I was invincible,” Islam told the court. “The mistakes I made before, I have to pay for that. I understand that.”

Sentences that noticeably depart from the government's sentencing guidelines are grounds for appeal by the defendant, and Judge Moss today seemed reluctant to imprison Islam for the maximum 46 months allowed under the criminal statutes Islam had admitted to violating. Judge Moss also seemed to ignore the fact that Islam expressed exactly zero remorse for his crimes.

Central to the judge’s reluctance to sentence Islam to the statutory maximum penalty was Islam’s 2012 arrest in connection with a separate cybercrime sting orchestrated by the FBI called Operation Card Shop, in which federal agents created a fake cybercrime forum dedicated to credit card fraud called CarderProfit[dot]biz.

U.S. law enforcement officials in Washington, D.C. involved in prosecuting Islam for his swatting, doxing and stalking crimes were confident that Islam would be sentenced to at least two years in prison for trying to sell and buy stolen credit cards from federal agents in the New York case, thanks to a law that imposes a mandatory two-year sentence for crimes involving what the government terms as “aggravated identity theft.”

Much to the government’s chagrin, however, the New York judge in that case sentenced Islam to just one day in jail. But by his own admission, even while Islam was cooperating with federal prosecutors in New York he was busy orchestrating his swatting attacks and administering the Exposed[dot]su Web site.

Islam was re-arrested in September 2013 for violating the terms of his parole, and for the swatting and doxing attacks to which he pleaded guilty. But the government didn't detain Islam in connection with those crimes until July 2015. Islam has been in federal detention since then, and Judge Moss seemed eager to ensure that this time would count against Islam's sentence, meaning that Islam will serve just 12 months of his 24-month sentence before being released.

There is absolutely no question that we need to have a serious, national conversation about excessive use of force by police officers, as well as the over-militarization of local police forces nationwide.

However, no one should be excused for perpetrating these potentially deadly swatting hoaxes, regardless of the rationale. Judge Moss, in explaining his brief deliberation on arriving at Islam’s two-year (attenuated) sentence, said he hoped to send a message to others who would endeavor to engage in swatting attacks. In my estimation, today’s sentence sent the wrong message, and missed that mark by a mile.

Anonymization and the Law

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/anonymization_a.html

Interesting paper: “Anonymization and Risk,” by Ira S. Rubinstein and Woodrow Hartzog:

Abstract: Perfect anonymization of data sets has failed. But the process of protecting data subjects in shared information remains integral to privacy practice and policy. While the deidentification debate has been vigorous and productive, there is no clear direction for policy. As a result, the law has been slow to adapt a holistic approach to protecting data subjects when data sets are released to others. Currently, the law is focused on whether an individual can be identified within a given set. We argue that the better locus of data release policy is on the process of minimizing the risk of reidentification and sensitive attribute disclosure. Process-based data release policy, which resembles the law of data security, will help us move past the limitations of focusing on whether data sets have been “anonymized.” It draws upon different tactics to protect the privacy of data subjects, including accurate deidentification rhetoric, contracts prohibiting reidentification and sensitive attribute disclosure, data enclaves, and query-based strategies to match required protections with the level of risk. By focusing on process, data release policy can better balance privacy and utility where nearly all data exchanges carry some risk.

The Difficulty of Routing around Internet Surveillance States

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/the_difficulty_.html

Interesting research: “Characterizing and Avoiding Routing Detours Through Surveillance States,” by Anne Edmundson, Roya Ensafi, Nick Feamster, and Jennifer Rexford.

Abstract: An increasing number of countries are passing laws that facilitate the mass surveillance of Internet traffic. In response, governments and citizens are increasingly paying attention to the countries that their Internet traffic traverses. In some cases, countries are taking extreme steps, such as building new Internet Exchange Points (IXPs), which allow networks to interconnect directly, and encouraging local interconnection to keep local traffic local. We find that although many of these efforts are extensive, they are often futile, due to the inherent lack of hosting and route diversity for many popular sites. By measuring the country-level paths to popular domains, we characterize transnational routing detours. We find that traffic is traversing known surveillance states, even when the traffic originates and ends in a country that does not conduct mass surveillance. Then, we investigate how clients can use overlay network relays and the open DNS resolver infrastructure to prevent their traffic from traversing certain jurisdictions. We find that 84% of paths originating in Brazil traverse the United States, but when relays are used for country avoidance, only 37% of Brazilian paths traverse the United States. Using the open DNS resolver infrastructure allows Kenyan clients to avoid the United States on 17% more paths. Unfortunately, we find that some of the more prominent surveillance states (e.g., the U.S.) are also some of the least avoidable countries.

Kim Dotcom Hints at Second Coming of Megaupload

Post Syndicated from Andy original https://torrentfreak.com/kim-dotcom-hints-at-second-coming-of-megaupload-160706/

With multiple legal cases underway in several jurisdictions, Kim Dotcom is undoubtedly a man with things on his mind.

In New Zealand, he’s fighting extradition to the United States. And in the United States he’s fighting a government that wants to bring him to justice on charges of copyright infringement, conspiracy, money laundering and racketeering.

After dramatically launching and then leaving his Mega file-hosting site following what appears to have been an acrimonious split, many believed that Dotcom had left the file-sharing space for good. But after a period of quiet, it now transpires that the lure of storing data has proven too much of a temptation for the businessman.

In a follow-up to previous criticism of his former company, earlier today Dotcom took another shot at Mega. That was quickly followed by a somewhat surprising announcement.

“A new site is in the making. 100gb free cloud storage,” Dotcom said.

Intrigued, TorrentFreak spoke with Dotcom to find out more. Was he really planning to launch another file-sharing platform?

“I can say that this year I have set things in motion and a new cloud storage site is currently under development,” Dotcom confirmed.

“I’m excited about the new innovations the site will contain.”

When pressed on specific features for the new platform, Dotcom said it was too early to go into details. However, we do know that the site will enable users to sync all of their devices and there will be no data transfer limits.

For the privacy-conscious, Dotcom also threw out another small bone, noting that the site will also feature on-the-fly encryption. Given the German’s attention to security in recent years, it wouldn’t be a surprise if additional features are added before launch.

“Eight years of knowledge and a long planning period went into this. It will be my best creation yet,” Dotcom told TF.

A potential launch date for the site hasn’t been confirmed but the Megaupload and Mega founder is currently teasing the hashtag #5thRaidAnniversary, suggesting that his new project will launch in January 2017, five years after the Megaupload raids.

Of course, we also asked Dotcom if he’d decided on a name for his new cloud-storage site. Typically he’s playing his cards close to his chest and leaving us to fill in the blanks, but he hinted that an old name with a big reputation might be making a comeback.

“The name of the new site will make people happy,” he told us.

TF will be getting a sneak peek at the site when it’s ready for launch but in the meantime, readers might be wondering what has happened to Dotcom’s censorship-resistant MegaNet project.

“Mobile networks and devices still have to catch up with my vision for MegaNet and it will probably not be before 2018 until a beta goes live,” Dotcom concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

DMCA Notices Nuke 8,268 Projects on Github

Post Syndicated from Andy original https://torrentfreak.com/dmca-notices-nuke-8268-projects-on-github-160629/

Without a doubt, Github is a huge player in the world of coding. The platform is the largest of its type in the world with the company currently reporting 15 million users collaborating across 38 million repositories.

As the development platform used by file-sharing projects including the infamous Popcorn Time, Github appears a few times a year here on TF. Those appearances are often due to various types of copyright disputes, from allegedly infringing projects to allegedly stolen code.

When it comes to Github copyright complaints, those reported here are the tip of a vast iceberg but thanks to the transparency report just published by the company, we now have a much clearer idea of the numbers involved.

“In 2015, we received significantly more takedown notices, and took down significantly more content, than we did in 2014,” Github reports.

In 2014, the company received just 258 DMCA notices, with 17 of those responded to with a counter-notice or retraction. In 2015, that number jumped to 505 takedown notices, with just 62 the subject of counters or withdrawals.
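For quick context, the share of notices that drew a counter-notice or retraction can be computed directly from those figures (a trivial sketch):

```python
# Share of takedown notices countered or retracted, per the figures above.
def countered_share(countered, total):
    """Percentage of notices that drew a counter-notice or retraction."""
    return round(100 * countered / total, 1)

print(countered_share(17, 258))  # 6.6  (2014)
print(countered_share(62, 505))  # 12.3 (2015)
```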

But while tracking and reporting the numbers of DMCA notices is useful, the numbers shown above obscure a more serious situation. Copyright holders are not limited to reporting one URL or location per DMCA notice. In fact, each notice filed can target tens, hundreds, or even thousands of allegedly infringing locations.

“Often, a single takedown notice can encompass more than one project. We wanted to look at the total number of projects, such as repositories, Gists, and Pages sites, that we had taken down due to DMCA takedown requests in 2015,” Github writes.

When processed, a much bigger picture was revealed.

By any measure, September 2015 was a particularly active month and this naturally raised alarm bells at Github. Upon investigation, it became clear that the company had received DMCA notices that targeted many repositories all at once.

“Usually, the DMCA reports we receive are from people or organizations reporting a single potentially infringing repository. However, every now and then we receive a single notice asking us to take down many repositories,” Github explains.

“We classified ‘Mass Removals’ as any takedown notice asking us to remove content from more than one hundred repositories, counting each fork separately, in a single takedown notice.”

When these types of notices are excluded from the report, Github says that DMCA notice frequency normalizes across the year, but they still represent a significant proportion of the notices received. (A ‘Frequent Noticer’ is someone who sends more than four notices in a year.)
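The report's two definitions amount to simple thresholds. A toy sketch of them follows; the thresholds come from the report's wording above, but the notice records themselves are invented for illustration.

```python
# Toy sketch of GitHub's two stated classifications. Thresholds are from
# the report; the notice data is invented.

MASS_REMOVAL_MIN_REPOS = 100   # forks counted separately
FREQUENT_NOTICER_MIN = 4       # "more than four notices in a year"

def is_mass_removal(notice):
    return notice["repos"] > MASS_REMOVAL_MIN_REPOS

def frequent_noticers(notices):
    counts = {}
    for n in notices:
        counts[n["sender"]] = counts.get(n["sender"], 0) + 1
    return sorted(s for s, c in counts.items() if c > FREQUENT_NOTICER_MIN)

notices = [{"sender": "acme", "repos": 350}] + \
          [{"sender": "acme", "repos": 1}] * 5 + \
          [{"sender": "bob", "repos": 2}]

print([n["repos"] for n in notices if is_mass_removal(n)])  # [350]
print(frequent_noticers(notices))                           # ['acme'] (6 notices)
```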

“While 83% of our 505 DMCA takedown notices came in from individuals and organizations sending requests to take down small numbers of repositories, the remaining 17% of notices accounted for the overwhelming majority of the content we actually removed,” Github says.

“In all, fewer than twenty individual notice senders requested removal of over 90% of the content GitHub took down in 2015.”

Finally, Github provides detail on important issues surrounding user privacy, which mainly affects those who maintain projects that are likely to attract legal attention.

In 2014, Github received a total of 10 subpoenas relating to projects it hosts. Last year that grew to 12 and in total the company handed over information in 83% of cases.

However, due to gag orders, affected users were given notice in just 30% of cases. Github received seven gag orders in 2015, up from four in 2014.

Finally, Github touches on the issue of National Security Orders.

“We are not allowed to say much about this last category of legal disclosure requests, including national security letters from law enforcement and orders from the Foreign Intelligence Surveillance Court,” the company writes.

“If one of these requests comes with a gag order — and they usually do — that not only prevents us from talking about the specifics of the request but even the existence of the request itself.”

To that end, Github ‘reveals’ that it received somewhere between zero and 249 National Security Orders in 2015. The full report is available here.

Facebook Using Physical Location to Suggest Friends

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/06/facebook_using_.html

This could go badly:

“People You May Know are people on Facebook that you might know,” a Facebook spokesperson said. “We show you people based on mutual friends, work and education information, networks you’re part of, contacts you’ve imported and many other factors.”

One of those factors is smartphone location. A Facebook spokesperson said, though, that shared location alone would not result in a friend suggestion; in the case reported, the two parents must have had something else in common, such as overlapping networks.

“Location information by itself doesn’t indicate that two people might be friends,” said the Facebook spokesperson. “That’s why location is only one of the factors we use to suggest people you may know.”

The article goes on to describe situations where you don’t want Facebook to do this: Alcoholics Anonymous meetings, singles bars, some Tinder dates, and so on. But this is part of Facebook’s aggressive use of location data in many of its services.

BoingBoing post.

EDITED TO ADD: Facebook backtracks.

Now Open – AWS Asia Pacific (Mumbai) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-asia-pacific-mumbai-region/

We are expanding the AWS footprint again, this time with a new region in Mumbai, India. AWS customers in the area can use the new Asia Pacific (Mumbai) Region to better serve end users in India.

New Region
The new Mumbai region has two Availability Zones, raising the global total to 35. It supports Amazon Elastic Compute Cloud (EC2), with C4, M4, T2, D2, I2, and R3 instances available, and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, and Elastic Load Balancing.

It also supports the following services:

There are now three edge locations (Mumbai, Chennai, and New Delhi) in India. The locations support Amazon Route 53, Amazon CloudFront, and S3 Transfer Acceleration. AWS Direct Connect support is available via our Direct Connect Partners (listed below).

This is our thirteenth region (see the AWS Global Infrastructure map for more information). As usual, you can see the list of regions in the region menu of the Console.

Customers
There are over 75,000 active AWS customers in India, representing a diverse base of industries. In the time leading up to today’s launch, we have provided some of these customers with access to the new region in preview form. Two of them (Ola Cabs and NDTV) were kind enough to share some of their experience and observations with us:

Ola Cabs’ mobile app leverages AWS to redefine point-to-point transportation in more than 100 cities across India. AWS allows Ola to innovate faster, delivering new features and services to its customers without compromising on availability or the customer experience. Ankit Bhati (CTO and Co-Founder) told us:

We are using technology to create mobility for a billion Indians, by giving them convenience and access to transportation of their choice. Technology is a key enabler, where we use AWS to drive supreme customer experience, and innovate faster on new features & services for our customers. This has helped us reach 100+ cities & 550K driver partners across India. We do petabyte scale analytics using various AWS big data services and deep learning techniques, allowing us to bring our driver-partners close to our customers when they need them. AWS allows us to make 30+ changes a day to our highly scalable micro-services based platform consisting of 100s of low latency APIs, serving millions of requests a day. We have tried the AWS India region. It is great and should help us further enhance the experience for our customers.


NDTV, India’s leading media house, is watched by millions of people across the world. NDTV has been using AWS since 2009 to run its video platform and all of its web properties. During the Indian general elections in May 2014, NDTV fielded an unprecedented amount of web traffic that scaled 26X, from 500 million hits per day to 13 billion hits on Election Day (regularly peaking at 400K hits per second), all running on AWS. According to Kawaljit Singh Bedi (CTO of NDTV Convergence):

NDTV is pleased to report very promising results in terms of reliability and stability of AWS’ infrastructure in India in our preview tests. Based on tests that our technical teams have run in India, we have determined that the network latency from the AWS India infrastructure Region are far superior compared to other alternatives. Our web and mobile traffic has jumped by over 30% in the last year and as we expand to new territories like eCommerce and platform-integration we are very excited on the new AWS India region launch. With the portfolio of services AWS will offer at launch, low latency, great reliability, and the ability to meet regulatory requirements within India, NDTV has decided to move these critical applications and IT infrastructure all-in to the AWS India region from our current set-up.

 


Here are some of our other customers in the region:

Tata Motors Limited, a leading Indian multinational automotive manufacturer, runs its telematics systems on AWS. Fleet owners use this solution to monitor all vehicles in their fleet in real time. AWS has helped Tata Motors become more agile and has increased their speed of experimentation and innovation.

redBus is India’s leading bus ticketing platform that sells their tickets via web, mobile, and bus agents. They now cover over 67K routes in India with over 1,800 bus operators. redBus has scaled to sell more than 40 million bus tickets annually, up from just 2 million in 2010. At peak season, there are over 100 bus ticketing transactions every minute. The company also recently developed a new SaaS app on AWS that gives bus operators the option of handling their own ticketing and managing seat inventories. redBus has gone global expanding to new geographic locations such as Singapore and Peru using AWS.

Hotstar is India’s largest premium streaming platform with more than 85K hours of drama and movies and coverage of every major global sporting event. Launched in February 2015, Hotstar quickly became one of the fastest adopted new apps anywhere in the world. It has now been downloaded by more than 68M users and has attracted followers on the back of a highly evolved video streaming technology and high attention to quality of experience across devices and platforms.

Macmillan India has provided publishing services to the education market in India for more than 120 years. The company moved its core enterprise applications (Business Intelligence, Sales and Distribution, Materials Management, Financial Accounting and Controlling, Human Resources, and a customer relationship management system) from an existing data center in Chennai to AWS. By moving to AWS, Macmillan India has boosted SAP system availability to almost 100 percent and reduced the time it takes to provision infrastructure from 6 weeks to 30 minutes.

Partners
We are pleased to be working with a broad selection of partners in India. Here’s a sampling:

  • AWS Premier Consulting Partners – BlazeClan Technologies Pvt. Limited, Minjar Cloud Solutions Pvt Ltd, and Wipro.
  • AWS Consulting Partners – Accenture, BluePi, Cloudcover, Frontier, HCL, Powerupcloud, TCS, and Wipro.
  • AWS Technology Partners – Freshdesk, Druva, Indusface, Leadsquared, Manthan, Mithi, Nucleus Software, Newgen, Ramco Systems, Sanovi, and Vinculum.
  • AWS Managed Service Providers – Progressive Infotech and Spruha Technologies.
  • AWS Direct Connect Partners – AirTel, Colt Technology Services, Global Cloud Xchange, GPX, Hutchison Global Communications, Sify, and Tata Communications.

Amazon Offices in India
We have opened six offices in India since 2011 – Delhi, Mumbai, Hyderabad, Bengaluru, Pune, and Chennai. These offices support our diverse customer base in India including enterprises, government agencies, academic institutions, small-to-mid-size companies, startups, and developers.

Support
The full range of AWS Support options (Basic, Developer, Business, and Enterprise) is also available for the Mumbai Region. All AWS support plans include an unlimited number of account and billing support cases, with no long-term contracts.

Compliance
Every AWS region is designed and built to meet rigorous compliance standards including ISO 27001, ISO 9001, ISO 27017, ISO 27018, SOC 1, SOC 2, and PCI DSS Level 1 (to name a few). AWS implements an Information Security Management System (ISMS) that is independently assessed by qualified third parties. These assessments address a wide variety of requirements, which are communicated to customers by making certifications and audit reports available, either on our public-facing website or upon request.

To learn more, take a look at the AWS Cloud Compliance page and our Data Privacy FAQ.

Use it Now
This new region is now open for business and you can start using it today! You can find additional information about the new region, documentation on how to migrate, customer use cases, information on training and other events, and a list of AWS Partners in India on the AWS site.

We have set up a seller of record in India (known as AISPL); please see the AISPL customer agreement for details.


Jeff;

 

Help! My VPN Provider Is Compromised By a Gag Order!

Post Syndicated from Ernesto original https://torrentfreak.com/vpn-provider-proxy-sh-compromised-gag-order-160626/

Millions of Internet users around the world use a VPN to protect their privacy online. One of the key benefits is that it hides one’s true IP-address from third-party monitoring outfits, countering a lot of unwanted snooping.

However, law enforcement is not always happy with these services and in extreme cases can compel VPN providers to start logging internal connections to catch a perpetrator.

This is what appears to have happened to Seychelles-based VPN service Proxy.sh. Earlier this month the company excluded one of its nodes from its warrant canary.

“We would like to inform our users that we do not wish any longer to mention France 8 (85.236.153.236) in our warrant canary until further notice,” the company announced on its website, and via email to its customers.

Proxy.sh’s warning

The warrant canary states that no warrants, searches or seizures of any kind have been received, but this is no longer true for the French node. The indirect way this has been announced suggests that the company is not allowed to communicate about it publicly.

TorrentFreak reached out to Proxy.sh hoping to get some additional information. While no further details were provided, the VPN provider strongly advises its users not to connect to the ‘compromised’ node.

“We recommend our users to no longer connect to it. We are striving to do whatever it takes to include that node into our warrant canary again,” Proxy.sh says.

“The warrant canary has been particularly designed to make sure we could still move without being legally able to answer questions in a more detailed manner. We are happy to see it put to use after all and that our users are made aware of it,” they add.
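Canary schemes like this lend themselves to automated monitoring: a client keeps a list of the nodes it relies on and alerts when one disappears from the canary. A hypothetical sketch follows; the canary text and node names below are invented, not Proxy.sh's actual wording.

```python
# Hypothetical sketch of client-side warrant canary monitoring: flag any
# node you use that the canary text no longer vouches for.

def missing_nodes(canary_text, my_nodes):
    """Nodes you rely on that the canary no longer mentions."""
    return [n for n in my_nodes if n not in canary_text]

canary = """No warrants, searches or seizures of any kind have ever
been received for: France 1, France 2, Netherlands 1."""

print(missing_nodes(canary, ["France 2", "France 8"]))  # ['France 8']
```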

The announcement will come as a shock to most Proxy.sh users and many will be wondering what they should do next. A good question, but unfortunately not one with an easy answer.

Leave or stay?

Some users may be inclined to leave. Why stay with a VPN provider that’s partly compromised if there are many other alternatives out there? This is a logical and understandable response.

On the other hand, one can also value Proxy.sh’s transparency in the matter. The company takes its warrant canary seriously where other VPN providers, with or without a warrant canary, may have stayed quiet.

Ironically, the fact that Proxy.sh received a gag order increases the trustworthiness of the company itself, although that comes at a price.

We suspect that there are only a few VPN providers that would suspend their operations “Lavabit style” on receipt of a narrowly targeted gag order that doesn’t compromise its service as a whole. Considering the fact that only one node is in question, the request does appear to be rather targeted in this case.

It’s also worth keeping in mind that many large Internet companies including Google and Facebook receive gag orders on a regular basis. Most users have no clue that this is happening, and others simply don’t care.

Trust?

VPN users who would prefer their VPN provider to shut down instead of complying with a gag order should leave, that much is clear. But how do you know that the next choice will be as transparent as Proxy.sh?

As is often the case it all boils down to trust. Do you trust your VPN provider to handle your private communications carefully, and to what degree does a gag order on one of the nodes change this?

How one answers this question is a matter of personal preference.

Most of our questions to Proxy.sh remained unanswered, presumably due to the court order, but the company was able to provide some additional details on their compliance with orders from various jurisdictions.

While the company is incorporated in the Seychelles, it also complies with orders from other jurisdictions it operates from.

“Our company respects the law everywhere it operates, but it still has the option to cooperate fully while ceasing any further operations in any specific jurisdiction,” Proxy.sh says.

“Depending on the level of threat to our users’ privacy and according to our legal advisers, we take the decision to bring updates to our warrant canary either for a specific node or for a whole country.”

So what would you do in this situation?

AWS Earns Department of Defense Impact Level 4 Provisional Authorization

Post Syndicated from Chris Gile original https://blogs.aws.amazon.com/security/post/Tx958PD4LBSXN5/AWS-Earns-Department-of-Defense-Impact-Level-4-Provisional-Authorization

I am pleased to share that, for our AWS GovCloud (US) Region, AWS has received a Defense Information Systems Agency (DISA) Provisional Authorization (PA) at Impact Level 4 (IL4). This will allow Department of Defense (DoD) agencies to use the AWS Cloud for production workloads with export-controlled data, privacy information, and protected health information as well as other controlled unclassified information. This new authorization continues to demonstrate our advanced work in the public sector space; you might recall AWS was the first cloud service provider to obtain an Impact Level 4 PA in August 2014, paving the way for DoD pilot workloads and applications in the cloud. Additionally, we recently achieved a FedRAMP High provisional Authorization to Operate (P-ATO) from the Joint Authorization Board (JAB), also for AWS GovCloud (US), and today’s announcement allows DoD mission owners to continue to leverage AWS for critical production applications.

DISA is a support agency of the DoD, providing, operating, and assuring information-sharing capabilities and a globally accessible enterprise information infrastructure in direct support of mission and coalition partners. DISA will leverage AWS GovCloud (US) continuous monitoring reports managed by the FedRAMP program.

Cloud computing technology and services provide the DoD with the opportunity to deploy an Enterprise Cloud Environment aligned with Federal Department-wide Information Technology (IT) strategies and efficiency initiatives, including federal data center consolidation.

“Naturally, we’re excited to extend our critical, secure cloud capabilities to our Defense customers and the effort we pour into that support is demonstrated by this significant achievement,” said Chad Woolf, AWS Director of Risk & Compliance. “Our DoD IL4 authorization gives Defense agencies a definitive path to leverage the agile and secure capabilities of the cloud for highly sensitive Defense workloads.”

For a list of frequently asked questions, please visit our AWS DoD Compliance page. DoD agencies can request the AWS GovCloud (US) IL4 Security Package by submitting a Compliance Support Request to the AWS public sector sales and business development team. For more information on AWS security and compliance, see the AWS Security Center and the AWS Compliance Center.

– Chris Gile, Senior Manager, AWS Public Sector Risk & Compliance

Up1 – Client Side Encrypted Image Host

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/zT_dwx8Lw-o/

Up1 is a client-side encrypted image host that can also encrypt text and other data, and then store them with the server knowing nothing about the contents. It has the ability to view images, text with syntax highlighting, and short videos, and to serve arbitrary binaries as downloadables. How it Works: Before an image is uploaded, […]

Read the full post at darknet.org.uk
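The general pattern Up1 describes (encrypt on the client, upload only ciphertext, keep the key out of the server's hands, for example in the URL fragment) can be illustrated with a toy cipher. This is illustration only, not Up1's actual scheme; a real client would use a vetted AEAD such as AES-GCM.

```python
# Toy illustration of the client-side encryption pattern. The stream
# cipher below (SHA-256 in counter mode, XORed with the plaintext) is
# NOT secure and is used only to show the data flow.

import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: applying it twice with the same key decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)                    # stays client-side
ciphertext = encrypt(key, b"secret image bytes") # all the server ever stores
assert ciphertext != b"secret image bytes"
assert encrypt(key, ciphertext) == b"secret image bytes"
```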

Researchers Crack ‘Social DRM’ EBook Watermarks

Post Syndicated from Andy original https://torrentfreak.com/researchers-crack-social-drm-ebook-watermarks-160625/

The unauthorized copying and distribution of copyrighted content is a multi-billion dollar puzzle that entertainment industry companies are desperate to solve.

As such, anti-piracy companies are always trying to come up with new ways to stop people from sharing that material online. With that an almost impossible task, some have taken to watermarking instead, with the aim of tracking content and providing a trail back to the source.

What watermarking (so-called ‘Social DRM’) offers over more traditional DRM mechanisms is that it limits inconvenience to the end user and doesn’t hinder file compatibility across devices. However, it does have serious privacy implications for those using ‘infected’ files.

This problem has become a thorn in the side of a group of researchers calling themselves the Institute for Biblio-Immunology. In an email sent to TorrentFreak this week, the group detailed its work against the BooXtream watermarking system offered by Dutch company Icontact.

It all began when publisher Verso Books published an eBook version of Aaron Swartz’s ‘The Boy Who Could Change the World’. This edition of the book prompted an angry response from some quarters and the addition of BooXtream watermarks only made matters worse.

The problem is that BooXtream embeds the personal details of the eBook buyer into the book itself, and this stays with the file forever. If that book turns up anywhere where it shouldn’t, that purchaser can be held responsible.

Sean B. Palmer, the “virtual executor” of Aaron Swartz, subsequently asked Verso to remove the watermarks. They refused and it lit a fire under the Institute for Biblio-Immunology (IBI).

After a long process dissecting BooXtream’s ‘Social DRM’ the researchers have now published a lengthy communique which reveals how the watermarking system works and can be defeated.

Speaking with TorrentFreak, IBI says its motivation is clear. Books should inform buyers, not breach their privacy.

“Books should be used as tools for disseminating knowledge and information. What ‘social DRM’ watermarking systems do instead is turn books into tools of surveillance and oppression by monitoring who shares what knowledge, where,” IBI explain.

“We don’t like this, and because the publisher Verso has refused to remove the watermarks themselves, we decided to do it for them, and to show everyone how these systems work.”

But there are bigger issues at stake. While people in the West take the freedom to read books of their choosing for granted, not everyone has that luxury.

“Imagine if a watermarked ebook contains someone’s name (as many do). Suppose that someone is reading that watermarked ebook under a regime that bans the particular kind of material covered in that book,” IBI add.

“If the operatives of the regime see the watermark, they would then be able to arrest and perhaps even execute the purchaser of the ebook if they too are living under the same regime.”

But matters of life and death aside, IBI say they believe that people should not only be able to read whatever they want, they should also be able to share that knowledge with others.

“That’s how information spreads across cultures, through unrestrained, free propagation of knowledge. Watermarking systems attempt to corrupt these vectors of knowledge transmission by identifying and then filing legal action against some readers,” they conclude.

The lengthy report can be found here. Much of it is fairly technical, but in a follow-up email, IBI pointed TF to a Github page containing a script to automate the processes detailed in their communique.
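The communique's core observation, that an EPUB is simply a ZIP of XHTML and metadata, means personalized watermark strings can be located by direct search. The sketch below is a stand-in illustration, not the IBI script; the file names and marker string are invented.

```python
# Hypothetical sketch: search every entry of an EPUB (a ZIP archive) for a
# purchaser-identifying string. Names and marker are invented examples.

import io
import zipfile

def find_marker(epub_bytes: bytes, marker: bytes):
    """Return the archive entries that contain the marker string."""
    hits = []
    with zipfile.ZipFile(io.BytesIO(epub_bytes)) as z:
        for name in z.namelist():
            if marker in z.read(name):
                hits.append(name)
    return hits

# Build a tiny stand-in "ebook" with a personalized footer in one entry.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("ch1.xhtml", "<p>Chapter one.</p>")
    z.writestr("ch2.xhtml", "<p>Sold to jane@example.org</p>")

print(find_marker(buf.getvalue(), b"jane@example.org"))  # ['ch2.xhtml']
```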

It’s likely that BooXtream will respond to this provocation so the war for free access to information and privacy isn’t over just yet.

Defending Our Brand (Let’s Encrypt)

Post Syndicated from jake original http://lwn.net/Articles/692555/rss

It seems that the Comodo TLS certificate authority (CA) has filed for three trademarks using variations of “Let’s Encrypt”. As might be guessed, the Let’s Encrypt project is less than pleased by Comodo trying to co-opt its name. “Since March of 2016 we have repeatedly asked Comodo to abandon their “Let’s Encrypt” applications, directly and through our attorneys, but they have refused to do so. We are clearly the first and senior user of “Let’s Encrypt” in relation to Internet security, including SSL/TLS certificates – both in terms of length of use and in terms of the widespread public association of that brand with our organization.

If necessary, we will vigorously defend the Let’s Encrypt brand we’ve worked so hard to build. That said, our organization has limited resources and a protracted dispute with Comodo regarding its improper registration of our trademarks would significantly and unnecessarily distract both organizations from the core mission they should share: creating a more secure and privacy-respecting Web. We urge Comodo to do the right thing and abandon its “Let’s Encrypt” trademark applications so we can focus all of our energy on improving the Web.”

[Thanks to Paul Wise.]

Defending Our Brand

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org//2016/06/23/defending-our-brand.html

Some months ago, it came to our attention that Comodo Group, Inc., is attempting to register at least three trademarks for the term “Let’s Encrypt,” for a variety of CA-related services [1][2][3]. These trademark applications were filed long after the Internet Security Research Group (ISRG) started using the name Let’s Encrypt publicly in November of 2014, and despite the fact that Comodo’s “intent to use” trademark filings acknowledge that it has never used “Let’s Encrypt” as a brand.

We’ve forged relationships with millions of websites and users under the name Let’s Encrypt, furthering our mission to make encryption free, easy, and accessible to everyone. We’ve also worked hard to build our unique identity within the community and to make that identity a reliable indicator of quality. We take it very seriously when we see the potential for our users to be confused, or worse, the potential for a third party to damage the trust our users have placed in us by intentionally creating such confusion. By attempting to register trademarks for our name, Comodo is actively attempting to do just that.

Since March of 2016 we have repeatedly asked Comodo to abandon their “Let’s Encrypt” applications, directly and through our attorneys, but they have refused to do so. We are clearly the first and senior user of “Let’s Encrypt” in relation to Internet security, including SSL/TLS certificates – both in terms of length of use and in terms of the widespread public association of that brand with our organization.

If necessary, we will vigorously defend the Let’s Encrypt brand we’ve worked so hard to build. That said, our organization has limited resources and a protracted dispute with Comodo regarding its improper registration of our trademarks would significantly and unnecessarily distract both organizations from the core mission they should share: creating a more secure and privacy-respecting Web. We urge Comodo to do the right thing and abandon its “Let’s Encrypt” trademark applications so we can focus all of our energy on improving the Web.

[1] “Let’s Encrypt” Trademark Registration Application

[2] “Let’s Encrypt With Comodo” Trademark Registration Application

[3] “Comodo Let’s Encrypt” Trademark Registration Application