Tag Archives: phone

OMG The Stupid It Burns

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/omg-stupid-it-burns.html

This article, pointed out by @TheGrugq, is stupid enough that it’s worth rebutting.

The article starts with the question “Why did the lessons of Stuxnet, Wannacry, Heartbleed and Shamoon go unheeded?”. It then proceeds to ignore the lessons of those things.
Some of the actual lessons should be things like how Stuxnet crossed air gaps, how Wannacry spread through flat Windows networking, how Heartbleed comes from technical debt, and how Shamoon furthers state aims by causing damage.
But this article doesn’t cover the technical lessons. Instead, it thinks the lesson should be the moral lesson: that we should take these things more seriously. But that’s stupid. It’s the sort of lesson taught by people who know nothing about the topic. When you have nothing of value to contribute to a topic, you can always take the moral high road and criticize everyone for being morally weak for not taking it more seriously. Obviously, since doctors haven’t cured cancer yet, it’s because they don’t take the problem seriously.
The article continues to ignore the lessons of these cyber attacks and instead regales us with a list of military lessons from WW I and WW II. This commits the same error that many in the military make: trying to understand cyber through analogies with the real world. It’s not that such lessons could have no value; it’s that this article contains a poor list of them. It seems to consist of a random list of events that appeal to the author rather than events that have bearing on cybersecurity.
Then, in case we don’t get the point, the article bullies us with hyperbole, cliches, buzzwords, bombastic language, famous quotes, and citations. It’s hard to see how most of them actually apply to the text. Rather, it seems like they are included simply because the author really, really likes them.
The article invests much effort in discussing the buzzword “OODA loop”. Most attacks in cyberspace don’t have one. Instead, attackers flail around, trying lots of random things, overcoming defense with brute-force rather than an understanding of what’s going on. That’s obviously the case with Wannacry: it was an accident, with the perpetrator experimenting with what would happen if they added the ETERNALBLUE exploit to their existing ransomware code. The consequence was beyond anybody’s ability to predict.
You might claim that this is just the first stage, that they’ll loop around, observe Wannacry’s effects, orient themselves, decide, then act upon what they learned. Nope. Wannacry burned the exploit. It’s essentially removed any vulnerable systems from the public Internet, thereby making it impossible to use what they learned. It’s still active a year later, with infected systems behind firewalls busily scanning the Internet so that if you put a new system online that’s vulnerable, it’ll be taken offline within a few hours, before any other evildoer can take advantage of it.
See what I’m doing here? Learning the actual lessons of things like Wannacry? The thing the above article fails to do??
The article has a humorous paragraph on “defense in depth”, misunderstanding the term. To be fair, it’s the cybersecurity industry’s fault: it adopted and then redefined the term. That’s why there are two separate articles on Wikipedia: one for the old military term (as used in this article) and one for the new cybersecurity term.
As used in the cybersecurity industry, “defense in depth” means having multiple layers of security. Many organizations put all their defensive efforts on the perimeter, and none inside a network. The idea of “defense in depth” is to put more defenses inside the network. For example, instead of just one firewall at the edge of the network, put firewalls inside the network to segment different subnetworks from each other, so that a ransomware infection in the customer support computers doesn’t spread to sales and marketing computers.
The article talks about exploiting WiFi chips to bypass defense in depth measures like browser sandboxes. This conflates different types of attacks. A WiFi attack is usually considered a local attack, from somebody next to you in a bar, rather than a remote attack from a server in Russia. Moreover, far from disproving “defense in depth”, such WiFi attacks highlight the need for it. Namely, phones need to be designed so that successful exploitation of other microprocessors (namely, the WiFi, Bluetooth, and cellular baseband chips) can’t directly compromise the host system. In other words, once exploited with “Broadpwn”, a hacker would need to extend the exploit chain with another vulnerability in the host’s Broadcom WiFi driver rather than immediately exploiting a DMA attack across PCIe. This suggests that if PCIe is used to interface with peripherals in the phone, an IOMMU should be used, for “defense in depth”.
Cybersecurity is a young field. There are lots of useful things that outsider non-techies can teach us. Lessons from military history would be well-received.
But that’s not this story. Instead, this story is by an outsider telling us we don’t know what we are doing, that they do, and then proceeding to prove they don’t know what they are doing. Their argument is based on moral suasion and on bullying us with what appears on the surface to be intellectual rigor, but which is in fact devoid of anything smart.
My fear, here, is that I’m going to be in a meeting where somebody has read this pretentious garbage, explaining to me why “defense in depth” is wrong and how we need to OODA faster. I’d rather nip this in the bud, pointing out that if you found anything interesting in that article, you are wrong.

Welcome Victoria — Sales Development Representative

Post Syndicated from Yev original https://www.backblaze.com/blog/welcome-victoria-sales-development-representative/

Ever since we introduced our Groups feature, Backblaze for Business has been growing at a rapid rate! We’ve been staffing up in order to support the product, and the newest addition to the sales team, Victoria, joins us as a Sales Development Representative! Let’s learn a bit more about Victoria, shall we?

What is your Backblaze Title?
Sales Development Representative.

Where are you originally from?
Harrisburg, North Carolina.

What attracted you to Backblaze?
The leaders and family-style culture.

What do you expect to learn while being at Backblaze?
How to sell, sell, sell!

Where else have you worked?
The North Carolina Autism Society, an ophthalmologist’s office, home health care, and another tech startup.

Where did you go to school?
The University of North Carolina Chapel Hill and Duke University’s Fuqua School of Business.

What’s your dream job?
Fighter pilot, professional snowboarder or killer whale trainer.

Favorite place you’ve traveled?
Hawaii and Banff.

Favorite hobby?
Basketball and cars.

Of what achievement are you most proud?
Missionary work and helping patients feel better.

Star Trek or Star Wars?
Neither, but probably Star Wars.

Coke or Pepsi?
Neither, bubble tea.

Favorite food?
Snow crab legs.

Why do you like certain things?
Because God made me that way.

Anything else you’d like to tell us?
I’m a germophobe, drink a lot of water and unfortunately, am introverted.

Being on the phones all day is a good way to build up those extroversion skills! Welcome to the team and we hope you enjoy learning how to sell, sell, sell!

The post Welcome Victoria — Sales Development Representative appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Lifting a Fingerprint from a Photo

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/lifting_a_finge.html

Police in the UK were able to read a fingerprint from a photo of a hand:

Staff from the unit’s specialist imaging team were able to enhance a picture of a hand holding a number of tablets, which was taken from a mobile phone, before fingerprint experts were able to positively identify that the hand was that of Elliott Morris.

[…]

Speaking about the pioneering techniques used in the case, Dave Thomas, forensic operations manager at the Scientific Support Unit, added: “Specialist staff within the JSIU fully utilised their expert image-enhancing skills which enabled them to provide something that the unit’s fingerprint identification experts could work [with]. Despite being provided with only a very small section of the fingerprint which was visible in the photograph, the team were able to successfully identify the individual.”

IsoHunt Founder Returns With New Search Tool

Post Syndicated from Ernesto original https://torrentfreak.com/isohunt-founder-returns-with-new-search-tool-180419/

Of all the major torrent sites that dominated the Internet at the beginning of this decade, only a few remain.

One of the sites that fell prey to ever-increasing pressure from the entertainment industry was isoHunt.

Founded by the Canadian entrepreneur Gary Fung, the site was one of the early pioneers in the world of torrents, paving the way for many others. However, this spotlight also caught the attention of the major movie studios.

After a lengthy legal battle, isoHunt’s founder eventually shut down the site in late 2013. This happened after Fung signed a settlement agreement with Hollywood for no less than $110 million, on paper at least.

Launching a new torrent search engine was never really an option, but Fung decided not to let his expertise go to waste. He focused his time and efforts on a new search project instead, which was unveiled to the public this week.

The new app called “WonderSwipe” has just been added to Apple’s iOS store. It’s a mobile search app that ties into Google’s backend, but with a different user interface. While it has nothing to do with file-sharing, we decided to reach out to isoHunt’s founder to find out more.

Fung tells us that he got the idea for the app because he was frustrated with Google’s default search options on the mobile platform.

“I find myself barely do any search on the smartphone, most of the time waiting until I get to my desktop. I ask why?” Fung tells us.

One of the main issues he identified is the fact that swiping is not an option. Instead, people end up browsing through dozens of mobile browser tabs. So, Fung took Google’s infrastructure and search power, making it swipeable.

“From a UI design perspective, I find swiping through photos on the first iPhone one of the most extraordinary advances in computing. It’s so easy that babies would be doing it before they even learn how to flip open a book!

“Bringing that ease of use to the central way of conducting mobile search and research is the initial eureka I had in starting work on WonderSwipe,” Fung adds.

That was roughly three years ago, and a few hours ago WonderSwipe finally made its way into the App store. Android users will have to wait for now, but the application will eventually be available on that platform as well.

In addition to swiping through search results, the app also promises faster article loading and browsing, a reader mode with condensed search results, and a hands-free mode with automated browsing where summaries are read out loud.

WonderSwipe


Of course, WonderSwipe is nothing like isoHunt ever was, apart from the fact that Google is a search engine that also links to torrents, indirectly.

This similarity was also brought up during the lawsuit with the MPAA, when Fung’s legal team likened isoHunt to Google in court. However, the Canadian entrepreneur doesn’t expect that Hollywood will have an issue with WonderSwipe in particular.

“isoHunt was similar to Google in how it worked as a search engine, but not in scope. Torrents are a small subset of all the webpages Google indexes,” Fung says.

“WonderSwipe’s aim is to find answers in all webpages, powered by Google search results. It presents results in extracted text and summaries with no connection to BitTorrent clients. As such, WonderSwipe can be bigger than isoHunt has ever been.”

Ironically, in recent years Hollywood has often criticized Google for linking to pirated content in its search results. These results will also be available through WonderSwipe.

However, Fung says that any copyright issues with WonderSwipe will have to be dealt with on the search engine level, by Google.

“If there are links to pirated content, tell search engines so they can take them down!” he says.

WonderSwipe is totally free and Fung tells us that he plans to monetize it with in-app purchases for pro features, and non-intrusive advertising that won’t slow down swiping or search results. More details on the future plans for the app are available here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

AIY Projects 2: Google’s AIY Projects Kits get an upgrade

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/google-aiy-projects-2/

After the outstanding success of their AIY Projects Voice and Vision Kits, Google has announced the release of upgraded kits, complete with Raspberry Pi Zero WH, Camera Module, and preloaded SD card.

Google AIY Projects Vision Kit 2 Raspberry Pi

Google’s AIY Projects Kits

Google launched the AIY Projects Voice Kit last year, first as a cover gift with The MagPi magazine and later as a standalone product.

Makers needed to provide their own Raspberry Pi for the original kit. The new kits include everything you need, from Pi to SD card.

Within a DIY cardboard box, makers were able to assemble their own voice-activated AI assistant akin to the Amazon Alexa, Apple’s Siri, and Google’s own Google Home Assistant. The Voice Kit was an instant hit that spurred no end of maker videos and tutorials, including our own free tutorial for controlling a robot using voice commands.

Later in the year, the team followed up the success of the Voice Kit with the AIY Projects Vision Kit — the same cardboard box hosting a camera perfect for some pretty nifty image recognition projects.

For more on the AIY Voice Kit, here’s our release video hosted by the rather delightful Rob Zwetsloot.

AIY Projects adds natural human interaction to your Raspberry Pi

Check out the exclusive Google AIY Projects Kit that comes free with The MagPi 57! Grab yourself a copy in stores or online now: http://magpi.cc/2pI6IiQ This first AIY Projects kit taps into the Google Assistant SDK and Cloud Speech API using the AIY Projects Voice HAT (Hardware Accessory on Top) board, stereo microphone, and speaker (included free with the magazine).

AIY Projects 2

So what’s new with version 2 of the AIY Projects Voice Kit? The kit now includes the recently released Raspberry Pi Zero WH, our Zero W with added pre-soldered header pins for instant digital making accessibility. Purchasers of the kits will also get a micro SD card with preloaded OS to help them get started without having to set the card up themselves.

Google AIY Projects Vision Kit 2 Raspberry Pi

Everything you need to build your own Raspberry Pi-powered Google voice assistant

In the newly upgraded AIY Projects Vision Kit v1.2, makers are also treated to an official Raspberry Pi Camera Module v2, the latest model of our add-on camera.

Google AIY Projects Vision Kit 2 Raspberry Pi

“Everything you need to get started is right there in the box,” explains Billy Rutledge, Google’s Director of AIY Projects. “We knew from our research that even though makers are interested in AI, many felt that adding it to their projects was too difficult or required expensive hardware.”

Google AIY Projects Vision Kit 2 Raspberry Pi

Google is also hard at work producing AIY Projects companion apps for Android, iOS, and Chrome. The Android app is available now to coincide with the launch of the upgraded kits, with the other two due for release soon. The app supports wireless setup of the AIY Kit, though avid coders will still be able to hack theirs to better suit their projects.

Google has also updated the AIY Projects website with an AIY Models section highlighting a range of neural network projects for the kits.

Get your kit

The updated Voice and Vision Kits were announced last night, and in the US they are available now from Target. UK-based makers should be able to get their hands on them this summer — keep an eye on our social channels for updates and links.

The post AIY Projects 2: Google’s AIY Projects Kits get an upgrade appeared first on Raspberry Pi.

postmarketOS Low-Level

Post Syndicated from ris original https://lwn.net/Articles/751951/rss

Alpine Linux-based postmarketOS is touch-optimized and pre-configured for installation on smartphones and other mobile devices. The postmarketOS blog introduces postmarketOS-lowlevel, a community project aimed at creating free bootloaders and cellular modem firmware, currently focused on MediaTek phones. “But before we get started, please keep in mind that these are moon shots. So while there is some little progress, it’s mostly about letting fellow hackers know what we’ve tried and what we’re up to, in the hopes of attracting more interested talent to our cause. After all, our philosophy is to keep the community informed and engaged during the development phase!”

How Pirates Use New Technologies for Old Sharing Habits

Post Syndicated from Ernesto original https://torrentfreak.com/how-pirates-use-new-technologies-for-old-sharing-habits-180415/

While piracy today is more widespread than ever, the urge to share content online has been around for several decades.

The first generation used relatively primitive tools, such as bulletin board systems (BBS), newsgroups or IRC. Nothing too fancy, but they worked well for those who got over the initial learning curve.

When Napster came along things started to change. More content became available and with just a few clicks anyone could get an MP3 transferred from one corner of the world to another. The same was true for Kazaa and Limewire, which further popularized online piracy.

After this initial boom of piracy applications, BitTorrent came along, shaking up the sharing landscape even further. As torrent sites are web-based, pirated media became even more public and easy to find.

At the same time, BitTorrent brought back the smaller and more organized sharing culture of the early days through private trackers.

These communities often focused on a specific type of content and put strict rules and guidelines in place. They promoted sharing and avoided the spam that plagued their public counterparts.

That was fifteen years ago.

Today the piracy landscape is more diverse than ever. Private torrent trackers are still around and so are IRC and newsgroups. However, most piracy today takes place in public. Streaming sites and devices are booming, with central hosting platforms offering the majority of the underlying content.

That said, there is still an urge for some pirates to band together and some use newer technologies to do so.

This week The Outline ran an interesting piece on the use of Telegram channels to share pirated media. These groups use the encrypted communication platform to share copies of movies, TV shows, and a wide range of other material.

Telegram allows users to upload files up to 1.5GB in size, but larger ones can be split, in common with the good old newsgroups.

These types of sharing groups are not new. On social media platforms such as Facebook and VK, there are hundreds or thousands of dedicated communities that do the same, both public and private. And Reddit has similar groups, relying on external links.

According to an administrator of a piracy-focused Telegram channel, the appeal of the platform is that the groups are not shut down so easily. While that may be the case with hyper-private groups, Telegram will still pull the plug if it receives enough complaints about a channel.

The same is true for Discord, another application that can be used to share content in ‘private’ communities. Discord is particularly popular among gamers, but pirates have also found their way to the platform.

While smaller communities are able to thrive, once the word gets out to copyright holders, the party can soon be over. This is also what the /r/piracy subreddit community found out a few days ago when its Discord server was pulled offline.

This triggered a discussion about possible alternatives. Telegram was mentioned by some, although not everyone liked the idea of connecting their phone number to a pirate group. Others mentioned Slack, Weechat, Hexchat and Riot.im.

None of these tools are revolutionary. At least, not for the intended use by this group. Some may be harder to take down than others, but they are all means to share files, directly or through external links.

What really caught our eye, however, were several mentions of an ancient application layer protocol that, apparently, hasn’t lost its use to pirates.

“I’ll make an IRC server and host that,” one user said, with others suggesting the same.

And so we have come full circle…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Pirate Site-Blocking? Music Biz Wants App Blocking Too

Post Syndicated from Andy original https://torrentfreak.com/pirate-site-blocking-music-biz-wants-app-blocking-too-180415/

In some way, shape or form, Internet piracy has always been carried out through some kind of application. Whether that’s a peer-to-peer client utilizing BitTorrent or eD2K, or a Usenet or FTP tool taking things back to their roots, software has always played a crucial role.

Of course, the nature of the Internet beast means that software usage is unavoidable but in recent years piracy has swung more towards the regular web browser, meaning that sites and services offering pirated content are largely easy to locate, identify and block, if authorities so choose.

As revealed this week by the MPA, thousands of platforms around the world are now targeted for blocking, with 1,800 sites and 5,300 domains blocked in Europe alone.

However, as the Kodi phenomenon has shown, web-based content doesn’t always have to be accessed via a standard web browser. Clever but potentially illegal addons and third-party apps are able to scrape web-based resources and present links to content on a wide range of devices, from mobile phones and tablets to set-top boxes.

While it’s still possible to block the resources upon which these addons rely, the scattered nature of the content makes the process much more difficult. One can’t simply block a whole platform because a few movies are illegally hosted there and even Google has found itself hosting thousands of infringing titles, a situation that’s ruthlessly exploited by addon and app developers alike.

Needless to say, the situation hasn’t gone unnoticed. The Alliance for Creativity and Entertainment has spent the last year (1,2,3) targeting many people involved in the addon and app scene, hoping they’ll take their tools and run, rather than further develop a rapidly evolving piracy ecosystem.

Over in Russia, a country that will happily block hundreds of thousands or even millions of IP addresses if it suits them, the topic of infringing apps was raised this week. It happened during the International Strategic Forum on Intellectual Property, a gathering of 500 experts from more than 30 countries. There were strong calls for yet more tools and measures to deal with films and music being made available via ‘pirate’ apps.

The forum heard that in response to widespread website blocking, people behind pirate sites have begun creating applications for mobile devices to achieve the same ends – the provision of illegal content. This, key players in the music industry say, means that the law needs to be further tightened to tackle the rising threat.

“Consumption of content is now going into the mobile sector and due to this we plan to prevent mass migration of ‘pirates’ to the mobile sector,” said Leonid Agronov, general director of the National Federation of the Music Industry.

The same concerns were echoed by Alexander Blinov, CEO of Warner Music Russia. According to TASS, the powerful industry player said that while recent revenues had been positively affected by site-blocking, it’s now time to start taking more action against apps.

“I agree with all speakers that we can not stop at what has been achieved so far. The music industry has a fight against illegal content in mobile applications on the agenda,” Blinov said.

And if Blinov is to be believed, music in Russia is doing particularly well at the moment. Attributing successes to efforts by parliament, the Ministry of Communications, and copyright holders, Blinov said the local music market has doubled in the past two years.

“We are now in the top three fastest growing markets in the world, behind only China and South Korea,” Blinov said.

While some apps can work in the same manner as a basic web interface, others rely on more complex mechanisms, ‘scraping’ content from diverse sources that can be easily and readily changed if mitigation measures kick in. It will be very interesting to see how Russia deals with this threat and whether it will opt for highly technical solutions or the nuclear options demonstrated recently.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

WHOIS Limits Under GDPR Will Make Pirates Harder to Catch, Groups Fear

Post Syndicated from Andy original https://torrentfreak.com/whois-limits-under-gdpr-will-make-pirates-harder-to-catch-groups-fear-180413/

The General Data Protection Regulation (GDPR) is a regulation in EU law covering data protection and privacy for all individuals within the European Union.

As more and more personal data is gathered, stored and (ab)used online, the aim of the GDPR is to protect EU citizens from breaches of privacy. The regulation applies to all companies processing the personal data of subjects residing in the Union, no matter where in the world the company is located.

Penalties for non-compliance can be severe. While there is a tiered approach according to severity, organizations can be fined up to 4% of annual global turnover or €20 million, whichever is greater. Needless to say, the regulations will need to be taken seriously.

Among those affected are domain name registries and registrars who publish the personal details of domain name owners in the public WHOIS database. In a full entry, a person or organization’s name, address, telephone numbers and email addresses can often be found.

This raises a serious issue. While registries and registrars are instructed and contractually obliged to publish data in the WHOIS database by global domain name authority ICANN, in millions of cases this conflicts with the requirements of the GDPR, which prevents the details of private individuals being made freely available on the Internet.

As explained in detail by the EFF, ICANN has been trying to resolve this clash. Its proposed interim model for GDPR compliance (pdf) envisions registrars continuing to collect full WHOIS data but not necessarily publishing it, to “allow the existing data to be preserved while the community discussions continue on the next generation of WHOIS.”

But the proposed changes, which will inevitably restrict free access to WHOIS information, have plenty of people spooked, including thousands of companies belonging to entertainment industry groups such as the MPAA, IFPI, RIAA and the Copyright Alliance.

In a letter sent to Vice President Andrus Ansip of the European Commission, these groups and dozens of others warn that restricted access to WHOIS will have a serious effect on their ability to protect their intellectual property rights from “cybercriminals” which pose a threat to their businesses.

Signed by 50 organizations involved in IP protection and other areas of online security, the letter expresses concern that in attempting to comply with the GDPR, ICANN is on a course to “over-correct” while disregarding proportionality, accountability and transparency.

A small sample of the groups calling on ICANN

“We strongly assert that this model does not properly account for the critical public and legitimate interests served by maintaining a sufficient amount of data publicly available while respecting privacy interests of registrants by instituting a tiered or layered access system for the vast majority of personal data as defined by the GDPR,” the groups write.

The letter focuses on two aspects of “over-correction”, the first being ICANN’s proposal that no personal data whatsoever of a domain name registrant will be made available “without appropriate consideration or balancing of the countervailing interests in public disclosure of a limited amount of such data.”

In response to ICANN’s proposal that only the province/state and country of a domain name registrant be made publicly available, the groups advise the organization that publishing “a natural person registrant’s e-mail address” in a publicly accessible WHOIS directory will not constitute a breach of the GDPR.

“[W]e strongly believe that the continued public availability of the registrant’s e-mail address – specifically the e-mail address that the registrant supplies to the registrar at the time the domain name is purchased and which e-mail address the registrar is required to validate – is critical for several reasons,” the groups write.

“First, it is the data element that is typically the most important to have readily available for law enforcement, consumer protection, particularly child protection, intellectual property enforcement and cybersecurity/anti-malware purposes.

“Second, the public accessibility of the registrant’s e-mail address permits a broad array of threats and illegal activities to be addressed quickly and the damage from such threats mitigated and contained in a timely manner, particularly where the abusive/illegal activity may be spawned from a variety of different domain names on different generic Top Level Domains,” they add.

The groups also argue that since making email addresses available is effectively required in light of Article 5.1(c) of the ECD, “there is no legitimate justification to discontinue public availability of the registrant’s e-mail address in the WHOIS directory and especially not in light of other legitimate purposes.”

The EFF, on the other hand, says that being able to contact a domain owner wouldn’t necessarily require an email address to be made public.

“There are other cases in which it makes sense to allow members of the public to contact the owner of a domain, without having to obtain a court order,” EFF writes.

“But this could be achieved very simply if ICANN were simply to provide something like a CAPTCHA-protected contact form, which would deliver email to the appropriate contact point with no need to reveal the registrant’s actual email address.”

The groups’ second main concern is that ICANN reportedly makes no distinction between name registrants that are “natural persons versus those that are legal entities” and intends to treat them all as if they are subject to the GDPR, despite the fact that the regulation only applies to data associated with an “identified or identifiable natural person”.

They say it is imperative that EU Data Protection Authorities are made to understand that when registrants obtain a domain for illegal purposes, they often register it as a “natural person” when registering as a legal person (legal entity) would be more appropriate, despite the fact that doing so would grant them less privacy.

“Consequently, the test for differentiating between a legal and natural person should not merely be the legal status of the registrant, but also whether the registrant is, in fact, acting as a legal or natural person vis a vis the use of the domain name,” the groups note.

“We therefore urge that ICANN be given appropriate guidance as to the importance of maintaining a distinction between natural person and legal person registrants and keeping as much data about legal person domain name registrants as publicly accessible as possible,” they conclude.

What will happen with WHOIS on May 25 still isn’t clear. It wasn’t until October 2017 that ICANN finally determined that it would be affected by the GDPR, meaning that it’s been scrambling ever since to meet the compliance date. And it still is, according to the latest available documentation (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Using AWS Lambda and Amazon Comprehend for sentiment analysis

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/using-aws-lambda-and-amazon-comprehend-for-sentiment-analysis/

This post courtesy of Giedrius Praspaliauskas, AWS Solutions Architect

Even with the best IVR systems, customers get frustrated. What if you knew that 10 callers in your Amazon Connect contact flow were likely to say “Agent!” in frustration in the next 30 seconds? Would you like to get to them before that happens? What if your bot was smart enough to admit, “I’m sorry this isn’t helping. Let me find someone for you.”?

In this post, I show you how to use AWS Lambda and Amazon Comprehend for sentiment analysis to make your Amazon Lex bots in Amazon Connect more sympathetic.

Setting up a Lambda function for sentiment analysis

There are multiple natural language and text processing frameworks or services available to use with Lambda, including but not limited to Amazon Comprehend, TextBlob, Pattern, and NLTK. Pick one based on the nature of your system:  the type of interaction, languages supported, and so on. For this post, I picked Amazon Comprehend, which uses natural language processing (NLP) to extract insights and relationships in text.
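As a quick, minimal sketch of what Amazon Comprehend returns (the sample utterance below is made up purely for illustration), you can call the sentiment API directly with boto3 — the same client the Lambda function later in this post uses:

import boto3

# Create an Amazon Comprehend client (uses your default AWS credentials and region)
comprehend = boto3.client('comprehend')

# Detect sentiment for a sample utterance (illustrative text only)
response = comprehend.detect_sentiment(
    Text="I have been waiting forever and nothing is working!",
    LanguageCode='en'
)

# 'Sentiment' is one of POSITIVE, NEGATIVE, NEUTRAL, or MIXED;
# 'SentimentScore' holds the confidence values for each label.
print(response['Sentiment'])
print(response['SentimentScore'])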

The walkthrough in this post is just an example. In a full-scale implementation, you would likely implement a more nuanced approach. For example, you could keep the overall sentiment score through the conversation and act only when it reaches a certain threshold. It is worth noting that this Lambda function is not called for missed utterances, so there may be a gap between what is being analyzed and what was actually said.

The Lambda function is straightforward. It analyzes the inputTranscript field of the Amazon Lex event. Based on the overall sentiment value, it generates a response message with next-step instructions. When the sentiment is neutral, positive, or mixed, the response leaves it to Amazon Lex to decide what the next steps should be. It adds the overall sentiment value to the response as an additional session attribute, along with the slot values received as input.

When the overall sentiment is negative, the function returns a dialog action pointing to an escalation intent (specified in the environment variable ESCALATION_INTENT_NAME), or a fulfillment closure action with a failure state when no escalation intent is specified. In addition to actions or intents, the function returns a message, or prompt, to be provided to the customer before taking the next step. Based on the returned action, Amazon Connect can select the appropriate next step in a contact flow.

For this walkthrough, you create a Lambda function using the AWS Management Console:

  1. Open the Lambda console.
  2. Choose Create Function.
  3. Choose Author from scratch (no blueprint).
  4. For Runtime, choose Python 3.6.
  5. For Role, choose Create a custom role. The custom execution role allows the function to detect sentiments, create a log group, stream log events, and store the log events.
  6. Enter the following values:
    • For Role Description, enter Lambda execution role permissions.
    • For IAM Role, choose Create an IAM role.
    • For Role Name, enter LexSentimentAnalysisLambdaRole.
    • For Policy, use the following policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Action": [
                "comprehend:DetectDominantLanguage",
                "comprehend:DetectSentiment"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
  7. Choose Create function.
  8. Copy/paste the following code into the editor window:
import os, boto3

ESCALATION_INTENT_MESSAGE="Seems that you are having troubles with our service. Would you like to be transferred to the associate?"
FULFILMENT_CLOSURE_MESSAGE="Seems that you are having troubles with our service. Let me transfer you to the associate."

escalation_intent_name = os.getenv('ESCALATION_INTENT_NAME', None)

client = boto3.client('comprehend')

def lambda_handler(event, context):
    sentiment=client.detect_sentiment(Text=event['inputTranscript'],LanguageCode='en')['Sentiment']
    if sentiment=='NEGATIVE':
        if escalation_intent_name:
            result = {
                "sessionAttributes": {
                    "sentiment": sentiment
                    },
                    "dialogAction": {
                        "type": "ConfirmIntent", 
                        "message": {
                            "contentType": "PlainText", 
                            "content": ESCALATION_INTENT_MESSAGE
                        }, 
                    "intentName": escalation_intent_name
                    }
            }
        else:
            result = {
                "sessionAttributes": {
                    "sentiment": sentiment
                },
                "dialogAction": {
                    "type": "Close",
                    "fulfillmentState": "Failed",
                    "message": {
                            "contentType": "PlainText",
                            "content": FULFILMENT_CLOSURE_MESSAGE
                    }
                }
            }

    else:
        result ={
            "sessionAttributes": {
                "sentiment": sentiment
            },
            "dialogAction": {
                "type": "Delegate",
                "slots" : event["currentIntent"]["slots"]
            }
        }
    return result
  9. Below the code editor, specify the environment variable ESCALATION_INTENT_NAME with a value of Escalate.

  10. Click Save in the top right of the console.

Now you can test your function.

  1. Click Test at the top of the console.
  2. Configure a new test event using the following test event JSON:
{
  "messageVersion": "1.0",
  "invocationSource": "DialogCodeHook",
  "userId": "1234567890",
  "sessionAttributes": {},
  "bot": {
    "name": "BookSomething",
    "alias": "None",
    "version": "$LATEST"
  },
  "outputDialogMode": "Text",
  "currentIntent": {
    "name": "BookSomething",
    "slots": {
      "slot1": "None",
      "slot2": "None"
    },
    "confirmationStatus": "None"
  },
  "inputTranscript": "I want something"
}
  3. Click Create.
  4. Click Test on the console.

This message should return a response from Lambda with a sentiment session attribute of NEUTRAL.

However, if you change the input to “This is garbage!”, Lambda changes the dialog action to the escalation intent specified in the environment variable ESCALATION_INTENT_NAME.
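For reference, based on the function code above, the escalation response should look roughly like the following (assuming ESCALATION_INTENT_NAME is set to Escalate, as described in this walkthrough):

{
  "sessionAttributes": {
    "sentiment": "NEGATIVE"
  },
  "dialogAction": {
    "type": "ConfirmIntent",
    "message": {
      "contentType": "PlainText",
      "content": "Seems that you are having troubles with our service. Would you like to be transferred to the associate?"
    },
    "intentName": "Escalate"
  }
}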

Setting up Amazon Lex

Now that you have your Lambda function running, it is time to create the Amazon Lex bot. Use the BookTrip sample bot and call it BookSomething. The IAM role is automatically created on your behalf. Indicate that this bot is not subject to COPPA, and choose Create. A few minutes later, the bot is ready.

Make the following changes to the default configuration of the bot:

  1. Add an intent with no associated slots. Name it Escalate.
  2. Specify the Lambda function for initialization and validation in the existing two intents (“BookCar” and “BookHotel”), at the same time giving Amazon Lex permission to invoke it.
  3. Leave the other configuration settings as they are and save the intents.

You are ready to build and publish this bot. Set a new alias, BookSomethingWithSentimentAnalysis. When the build finishes, test it.

As you see, sentiment analysis works!

Setting up Amazon Connect

Next, provision an Amazon Connect instance.

After the instance is created, you need to integrate the Amazon Lex bot created in the previous step. For more information, see the Amazon Lex section in the Configuring Your Amazon Connect Instance topic.  You may also want to look at the excellent post by Randall Hunt, New – Amazon Connect and Amazon Lex Integration.

Create a new contact flow, “Sentiment analysis walkthrough”:

  1. Log in to the Amazon Connect instance.
  2. Choose Create contact flow, Create transfer to agent flow.
  3. Add a Get customer input block, open the icon in the top left corner, and specify your Amazon Lex bot and its intents.
  4. Select the Text to speech audio prompt type and enter text for Amazon Connect to play at the beginning of the dialog.
  5. Choose Amazon Lex, enter your Amazon Lex bot name and the alias.
  6. Specify the intents to be used as dialog branches that a customer can choose: BookHotel, BookCar, or Escalate.
  7. Add two Play prompt blocks and connect them to the customer input block.
    • If a hotel or car booking intent is returned from the bot flow, play the corresponding prompt (“OK, will book it for you”) and initiate booking (in this walkthrough, just hang up after the prompt).
    • However, if escalation intent is returned (caused by the sentiment analysis results in the bot), play the prompt (“OK, transferring to an agent”) and initiate the transfer.
  8. Save and publish the contact flow.

As a result, you have a contact flow with a single customer input step and a text-to-speech prompt that uses the Amazon Lex bot. You expect one of the three intents to be returned.

Edit the phone number to associate it with the contact flow that you just created. It is now ready for testing. Call the phone number and check how your contact flow works.

Cleanup

Don’t forget to delete all the resources created during this walkthrough to avoid incurring any more costs:

  • Amazon Connect instance
  • Amazon Lex bot
  • Lambda function
  • IAM role LexSentimentAnalysisLambdaRole

Summary

In this walkthrough, you implemented sentiment analysis with a Lambda function. The function can be integrated into Amazon Lex and, as a result, into Amazon Connect. This approach gives you the flexibility to analyze user input and then act. You may find the following potential use cases of this approach to be of interest:

  • Extend the Lambda function to identify “hot” topics in the user input even if the sentiment is not negative and take action proactively. For example, switch to an escalation intent if a user mentioned “where is my order,” which may signal potential frustration.
  • Use Amazon Connect Streams to provide agents with sentiment analysis results along with the call transfer. This enables service tailored to particular customer needs and sentiments.
  • Route calls to agents based on both skill set and sentiment.
  • Prioritize calls based on sentiment using multiple Amazon Connect queues instead of transferring directly to an agent.
  • Monitor quality and flag contact flows that result in high overall negative sentiment for review.
  • Implement sentiment and AI/ML based call analysis, such as a real-time recommendation engine. For more details, see Machine Learning on AWS.

If you have questions or suggestions, please comment below.

Roku Bans Popular Social IPTV Linking Service cCloud TV

Post Syndicated from Andy original https://torrentfreak.com/roku-bans-popular-social-iptv-linking-service-ccloud-tv-180409/

Despite being one of the more popular set-top box platforms, until last year Roku managed to stay completely out of the piracy conversation.

However, due to abuse of its system by third-parties, last June the Superior Court of Justice of the City of Mexico banned the importation and distribution of Roku devices in the country.

The decision followed a complaint filed by cable TV provider Cablevision, which said that some Roku channels and their users were infringing its distribution rights.

Since then, Roku has been fighting to have the ban lifted, previously informing TF that it expressly prohibits copyright infringement of any kind. That led to several more legal processes, yet last month, after considerable effort, the ban was upheld, much to Roku’s disappointment.

“It is necessary for Roku to make adjustments to its software, as other online content distribution platforms do, so that violations of copyrighted content do not take place,” Cablevision said.

Then, at the end of March, Roku suddenly banned the USTVnow channel from its platform, citing a third-party copyright complaint.

In a series of emails with TF, the company declined to offer further details but there is plenty of online speculation that the decision was a move towards the “adjustments” demanded by Cablevision. Today yet more fuel is being poured onto that same fire with Roku’s decision to ban the popular cCloud TV service from its platform.

For those unfamiliar with cCloud TV, it’s a video streaming platform that relies on users to contribute media links found on the web, whether they’re movie and TV shows or live sporting events.

“Project cCloud TV is known as the ‘Popcorn Time for Live TV’. The project started with 50 channels and has grown over time and now has over 4000 channels from all around the world,” its founder ‘Bane’ told TF back in 2016.

“The project was inspired by Popcorn Time and its simplicity for streaming torrents. The service works based on media links that can be found anywhere on the web and the cCloud project makes it easier for users to stream.”

Aside from the vast array of content cCloud offers, its versatility is almost unrivaled. In addition to working via most modern web browsers, it’s also accessible using smartphones, tablets, Plex media server, Kodi, VLC, and (until recently at least) Roku.

But cCloud and USTVnow aren’t the only services suffering bans at Roku.

As highlighted by CordCuttersNews, other channels are also suffering similar fates, such as XTV that was previously replaced with an FBI warning.

cCloud has had problems on Kodi too. Back in September 2017, TVAddons announced that it had been forced to remove the cCloud addon from its site.

“cCloud TV has been removed from our web site due to a complaint made by Bell, Rogers, Videotron and TVA on June 12th, 2017 as part of their lawsuit against our web site,” the site announced.

“Prior to hearing of the lawsuit, we had never received a single complaint relating to the cCloud TV addon for Kodi. cCloud TV for Kodi was developed by podgod, and was basically an interface for the community-based web service that goes by the same name.”

Last week, TVAddons went on to publish a “blacklist” of addons that have the potential to deliver content not authorized by rightsholders. Among many others, the list contains cCloud, meaning that potential users will now have to obtain it directly from the Kodi Bae Repository on Github instead.

At the time of publication, Roku had not responded to TorrentFreak’s request for comment.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

If YouTube-Ripping Sites Are Illegal, What About Tools That Do a Similar Job?

Post Syndicated from Andy original https://torrentfreak.com/if-youtube-ripping-sites-are-illegal-what-about-tools-that-do-a-similar-job-180407/

In 2016, the International Federation of the Phonographic Industry published research which claimed that half of 16 to 24-year-olds use stream-ripping tools to copy music from sites like YouTube.

While this might not have surprised those who regularly participate in the activity, IFPI said that volumes had become so vast that stream-ripping had overtaken pirate site music downloads. That was a big statement.

Probably not coincidentally, just two weeks later IFPI, RIAA, and BPI announced legal action against the world’s largest YouTube ripping site, YouTube-MP3.

“YTMP3 rapidly and seamlessly removes the audio tracks contained in videos streamed from YouTube that YTMP3’s users access, converts those audio tracks to an MP3 format, copies and stores them on YTMP3’s servers, and then distributes copies of the MP3 audio files from its servers to its users in the United States, enabling its users to download those MP3 files to their computers, tablets, or smartphones,” the complaint read.

The labels sued YouTube-MP3 for direct infringement, contributory infringement, vicarious infringement, inducing others to infringe, plus circumvention of technological measures on top. The case was big and one that would’ve been intriguing to watch play out in court, but that never happened.

A year later, in September 2017, YouTube-MP3 settled out of court. No details were made public but YouTube-MP3 apparently took all the blame and the court was asked to rule in favor of the labels on all counts.

This certainly gave the impression that what YouTube-MP3 did was illegal and a strong message was sent out to other companies thinking of offering a similar service. However, other onlookers clearly saw the labels’ lawsuit as something to be studied and learned from.

One of those was the operator of NotMP3downloader.com, a site that offers Free MP3 Recorder for YouTube, a tool offering similar functionality to YouTube-MP3 while supposedly avoiding the same legal pitfalls.

Part of that involves audio being processed on the user’s machine – not by stream-ripping as such – but by stream-recording. A subtle difference perhaps, but the site’s operator thinks it’s important.

“After examining the claims made by the copyright holders against youtube-mp3.org, we identified that the charges were based on the three main points. [None] of them are applicable to our product,” he told TF this week.

The first point involves YouTube-MP3’s acts of conversion, storage and distribution of content it had previously culled from YouTube. Copies of unlicensed tracks were clearly held on its own servers, a potent direct infringement risk.

“We don’t have any servers to download, convert or store a copyrighted or any other content from YouTube. Therefore, we do not violate any law or prohibition implied in this part,” NotMP3downloader’s operator explains.

Then there’s the act of “stream-ripping” itself. While YouTube-MP3 downloaded digital content from YouTube using its own software, NotMP3downloader claims to do things differently.

“Our software doesn’t download any streaming content directly, but only launches a web browser with the video specified by a user. The capturing happens from a local machine’s sound card and doesn’t deal with any content streamed through a network,” its operator notes.

This part also seems quite important. YouTube-MP3 was accused of unlawfully circumventing technological measures implemented by YouTube to prevent people downloading or copying content. By opening up YouTube’s own website and viewing content in the way the site demands, NotMP3downloader says it does not “violate the website’s integrity nor performs direct download of audio or video files.”

Like the Betamax video recorder before it that enabled recording from analog TV, NotMP3downloader enables a user to record a YouTube stream on their local machine. This, its makers claim, means the software is completely legal and defeats all the claims made by the labels in the YouTube-MP3 lawsuit.

“What YouTube does is broadcasting content through the Internet. Thus, there is nothing wrong if users are allowed to watch such content later as they may want,” the NotMP3downloader team explain.

“It is worth noting that in Sony Corp. of America v. Universal City Studios, Inc. (464 U.S. 417) the United States Supreme Court held that such practice, also known as time-shifting, was lawful, representing fair use under the US Copyright Act and causing no substantial harm to the copyright holder.”

While software that can record video and sounds locally are nothing new, the developments in the YouTube-MP3 case and this response from NotMP3downloader raises interesting questions.

We put some of them to none other than former RIAA Executive Vice President, Neil Turkewitz, who now works as President of Turkewitz Consulting Group.

Turkewitz stressed that he doesn’t speak for the industry as a whole or indeed the RIAA but it’s clear that his passion for protecting creators persists. He told us that in this instance, reliance on the Betamax decision is “misplaced”.

“The content is different, the activity is different, and the function is different,” Turkewitz told TF.

“The Sony decision must be understood in its context — the time shifting of audiovisual programming being broadcast from point to multipoint. The making available of content by a point-to-point interactive service like YouTube isn’t broadcasting — or at a minimum, is not a form of broadcasting akin to that considered by the Supreme Court in Sony.

“More fundamentally, broadcasting (right of communication to the public) is one of only several rights implicated by the service. And of course, issues of liability will be informed by considerations of purpose, effect and perceived harm. A court’s judgment will also be affected by whether it views the ‘innovation’ as an attempt to circumvent the requirements of law. The decision of the Supreme Court in ABC v. Aereo is certainly instructive in that regard.”

And there are other issues too. While YouTube itself is yet to take any legal action to deter users from downloading rather than merely streaming content, its terms of service are quite specific and seem to cover all eventualities.

“[Y]ou agree not to access Content for any reason other than your personal, non-commercial use solely as intended through and permitted by the normal functionality of the Service, and solely for Streaming,” YouTube’s ToS reads.

“‘Streaming’ means a contemporaneous digital transmission of the material by YouTube via the Internet to a user operated Internet enabled device in such a manner that the data is intended for real-time viewing and not intended to be downloaded (either permanently or temporarily), copied, stored, or redistributed by the user.

“You shall not copy, reproduce, distribute, transmit, broadcast, display, sell, license, or otherwise exploit any Content for any other purposes without the prior written consent of YouTube or the respective licensors of the Content.”

In this respect, it seems that a user doing anything but real-time streaming of YouTube content is breaching YouTube’s terms of service. The big question then, of course, is whether providing a tool specifically for that purpose represents an infringement of copyright.

The people behind Free MP3 Recorder believe that the “scope of application depends entirely on the end users’ intentions” which seems like a fair argument at first view. But, as usual, copyright law is incredibly complex and there are plenty of opposing views.

We asked the BPI, which took action against YouTubeMP3, for its take on this type of tool. The official response was “No comment” which doesn’t really clarify the position, at least for now.

Needless to say, the Betamax decision – relevant or not – doesn’t apply in the UK. But that only adds more parameters into the mix – and perhaps more opportunities for lawyers to make money arguing for and against tools like this in the future.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

User Authentication Best Practices Checklist

Post Syndicated from Bozho original https://techblog.bozho.net/user-authentication-best-practices-checklist/

User authentication is functionality that every web application shares. We should have perfected it a long time ago, having implemented it so many times. And yet there are so many mistakes made all the time.

Part of the reason for that is that the list of things that can go wrong is long. You can store passwords incorrectly, you can have a vulnerable password reset functionality, you can expose your session to a CSRF attack, your session can be hijacked, etc. So I’ll try to compile a list of best practices regarding user authentication. The OWASP Top 10 is always something you should read, every year. But that might not be enough.

So, let’s start. I’ll try to be concise, but I’ll include as many of the related pitfalls as I can – e.g. what could go wrong with the user session after they login:

  • Store passwords with bcrypt/scrypt/PBKDF2 (see the bcrypt sketch after this list). No MD5 or plain SHA, as they are not suitable for password storage. A long, per-user salt is mandatory (the aforementioned algorithms have it built in). If you don’t do this and someone gets hold of your database, they’ll be able to extract the passwords of all your users, and then try those passwords on other websites.
  • Use HTTPS. Period. (Otherwise user credentials can leak through unprotected networks.) Force HTTPS if the user opens a plain-text version.
  • Mark cookies as secure. Makes cookie theft harder.
  • Use CSRF protection (e.g. CSRF one-time tokens that are verified with each request). Frameworks have such functionality built-in; see the configuration sketch after this list.
  • Disallow framing (X-Frame-Options: DENY). Otherwise your website may be included in another website in a hidden iframe and “abused” through javascript.
  • Have a same-origin policy
  • Logout – let your users log out by deleting all cookies and invalidating the session. This makes usage of shared computers safer (yes, users should ideally use private browsing sessions, but not all of them are that savvy).
  • Session expiry – don’t have forever-lasting sessions. If the user closes your website, their session should expire after a while. “A while” may still be a big number depending on the service provided. For an ajax-heavy website you can have regular ajax-polling that keeps the session alive while the page stays open.
  • Remember me – implementing “remember me” (on this machine) functionality is actually hard due to the risks of a stolen persistent cookie. Spring-security uses this approach, which I think should be followed if you wish to implement more persistent logins.
  • Forgotten password flow – the forgotten password flow should rely on sending a one-time (or expiring) link to the user and asking for a new password when it’s opened. Auth0 explains it in this post and Postmark gives some best practices. How the link is formed is a separate discussion and there are several approaches. Store a password-reset token in the user profile table and then send it as a parameter in the link. Or do not store anything in the database, but send a few params: userId:expiresTimestamp:hmac(userId+expiresTimestamp). That way you have expiring links (rather than one-time links); see the reset-link sketch after this list. The HMAC relies on a secret key, so the links can’t be spoofed. It seems there’s no consensus, as the OWASP guide takes a slightly different approach.
  • One-time login links – this is an option used by Slack, which sends one-time login links instead of asking users for passwords. It relies on the fact that your email is well guarded and you have access to it all the time. If your service is not accessed too often, you can use that approach instead of (rather than in addition to) passwords.
  • Limit login attempts – brute-force through the web UI should not be possible, so block login attempts after too many failures. One approach is to block them based on IP; the other is to block them based on the account being attempted. (Spring example here). Which one is better – I don’t know; both can actually be combined. Instead of fully blocking the attempts, you may add a captcha after, say, the 5th attempt. But don’t add the captcha for the first attempt – it is bad user experience. See the attempt-tracking sketch after this list.
  • Don’t leak information through error messages – you shouldn’t allow attackers to figure out if an email is registered or not. If an email is not found, report just “Incorrect credentials” upon login. On password reset, it may be something like “If your email is registered, you should have received a password reset email”. This is often at odds with usability – people don’t often remember the email they used to register, and the ability to check a number of them before getting in might be important. So this rule is not absolute, though it’s desirable, especially for more critical systems.
  • Make sure you use JWT only if it’s really necessary and be careful of the pitfalls.
  • Consider using a 3rd party authentication – OpenID Connect, OAuth by Google/Facebook/Twitter (but be careful with OAuth flaws as well). There’s an associated risk with relying on a 3rd party identity provider, and you still have to manage cookies, logout, etc., but some of the authentication aspects are simplified.
  • For high-risk or sensitive applications use 2-factor authentication. There’s a caveat with Google Authenticator though – if you lose your phone, you lose your accounts (unless there’s a manual process to restore it). That’s why Authy seems like a good solution for storing 2FA keys.
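
To make the password-storage item concrete, here is a minimal bcrypt sketch using Spring Security’s BCryptPasswordEncoder. The encoder class and its encode/matches methods are the actual Spring Security API (spring-security-crypto); the wrapper class, method names and work factor are illustrative and would need adapting to your application:

    import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
    import org.springframework.security.crypto.password.PasswordEncoder;

    public class PasswordStorage {

        // Work factor 12 is a reasonable starting point; raise it as hardware gets faster.
        private static final PasswordEncoder ENCODER = new BCryptPasswordEncoder(12);

        // At registration: store only the resulting hash, never the raw password.
        public static String hashForStorage(String rawPassword) {
            // bcrypt generates a per-user salt and embeds it in the returned string.
            return ENCODER.encode(rawPassword);
        }

        // At login: matches() re-applies the embedded salt and cost factor.
        public static boolean passwordMatches(String rawPassword, String storedHash) {
            return ENCODER.matches(rawPassword, storedHash);
        }
    }

Because the salt is embedded in the stored value (a string starting with something like $2a$12$…), no separate salt column is needed and two users with the same password still get different hashes.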
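
Several of the items above (forcing HTTPS, CSRF tokens, X-Frame-Options: DENY, logout handling) can be wired together in one place when a framework is used. The configuration sketch below uses Spring Security’s Java config as it looked around the time of writing (extending WebSecurityConfigurerAdapter); treat it as an illustration of which knobs exist rather than a drop-in configuration:

    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.config.annotation.web.builders.HttpSecurity;
    import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
    import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

    @Configuration
    @EnableWebSecurity
    public class SecurityConfig extends WebSecurityConfigurerAdapter {

        @Override
        protected void configure(HttpSecurity http) throws Exception {
            http
                // Redirect any plain HTTP request to HTTPS.
                .requiresChannel().anyRequest().requiresSecure()
                .and()
                // CSRF protection with per-session tokens (on by default, shown explicitly).
                .csrf()
                .and()
                // Send X-Frame-Options: DENY so the site cannot be framed.
                .headers().frameOptions().deny()
                .and()
                .formLogin()
                .and()
                // On logout, invalidate the server-side session and delete the session cookie.
                .logout().invalidateHttpSession(true).deleteCookies("JSESSIONID");
        }
    }

Marking the session cookie as Secure (and HttpOnly) is typically handled at the container or application-properties level (for example, server.session.cookie.secure=true in a Spring Boot application of that era), so the cookie is never sent over plain HTTP.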
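
For the stateless variant of the forgotten-password flow, the reset-link sketch below builds and verifies a userId:expiresTimestamp:hmac(…) token with the standard javax.crypto API. Only the JDK calls (Mac, SecretKeySpec, MessageDigest.isEqual, Base64) are real API; the secret-key handling and the class and method names are illustrative:

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.time.Instant;
    import java.util.Base64;

    public class PasswordResetLink {

        private final byte[] secretKey; // server-side secret, e.g. loaded from configuration

        public PasswordResetLink(byte[] secretKey) {
            this.secretKey = secretKey;
        }

        // Builds userId:expiresTimestamp:hmac(userId:expiresTimestamp)
        public String createToken(long userId, long validForSeconds) {
            long expires = Instant.now().getEpochSecond() + validForSeconds;
            String payload = userId + ":" + expires;
            return payload + ":" + hmac(payload);
        }

        // Returns the userId if the token is authentic and not expired, or -1 otherwise.
        public long verifyToken(String token) {
            try {
                String[] parts = token.split(":");
                if (parts.length != 3) {
                    return -1;
                }
                String payload = parts[0] + ":" + parts[1];
                // Constant-time comparison avoids leaking information through timing.
                byte[] expected = hmac(payload).getBytes(StandardCharsets.UTF_8);
                byte[] presented = parts[2].getBytes(StandardCharsets.UTF_8);
                if (!MessageDigest.isEqual(expected, presented)) {
                    return -1;
                }
                if (Instant.now().getEpochSecond() > Long.parseLong(parts[1])) {
                    return -1; // link has expired
                }
                return Long.parseLong(parts[0]);
            } catch (NumberFormatException e) {
                return -1; // malformed token
            }
        }

        private String hmac(String payload) {
            try {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
                byte[] raw = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
                return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
            } catch (Exception e) {
                throw new IllegalStateException("HMAC-SHA256 not available", e);
            }
        }
    }

The token would typically travel as a query parameter on the reset URL, and the “choose a new password” form is only shown after verifyToken() succeeds. Links built this way expire but are not strictly one-time; mixing something that changes on password reset (such as a hash of the current password hash) into the HMAC input is one way to invalidate them after use.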
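
Finally, the attempt-tracking sketch below counts login failures per key (an IP address, an account name, or both combined) entirely in memory. It is purely illustrative: a real deployment with more than one server would keep the counters in a shared store such as a database or cache, and the thresholds here are arbitrary:

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Tracks failed login attempts per key (an IP address, an account name, or both combined).
    public class LoginAttemptTracker {

        private static final int CAPTCHA_THRESHOLD = 5;  // show a captcha after the 5th failure
        private static final int BLOCK_THRESHOLD = 20;   // refuse further attempts after 20 failures
        private static final Duration WINDOW = Duration.ofMinutes(15);

        private static class Counter {
            int failures;
            final Instant windowStart = Instant.now();
        }

        private final Map<String, Counter> attempts = new ConcurrentHashMap<>();

        public void recordFailure(String key) {
            attempts.compute(key, (k, current) -> {
                Counter c = (current == null || expired(current)) ? new Counter() : current;
                c.failures++;
                return c;
            });
        }

        public void recordSuccess(String key) {
            attempts.remove(key); // a successful login clears the counter
        }

        public boolean captchaRequired(String key) {
            return failures(key) >= CAPTCHA_THRESHOLD;
        }

        public boolean blocked(String key) {
            return failures(key) >= BLOCK_THRESHOLD;
        }

        private int failures(String key) {
            Counter c = attempts.get(key);
            return (c == null || expired(c)) ? 0 : c.failures;
        }

        private boolean expired(Counter c) {
            return Instant.now().isAfter(c.windowStart.plus(WINDOW));
        }
    }

A login handler would consult blocked() before even checking the password, use captchaRequired() to decide whether to render a captcha on the form, and call recordFailure() or recordSuccess() after each attempt.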

I’m sure I’m missing something. And as you can see, it’s complicated. Sadly, we’re still at the point where the most common functionality – authenticating users – is so tricky and cumbersome that you almost always get at least some of it wrong.

The post User Authentication Best Practices Checklist appeared first on Bozho's tech blog.

Why the crypto-backdoor side is morally corrupt

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/why-crypto-backdoor-side-is-morally.html

Crypto-backdoors for law enforcement is a reasonable position, but the side that argues for it adds things that are either outright lies or morally corrupt. Every year, the amount of digital evidence law enforcement has available to solve crimes increases, yet they outrageously lie, claiming they are “going dark”, losing access to evidence. A weirder claim is that those who oppose crypto-backdoors are nonetheless ethically required to make them work. This is morally corrupt.

That’s the point of this Lawfare post, which claims:

What I am saying is that those arguing that we should reject third-party access out of hand haven’t carried their research burden. … There are two reasons why I think there hasn’t been enough research to establish the no-third-party access position. First, research in this area is “taboo” among security researchers. … the second reason why I believe more research needs to be done: the fact that prominent non-government experts are publicly willing to try to build secure third-party-access solutions should make the information-security community question the consensus view. 

This is nonsense. It’s like claiming we haven’t cured the common cold because researchers haven’t spent enough effort at it. When researchers claim they’ve tried 10,000 ways to make something work, it’s like insisting they haven’t done enough because they haven’t tried 10,001 times.
Certainly, half the community doesn’t want to make such things work. Any solution for the “legitimate” law enforcement of the United States means a solution for illegitimate states like China and Russia which would use the feature to oppress their own people. Even if I believe it’s a net benefit to the United States, I would never attempt such research because of China and Russia.
But computer scientists notoriously ignore ethics in pursuit of developing technology. That describes the other half of the crypto community who would gladly work on the problem. The reason they haven’t come up with solutions is because the problem is hard, really hard.
The second reason the above argument is wrong: it says we should believe a solution is possible because some outsiders are willing to try. But as Yoda says, do or do not, there is no try. Our opinions on the difficulty of the problem don’t change simply because people are trying. Our opinions change when people are succeeding. People are always trying the impossible, that’s not evidence it’s possible.
The paper cherry-picks things, like Intel CPU features, to make it seem like they are making forward progress. No. Intel’s SGX extensions are there for other reasons. Sure, it’s a new development, and new developments may change our opinion on the feasibility of law enforcement backdoors. But nowhere in talking about this new development have they actually proposed a solution to the backdoor problem. New developments happen all the time, and the pro-backdoor side is going to seize upon each and every one to claim that this, finally, solves the backdoor problem, without showing exactly how it solves the problem.

The Lawfare post does make one good argument, that there is no such thing as “absolute security”, and thus the argument is stupid that “crypto-backdoors would be less than absolute security”. Too often in the cybersecurity community we reject solutions that don’t provide “absolute security” while failing to acknowledge that “absolute security” is impossible.
But that’s not really what’s going on here. Cryptographers aren’t certain we’ve achieved even “adequate security” with current crypto regimes like SSL/TLS/HTTPS. Every few years we find horrible flaws in the old versions and have to develop new versions. If you steal somebody’s iPhone today, it’s so secure you can’t decrypt anything on it. But then if you hold it for 5 years, somebody will eventually figure out a hole and then you’ll be able to decrypt it — a hole that won’t affect Apple’s newer phones.
The reason we think we can’t get crypto-backdoors correct is simply because we can’t get crypto completely correct. It’s implausible that we can get the backdoors working securely when we still have so much trouble getting encryption working correctly in the first place.
Thus, we aren’t talking about “insignificantly less security”, we are talking about going from “barely adequate security” to “inadequate security”. Negotiating keys between you and a website is hard enough without simultaneously having to juggle keys with law enforcement organizations.

And finally, even if cryptographers do everything correctly, law enforcement itself hasn’t proven reliable. The NSA exposed its exploits (like the infamous ETERNALBLUE), and OPM lost all its security clearance records. If they can’t keep those secrets, it’s unreasonable to believe they can hold onto backdoor secrets. One of the problems cryptographers are expected to solve is partly this: to make it work in such a way that law enforcement is unlikely to lose its secrets.

Summary

This argument by the pro-backdoor side, that we in the crypto-community should do more to solve backdoors, is simply wrong. We’ve spent a lot of effort on this already. Many continue to work on this problem — the reason you haven’t heard much from them is that they haven’t had much success. It’s like blaming doctors for not doing more to work on interrogation drugs (truth serums). Sure, a lot of doctors won’t work on this because it’s distasteful, but at the same time, there are many drug companies that would love to profit from them. The reason they don’t exist is not because not enough money is being spent researching them, it’s because there is no plausible solution in sight.
Crypto-backdoors designed for law-enforcement will significantly harm your security. This may change in the future, but that’s the state of crypto today. You should trust the crypto experts on this, not lawyers.

Facebook and Cambridge Analytica

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/facebook_and_ca.html

In the wake of the Cambridge Analytica scandal, news articles and commentators have focused on what Facebook knows about us. A lot, it turns out. It collects data from our posts, our likes, our photos, things we type and delete without posting, and things we do while not on Facebook and even when we’re offline. It buys data about us from others. And it can infer even more: our sexual orientation, political beliefs, relationship status, drug use, and other personality traits — even if we didn’t take the personality test that Cambridge Analytica developed.

But for every article about Facebook’s creepy stalker behavior, thousands of other companies are breathing a collective sigh of relief that it’s Facebook and not them in the spotlight. Because while Facebook is one of the biggest players in this space, there are thousands of other companies that spy on and manipulate us for profit.

Harvard Business School professor Shoshana Zuboff calls it “surveillance capitalism.” And as creepy as Facebook is turning out to be, the entire industry is far creepier. It has existed in secret far too long, and it’s up to lawmakers to force these companies into the public spotlight, where we can all decide if this is how we want society to operate and — if not — what to do about it.

There are 2,500 to 4,000 data brokers in the United States whose business is buying and selling our personal data. Last year, Equifax was in the news when hackers stole personal information on 150 million people, including Social Security numbers, birth dates, addresses, and driver’s license numbers.

You certainly didn’t give it permission to collect any of that information. Equifax is one of those thousands of data brokers, most of them you’ve never heard of, selling your personal information without your knowledge or consent to pretty much anyone who will pay for it.

Surveillance capitalism takes this one step further. Companies like Facebook and Google offer you free services in exchange for your data. Google’s surveillance isn’t in the news, but it’s startlingly intimate. We never lie to our search engines. Our interests and curiosities, hopes and fears, desires and sexual proclivities, are all collected and saved. Add to that the websites we visit that Google tracks through its advertising network, our Gmail accounts, our movements via Google Maps, and what it can collect from our smartphones.

That phone is probably the most intimate surveillance device ever invented. It tracks our location continuously, so it knows where we live, where we work, and where we spend our time. It’s the first and last thing we check in a day, so it knows when we wake up and when we go to sleep. We all have one, so it knows who we sleep with. Uber used just some of that information to detect one-night stands; your smartphone provider and any app you allow to collect location data knows a lot more.

Surveillance capitalism drives much of the internet. It’s behind most of the “free” services, and many of the paid ones as well. Its goal is psychological manipulation, in the form of personalized advertising to persuade you to buy something or do something, like vote for a candidate. And while the individualized profile-driven manipulation exposed by Cambridge Analytica feels abhorrent, it’s really no different from what every company wants in the end. This is why all your personal information is collected, and this is why it is so valuable. Companies that can understand it can use it against you.

None of this is new. The media has been reporting on surveillance capitalism for years. In 2015, I wrote a book about it. Back in 2010, the Wall Street Journal published an award-winning two-year series about how people are tracked both online and offline, titled “What They Know.”

Surveillance capitalism is deeply embedded in our increasingly computerized society, and if the extent of it came to light there would be broad demands for limits and regulation. But because this industry can largely operate in secret, only occasionally exposed after a data breach or investigative report, we remain mostly ignorant of its reach.

This might change soon. In 2016, the European Union passed the comprehensive General Data Protection Regulation, or GDPR. The details of the law are far too complex to explain here, but some of the things it mandates are that personal data of EU citizens can only be collected and saved for “specific, explicit, and legitimate purposes,” and only with explicit consent of the user. Consent can’t be buried in the terms and conditions, nor can it be assumed unless the user opts in. This law will take effect in May, and companies worldwide are bracing for its enforcement.

Because pretty much all surveillance capitalism companies collect data on Europeans, this will expose the industry like nothing else. Here’s just one example. In preparation for this law, PayPal quietly published a list of over 600 companies it might share your personal data with. What will it be like when every company has to publish this sort of information, and explicitly explain how it’s using your personal data? We’re about to find out.

In the wake of this scandal, even Mark Zuckerberg said that his industry probably should be regulated, although he’s certainly not wishing for the sorts of comprehensive regulation the GDPR is bringing to Europe.

He’s right. Surveillance capitalism has operated without constraints for far too long. And advances in both big data analysis and artificial intelligence will make tomorrow’s applications far creepier than today’s. Regulation is the only answer.

The first step to any regulation is transparency. Who has our data? Is it accurate? What are they doing with it? Who are they selling it to? How are they securing it? Can we delete it? I don’t see any hope of Congress passing a GDPR-like data protection law anytime soon, but it’s not too far-fetched to demand laws requiring these companies to be more transparent in what they’re doing.

One of the responses to the Cambridge Analytica scandal is that people are deleting their Facebook accounts. It’s hard to do right, and doesn’t do anything about the data that Facebook collects about people who don’t use Facebook. But it’s a start. The market can put pressure on these companies to reduce their spying on us, but it can only do that if we force the industry out of its secret shadows.

This essay previously appeared on CNN.com.

EDITED TO ADD (4/2): Slashdot thread.

Streaming Joshua v Parker is Illegal But Re-Streaming is the Real Danger

Post Syndicated from Andy original https://torrentfreak.com/streaming-joshua-v-parker-is-illegal-but-re-streaming-is-the-real-danger-180329/

This Saturday evening, Anthony Joshua and Joseph Parker will lace up their gloves and do battle in one of the most important heavyweight bouts of recent times.

Joshua will put an unbeaten professional record and his WBA, IBF and IBO world titles on the line. Parker – also unbeaten professionally – will put his WBO belt up for grabs. It’s a mouthwatering proposition for fight fans everywhere.

While the collision will take place at the Principality Stadium in Cardiff in front of a staggering 80,000 people, millions more will watch the fight in front of the TV at home, having paid Sky Sports Box Office up to £24.95 for the privilege.

Of course, hundreds of thousands won’t pay a penny, instead relying on streams delivered via illicit Kodi addons, Android apps, and IPTV services. While these options are often free, quality and availability on the night is far from guaranteed. Even those paying for premium ‘pirate’ access have been let down at the last minute, although in the grand scheme of things that’s relatively unlikely.

Despite the uncertainty, this morning the Police Intellectual Property Crime Unit and Federation Against Copyright Theft took the unusual step of issuing a joint warning to people thinking of streaming the fight to their homes illegally.

“Consumers need to be aware that streaming without the right permissions or subscriptions is no longer a grey area,” PIPCU and FACT said in a statement.

“In April last year the EU Court of Justice ruled that not only was selling devices allowing access to copyrighted content illegal, but using one to stream TV, sports or films without an official subscription is also breaking the law.”

The decision, which came as part of the BREIN v Filmspeler case, found that obtaining a copyright-protected work “from a website belonging to a third party offering that work without the consent of the copyright holder” was an illegal act.

While watching the fight via illicit streams is undoubtedly illegal, tracking people who simply view content is extremely difficult and there hasn’t been a single prosecution in the UK (or indeed anywhere else that we’re aware of) against anyone doing so.

That being said, those who make content available for others to watch illegally are putting themselves at considerable risk. While professional pirate re-streamers tend to have better security, Joe Public, who points his phone at his TV on Saturday night to stream the fight on Facebook, should take a moment to consider his actions.

In January, Sky revealed that 34-year-old Craig Foster had been caught by the company after someone re-streamed the previous year’s Anthony Joshua vs Wladimir Klitschko fight on Facebook Live using Foster’s Sky account.

Foster had paid Sky for the fight but he claims that a friend used his iPad to record the screen and re-stream the fight to Facebook. Sky, almost certainly using tracking watermarks (example below), traced the ‘pirate’ stream back to Foster’s set-top box.

Watermarks during the Mayweather v McGregor fight

The end result was a technical knockout for Sky who suspended Foster’s Sky subscription and then agreed not to launch a lawsuit providing he paid the broadcaster £5,000.

“The public should be aware that misusing their TV subscriptions has serious repercussions,” said PIPCU and FACT referring to the case this morning.

“For example, customers found to be illegally sharing paid-for content can have their subscription account terminated immediately and can expect to be prosecuted and fined.”

While we know for certain this has happened at least once, TorrentFreak contacted FACT this morning for details on how many Sky subscribers have been caught, warned, and/or prosecuted by Sky in this manner. FACT told us they don’t have any figures but offered the following statement from CEO Kieron Sharp.

“Not only is FACT working closely with broadcasters and rights owners to identify the original source of illegally re-streamed content, but with support from law enforcement, government and social media platforms, we are tightening the net on digital piracy,” Sharp said.

Finally, it’s also worth keeping in mind that even when people live-stream an illegal yet non-watermarked stream to Facebook, they can still be traced by Sky.

As revelations this week have shown only too clearly, Facebook knows a staggering amount about its users so tracking an illegal stream back to a person would be child’s play for a determined rightsholder with a court order.

While someone attracting a couple of dozen viewers might not be at a major risk of repercussions, a viral stream might require the use of a calculator to assess the damages claimed by Sky. Like boxing, this kind of piracy is best left to the professionals to avoid painful and unnecessary trauma.

EU Content Rules to Improve Access & Reduce Piracy Start April 1

Post Syndicated from Andy original https://torrentfreak.com/eu-content-rules-to-improve-access-reduce-piracy-start-april-1-180328/

Any subscriber of a service like Netflix will tell you that where you live can have a big impact on the content made available. Customers in the US enjoy large libraries while less populous countries are treated less well.

For many years and before Netflix largely closed the loophole, customers would bypass these restrictions, using VPNs to trick Netflix into thinking they were elsewhere. Some wouldn’t bother with the complication, choosing to pirate content instead.

But for citizens of the EU, things were even more complex. While the EU mandates free movement of people, the same can’t be said of licensing deals. A viewer in the Netherlands could begin watching a movie at home, then travel to France for a weekend break only to find that the content he paid for is not available there, or available only in French.

Last May, this problem was addressed by the European Parliament with an agreement to introduce new ‘Cross-border portability’ rules that will give citizens the freedom to enjoy their media wherever they are in the EU, without having to resort to piracy or VPNs – if they can find one that still works for any length of time with the service.

Now, almost 11 months on, the rules are about to come into force. From Sunday, content portability in the EU will become a reality.

“Citizens are at the core of all our digital initiatives. As of 1 April, wherever you are traveling to in the EU, you will no longer miss out on your favorite films, TV series, sports broadcasts, games or e-books, that you have digitally subscribed to at home,” European Commission Vice-President Andrus Ansip said in a statement.

“Removing the boundaries that prevented Europeans from traveling with digital media and content subscriptions is yet another success of the Digital Single Market for our citizens, following the effective abolition of roaming charges that consumers all over Europe have enjoyed since June 2017.”

This is how it will work. Consumers in the EU who buy or subscribe to films, sports broadcasts, music, e-books or games in their home Member States will now be able to access this content when they reside temporarily in another EU country.

So, if a person in the UK purchases Netflix to gain access to a TV show to watch in their home country, Netflix will have to add this content to the customer’s library so they can still access it wherever they travel in the EU, regardless of its general availability elsewhere.

“[P]roviders of paid-for online content services (such as online movie, TV or music streaming services) have to provide their subscribers with the same service wherever the subscriber is in the EU,” the Commission explains.

“The service needs to be provided in the same way in other Member States, as in the Member State of residence. So for Netflix for example, you will have access to the same selection (or catalog) anywhere in the EU, if you are temporarily abroad, just as if you were at home.”

The same should hold true for all other digital content. If it’s available at home, it must be made available elsewhere in Europe in order to comply with the regulations. In doing so, providers are allowed some freedom, provided it’s in the customer’s favor. If they want to give customers additional access to full home and overseas catalogs when they’re traveling, for example, that is fine.

There’s also a plus in there for content providers. While a company like Netflix will sometimes acquire rights on a per-country basis, when a subscriber travels within the EU the provider will not be required to obtain licenses for the other territories where that subscriber stays temporarily.

There is, however, a question of what “temporarily” means since it’s not tightly defined in the regulations. The term will cover business trips and holidays, for example, but providers will be required to clearly inform their customers of their precise terms and conditions.

Providers will also need to determine a customer’s home country, something that will be established when a customer signs up or renews his contract. This can be achieved in a number of ways, including via payment details, a contract for an Internet or telephone connection, verifying a home address, or using a simple IP address check.

For providers of free online services, which are allowed to choose whether they want to be included in the new rules or not, there are special conditions in place.

“Once they opt-in and allow portability under the Regulation, all rules will apply to them in the same manner as for the paid services. This means that the subscribers will have to log-in to be able to access and use content when temporarily abroad, and service providers will have to verify the Member State of residence of the subscriber,” the Commission explains.

“If providers of free of charge online content services decide to make use of the new portability rules, they are required to inform their subscribers about this decision prior to providing the service. Such information could, for example, be announced on the providers’ websites.”

The good news for consumers is that providers will not be able to charge for offering content portability and if they don’t provide it as required, they’ll be in breach of EU rules. The EU believes that all providers are ready to meet the standard – the public will find out on Sunday.

The new rules can be found here (pdf)

SoFi, the underwater robotic fish

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/robotic-fish/

With the Greenland shark finally caught on video for the very first time, scientists and engineers are discussing the limitations of current marine monitoring technology. One significant advance comes from the CSAIL team at Massachusetts Institute of Technology (MIT): SoFi, the robotic fish.

Video: A Robotic Fish Swims in the Ocean – the untethered SoFi robot. More info: http://bit.ly/SoFiRobot Paper: http://robert.katzschmann.eu/wp-content/uploads/2018/03/katzschmann2018exploration.pdf

Last week, the Computer Science and Artificial Intelligence Laboratory (CSAIL) team at MIT unveiled SoFi, “a soft robotic fish that can independently swim alongside real fish in the ocean.”

MIT CSAIL underwater fish SoFi using Raspberry Pi

Directed by a Super Nintendo controller and acoustic signals, SoFi can dive untethered to a maximum of 18 feet for a total of 40 minutes. A Raspberry Pi receives input from the controller and amplifies the ultrasound signals for SoFi via a HiFiBerry. The controller, Raspberry Pi, and HiFiBerry are sealed within a waterproof, cast-moulded silicone membrane filled with non-conductive mineral oil, allowing for underwater equalisation.

The ultrasound signals, received by a modem within SoFi’s head, control everything from direction, tail oscillation, pitch, and depth to the onboard camera.

As explained on MIT’s news blog, “to make the robot swim, the motor pumps water into two balloon-like chambers in the fish’s tail that operate like a set of pistons in an engine. As one chamber expands, it bends and flexes to one side; when the actuators push water to the other channel, that one bends and flexes in the other direction.”

Ocean exploration

While we’ve seen many autonomous underwater vehicles (AUVs) using onboard Raspberry Pis, SoFi’s ability to roam untethered with a wireless waterproof controller is an exciting achievement.

“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time. We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.” – CSAIL PhD candidate Robert Katzschmann

As the MIT news post notes, SoFi’s simple, lightweight setup of a single camera, a motor, and a smartphone lithium polymer battery sets it apart from existing bulky AUVs that require large motors or support from boats.

For more in-depth information on SoFi and the onboard tech that controls it, find the CSAIL team’s paper here.

The post SoFi, the underwater robotic fish appeared first on Raspberry Pi.