Tag Archives: Facebook

Selecting and migrating a Facebook API version for Amazon Cognito

Post Syndicated from James Li original https://aws.amazon.com/blogs/security/selecting-and-migrating-a-facebook-api-version-for-amazon-cognito/

On May 1, 2020, Facebook will remove version 2.12 of the Facebook Graph API. This change impacts Amazon Cognito customers who are using version 2.12 of the Facebook Graph API in their identity federation configuration. In this post, I explain how to migrate your Amazon Cognito configuration to use the latest version of the Facebook API.

Amazon Cognito provides authentication, authorization, and user management for your web and mobile apps. Your users can sign in directly with a user name and password, or through a third party, such as Facebook, Amazon, Google, or Apple.

An Amazon Cognito User Pool is a user directory that helps you manage identities. It’s also where users can sign into your web or mobile app. User pools support federation through third-party identity providers, such as Google, Facebook, and Apple, as well as Amazon’s own Login with Amazon. Additionally, federation can use identity providers that work with OpenID Connect (OIDC) or Security Assertion Markup Language (SAML) 2.0. Federating a user through the third-party identity provider streamlines the user experience, because users don’t need to sign up directly for your web or mobile app.

Amazon Cognito User Pools now let you select the version of the Facebook API used for federated login. Previously, version 2.12 of Facebook’s Graph API was automatically used for federated login and to retrieve user attributes from Facebook. By selecting a specific version of Facebook’s API, you can upgrade versions, test the changes, and revert to an earlier version if necessary.

To help ease this transition for our customers, we are doing two phases of mitigation. In the first phase, already underway, you can choose which Facebook version to use for federated login. You can test out the new API version and discover the impact upgrading has on your application. If you must make changes, you can revert to the older version, and you have until May 1, 2020 to perform updates. In the second phase, starting sometime in April, we will automatically migrate customers to version 5.0 if they haven’t selected an API version.

There are benefits to having access to newer versions of the Facebook APIs. For instance, customers who use version 5.0 can store a Facebook access token and use it to call the Messenger API with webhook events, which is useful for users who react or reply to messages from businesses. You can also use business asset groups to manage a large number of assets with Facebook API v4.0 and the Facebook Marketing API.

How to use different Facebook API versions with Amazon Cognito

These instructions assume you’re familiar with Amazon Cognito User Pools and the User Pool clients. You also need a User Pool domain already set up with the appropriate settings for a hosted UI. If you haven’t set up a user pool yet, you can find the instructions in the Amazon Cognito Developer Guide. You need your User Pool domain information when you set up your Facebook app.

Set up the Facebook app

  1. Go to the Facebook for Developers website and sign in, or sign up if you do not have an account. Create a new Facebook app if needed, or reuse an existing one.
  2. Navigate to the App Dashboard and select your App.
  3. On the navigation menu, select Products, then Facebook Login, and then Settings.
  4. In the Valid OAuth Redirect URLs field, add your user pool domain with the endpoint /oauth2/idpresponse. As shown in Figure 1, it should look like https://<yourDomainPrefix>.auth.<region>.amazoncognito.com/oauth2/idpresponse.

    Figure 1

  5. In the navigation menu, select Settings, then choose Basic.
  6. Note your App ID and your App Secret for the next step.
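As a quick sanity check for step 4, the redirect URL can be assembled programmatically. The domain prefix and region below are hypothetical placeholders, not values from this post:

```python
# Assemble the Cognito OAuth redirect URL that Facebook must allow.
# "my-app" and "us-east-1" are hypothetical values; substitute your own.
domain_prefix = "my-app"
region = "us-east-1"

redirect_url = (
    f"https://{domain_prefix}.auth.{region}.amazoncognito.com"
    "/oauth2/idpresponse"
)
print(redirect_url)
```

The /oauth2/idpresponse path is fixed by Amazon Cognito; only the domain prefix and region vary per user pool.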

Adding your Facebook app to your Amazon Cognito user pool

Next, you need to add your Facebook app to your user pool. You can do this through either the AWS Management Console or the command line interface (CLI); I will show you both methods.

Adding the Facebook app to a user pool using the AWS Management Console

    1. On the AWS Management Console, navigate to Amazon Cognito, then select Manage Pools. From the list that shows up, select your user pool.
    2. On the navigation menu, select Federation, then Identity Providers.
    3. Select Facebook. Enter the Facebook App ID and App Secret from step 6 above. Then, under Authorize Scopes, enter the appropriate scopes.
    4. In the navigation menu, select Federation and go to Attributes Mapping.
    5. Now select the version of the Facebook API you want to use. By default, the highest available version (v6.0) is pre-selected for newly created Facebook identity providers.
    6. After choosing your API version and attribute mapping, click Save.

 

Figure 2

Adding the Facebook app to a user pool through the CLI

The command below adds the Facebook app configuration to your user pool. Use the values for <USER_POOL_ID>, <FACEBOOK_APP_ID>, and <FACEBOOK_APP_SECRET> that you noted earlier:


aws cognito-idp create-identity-provider --cli-input-json '{
    "UserPoolId": "<USER_POOL_ID>",
    "ProviderName": "Facebook",
    "ProviderType": "Facebook",
    "ProviderDetails": {
        "client_id": "<FACEBOOK_APP_ID>",
        "client_secret": "<FACEBOOK_APP_SECRET>",
        "authorize_scopes": "email",
        "api_version": "v5.0"
    },
    "AttributeMapping": {
        "email": "email"
    }
}'

The command below updates the Facebook app configuration in your user pool. Use the values for <USER_POOL_ID>, <FACEBOOK_APP_ID>, and <FACEBOOK_APP_SECRET> that you noted earlier:


aws cognito-idp update-identity-provider --cli-input-json '{
    "UserPoolId": "<USER_POOL_ID>",
    "ProviderName": "Facebook",
    "ProviderType": "Facebook",
    "ProviderDetails": {
        "client_id": "<FACEBOOK_APP_ID>",
        "client_secret": "<FACEBOOK_APP_SECRET>",
        "authorize_scopes": "email",
        "api_version": "v5.0"
    },
    "AttributeMapping": {
        "email": "email"
    }
}'
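If you script your pool configuration rather than pasting JSON, the same request body can be built in code, for example to pass to boto3's create_identity_provider or update_identity_provider. This is a sketch with the same angle-bracket placeholders as the CLI examples, not a live call:

```python
import json

# Build the identity-provider request body in code. The angle-bracket
# placeholders stand in for your real IDs, as in the CLI examples above.
provider_request = {
    "UserPoolId": "<USER_POOL_ID>",
    "ProviderName": "Facebook",
    "ProviderType": "Facebook",
    "ProviderDetails": {
        "client_id": "<FACEBOOK_APP_ID>",
        "client_secret": "<FACEBOOK_APP_SECRET>",
        "authorize_scopes": "email",
        "api_version": "v5.0",  # pin the Graph API version explicitly
    },
    "AttributeMapping": {"email": "email"},
}

print(json.dumps(provider_request, indent=4))
```

Serializing this dict reproduces the --cli-input-json payload, so the console, CLI, and SDK paths all end up with the same configuration.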

You can verify that the create or update was successful by checking the version returned in the describe-identity-provider call:


aws cognito-idp describe-identity-provider --user-pool-id "<USER_POOL_ID>" --provider-name "Facebook"
{
    "IdentityProvider": {
        "UserPoolId": "<USER_POOL_ID>",
        "ProviderName": "Facebook",
        "ProviderType": "Facebook",
        "ProviderDetails": {
            "api_version": "v5.0",
            "attributes_url": "https://graph.facebook.com/v5.0/me?fields=",
            "attributes_url_add_attributes": "true",
            "authorize_scopes": "email",
            "authorize_url": "https://www.facebook.com/v5.0/dialog/oauth",
            "client_id": "<FACEBOOK_APP_ID>",
            "client_secret": "<FACEBOOK_APP_SECRET>",
            "token_request_method": "GET",
            "token_url": "https://graph.facebook.com/v5.0/oauth/access_token"
        },
        "AttributeMapping": {
            "email": "email",
            "username": "id"
        },
        ...
    }
}
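When automating this verification, the version can be pulled straight out of the describe-identity-provider response. The truncated sample dict below mirrors the output shown above and is illustrative only:

```python
# Minimal check against a truncated, illustrative describe-identity-provider
# response like the one shown above.
response = {
    "IdentityProvider": {
        "ProviderName": "Facebook",
        "ProviderDetails": {
            "api_version": "v5.0",
            "authorize_url": "https://www.facebook.com/v5.0/dialog/oauth",
        },
    }
}

api_version = response["IdentityProvider"]["ProviderDetails"]["api_version"]
if api_version != "v5.0":
    raise RuntimeError(f"expected v5.0, found {api_version}")
print(api_version)
```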

Use the updated configuration with the Cognito Hosted UI:

  1. On the AWS Management Console for Amazon Cognito, navigate to your user pool and open the navigation menu. Under App integration, go to App client settings, find your app, and select Facebook under Enabled Identity Providers.
  2. Select Launch Hosted UI.
  3. Select Continue with Facebook.
  4. If you aren’t automatically signed in at this point, the URL displays your selected version. For example, if v5.0 was selected, the URL starts with https://www.facebook.com/v5.0/dialog/oauth. To disable automatic sign-in, remove your app from Facebook so that sign-in prompts for permissions again. Follow these instructions to learn more.
  5. If successful, the browser returns to your redirect URL with a code issued by Amazon Cognito.

Notes on testing

Facebook will redirect your API call to a more recent version if your app is not allowed to call the one you specified. For example, if you created your Facebook app in November 2018, the latest available version at the time was version 3.2. If you were to call the Graph API using version 3.0, the call would be upgraded to version 3.2. You can tell which version you are actually using by referring to the facebook-api-version header in Facebook’s response headers.
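In code, that header can be compared against the version you asked for. The header values below are illustrative, not captured from a real response:

```python
# Compare the Graph API version you requested with the version Facebook
# actually served, as reported in the facebook-api-version response header.
# The header values here are illustrative.
requested_version = "v3.0"
response_headers = {
    "facebook-api-version": "v3.2",
    "content-type": "application/json; charset=UTF-8",
}

served_version = response_headers["facebook-api-version"]
if served_version != requested_version:
    print(f"Call was auto-upgraded from {requested_version} to {served_version}")
```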

If an attribute is not marked as required and is missing from Facebook, federation still succeeds, but the attribute is empty in the user pool. Facebook has deprecated various fields since Facebook federation was launched for Amazon Cognito. For instance, the gender and birthday attributes must now be explicitly requested under their own separate permissions rather than being granted by default, and the cover attribute has been deprecated. You can confirm that an attribute federated successfully on the user’s detail page in the user pools section of the AWS Management Console for Amazon Cognito. As part of your migration, validate that the attributes you rely on are passed in the way you expect.

Summary

In this post, I explained how to select the version of Facebook’s Graph API for federated login. If you already use Amazon Cognito for federated login with Facebook, you should migrate to the most recent version as soon as possible. Use this process to make sure you get all the attributes you need for your application. New customers can immediately take advantage of the latest API version.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon Cognito Forums or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

James Li

James is a Software Development Engineer at Amazon Cognito. He values operational excellence and security. James is from Toronto, Canada, where he has worked as a software developer for 4 years.

Facebook’s Download-Your-Data Tool Is Incomplete

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/facebooks_downl.html

Privacy International has the details:

Key facts:

  • Despite Facebook’s claims, “Download Your Information” doesn’t provide users with a list of all advertisers who uploaded a list with their personal data.
  • As a user this means you can’t exercise your rights under GDPR because you don’t know which companies have uploaded data to Facebook.
  • Information provided about the advertisers is also very limited (just a name and no contact details), preventing users from effectively exercising their rights.
  • The recently announced Off-Facebook feature comes with similar issues, giving little insight into how advertisers collect your personal data and how to prevent such data collection.

When I teach cybersecurity tech and policy at the Harvard Kennedy School, one of the assignments is to download your Facebook and Google data and look at it. Many are surprised at what the companies know about them.

Facebook Sued Over Failure to Respond to DMCA Takedown Notices

Post Syndicated from Ernesto original https://torrentfreak.com/facebook-sued-over-failure-to-respond-to-dmca-takedown-notices-200219/

Seattle-based artist Christopher Boffoli is no stranger when it comes to suing tech companies for aiding copyright infringement of his work.

Over the years he has filed lawsuits against Cloudflare, Twitter, Google, Pinterest, Imgur, and others. All these cases were eventually dismissed, presumably after both sides resolved the matter behind the scenes.

While no settlement details have been made public, it’s likely that the photographer has been getting something in return, as he filed a similar case this week. The latest target is yet another familiar Silicon Valley name: Facebook.

In a brief complaint filed at the District Court for the Western District of Washington, Boffoli accuses the social media platform of failing to remove copyright infringing photos. This, despite the claim that the photographer reported dozens of links to unauthorized copies of his work on Facebook between August and October of last year.

Facebook initially replied to these notices stating that the content had been removed, but that wasn’t the case. After more than three months, the pirated photos were still online, the complaint says.

“As late as January 9, 2020 — more than 100 days after receiving Boffoli’s first notice — Facebook had not removed or disabled access to the Infringing Content,” Boffoli’s attorney writes.

After the attorney alerted Facebook about the problem, the material was eventually removed last month. Apparently, it remained online all this time due to a technical error.

“On or about January 30, 2020, Facebook removed or disabled access to the Infringing Content only after communication from Boffoli’s attorney. Facebook admitted it failed to previously remove the material despite notice and stated that its failure to do so was due to a technical error,” the complaint explains.

By then it was already too late, however. Instead of accepting the error, Boffoli has now taken the matter to court where he demands actual or statutory damages for the copyright infringements. With at least four photos in the lawsuit, the potential damages are more than half a million dollars.

In addition, the photographer requests an injunction to prevent future copyright infringements and wants Facebook to destroy all copies that it has in its possession.

The timing of the notices is interesting as it coincides with another incident involving the photographer. Last September we reported that Facebook had removed one of our articles, which used a meme based on a public domain image of Boffoli.

The meme in question referenced the backlash after the photographer filed a lawsuit against Imgur in 2014. When that case was made public, someone responded by uploading 20,754 of his photos to The Pirate Bay. Ironically, Facebook did remove the image and the link to our article years later, even though it was clearly fair use.

That incident shows that Facebook did respond to takedown notices. According to the new lawsuit, however, that wasn’t always the case.

—

A copy of the complaint is available here (pdf) and the email exhibits can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Instagram Uses DMCA Complaint to Protect Users’ “Copyrighted Works”

Post Syndicated from Andy original https://torrentfreak.com/instagram-uses-dmca-complaint-to-protect-users-copyrighted-works-200130/

DMCA notices are sent in their millions every single week, mainly to restrict access to copyright-infringing content. These notices usually target the infringing content itself or links to the same, but there are other options too.

The anti-circumvention provisions of the DMCA allow companies that own or provide access to copyrighted works to target tools and systems that facilitate access to that content in an unauthorized manner. Recent examples can be found in the war currently being waged by the RIAA against various YouTube-ripping sites, which provide illicit access to copyright works, according to the industry group.

This week Facebook-owned Instagram entered the arena when it filed a DMCA notice against code repository Github. It targeted Instagram-API, an independent Instagram API created by a Spain-based developer known as ‘mgp25’. Instagram claims that at least in part, the notice was filed to prevent unauthorized access to its users’ posts, which can contain copyrighted works.

“The Company maintains technological measures to control access to and protect Instagram users’ posts, which are copyrighted works. This notice relates to GitHub users offering, providing, and/or trafficking in technologies, products, and/or services primarily designed to circumvent the Company’s technological measures,” the complaint begins.

According to Instagram, Instagram-API is code that was designed to emulate the official Instagram mobile app, allowing users to send and receive data, including copyrighted content, through Instagram’s private API. It’s a description that is broadly confirmed by the tool’s creator.

“The API is more or less like a replica of the mobile app. Basically, the API mimics the requests Instagram does, so if you want to check someone’s profile, the mobile app uses a certain request, so through basic analysis we can emulate that request and be able to get the profile info too. The same happens with other functionalities,” mgp25 informs TorrentFreak.

While Instagram clearly views the tool as a problem, mgp25 says that it was originally created to solve one.

“Back in the day I wasn’t able to use Instagram on my phone, and I wanted something to upload photos and communicate with my friends. That’s why I made the API in the first place,” he explains.

There are no claims from Instagram that Instagram-API was developed using any of its copyrighted code. Indeed, the tool’s developer says that it was the product of reverse-engineering, something he believes should be protected in today’s online privacy minefield.

“I think reverse engineering should be exempt from the DMCA and should be legal. By reverse engineering we can verify whether apps are violating user privacy, stealing data, backdooring your device or doing even worse things,” he says.

“Without reverse engineering we wouldn’t know whether the software was a government spy tool. Reverse engineering should be a right every user should have, not only to provide interoperability functionalities but to assure their privacy rights are not being violated.”

While many would consider that to be a reasonable statement, Instagram isn’t happy with the broad abilities of Instagram-API. In addition to the above-mentioned features, it also enables access to “Instagram users’ copyrighted works in manners that exceed the scope of access and functionality that would be permitted by a user with a legitimate, authorized Instagram account,” the company adds.

After the filing of the complaint, it took a couple of days for Github to delete the project but it is now well and truly down. The same is true for more than 1,500 forks of Instagram-API that were all wiped out after their URLs were detailed in the same complaint.

Regardless of how mgp25 feels about the takedown, the matter will now come to a close. The developer says he has no idea how far Instagram and Facebook are prepared to go in order to neutralize his software so he won’t be filing a counter-notice to find out.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Facebook Sees Copyright Abuse as One of the Platform’s Main Challenges

Post Syndicated from Ernesto original https://torrentfreak.com/facebook-sees-copyright-abuse-as-one-of-the-platforms-main-challenges-200108/

When it comes to targeting infringement, Facebook has rolled out a few anti-piracy initiatives over recent years.

In addition to processing regular takedown requests, the company has a “Rights Manager” tool that detects infringing material automatically and allows owners to take down or monetize the content.

In a recent meeting organized by the European Commission, Facebook explained in detail how this automated system works. The meeting was organized to create a dialogue between various parties about possible solutions for the implementation of Article 17.

In Facebook’s presentation Dave Axelgard, product Manager for Rights Manager, explained how automated matching of copyrighted content takes place on the social media network. He also detailed what actions rightsholders can take in response, and how users can protest misuse and abuse of the system.

The EU meeting was attended by a wide range of parties. In addition to copyright holders, it also included various people representing digital rights organizations. Facebook made it clear that it keeps the interests of all sides in mind. It specifically highlighted, however, that abuse of Rights Manager is a serious concern.

“We spend much of our time building systems to avoid blocking legitimate content,” Axelgard mentioned during his presentation.

“The way that inappropriate blocks occur is when rightsholders who gain access to Rights Manager despite our application process attempt to upload content to the tool that they do not own.”

Another form of overblocking that takes place is when copyright holders upload content that they don’t own. This can happen by mistake when a compilation video is added, which also includes content that’s not theirs.

Facebook works hard to catch and prevent these types of misuse and abuse, to ensure that its automated detection system doesn’t remove legitimate content. This is also something to keep in mind for the implementation of possible ‘upload filters’ with the introduction of Article 17.

“Misuse is a significant issue and after operating Rights Manager for a number of years, we can tell you it is one of the most sensitive things that need to be accounted for in a proportionate system,” Axelgard says.

Facebook tries to limit abuse through a variety of measures. The company limits access to its Rights Manager tool to a select group of verified copyright holders. In addition, it always requires playable reference files, so all claims can be properly vetted.

The social media network also limits the availability of certain automated actions, such as removal or blocking, to a subset of Rights Manager users. This is in part because some smaller rightsholders may not fully understand copyright, which can lead to errors.

Finally, Facebook points out that misuse of its Rights Manager tool constitutes a breach of its Terms of Service. This allows the company to terminate rightsholders that repeatedly make mistakes.

“If we find that Rights Manager is being misused, then under our Rights Manager terms we have the ability to terminate someone’s access to the tool. We really do want to stress how important it is that platforms have the ability to adjust access and functionality related to these powerful technologies to avoid misuse,” Axelgard notes.

The strong focus on misuse was welcomed by digital rights groups, including Communia. However, it also raised some eyebrows among rightsholders.

Mathieu Moreuil of the English Premier League, who represented the Sports Rights Owner Coalition, asked Facebook whether the abuse of Rights Manager really is the company’s main challenge.

“I think it’s definitely one of our main challenges,” Axelgard confirmed, while noting that Facebook also keeps the interests of rightsholders in mind.

Overall, Facebook carefully explained the pros and cons of its system. Whether it is an ideal tool to implement Article 17 in EU countries is another question. In its current form Rights Manager isn’t, as it doesn’t allow all copyright holders to join in.

Also, Rights Manager works with audio and video, but not with digital images, which is another major restriction.

On the other hand, there are pitfalls from a consumer perspective as well. Automated systems may be very good at detecting copyrighted content, but Facebook confirmed that they currently can’t make a determination in respect of copyright exemptions such as parody and fair use.

“Our matching system is not able to take context into account. It’s just seeking to identify whether or not two pieces of content matched to one another,” Axelgard said, responding to a question from Communia’s Paul Keller.

This shortcoming of automated filters was also confirmed by Audible Magic, the popular music matching service that’s used by dozens of large companies to detect copyright infringements.

“Copyright exceptions require a high degree of intellectual judgment and an understanding and appreciation of context. We do not represent that any technology can solve this problem in an automated fashion. Ultimately these types of determinations must be handled by human judgment,” Audible Magic CEO Vance Ikezoye said.

As noted by Communia, the most recent stakeholder meeting once again showed that automated content recognition systems are extremely powerful and very limited at the same time.

If any of these technologies become the basis of implementing Europe’s Article 17 requirements, these shortcomings should be kept in mind. Or as Facebook said, a lot of time and effort should go into preventing legitimate content being blocked.

A video of the full stakeholder meeting is available on the European Commission’s website. A copy of Facebook’s slides is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Texas Man Sued for Selling Pirate Boxes Advertised on Facebook

Post Syndicated from Ernesto original https://torrentfreak.com/texas-man-sued-for-selling-pirate-boxes-advertised-on-facebook-191210/

ABS-CBN, the largest media and entertainment company in the Philippines, is continuing its legal campaign against piracy.

Over the past several years, the company has singled out dozens of streaming sites that offer access to ‘Pinoy’ content without permission, both in the US and abroad.

While these traditional sites remain a key focus for the company, ABS-CBN is expanding its scope in the US by going after an alleged seller of pirate streaming boxes.

In a complaint filed at a federal court in Texas, the company accuses local resident Anthony Brown of selling and promoting pirate devices through the Life for Greatness website. By doing so, the man violates ABS-CBN’s rights, the company stresses.

“Defendant has been engaged in a scheme to, without authorization, sell Pirate Equipment that retransmits ABS-CBN’s programming to his customers as Pirate Services,” the complaint, filed at the Southern District of Texas Court, reads.

The media company notes that its own investigators purchased pirate equipment from Brown, which was then shipped from within Texas. These orders were likely placed at the Life for Greatness website, which remains online at the time of writing and is operated by ‘1700 Cuts Technology.’

In addition, the complaint notes that the pirate devices were advertised and promoted through various Facebook pages. This includes two personal profiles and a business page for “Lifeforgreatness.”

“Defendant has used several Facebook.com social media pages to advertise and promote the availability of the Pirate Equipment for sale by Defendant,” the complaint notes.

The Facebook pages also remain online today. And indeed, the Lifeforgreatness account is used to advertise what appear to be pirate streaming boxes and subscriptions. This is in part carried out by utilizing footage that shows the logos of ABS-CBN and other major entertainment outfits.

In a Facebook post, the box vendor writes that cable companies overcharge customers each and every day. By switching to one of the advertised boxes, people can cut their bills and still get the same channels, the post adds.

“The box automatically updates on its own as well as provides content that you are currently paying between $4.99 to $300.00 a month for. The Smart to box have over 500,000 movies, TV shows and Live TV from every country the world including the USA,” the post adds.

This is not an isolated incident. There are hundreds of similar businesses that (re)sell pirate boxes and subscriptions while advertising them on social media. The defendant, in this case, seems to be a relatively small fish with just a few dozen Facebook likes.

However, that doesn’t mean that ABS-CBN is holding back when it comes to its demands.

The media company requests hundreds of thousands in damages for providing unauthorized access to its communication signals, which violates the Communications Act. In addition, it asks for $2 million in statutory damages for every trademark infringement.

Interestingly, there is no copyright damages claim. However, the company does want the seller to halt his infringing activities and requests the court to impound the pirate devices.

A copy of the complaint filed by ABS-CBN at the Southern District of Texas Court is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Reforming CDA 230

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/12/reforming_cda_2.html

There’s a serious debate on reforming Section 230 of the Communications Decency Act. I am in the process of figuring out what I believe, and this is more a place to put resources and listen to people’s comments.

The EFF has written extensively on why it is so important and dismantling it will be catastrophic for the Internet. Danielle Citron disagrees. (There’s also this law journal article by Citron and Ben Wittes.) Sarah Jeong’s op-ed. Another op-ed. Another paper.

Here are good news articles.

Reading all of this, I am reminded of this decade-old quote by Dan Geer. He’s addressing Internet service providers:

Hello, Uncle Sam here.

You can charge whatever you like based on the contents of what you are carrying, but you are responsible for that content if it is illegal; inspecting brings with it a responsibility for what you learn.

-or-

You can enjoy common carrier protections at all times, but you can neither inspect nor act on the contents of what you are carrying and can only charge for carriage itself. Bits are bits.

Choose wisely. No refunds or exchanges at this window.

We can revise this choice for the social-media age:

Hi Facebook/Twitter/YouTube/everyone else:

You can build a communications service based on inspecting user content and presenting it as you want, but that business model also conveys responsibility for that content.

-or-

You can be a communications service and enjoy the protections of CDA 230, in which case you cannot inspect or control the content you deliver.

Facebook would be an example of the former. WhatsApp would be an example of the latter.

I am honestly undecided about all of this. I want CDA 230 to protect things like the commenting section of this blog. But I don’t think it should protect dating apps when they are used as a conduit for abuse. And I really don’t want society to pay the cost for all the externalities inherent in Facebook’s business model.

Karl Pilkington Shares a Pirated Copy of His Own TV-Show

Post Syndicated from Ernesto original https://torrentfreak.com/karl-pilkington-shares-a-pirated-copy-of-his-own-tv-show-191119/

UK entertainment giant Sky is widely known for taking a hard line on everything piracy related.

In recent years the company has chased vendors of pirate subscriptions and hardware, both in and outside of court.

These efforts are meant to signal to the public that piracy, streaming piracy in particular, will not be tolerated. However, this message has apparently not hit home with one of the company’s own stars, Karl Pilkington.

Pilkington is an actor, comedian, and presenter who is widely known for “An Idiot Abroad,” the Sky 1 travel series with a comedic twist. He also worked with Sky on the documentary “The Moaning of Life” and more recently he ventured into the sitcom arena with the series “Sick of It”, again at Sky.

Sick of It is about to premiere its second season and to give his 1.5+ million fans on Facebook something to get in the mood, Pilkington recently decided to share an episode of the show from last year.

That usually isn’t a problem. However, Sky doesn’t share the show for free and only offers it on-demand, but that didn’t prove to be too much of a hurdle for the show’s co-writer, who found a freely accessible streaming copy on Vimeo.

“For anyone who hasn’t seen it yet. Here’s an episode. Series 2 soon,” Pilkington wrote.

This clearly isn’t an official release. The tags on the video reveal that this copy was sourced from a ‘scene’ group, PLUTONiUM in this case, and reuploaded to Vimeo by someone named Gary. The same person also shared a copy of the first episode through the same account.

This means that Pilkington is effectively sharing a pirated copy of his own show with over a million people. And since Sky holds at least some of the rights, that’s not supposed to happen.

The ‘mistake’ didn’t go unnoticed. Commenters on Facebook highlighted that it was a pirate release and the same was pointed out on Reddit, where many appreciated the unusual move.

The question is, of course, whether this is indeed a mistake or some kind of PR stunt. Giving over a million people a free teaser may draw in some extra eyeballs and if that’s picked up by the news, it means even more exposure.

However, when we looked closer at Pilkington’s previous engagement on Facebook, we started to notice a trend. Apparently, he’s keeping a close eye on the comments. When someone said that she wasn’t familiar with Sick of It, but would like to watch it, Pilkington kindly shared a link.

And that wasn’t the first time either. The show’s co-writer has been doing this for weeks, sharing the same link with everyone who shows interest, particularly those who don’t have access to it.

To us, it appears that Pilkington means no harm and simply wants to get people to see his show. That makes sense. As a creator, you want people to enjoy what you’ve made. The fact that he’s sharing a pirated copy may not have even entered his mind.

Whether Sky will like this is another question of course. At the time of writing all links are still online, but it wouldn’t be a massive surprise if they are soon taken down. Technically, Pilkington is now a repeat infringer so he could even lose his Facebook account.

Unless he takes action before Sky does, of course.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

WhatsApp Sues NSO Group

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/whatsapp_sues_n.html

WhatsApp is suing the Israeli cyberweapons arms manufacturer NSO Group in California court:

WhatsApp’s lawsuit, filed in a California court on Tuesday, has demanded a permanent injunction blocking NSO from attempting to access WhatsApp computer systems and those of its parent company, Facebook.

It has also asked the court to rule that NSO violated US federal law and California state law against computer fraud, breached their contracts with WhatsApp and “wrongfully trespassed” on Facebook’s property.

This could be interesting.

EDITED TO ADD: Citizen Lab has a research paper on the technology involved in this case. WhatsApp has an op-ed on their actions. And this is a good news article on how the attack worked.

EDITED TO ADD: Facebook is deleting the accounts of NSO Group employees.

Facebook Blocks Users from Sharing Pirate Bay Links

Post Syndicated from Ernesto original https://torrentfreak.com/facebook-now-blocks-pirate-bay-links-190930/

Ten years ago Facebook reached out to The Pirate Bay, asking the torrent site to remove the ‘share’ button from its site.

At the time, the torrent site was at the center of a high profile copyright infringement lawsuit, something the social media network didn’t want to be associated with.

Perhaps unsurprisingly, The Pirate Bay wasn’t very cooperative. The request remained unanswered, which left Facebook with no option but to block Pirate Bay URLs at its end.

“The Pirate Bay has not responded and so we have blocked their torrents from being shared on Facebook,” the company told us at the time.

Today, more than a decade later, the “share” button on The Pirate Bay is long gone. Somewhere during this period, Facebook’s ban was also lifted. When the social media site started blocking several other torrent sites a few weeks ago, we noticed that TPB was not among them.

However, this changed recently. When we reviewed Facebook’s blocking efforts a few days ago we noticed that The Pirate Bay is now blocked as well. Similar to the other pirate sites, it apparently violates the platform’s “community standards.”

People who want to use Facebook to post a link to The Pirate Bay will see the following error message instead: “You can’t share this link. Your post couldn’t be shared, because this link goes against our Community Standards.”

Similarly, all Pirate Bay links are blocked in Facebook’s Messenger chats as well, returning a similar notification.

Facebook’s Community Standards and its Terms of Service allow the platform to take action against potential intellectual property infringements, which is likely what triggered this action.

While The Pirate Bay is now blocked again by Facebook, the current ban is substantially different from the previous one. Ten years ago Facebook only prevented people from linking to actual torrent pages, while today all URLs, including the homepage, are banned.

It is apparent that Facebook is gradually expanding its ‘piracy’ blocking efforts. In addition to The Pirate Bay, 1337x.to was recently added as well, and it wouldn’t be a surprise if more URLs follow.

Facebook Takes Down TorrentFreak Post Over ‘Infringing’ Meme

Post Syndicated from Ernesto original https://torrentfreak.com/facebook-takes-down-torrentfreak-post-over-infringing-meme-190929/

When the EU Copyright Directive protests were in full swing earlier this year, many people warned that upload filters would “kill memes.”

We weren’t particularly fond of this oversimplification, but the problems with upload filters are obvious, with or without the new EU directive.

In fact, even without automated filters, copyright enforcement efforts can be quite problematic. Today we present a rather unusual example, in which one of the “memes” we published in the past was effectively taken down by Facebook.

To put things in proper context, we take you back to 2014. At the time we reported that photographer Christopher Boffoli had filed a lawsuit against the popular image sharing site Imgur, which allegedly ignored his takedown requests.

Boffoli hoped to protect his copyrights, but this effort soon backfired. A few weeks after he filed the complaint, someone uploaded an archive of 20,754 of his photos to The Pirate Bay, specifically mentioning the lawsuit against Imgur. The torrent in question remains online today.

In recent years we haven’t heard much from the photographer, until this week, when someone alerted us to a rather unusual issue. The person in question, who prefers not to be named, had one of his Facebook posts removed over alleged copyright infringement.

The post in question was a link to our news article covering the Pirate Bay ‘issue.’ At the time, this was by default shared with a portrait of Boffoli that someone turned into a meme, as can be seen below (meme text cropped).

The Facebook notice mentions that the content in question was “disabled” due to a third-party copyright complaint. While it didn’t specify what the infringing content was, our article was listed as the “source,” and the link and the associated image were indeed removed.

Since Boffoli doesn’t own any copyrights to our work, and since we didn’t link to the Pirate Bay archive, we assume that the takedown notice is targeted at the meme image, which includes the photographer’s portrait. Whether it’s justified is another question though.

Memes are generally seen as fair use, so people can share them without repercussions. A photographer may contest this and fight it out in court, but in this case that could prove difficult.

When looking into the matter, we noticed that the original portrait has been hosted by Wikipedia for more than 15 years. This shows that the photo is credited to Boffoli himself, and shared with a public domain ‘license’, allowing anyone to use it freely.

This means that creating a meme out of it is certainly not a problem. But perhaps there was another reason for the takedown?

Since Facebook doesn’t share any further details, and our own original Facebook posting is still up, we can’t be 100% sure what the alleged infringement is. However, looking through Facebook’s archive we see that another user had the meme image removed as well (TF link remains online here), suggesting that this is indeed the problem.

So there we have it. Facebook effectively ‘killed’ a meme. In at least one instance, it removed a link to a perfectly legitimate news article, based on a takedown request that doesn’t seem to hold water. The meme isn’t quite dead yet, though; it’s on the Internet, after all.

Facebook Blocks Sharing of Links to Prominent Pirate Sites

Post Syndicated from Ernesto original https://torrentfreak.com/facebook-blocks-sharing-of-links-to-prominent-pirate-sites-100904/

Similar to other sites that deal with user-generated content, Facebook has to battle against a constant stream of copyright-infringing material.

To address this, Facebook has rolled out several anti-piracy initiatives in recent years. The company has a “Rights Manager” tool, for example, that automatically detects infringing material on the platform.

In addition, it seems the company is also taking proactive measures. This week we were contacted by the operator of LimeTorrents.info, one of the most used torrent sites, who noticed that sharing links to his site is no longer permitted on the social media network.

People who want to use Facebook to post a link to the torrent site will see the following error message instead: “You can’t share this link. Your post couldn’t be shared, because this link goes against our Community Standards.”

As it turns out, LimeTorrents is not the only site affected by this policy. We checked several others and found that Facebook also blocks links pointing to YTS.lt, Torrentdownloads.me and Zooqle.com. This measure applies to all URLs from these sites, including their homepages.

Facebook’s blocking notification doesn’t provide a specific reason for the blockage. We’ve reached out to the company for a comment on the blocking measures, but it has yet to reply.

When we read through the company’s ‘community standards,’ however, we see that copyright infringement is a potential trigger.

The four sites that are blocked may just be the tip of the iceberg. At the same time, it’s also worth noting that other major pirate sites don’t get the same treatment. Whatever Facebook’s policy is, there’s no site-wide ban on all piracy sites, yet.

While the current blocking efforts are new to us, as well as to the site operator we spoke to, it’s not clear when they were implemented. A search for the error message that pops up suggests that it only started to appear recently.

That doesn’t mean that Facebook has never blocked pirate sites in the past. Ten years ago the company already prevented users from posting links to The Pirate Bay, after the torrent site refused to disable its ‘share’ function voluntarily.

“Given the controversy surrounding The Pirate Bay and the pending lawsuit against them, we’ve reached out to The Pirate Bay and asked them to remove the ‘Share on Facebook’ links from their site. The Pirate Bay has not responded and so we have blocked their torrents from being shared on Facebook,” the company told us at the time.

Interestingly, in the years that followed, The Pirate Bay was unbanned again and Facebook users can freely share links to the site today.

Phone Pharming for Ad Fraud

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/phone_farming_f.html

Interesting article on people using banks of smartphones to commit ad fraud for profit.

No one knows how prevalent ad fraud is on the Internet. I believe it is surprisingly high — here’s an article that places losses between $6.5 and $19 billion annually — and something companies like Google and Facebook would prefer remain unresearched.

More on Backdooring (or Not) WhatsApp

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/more_on_backdoo.html

Yesterday, I blogged about a Facebook plan to backdoor WhatsApp by adding client-side scanning and filtering. It seems that I was wrong, and there are no such plans.

The only source for that post was a Forbes essay by Kalev Leetaru, which links to a previous Forbes essay by him, which links to a video presentation from a Facebook developers conference.

Leetaru extrapolated a lot out of very little. I watched the video (the relevant section is at the 23:00 mark), and it doesn’t talk about client-side scanning of messages. It doesn’t talk about messaging apps at all. It discusses using AI techniques to find bad content on Facebook, and the difficulties that arise from dynamic content:

So far, we have been keeping this fight [against bad actors and harmful content] on familiar grounds. And that is, we have been training our AI models on the server and making inferences on the server when all the data are flooding into our data centers.

While this works for most scenarios, it is not the ideal setup for some unique integrity challenges. URL masking is one such problem which is very hard to do. We have the traditional way of server-side inference. What is URL masking? Let us imagine that a user sees a link on the app and decides to click on it. When they click on it, Facebook actually logs the URL to crawl it at a later date. But…the publisher can dynamically change the content of the webpage to make it look more legitimate [to Facebook]. But then our users click on the same link, they see something completely different — oftentimes it is disturbing; oftentimes it violates our policy standards. Of course, this creates a bad experience for our community that we would like to avoid. This and similar integrity problems are best solved with AI on the device.
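The “URL masking” problem described in the talk boils down to cloaking: serving one page to the platform’s crawler and a different one to real users, so server-side inference only ever sees the clean version. A minimal sketch, assuming user-agent-based cloaking (the crawler names are Facebook’s published user agents; the pages and function are hypothetical):

```python
# Minimal sketch of "URL masking" (cloaking): a hypothetical publisher
# returns a clean page when Facebook's crawler fetches the URL, and the
# real, policy-violating page when an ordinary user clicks the link.

BENIGN_PAGE = "<html>Harmless recipe blog</html>"
REAL_PAGE = "<html>Policy-violating content</html>"

# Facebook's documented crawler user agents (assumption: UA-based cloaking).
CRAWLER_AGENTS = ("facebookexternalhit", "Facebot")

def serve(user_agent: str) -> str:
    """Return the page a visitor with this User-Agent would see."""
    if any(bot in user_agent for bot in CRAWLER_AGENTS):
        return BENIGN_PAGE
    return REAL_PAGE
```

This is why the talk argues the check has to happen on the device: only the client sees what the user actually sees.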

That might be true, but it also would hand whatever secret-AI sauce Facebook has to every one of its users to reverse engineer — which means it’s probably not going to happen. And it is a dumb idea, for reasons Steve Bellovin has pointed out.

Facebook’s first published response was a comment on the Hacker News website from a user named “wcathcart,” which Cardozo assures me is Will Cathcart, the vice president of WhatsApp. (I have no reason to doubt his identity, but surely there is a more official news channel that Facebook could have chosen to use if they wanted to.) Cathcart wrote:

We haven’t added a backdoor to WhatsApp. The Forbes contributor referred to a technical talk about client side AI in general to conclude that we might do client side scanning of content on WhatsApp for anti-abuse purposes.

To be crystal clear, we have not done this, have zero plans to do so, and if we ever did it would be quite obvious and detectable that we had done it. We understand the serious concerns this type of approach would raise which is why we are opposed to it.

Facebook’s second published response was a comment on my original blog post, which has been confirmed to me by the WhatsApp people as authentic. It’s more of the same.

So, this was a false alarm. And, to be fair, Alec Muffett called foul on the first Forbes piece:

So, here’s my pre-emptive finger wag: Civil Society’s pack mentality can make us our own worst enemies. If we go around repeating one man’s Germanic conspiracy theory, we may doom ourselves to precisely what we fear. Instead, we should — we must — take steps to constructively demand what we actually want: End to End Encryption which is worthy of the name.

Blame accepted. But in general, this is the sort of thing we need to watch for. End-to-end encryption only secures data in transit. The data has to be in the clear on the device where it is created, and it has to be in the clear on the device where it is consumed. Those are the obvious places for an eavesdropper to get a copy.

This has been a long process. Facebook desperately wanted to convince me to correct the record, while at the same time not wanting to write something on their own letterhead (just a couple of comments, so far). I spoke at length with Privacy Policy Manager Nate Cardozo, whom Facebook hired last December from EFF. (Back then, I remember thinking of him — and the two other new privacy hires — as basically human warrant canaries. If they ever leave Facebook under non-obvious circumstances, we know that things are bad.) He basically leveraged his historical reputation to assure me that WhatsApp, and Facebook in general, would never do something like this. I am trusting him, while also reminding everyone that Facebook has broken so many privacy promises that they really can’t be trusted.

Final note: If they want to be trusted, Adam Shostack and I gave them a road map.

Hacker News thread.

EDITED TO ADD (8/4): Slashdot covered my retraction.

Facebook Plans on Backdooring WhatsApp

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/facebook_plans_.html

This article points out that Facebook’s planned content moderation scheme will result in an encryption backdoor into WhatsApp:

In Facebook’s vision, the actual end-to-end encryption client itself such as WhatsApp will include embedded content moderation and blacklist filtering algorithms. These algorithms will be continually updated from a central cloud service, but will run locally on the user’s device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted.

The company even noted that when it detects violations it will need to quietly stream a copy of the formerly encrypted content back to its central servers to analyze further, even if the user objects, acting as a true wiretapping service.

Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.

Once this is in place, it’s easy for the government to demand that Facebook add another filter — one that searches for communications that they care about — and alert them when it gets triggered.

Of course alternatives like Signal will exist for those who don’t want to be subject to Facebook’s content moderation, but what happens when this filtering technology is built into operating systems?

The problem is that if Facebook’s model succeeds, it will only be a matter of time before device manufacturers and mobile operating system developers embed similar tools directly into devices themselves, making them impossible to escape. Embedding content scanning tools directly into phones would make it possible to scan all apps, including ones like Signal, effectively ending the era of encrypted communications.
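The model the article describes can be reduced to a short sketch: the client checks each cleartext message against a centrally updated blocklist before encryption ever happens, so end-to-end encryption no longer protects the content from the client itself. The hash-matching and all names below are illustrative assumptions, not any actual Facebook design:

```python
import hashlib

# Client-side blocklist, pushed from a central service in the described
# model; here just a hardcoded set of content hashes (an assumption --
# a real system would use fuzzier matching than exact hashes).
BLOCKLIST = {hashlib.sha256(b"forbidden phrase").hexdigest()}
REPORTS = []  # stand-in for streaming a copy back to central servers

def send_message(plaintext: str, encrypt) -> bytes:
    """Scan the cleartext *before* encrypting, then encrypt as usual."""
    if hashlib.sha256(plaintext.encode()).hexdigest() in BLOCKLIST:
        REPORTS.append(plaintext)  # the "formerly encrypted" copy leaks here
    return encrypt(plaintext.encode())
```

The encryption itself is untouched, which is what makes this a bypass rather than a break: the wiretap sits in front of it.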

I don’t think this will happen (why does AT&T care about content moderation?), but it is something to watch.

EDITED TO ADD (8/2): This story is wrong. Read my correction.

BREIN Obtains Court Order to Stop Pirate eBook Sharing on Facebook

Post Syndicated from Andy original https://torrentfreak.com/brein-obtains-court-order-to-stop-pirate-ebook-sharing-on-facebook-190704/

Pirated eBooks can be downloaded from dozens if not hundreds of places online. From torrent sites like The Pirate Bay and RARBG to so-called DDL (direct download) platforms, eBooks are both quick and easy to obtain.

Of course, with the rise of social media, it’s now easier than ever for like-minded individuals to meet up for all kinds of activities, eBook sharing included. This hasn’t gone unnoticed by Dutch anti-piracy outfit BREIN, which says it has recently targeted a prolific group of sharers.

Acting on an anonymous tip-off, BREIN says it was able to infiltrate two “private and secret” Facebook groups that were dedicated to the uploading and sharing of unlicensed eBooks. More than 8,000 titles were made available by the groups’ members – a total of 3,000 people across the two groups.

Armed with its evidence, BREIN said it went to court and obtained an ex parte order, i.e. one granted without both sides of the dispute being heard. It subsequently made an agreement with the four managers of the groups, which requires them to cease and desist from their activities and pay a settlement to BREIN.

“They signed a declaration of abstention and have now paid more than 6,000 euros to BREIN. If they go wrong again, this amount goes up to 10,000 euros plus 500 euros per illegally offered e-book,” BREIN says.

According to the anti-piracy group, the managers of the Facebook groups acknowledged through the groups’ published rules that their own activities and those of their users are illegal.

“Sharing e-books is and remains illegal, that is a choice you make,” the managers reportedly said. BREIN says that one of the managers, a 49-year-old woman, was a prolific sharer in her own right, having personally uploaded 1,000 eBooks for download.

While BREIN clearly takes this kind of unlawful sharing seriously, the anti-piracy group does point out that not every illegal download represents a lost sale. It highlights studies which indicate that the so-called substitution rate is around one lost sale per three illegal downloads.

However, BREIN also points out that legal eBook platforms give potential purchasers the ability to sample parts of books before committing to buying them, so lost sales in the eBook sector are “probably higher” given the absence of the “sampling effect”.
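As a rough illustration of what the cited substitution rate implies (all numbers below are hypothetical, not BREIN’s figures):

```python
# Back-of-the-envelope using the ~1-in-3 substitution rate cited above:
# only about one in three illegal downloads displaces an actual sale.
SUBSTITUTION_RATE = 1 / 3  # estimated lost sales per illegal download

def estimated_lost_sales(downloads: int) -> float:
    """Estimated number of sales displaced by `downloads` illegal copies."""
    return downloads * SUBSTITUTION_RATE

def estimated_lost_revenue(downloads: int, price_eur: float) -> float:
    """Estimated revenue lost, at a given retail price per eBook."""
    return estimated_lost_sales(downloads) * price_eur

# e.g. 9,000 downloads of a 10-euro eBook: roughly 3,000 lost sales,
# or about 30,000 euros -- far less than 9,000 x 10 euros.
```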

Judging Facebook’s Privacy Shift

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/judging_faceboo.html

Facebook is making a new and stronger commitment to privacy. Last month, the company hired three of its most vociferous critics and installed them in senior technical positions. And on Wednesday, Mark Zuckerberg wrote that the company will pivot to focus on private conversations over the public sharing that has long defined the platform, even while conceding that “frankly we don’t currently have a strong reputation for building privacy protective services.”

There is ample reason to question Zuckerberg’s pronouncement: The company has made — and broken — many privacy promises over the years. And if you read his 3,000-word post carefully, Zuckerberg says nothing about changing Facebook’s surveillance capitalism business model. All the post discusses is making private chats more central to the company, which seems to be a play for increased market dominance and to counter the Chinese company WeChat.

In security and privacy, the devil is always in the details — and Zuckerberg’s post provides none. But we’ll take him at his word and try to fill in some of the details here. What follows is a list of changes we should expect if Facebook is serious about changing its business model and improving user privacy.

How Facebook treats people on its platform

Increased transparency over advertiser and app accesses to user data. Today, Facebook users can download and view much of the data the company has about them. This is important, but it doesn’t go far enough. The company could be more transparent about what data it shares with advertisers and others and how it allows advertisers to select users they show ads to. Facebook could use its substantial skills in usability testing to help people understand the mechanisms advertisers use to show them ads or the reasoning behind what it chooses to show in user timelines. It could deliver on promises in this area.

Better — and more usable — privacy options. Facebook users have limited control over how their data is shared with other Facebook users and almost no control over how it is shared with Facebook’s advertisers, which are the company’s real customers. Moreover, the controls are buried deep behind complex and confusing menu options. To be fair, some of this is because privacy is complex, and it’s hard to understand the results of different options. But much of this is deliberate; Facebook doesn’t want its users to make their data private from other users.

The company could give people better control over how — and whether — their data is used, shared, and sold. For example, it could allow users to turn off individually targeted news and advertising. By this, we don’t mean simply making those advertisements invisible; we mean turning off the data flows into those tailoring systems. Finally, since most users stick to the default options when it comes to configuring their apps, a changing Facebook could tilt those defaults toward more privacy, requiring less tailoring most of the time.

More user protection from stalking. “Facebook stalking” is often thought of as “stalking light,” or “harmless.” But stalkers are rarely harmless. Facebook should acknowledge this class of misuse and work with experts to build tools that protect all of its users, especially its most vulnerable ones. Such tools should guide normal people away from creepiness and give victims power and flexibility to enlist aid from sources ranging from advocates to police.

Fully ending real-name enforcement. Facebook’s real-names policy, requiring people to use their actual legal names on the platform, hurts people such as activists, victims of intimate partner violence, police officers whose work makes them targets, and anyone with a public persona who wishes to have control over how they identify to the public. There are many ways Facebook can improve on this, from ending enforcement to allowing verified pseudonyms for everyone — not just celebrities like Lady Gaga. Doing so would mark a clear shift.

How Facebook runs its platform

Increased transparency of Facebook’s business practices. One of the hard things about evaluating Facebook is the effort needed to get good information about its business practices. When violations are exposed by the media, as they regularly are, we are all surprised at the different ways Facebook violates user privacy. Most recently, the company used phone numbers provided for two-factor authentication for advertising and networking purposes. Facebook needs to be both explicit and detailed about how and when it shares user data. In fact, a move from discussing “sharing” to discussing “transfers,” “access to raw information,” and “access to derived information” would be a visible improvement.

Increased transparency regarding censorship rules. Facebook makes choices about what content is acceptable on its site. Those choices are controversial, and they are implemented by thousands of low-paid workers quickly applying unclear rules. These are tremendously hard problems without clear solutions. Even obvious rules like banning hateful words run into challenges when people try to legitimately discuss certain important topics. Whatever Facebook does in this regard, the company needs to be more transparent about its processes. It should allow regulators and the public to audit the company’s practices. Moreover, Facebook should share any innovative engineering solutions with the world, much as it currently shares its data center engineering.

Better security for collected user data. There have been numerous examples of attackers targeting cloud service platforms to gain access to user data. Facebook has a large and skilled product security team that says some of the right things. That team needs to be involved in the design trade-offs for features and not just review the near-final designs for flaws. Shutting down a feature based on internal security analysis would be a clear message.

Better data security so Facebook sees less. Facebook eavesdrops on almost every aspect of its users’ lives. On the other hand, WhatsApp — purchased by Facebook in 2014 — provides users with end-to-end encrypted messaging. While Facebook knows who is messaging whom and how often, Facebook has no way of learning the contents of those messages. Recently, Facebook announced plans to combine WhatsApp, Facebook Messenger, and Instagram, extending WhatsApp’s security to the consolidated system. Changing course here would be a dramatic and negative signal.

Collecting less data from outside of Facebook. Facebook doesn’t just collect data about you when you’re on the platform. Because its “like” button is on so many other pages, the company can collect data about you when you’re not on Facebook. It even collects what it calls “shadow profiles” — data about you even if you’re not a Facebook user. This data is combined with other surveillance data the company buys, including health and financial data. Collecting and saving less of this data would be a strong indicator of a new direction for the company.

Better use of Facebook data to prevent violence. There is a trade-off between Facebook seeing less and Facebook doing more to prevent hateful and inflammatory speech. Dozens of people have been killed by mob violence because of fake news spread on WhatsApp. If Facebook were doing a convincing job of controlling fake news without end-to-end encryption, then we would expect to hear how it could use patterns in metadata to handle encrypted fake news.

How Facebook manages for privacy

Create a team measured on privacy and trust. Where companies spend their money tells you what matters to them. Facebook has a large and important growth team, but what team, if any, is responsible for privacy, not as a matter of compliance or pushing the rules, but for engineering? Transparency in how it is staffed relative to other teams would be telling.

Hire a senior executive responsible for trust. Facebook’s current team has been focused on growth and revenue. Its one-time chief security officer, Alex Stamos, was not replaced when he left in 2018, which may indicate that having an advocate for security on the leadership team led to debate and disagreement. Retaining a voice for security and privacy issues at the executive level, before those issues affected users, was a good thing. Now that responsibility is diffuse. It’s unclear how Facebook measures and assesses its own progress and who might be held accountable for failings. Facebook can begin the process of fixing this by designating a senior executive who is responsible for trust.

Engage with regulators. Much of Facebook’s posturing seems to be an attempt to forestall regulation. Facebook sends lobbyists to Washington and other capitals, and until recently the company sent support staff to politicians’ offices. It has secret lobbying campaigns against privacy laws. And Facebook has repeatedly violated a 2011 Federal Trade Commission consent order regarding user privacy. Regulating big technical projects is not easy. Most of the people who understand how these systems work understand them because they build them. Societies will regulate Facebook, and the quality of that regulation requires real education of legislators and their staffs. While businesses often want to avoid regulation, any focus on privacy will require strong government oversight. If Facebook is serious about privacy being a real interest, it will accept both government regulation and community input.

User privacy is traditionally against Facebook’s core business interests. Advertising is its business model, and targeted ads sell better and more profitably — and that requires users to engage with the platform as much as possible. Increased pressure on Facebook to manage propaganda and hate speech could easily lead to more surveillance. But there is pressure in the other direction as well, as users equate privacy with increased control over how they present themselves on the platform.

We don’t expect Facebook to abandon its advertising business model, relent in its push for monopolistic dominance, or fundamentally alter its social networking platforms. But the company can give users important privacy protections and controls without abandoning surveillance capitalism. While some of these changes will reduce profits in the short term, we hope Facebook’s leadership realizes that they are in the best long-term interest of the company.

Facebook talks about community and bringing people together. These are admirable goals, and there’s plenty of value (and profit) in having a sustainable platform for connecting people. But as long as the most important measure of success is short-term profit, doing things that help strengthen communities will fall by the wayside. Surveillance, which allows individually targeted advertising, will be prioritized over user privacy. Outrage, which drives engagement, will be prioritized over feelings of belonging. And corporate secrecy, which allows Facebook to evade both regulators and its users, will be prioritized over societal oversight. If Facebook now truly believes that these latter options are critical to its long-term success as a company, we welcome the changes that are forthcoming.

This essay was co-authored with Adam Shostack, and originally appeared on Medium OneZero. We wrote a similar essay in 2002 about judging Microsoft’s then newfound commitment to security.

Facebook’s New Privacy Hires

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/02/facebooks_new_p.html

The Wired headline sums it up nicely — “Facebook Hires Up Three of Its Biggest Privacy Critics“:

In December, Facebook hired Nathan White away from the digital rights nonprofit Access Now, and put him in the role of privacy policy manager. On Tuesday of this week, lawyers Nate Cardozo, of the privacy watchdog Electronic Frontier Foundation, and Robyn Greene, of New America’s Open Technology Institute, announced they also are going in-house at Facebook. Cardozo will be the privacy policy manager of WhatsApp, while Greene will be Facebook’s new privacy policy manager for law enforcement and data protection.

I know these people. They’re ethical, and they’re on the right side. I hope they continue to do their good work from inside Facebook.

The Effects of GDPR’s 72-Hour Notification Rule

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/the_effects_of_5.html

The EU’s GDPR regulation requires companies to report a breach within 72 hours. Alex Stamos, former Facebook CISO now at Stanford University, points out how this can be a problem:

Interesting impact of the GDPR 72-hour deadline: companies announcing breaches before investigations are complete.

1) Announce & cop to max possible impacted users.
2) Everybody is confused on actual impact, lots of rumors.
3) A month later truth is included in official filing.

Last week’s Facebook hack is his example.

The Twitter conversation continues as various people try to figure out if the European law allows a delay in order to work with law enforcement to catch the hackers, or if a company can report the breach privately with some assurance that it won’t accidentally leak to the public.

The other interesting impact is the foreclosing of any possible coordination with law enforcement. I once ran response for a breach of a financial institution, which wasn’t disclosed for months as the company was working with the USSS to lure the attackers into a trap. It worked.

[…]

The assumption that anything you share with an EU DPA stays confidential in the current media environment has been disproven by my personal experience.

This is a perennial problem: we can get information quickly, or we can get accurate information. It’s hard to get both at the same time.