Fake: DMCA Notice Targeting Apple Jailbreaks on Reddit Was Fraudulent

Post Syndicated from Andy original https://torrentfreak.com/fake-dmca-notice-targeting-apple-jailbreaks-on-reddit-was-fraudulent-191213/

Earlier this week, black clouds began to form over the passionate iOS jailbreaking community, which Apple tolerates through gritted teeth due to its legal protection under the DMCA. The company took the unusual step of sending a DMCA notice targeting a developer’s tweet containing an encryption key.

While that tweet was later restored, the takedown came as a complete surprise and the knock-on effect from this unsettling act would set the scene for the company getting blamed for additional similar acts, this time on Reddit.

In the wake of the Twitter action, a moderator of the /r/jailbreak sub-Reddit revealed that Reddit’s legal team had removed five posts detailing iOS jailbreak releases checkra1n and unc0ver. All of the posts were deleted by Reddit’s admins after receiving a DMCA notice, ostensibly sent by Apple.

What followed was an hours-long information blackout, during which /r/jailbreak’s moderators sought but failed to obtain information from Reddit’s admins. With a credible fear that more notices could be filed and as a result label /r/jailbreak as a repeat offender under the DMCA, its moderators put the forum into lockdown.

Right from the very beginning there was no clear proof that Apple had sent any DMCA notices to Reddit, despite news headlines blaming the tech company for going to war against jailbreakers. It now transpires that waiting for proof would’ve been a more prudent option.

As revealed by checkra1n development team member ‘qwertyoruiopz’, the notice that targeted his project was actually a fake.

And, according to fellow developer ‘axi0mX’, the fake notice wasn’t particularly well constructed either.

“We reviewed it and confirmed that it was someone impersonating Apple. It was not sent from their law firm, which is Kilpatrick Townsend. There are issues with grammar and spelling,” he revealed.

“This notice was obviously not submitted in good faith, and it was not done by someone authorized to represent Apple. Not cool. They could be sued for damages or face criminal charges for perjury.”

Being sued for sending a fake notice sounds like a reasonable remedy in theory but history tells us, one particularly notable case aside, that it is unlikely to happen. However, it’s clear that more can be done to mitigate the effects of malicious takedowns, starting with more transparency from Reddit’s admins.

While the moderators of /r/jailbreak knew about the complaints early on, they were given no information about who sent them or on what basis. This meant that the people against whom the complaints were made weren’t in a position to counter them, at least with knowledge on their side.

“My personal take on all this is that this should provide plenty of food for thought about the state of copyright laws in the US. A site like Reddit risks losing legal safe harbor protections if they don’t immediately act on such notices,” qwertyoruiopz says.

“Not sharing the notices by default is however very bad policy on Reddit’s end; I would even call this a vulnerability. It allows nefarious parties to create false-flag takedowns that can spark infighting and have chilling effects (albeit temporary) on non-infringing content.”

There can be little doubt that Reddit takes its DMCA obligations very seriously, so it could be argued that taking down the posts in response to a complaint was the safest legal option. However, if a cursory review by those targeted can reveal clear fraud within minutes, there is a very good case for sharing notices quickly to ensure that fraudulent ones don’t have their desired effect.

While Reddit has shown no signs of sharing DMCA notices with the Lumen Database recently, quickly sharing them with those who have allegedly infringed would be a good first step.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Integrating SonarQube as a pull request approver on AWS CodeCommit

Post Syndicated from David Jackson original https://aws.amazon.com/blogs/devops/integrating-sonarqube-as-a-pull-request-approver-on-aws-codecommit/

Integrating SonarQube as a pull request approver on AWS CodeCommit

On Nov 25th, AWS CodeCommit launched a new feature that allows customers to configure approval rules on pull requests. Approval rules act as a gate on your source code changes. Pull requests which fail to satisfy the required approvals cannot be merged into your important branches. Additionally, CodeCommit launched the ability to create approval rule templates, which are rulesets that can automatically be applied to all pull requests created for one or more repositories in your AWS account. With templates, it becomes simple to create rules like “require one approver from my team” for any number of repositories in your AWS account.

A common problem for software developers is accidentally or unintentionally merging code with bugs, defects, or security vulnerabilities into important master branches. Once bad code is merged into a master branch, it can be difficult to remove. It’s also potentially costly if the code is deployed into production environments and causes outages or other serious issues. Using CodeCommit’s new features, adding required approvers to your repository pull requests can help identify and mitigate those issues before they are merged into your master branches.

The most rudimentary use of required approvers is to require at least one team member to approve each pull request. While adding human team members as approvers is an important part of the pull request workflow, this feature can also be used to require ‘robot’ approvers of your pull requests, and you can trigger them automatically on each new or updated pull request. Robotic approvers can help find issues that humans miss and enforce best practices regarding code style, test coverage, and more.

Customers have been asking us how they can integrate code review tools with AWS CodeCommit pull requests. I encourage you to check out Amazon CodeGuru Reviewer, a service launched in preview at the AWS re:Invent 2019 conference that uses program analysis and machine learning to detect potential defects that are difficult for developers to find, and recommends fixes in your Java code. Another popular tool is SonarQube, an open-source platform for performing code quality analysis. It helps detect defects, bugs, and security vulnerabilities in your pull requests. This blog post shows you how to integrate SonarQube into the pull requests workflow.

This post shows…

Time to read: 10 minutes
Time to complete: 20 minutes
Cost to complete (estimated): $0.40/month for the secret, ~$0.02 per build on CodeBuild, and $0–1 for the CodeCommit user depending on current free tier status (at publication time)
Learning level: Intermediate (200)
Services used: AWS CodeCommit, AWS CodeBuild, AWS CloudFormation, Amazon Elastic Compute Cloud (EC2), Amazon CloudWatch Events, AWS Identity and Access Management, AWS Secrets Manager

Solution overview

In this solution, you create a CodeCommit repository that requires a successful SonarQube quality analysis before pull requests can be merged. You can create the required AWS resources in your account by using the provided AWS CloudFormation template. This template creates the following resources:

  • A new CodeCommit repository, containing a starter Java project that uses the Apache Maven build system, as well as a custom buildspec.yml file to facilitate communication with SonarQube and CodeCommit.
  • An AWS CodeBuild project which invokes your SonarQube instance on build, then reports the status of the analysis back to CodeCommit.
  • An Amazon CloudWatch Events Rule, which listens for pullRequestCreated and pullRequestSourceBranchUpdated events from CodeCommit, and invokes your CodeBuild project (a sketch of such a rule appears after this list).
  • An AWS Secrets Manager secret, which securely stores and provides the username and password of your SonarQube user to the CodeBuild project on-demand.
  • IAM roles for CodeBuild and CloudWatch events.
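For reference, here is a hedged sketch of how a rule with that event pattern could be created via the AWS CLI. The rule name is a placeholder, and the actual template also wires the rule to the CodeBuild project as a target through the CloudWatch Events IAM role:

$ aws events put-rule \
    --name pull-request-sonarqube-trigger \
    --event-pattern '{
        "source": ["aws.codecommit"],
        "detail-type": ["CodeCommit Pull Request State Change"],
        "detail": {
          "event": ["pullRequestCreated", "pullRequestSourceBranchUpdated"]
        }
      }'

The CLI form is shown only to make the event pattern concrete; the CloudFormation template expresses the same pattern declaratively.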

Although this tutorial showcases a Java project with Maven, the design principles should also apply for other languages and build systems with SonarQube integrations.

Design

The following diagram shows the flow of data, starting with a new or updated pull request on CodeCommit. CloudWatch Events listens for these events and invokes your CodeBuild project. The CodeBuild container clones your repository source commit, performs a Maven install, and invokes the quality analysis on SonarQube, using the credentials obtained from AWS Secrets Manager. When finished, CodeBuild leaves a comment on your pull request, and potentially approves your pull request.

 

Diagram showing the flow of data between the AWS service components, as well as SonarQube.
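To make the “reports back” step concrete, here is a rough sketch of the kinds of calls the build can make, expressed as AWS CLI commands. All identifiers are placeholders that the buildspec resolves from the CloudWatch event at build time; this illustrates the flow, not the sample project’s exact code:

$ aws codecommit post-comment-for-pull-request \
    --pull-request-id "$PULL_REQUEST_ID" \
    --repository-name "$REPOSITORY_NAME" \
    --before-commit-id "$DESTINATION_COMMIT" \
    --after-commit-id "$SOURCE_COMMIT" \
    --content "SonarQube quality gate passed"

$ aws codecommit update-pull-request-approval-state \
    --pull-request-id "$PULL_REQUEST_ID" \
    --revision-id "$REVISION_ID" \
    --approval-state APPROVE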

Prerequisites

For this walkthrough, you require:

  • An AWS account
  • A SonarQube server instance (Optional setup instructions included if you don’t have one already)

SonarQube instance setup (Optional)

This tutorial shows a basic setup of SonarQube on Amazon EC2 for informational purposes only. It does not include details about securing your Amazon EC2 instance or SonarQube installation. Please be sure you have secured your environments before placing sensitive data on them.

  1. To start, get a SonarQube server instance up and running. If you are already using SonarQube, feel free to skip these instructions and just note down your host URL and port number for later. If you don’t have one already, I recommend using a fresh Amazon EC2 instance for the job. You can get up and running quickly in just a few commands. I’ve selected an Amazon Linux 2 AMI for my EC2 instance.
  2. Download and install the latest JDK 11 module. Because I am using an Amazon Linux 2 EC2 instance, I can directly install Amazon Corretto 11 with yum.

$ sudo yum install java-11-amazon-corretto-headless

  3. After it’s installed, verify you’re using this version of Java:

$ sudo alternatives --config java

  4. Choose the Java 11 version you just installed.
  5. Download the latest SonarQube installation.
  6. Copy the zip-file onto your Amazon EC2 instance.
  7. Unzip the file into your home directory:

$ unzip sonarqube-8.0.zip -d ~/

This will copy the files into a directory like /home/ec2-user/sonarqube-8.0.

Now, start the server!

$ ~/sonarqube-8.0/bin/linux-x86-64/sonar.sh start

This should start a SonarQube server running on an address like http://<instance-address>:9000. It may take a few moments for the server to start.
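Optionally, you can confirm the server is healthy before moving on. Assuming the default port of 9000, two quick checks are:

$ ~/sonarqube-8.0/bin/linux-x86-64/sonar.sh status
$ curl http://localhost:9000/api/system/status

The second command queries SonarQube’s system status API and should report “UP” once startup is complete.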

Steps

Follow these steps to create automated pull request approvals.

Create a SonarQube User

Get started by creating a SonarQube user from your SonarQube webpage. This user is the identity the robot caller uses to access your SonarQube instance in this workflow.

  1. Go to the Administration tab on your SonarQube instance.
  2. Choose Security, then Users, as shown in the following screenshot.
     Screenshot showing where to find the user management options inside SonarQube.
  3. Choose Create User. Fill in the form, and note down the Login and Password. You will need to provide these values when creating the following AWS resources.
  4. Choose Create.

Create AWS resources

For this integration, you need to create some AWS resources:

  • AWS CodeCommit repository
  • AWS CodeBuild project
  • Amazon CloudWatch Events rule (to trigger builds when pull requests are created or updated)
  • IAM role (for CodeBuild to assume)
  • IAM role (for CloudWatch Events to assume and invoke CodeBuild)
  • AWS Secrets Manager secret (to store and manage your SonarQube user credentials)

I have created an AWS CloudFormation template to provision these resources for you. You can download the template from the sample repository on GitHub for this blog demo. This repository also contains the sample code which will be uploaded to your CodeCommit repository. The contents of this GitHub repository will automatically be copied into your new CodeCommit repository for you when you create this CloudFormation stack. This is because I’ve conveniently uploaded a zip-file of the contents into a publicly-readable S3 bucket, and am using it within this CloudFormation template.

  1. Download or copy the CloudFormation template from GitHub and save it as template.yaml on your local computer.
  2. At the CloudFormation console, choose Create Stack (with new resources).
  3. Choose Upload a template file.
  4. Choose Choose file and select the template.yaml file you just saved.
  5. Choose Next.
  6. Give your stack a name, optionally update the CodeCommit repository name and description, and paste in the username and password of the SonarQube user you created.
  7. Choose Next.
  8. Review the stack options and choose Next.
  9. On Step 4, review your stack, acknowledge the required capabilities, and choose Create Stack.
  10. Wait for the stack creation to complete before proceeding.
  11. Before leaving the AWS CloudFormation console, choose the Resources tab and note down the newly created CodeBuildRole’s Physical Id, as shown in the following screenshot. You need this in the next step.
      Screenshot showing the Physical Id of the CodeBuild role created through CloudFormation.

Create an Approval Rule Template

Now that your resources are created, create an Approval Rule Template in the CodeCommit console. This template allows you to define a required approver for new pull requests on specific repositories.

  1. On the CodeCommit console home page, choose Approval rule templates in the left panel. Choose Create template.
  2. Give the template a name (like Require SonarQube approval) and optionally, a description.
  3. Set the number of approvals needed as 1.
  4. Under Approval pool members, choose Add.
  5. Set the approver type to Fully qualified ARN. Since the approver will be the identity obtained by assuming the CodeBuild execution role, your approval pool ARN should be the following string:
    arn:aws:sts::<Your AccountId>:assumed-role/<Your CodeBuild IAM role name>/*
    The CodeBuild IAM role name is the Physical Id of the role you created and noted down above. You can also find the full name either in the IAM console or the AWS CloudFormation stack details. Adding this role to the approval pool allows any identity assuming your CodeBuild role to satisfy this approval rule.
  6. Under Associated repositories, find and choose your repository (PullRequestApproverBlogDemo). This ensures that any pull requests subsequently created on your repository will have this rule by default.
  7. Choose Create.
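If you prefer scripting this step, the same template can be created and associated through the CodeCommit API. Here is a hedged CLI equivalent; the account ID, role name, and template name are placeholders:

$ aws codecommit create-approval-rule-template \
    --approval-rule-template-name require-sonarqube-approval \
    --approval-rule-template-content '{
        "Version": "2018-11-08",
        "Statements": [{
          "Type": "Approvers",
          "NumberOfApprovalsNeeded": 1,
          "ApprovalPoolMembers": ["arn:aws:sts::111111111111:assumed-role/MyCodeBuildRoleName/*"]
        }]
      }'

$ aws codecommit associate-approval-rule-template-with-repository \
    --approval-rule-template-name require-sonarqube-approval \
    --repository-name PullRequestApproverBlogDemo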

Update the repository with a SonarQube endpoint URL

For this step, you update your CodeCommit repository code to include the endpoint URL of your SonarQube instance. This allows CodeBuild to know where to go to invoke your SonarQube.

You can use the AWS Management Console to make this code change.

  1. Head back to the CodeCommit home page and choose your repository name from the Repositories list.
  2. You need a new branch on which to update the code. From the repository page, choose Branches, then Create branch.
  3. Give the new branch a name (such as update-url) and make sure you are branching from master. Choose Create branch.
  4. You should now see two branches in the table. Choose the name of your new branch (update-url) to start browsing the code on this branch. On the update-url branch, open the buildspec.yml file by choosing it.
  5. Choose Edit to make a change.
  6. In the pre_build steps, modify line 17 with your SonarQube instance URL and listen port number, as shown in the following screenshot and in the sketch after this list.
     Screenshot showing buildspec yaml code.
  7. To save, scroll down and fill out the author, email, and commit message. When you’re happy, commit this by choosing Commit changes.
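As a rough illustration only (the exact variable name and line number in the sample buildspec may differ), the edited line might end up looking something like this:

  pre_build:
    commands:
      # Hypothetical example value; substitute your own instance address and port
      - export SONAR_HOST_URL=http://ec2-12-34-56-78.compute-1.amazonaws.com:9000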

Create a Pull Request

You are now ready to create a pull request!

  1. From the CodeCommit console main page, choose Repositories and PullRequestApproverBlogDemo.
  2. In the left navigation panel, choose Pull Requests.
  3. Choose Create pull request.
  4. Select master as your destination branch, and your new branch (update-url) as the source branch.
  5. Choose Compare.
  6. Give your pull request a title and description, and choose Create pull request.

It’s time to see the magic in action. Now that you’ve created your pull request, you should already see that your pull request requires one approver but is not yet approved. This rule comes from the template you created and associated earlier.

You’ll see images like the following screenshot if you browse through the tabs on your pull request:

Screenshot showing that your pull request has 0 of 1 rule satisfied, with 0 approvals. Screenshot showing a table of approval rules on this pull request which were applied by a template. Require SonarQube approval is listed but not yet satisfied.

Thanks to the CloudWatch Events Rule, CodeBuild should already be hard at work cloning your repository, performing a build, and invoking your SonarQube instance. It is able to find the SonarQube URL you provided because CodeBuild is cloning the source branch of your pull request. If you choose to peek at your project in the CodeBuild console, you should see an in-progress build.

Once the build has completed, head back over to your CodeCommit pull request page. If all went well, you’ll be able to see that SonarQube approved your pull request and left you a comment (or, alternatively, failed, left you a comment, and did not approve).

The Activity tab should resemble that in the following screenshot:

Screenshot showing that a comment was made by SonarQube through CodeBuild, and that the quality gate passed. The comment includes a link back to the SonarQube instance.

The Approvals tab should resemble that in the following screenshot:

Screenshot of Approvals tab on the pull request. The approvals table shows an approval by the SonarQube and that the rule to require SonarQube approval is satisfied.

Suppose you need to make a change to your pull request. If you perform updates to your source branch, the approval status will be reset. As your push completes, a new SonarQube analysis will begin just as it did the first time.

Once your SonarQube thresholds are satisfied and your pull request is approved, feel free to merge it!

Cleanup

To avoid incurring additional charges, you may want to delete the AWS resources you created for this project. To do this, simply navigate to the CloudFormation console, select the stack you created above, and choose Delete. If you are sure you want to delete, confirm by choosing Delete stack. CloudFormation will delete all the resources you created with this stack.
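Equivalently, you can delete the stack from the command line; the stack name below is a placeholder for whatever you chose when creating it:

$ aws cloudformation delete-stack --stack-name pull-request-approver-stack
$ aws cloudformation wait stack-delete-complete --stack-name pull-request-approver-stack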

Conclusion

In this tutorial, you created a workflow that watches for pull request changes to your repository, triggers a CodeBuild project execution that invokes your SonarQube instance for code quality analysis, and then reports back to CodeCommit to approve your pull request.

I hope this guide illustrates the potential power of combining pull request approval rules with robotic approvers. While this example is specifically about integrating SonarQube, the same pattern can be used to invoke other robotic approvers using CodeBuild, or by invoking an AWS Lambda function instead.

This tutorial was written and tested using SonarQube Version 8.0 (build 29455).

USMCA Trade Deal Keeps DMCA-Style ‘Safe Harbor’ for ISPs

Post Syndicated from Ernesto original https://torrentfreak.com/usmca-trade-deal-keeps-dmca-style-safe-harbor-for-isps-191212/

More than a quarter-century after the United States, Canada, and Mexico approved the NAFTA trade agreement, the North American countries have now signed off on a new trade deal.

The United States-Mexico-Canada Agreement (USMCA) will accommodate changes in trade that the three countries have witnessed over the years, especially online.

The road to this final deal wasn’t without obstacles. After agreeing on the text a year ago, new demands and proposed changes were tabled, some of which were included in the Protocol of Amendments that was published this week.

The amendments don’t cover copyright issues, but the previously agreed text certainly does. For example, USMCA will require all countries to have a copyright term that continues for at least 70 years after the creator’s death.

For Canada, this means that the country’s current copyright term has to be extended by 20 years. This won’t happen instantly, as the country negotiated a transition period to consult the public on how to best meet this requirement. However, an extension seems inevitable in the long term.

Another controversial subject that was widely debated by experts and stakeholders is the DMCA-style ‘safe harbor’ text. In the US, ISPs are shielded from copyright infringement liability under the safe harbor provisions of the DMCA, and the new deal would expand this security to Mexico and Canada.

This expansion was welcomed by many large technology companies including Internet providers and hosting platforms. However, many major entertainment industry companies and rightsholder groups were not pleased with the plans, as they have been calling for safe harbor restrictions for years.

US lawmakers also raised concerns. Just a few weeks ago the House Judiciary Committee urged the US Trade Representative not to include any safe harbor language in trade deals while the Copyright Office is reviewing the effectiveness of the DMCA law.

As the USMCA negotiations reached the final stage, House Speaker Nancy Pelosi weighed in as well, trying to have safe harbor text removed from the new trade deal.

Despite this pushback, there is no mention of changes to the safe harbor section in the final amendments. This means that they will remain in the USMCA, much to the delight of major Internet companies.

That said, copyright liability protection also comes with obligations. The agreement specifies that ISPs should have legal incentives to work with copyright owners to ensure that copyright infringements are properly dealt with.

This framework shall include “legal incentives for Internet Service Providers to cooperate with copyright owners to deter the unauthorized storage and transmission of copyrighted materials or, in the alternative, to take other action to deter the unauthorized storage and transmission of copyrighted materials,” the agreement reads.

The USMCA specifically mentions that ISPs must take down pirated content and implement a repeat infringer policy if they want to apply for safe harbor protection. This is largely modeled after the DMCA law.

The safe harbors for copyright infringement and the takedown requirements don’t apply to Canada as long as it continues to rely on its current notice-and-notice scheme. However, the country will enjoy safe harbors for other objectionable content, modeled after section 230 of the US Communications Decency Act.

While the three North American countries have reached an agreement, the text still has to be ratified into local law and policy. So it may take some time before it has any effect.

Commenting on the outcome, Canadian copyright professor Michael Geist notes that the safe harbor for objectionable content is a win for freedom of expression. The additional 20-year copyright term is a setback, although the negative effects can be limited by requiring rightsholders to register for such an extension.

On the other side, rightsholders are also pleased, at least with parts of the new agreement.

“The USMCA’s provisions to strengthen copyright protections and enforcement will benefit the U.S. motion picture and television industry and support American jobs,” MPA Chairman and CEO Charles Rivkin says.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Integrating SonarCloud with AWS CodePipeline using AWS CodeBuild

Post Syndicated from Karthik Thirugnanasambandam original https://aws.amazon.com/blogs/devops/integrating-sonarcloud-with-aws-codepipeline-using-aws-codebuild/

In most development processes, common challenges include the quality of released code and the efficiency of the code review process. There are multiple tools providing insights into code quality which can easily be integrated into the daily routine of the development team. One such tool is SonarCloud, a code-analysis-as-a-service offering from SonarSource, the makers of SonarQube. By enforcing code control on three levels (syntax, code standards, and structure) before the code reaches the testing stage, it can address these challenges and help developers release high-quality code every time.

In this blog post, we will demonstrate how SonarCloud can be integrated with AWS CodePipeline using AWS CodeBuild.

AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. This enables you to rapidly and reliably deliver features and updates. You can easily integrate AWS CodePipeline with third-party services such as GitHub or with your own custom plugin.

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy.

Prerequisites:

  1. A GitHub account to log in to SonarCloud. We assume you have a fair understanding of SonarCloud.
  2. An AWS account with console access. We assume you have a sample project to integrate, in either a GitHub or an AWS CodeCommit repository.
  3. For more information on CodeBuild, refer to the getting started documentation.

High level architecture

Here, we are going to use a simple three-stage CodePipeline setup to demonstrate the integration with SonarCloud. For the source stage, we will use a sample project stored in AWS CodeCommit. For the review stage, we will use an AWS CodeBuild project to integrate with SonarCloud and perform a code quality check. For the final build stage, we will use another AWS CodeBuild project and push the built artifact to an S3 bucket.

Connect your repository with SonarCloud

First, connect your repository with SonarCloud by following these steps:

  1. Sign in to GitHub through the SonarCloud site using your GitHub credentials, as shown in the following screenshot.

SonarCloud Login screen

2. Choose Create a new project in the SonarCloud portal, as shown in the following screenshot.

Welcome screen SonarCloud

 

3. Choose Choose an organization in GitHub, as shown in the following screenshot.

Analyze projects on SonarCloud

4. Choose Install after selecting the required repositories, as shown in the following screenshot.

Install Sonar plugin

5. Your GitHub repository is now synchronized with SonarCloud. The GitHub repository in this example has a Java project. Bind the GitHub branch and choose Create Organization, as shown in the following screenshot.
choose plan for sonarcloud

6. To generate a token, go to User > My Account > Security. Your existing tokens are listed here, each with a Revoke button. Enter a new token name and choose Generate. Store the token for the succeeding steps.

 

security token for Sonarcloud access

7. Select Analyze new project.

new project setup on SonarCloud

8. Select Set up manually. Add a new Project key and click Set up.

Analyze project setup on SonarCloud

Note: We will use the Project key, Organization and token in the next step to configure CodeBuild.

Configure Secrets Manager

We will use AWS Secrets Manager to store the sonar login credentials. By using Secrets Manager, we can provide controlled access to the credentials from CodeBuild.

1. Visit the AWS Secrets Manager console to set up the sonar login credentials.

2. Select Store a new secret, and choose Other types of secrets.

3. Enter secret keys and values as shown below, based on your Organization, Project, and token.

4. Enter the secret name. In this case, we will use “prod/sonar” and save with the default settings.

AWS Secret Manager setup
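If you prefer the CLI, here is a hedged equivalent of the steps above. The values are placeholders, but the key names must match what the buildspec later references (prod/sonar:sonartoken, prod/sonar:HOST, prod/sonar:Organization, and prod/sonar:Project):

$ aws secretsmanager create-secret \
    --name prod/sonar \
    --secret-string '{
        "sonartoken": "YOUR_SONARCLOUD_TOKEN",
        "HOST": "https://sonarcloud.io",
        "Organization": "your-organization-key",
        "Project": "your-project-key"
      }'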

Configuring AWS CodeBuild

A buildspec.yml file is a collection of build commands and related settings in YAML format that CodeBuild uses to run a build. To understand buildspec.yml file specification, refer to the Build Specification Reference for CodeBuild.

Create a CodeBuild project with a name such as CodeReview for integrating with SonarCloud.

For the CodeBuild environment, use the AWS managed image with the Ubuntu operating system and Standard runtime, with image “aws/codebuild/standard:3.0”.

The buildspec.yml file in CodeBuild is structured as follows:

version: 0.2
env:
  secrets-manager:
    LOGIN: prod/sonar:sonartoken
    HOST: prod/sonar:HOST
    Organization: prod/sonar:Organization
    Project: prod/sonar:Project
phases:
  install:
    runtime-versions:
      java: openjdk8
  pre_build:
    commands:
      - apt-get update
      - apt-get install -y jq
      # Download Maven and the SonarQube scanner CLI, and put both on the PATH
      - wget http://www-eu.apache.org/dist/maven/maven-3/3.5.4/binaries/apache-maven-3.5.4-bin.tar.gz
      - tar xzf apache-maven-3.5.4-bin.tar.gz
      - ln -s apache-maven-3.5.4 maven
      - export PATH=$PATH:$PWD/maven/bin
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-3.3.0.1492-linux.zip
      - unzip ./sonar-scanner-cli-3.3.0.1492-linux.zip
      - export PATH=$PATH:$PWD/sonar-scanner-3.3.0.1492-linux/bin
  build:
    commands:
      - mvn test
      - mvn sonar:sonar -Dsonar.login=$LOGIN -Dsonar.host.url=$HOST -Dsonar.projectKey=$Project -Dsonar.organization=$Organization
      - sleep 5
      # Ask SonarCloud for the project's quality gate status
      - curl "https://sonarcloud.io/api/qualitygates/project_status?projectKey=$Project" > result.json
      - cat result.json
      # Fail the build (non-zero exit) if the quality gate reports ERROR
      - if [ "$(jq -r '.projectStatus.status' result.json)" = "ERROR" ]; then exit 1; fi

 

Note: In the pre_build phase, we download and unzip the SonarQube Scanner CLI package, which is used to interact with the SonarCloud service. You can also look for the latest scanner CLI release. In the build phase, we add commands to execute the SonarCloud check and get a response from the project’s quality gate.

The code review status of the project can also be verified in the SonarCloud dashboard, as shown in the following screenshot.

SonarCloud Quality gate sample screen

Note: Quality Gate is a feature in SonarCloud that can be configured to ensure coding standards are met and regulated across projects. You can set threshold measures on your projects like code coverage, technical debt measure, number of blocker/critical issues, security rating/unit test pass rate, and more. The last step calls the Quality Gate API to check if the code is satisfying all the conditions set in Quality Gate. Refer to the Quality Gate documentation for more information.

Quality Gate can return four possible responses:

  • ERROR: The project fails the Quality Gate.
  • WARN: The project has some irregularities but is ok to be passed on to production.
  • OK: The project successfully passes the Quality Gate.
  • None: The Quality Gate is not attached to the project.

AWS CodeBuild provides several environment variables that you can use in your build commands. CODEBUILD_BUILD_SUCCEEDING indicates whether the current build is succeeding: a value of 0 means the build is failing and 1 means it is succeeding.

Because that variable reports status rather than setting it, the buildspec above reacts to a Quality Gate ERROR response by exiting with a non-zero status, which fails the build. The resulting CodeBuild status then tells the pipeline whether to proceed or to stop.

Set up CodePipeline to verify the SonarCloud integration

Switch to your CodePipeline console to create a pipeline for your repository.

You can integrate SonarCloud in any stage in CodePipeline. In this example, we created a Review stage after the CodePipeline Source stage with CodeBuild used as an action provider, as shown in the following screenshot. Here, we have used a project from our CodeCommit repository to analyze it on SonarCloud. You should be able to link your projects from either GitHub, S3 or CodeCommit as appropriate using CodePipeline.

Sample AWS CodePipeline

Clean Up

  1. Visit the CodePipeline console, select the created pipeline, and choose Delete.
  2. Visit the CodeBuild console, select the created project, and choose Delete.
  3. Visit the Secrets Manager console, select the created secret, and choose Delete.

Conclusion

This blog demonstrated how to integrate SonarCloud with CodePipeline using CodeBuild. With this solution, you can automate static code analysis every time you have a check-in in your source code tool. Hopefully this blog post will help you integrate SonarCloud for better code quality before release. Feel free to leave suggestions or approaches on integration in the comments.

About the Authors

 

Raji Krishnamoorthy is an AWS Cloud architect working for Tata Consultancy Services.
She carries close to 16 years of experience in Microsoft .Net, SharePoint, AWS and other cloud technologies. Currently, she is leading the Public Cloud Industry Transformation Group with Tata Consultancy Services.

 

 

 

Neelam Jain is an AWS Solution Architect working for Tata Consultancy Services. She has expertise in Java and AWS DevOps technologies. Currently, she plays the role of a Senior Developer in the Public Cloud CoE group with Tata Consultancy Services.

How to Leverage a CASB for Your AWS Environment

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how-to-leverage-a-casb-for-your-aws-environment

With enterprises rapidly moving toward cloud providers, security organizations are seeking out methods to implement security controls. Attendees of this webinar will discover how to take advantage of cloud access security brokers (CASBs) to integrate modern technologies and secure their AWS footprint with a suite of capabilities.

Attendees will learn:

  • How a CASB can help make sense of auditing data
  • How a CASB can provide data protection and storage security
  • What common features of CASBs can be leveraged to secure AWS deployments

How we used our new GraphQL Analytics API to build Firewall Analytics

Post Syndicated from Nick Downie original https://blog.cloudflare.com/how-we-used-our-new-graphql-api-to-build-firewall-analytics/


Firewall Analytics is the first product in the Cloudflare dashboard to utilize the new GraphQL Analytics API. All Cloudflare dashboard products are built using the same public APIs that we provide to our customers, allowing us to understand the challenges they face when interfacing with our APIs. This parity helps us build and shape our products, most recently the new GraphQL Analytics API that we’re thrilled to release today.

By defining the data we want, along with the response format, our GraphQL Analytics API has enabled us to prototype new functionality and iterate quickly from our beta user feedback. It is helping us deliver more insightful analytics tools within the Cloudflare dashboard to our customers.

Our user research and testing for Firewall Analytics surfaced common use cases in our customers’ workflow:

  • Identifying spikes in firewall activity over time
  • Understanding the common attributes of threats
  • Drilling down into granular details of an individual event to identify potential false positives

We can address all of these use cases using our new GraphQL Analytics API.

GraphQL Basics

Before we look into how to address each of these use cases, let’s take a look at the format of a GraphQL query and how our schema is structured.

A GraphQL query is composed of a structured set of fields, for which the server provides corresponding values in its response. The schema defines which fields are available and their type. You can find more information about the GraphQL query syntax and format in the official GraphQL documentation.

To run some GraphQL queries, we recommend downloading a GraphQL client, such as GraphiQL, to explore our schema and run some queries. You can find documentation on getting started with this in our developer docs.

At the top level of the schema is the viewer field. This represents the top level node of the user running the query. Within this, we can query the zones field to find zones the current user has access to, providing a filter argument, with a zoneTag of the identifier of the zone we’d like to narrow down to.

{
  viewer {
    zones(filter: { zoneTag: "YOUR_ZONE_ID" }) {
      # Here is where we'll query our firewall events
    }
  }
}

Now that we have a query that finds our zone, we can start querying the firewall events which have occurred in that zone, to help solve some of the use cases we’ve identified.

Visualising spikes in firewall activity

It’s important for customers to be able to visualise and understand anomalies and spikes in their firewall activity, as these could indicate an attack or be the result of a misconfiguration.

Plotting events in a timeseries chart, by their respective action, provides users with a visual overview of the trend of their firewall events.

Within the zones field in the query we created earlier, we can further query our firewall event aggregates using the firewallEventsAdaptiveGroups field and providing arguments for:

  • A limit for the count of groups
  • A filter for the date range we’re looking for (combined with any user-entered filters)
  • A list of fields to orderBy (in this case, just the datetimeHour field that we’re grouping by).

By adding the dimensions field, we’re querying for groups of firewall events, aggregated by the fields nested within dimensions. In this case, our query includes the action and datetimeHour fields, meaning the response will be groups of firewall events which share the same action, and fall within the same hour. We also add a count field, to get a numeric count of how many events fall within each group.

query FirewallEventsByTime($zoneTag: string, $filter: FirewallEventsAdaptiveGroupsFilter_InputObject) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      firewallEventsAdaptiveGroups(
        limit: 576
        filter: $filter
        orderBy: [datetimeHour_DESC]
      ) {
        count
        dimensions {
          action
          datetimeHour
        }
      }
    }
  }
}

Note – Each of our groups queries requires a limit to be set. A firewall event can have one of 8 possible actions, and we are querying over a 72 hour period. At most, we’ll end up with 576 groups, so we can set that as the limit for our query.

This query would return a response in the following format:

{
  "viewer": {
    "zones": [
      {
        "firewallEventsAdaptiveGroups": [
          {
            "count": 5,
            "dimensions": {
              "action": "jschallenge",
              "datetimeHour": "2019-09-12T18:00:00Z"
            }
          }
          ...
        ]
      }
    ]
  }
}

We can then take these groups and plot each as a point on a time series chart. Mapping over the firewallEventsAdaptiveGroups array, we can use each group’s count property for the y-axis of our chart, then use the nested fields within the dimensions object: action as the unique series, and datetimeHour as the timestamp on the x-axis.

[Screenshot: time series chart of firewall events, grouped by action]

Top Ns

After identifying a spike in activity, our next step is to highlight events with commonality in their attributes. For example, if a certain IP address or individual user agent is causing many firewall events, this could be a sign of an individual attacker, or could be surfacing a false positive.

Similarly to before, we can query aggregate groups of firewall events using the firewallEventsAdaptiveGroups field. However, in this case, instead of supplying action and datetimeHour to the group’s dimensions, we can add individual fields that we want to find common groups of.

By ordering by descending count, we’ll retrieve groups with the highest commonality first, limiting to the top 5 of each. We can add a single field nested within dimensions to group by it. For example, adding clientIP will give five groups with the IP addresses causing the most events.

We can also add a firewallEventsAdaptiveGroups field with no nested dimensions. This will create a single group which allows us to find the total count of events matching our filter.

query FirewallEventsTopNs($zoneTag: string, $filter: FirewallEventsAdaptiveGroupsFilter_InputObject) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      topIPs: firewallEventsAdaptiveGroups(
        limit: 5
        filter: $filter
        orderBy: [count_DESC]
      ) {
        count
        dimensions {
          clientIP
        }
      }
      topUserAgents: firewallEventsAdaptiveGroups(
        limit: 5
        filter: $filter
        orderBy: [count_DESC]
      ) {
        count
        dimensions {
          userAgent
        }
      }
      total: firewallEventsAdaptiveGroups(
        limit: 1
        filter: $filter
      ) {
        count
      }
    }
  }
}

Note – we can add the firewallEventsAdaptiveGroups field multiple times within a single query, each aliased differently. This allows us to fetch multiple different groupings by different fields, or with no groupings at all. In this case, getting a list of top IP addresses, top user agents, and the total events.

[Screenshot: top IP addresses and top user agents tables, each with counts and proportion bars]

We can then reference each of these aliases in the UI, mapping over their respective groups to render each row with its count and a bar representing that row’s share of total events.

Are these firewall events false positives?

After users have identified spikes, anomalies and common attributes, we wanted to surface more information as to whether these have been caused by malicious traffic, or are false positives.

To do this, we wanted to provide additional context on the events themselves, rather than just counts. We can do this by querying the firewallEventsAdaptive field for these events.

Our GraphQL schema uses the same filter format for both the aggregate firewallEventsAdaptiveGroups field and the raw firewallEventsAdaptive field. This allows us to use the same filters to fetch the individual events which add up to the counts and aggregates in the visualisations above.

query FirewallEventsList($zoneTag: string, $filter: FirewallEventsAdaptiveFilter_InputObject) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      firewallEventsAdaptive(
        filter: $filter
        limit: 10
        orderBy: [datetime_DESC]
      ) {
        action
        clientAsn
        clientCountryName
        clientIP
        clientRequestPath
        clientRequestQuery
        datetime
        rayName
        source
        userAgent
      }
    }
  }
}

[Screenshot: table of individual firewall events with the requested fields]

Once we have our individual events, we can render all of the individual fields we’ve requested, providing users the additional context they need to determine whether each event is a false positive or not.

That’s how we used our new GraphQL Analytics API to build Firewall Analytics, helping solve some of our customers’ most common security workflow use cases. We’re excited to see what you build with it, and the problems you can help tackle.

You can find out how to get started querying our GraphQL Analytics API using GraphiQL in our developer documentation, or learn more about writing GraphQL queries on the official GraphQL Foundation documentation.

Introducing the GraphQL Analytics API: exactly the data you need, all in one place

Post Syndicated from Filipp Nisenzoun original https://blog.cloudflare.com/introducing-the-graphql-analytics-api-exactly-the-data-you-need-all-in-one-place/


Today we’re excited to announce a powerful and flexible new way to explore your Cloudflare metrics and logs, with an API conforming to the industry-standard GraphQL specification. With our new GraphQL Analytics API, all of your performance, security, and reliability data is available from one endpoint, and you can select exactly what you need, whether it’s one metric for one domain or multiple metrics aggregated for all of your domains. You can ask questions like “How many cached bytes have been returned for these three domains?” Or, “How many requests have all the domains under my account received?” Or even, “What effect did changing my firewall rule an hour ago have on the responses my users were seeing?”

The GraphQL standard also has strong community resources, from extensive documentation to front-end clients, making it easy to start creating simple queries and progress to building your own sophisticated analytics dashboards.

From many APIs…

Providing insights has always been a core part of Cloudflare’s offering. After all, by using Cloudflare, you’re relying on us for key parts of your infrastructure, and so we need to make sure you have the data to manage, monitor, and troubleshoot your website, app, or service. Over time, we developed a few key data APIs, including ones providing information regarding your domain’s traffic, DNS queries, and firewall events. This multi-API approach was acceptable while we had only a few products, but we started to run into some challenges as we added more products and analytics. We couldn’t expect users to adopt a new analytics API every time they started using a new product. In fact, some of the customers and partners that were relying on many of our products were already becoming confused by the various APIs.

Following the multi-API approach was also affecting how quickly we could develop new analytics within the Cloudflare dashboard, which is used by more people for data exploration than our APIs. Each time we built a new product, our product engineering teams had to implement a corresponding analytics API, which our user interface engineering team then had to learn to use. This process could take up to several months for each new set of analytics dashboards.

…to one

Our new GraphQL Analytics API solves these problems by providing access to all Cloudflare analytics. It offers a standard, flexible syntax for describing exactly the data you need and provides predictable, matching responses. This approach makes it an ideal tool for:

  1. Data exploration. You can think of it as a way to query your own virtual data warehouse, full of metrics and logs regarding the performance, security, and reliability of your Internet property.
  2. Building amazing dashboards, which allow for flexible filtering, sorting, and drilling down or rolling up. Creating these kinds of dashboards would normally require paying thousands of dollars for a specialized analytics tool. You get them as part of our product and can customize them for yourself using the API.

In a companion post that was also published today, my colleague Nick discusses using the GraphQL Analytics API to build dashboards. So, in this post, I’ll focus on examples of how you can use the API to explore your data. To make the queries, I’ll be using GraphiQL, a popular open-source querying tool that takes advantage of GraphQL’s capabilities.

Introspection: what data is available?

The first thing you may be wondering: if the GraphQL Analytics API offers access to so much data, how do I figure out what exactly is available, and how I can ask for it? GraphQL makes this easy by offering “introspection,” meaning you can query the API itself to see the available data sets, the fields and their types, and the operations you can perform. GraphiQL uses this functionality to provide a “Documentation Explorer,” query auto-completion, and syntax validation. For example, here is how I can see all the data sets available for a zone (domain):

[Screenshot: GraphiQL Documentation Explorer listing the data sets available for a zone]
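GraphiQL builds that Documentation Explorer from GraphQL’s standard introspection mechanism, which you can also query directly. For example, a minimal introspection query listing the types in the schema:

{
  __schema {
    types {
      name
      kind
    }
  }
}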

If I’m writing a query, and I’m interested in data on firewall events, auto-complete will help me quickly find relevant data sets and fields:

[Screenshot: GraphiQL auto-complete suggesting firewall-related data sets and fields]

Querying: examples of questions you can ask

Let’s say you’ve made a major product announcement and expect a surge in requests to your blog, your application, and several other zones (domains) under your account. You can check if this surge materializes by asking for the requests aggregated under your account, in the 30 minutes after your announcement post, broken down by the minute:

{
  viewer {
    accounts(filter: {accountTag: $accountTag}) {
      httpRequests1mGroups(limit: 30, filter: {datetime_geq: "2019-09-16T20:00:00Z", datetime_lt: "2019-09-16T20:30:00Z"}, orderBy: [datetimeMinute_ASC]) {
        dimensions {
          datetimeMinute
        }
        sum {
          requests
        }
      }
    }
  }
}

Here is the first part of the response, showing requests for your account, by the minute:

[Screenshot: response showing per-minute request counts for the account]

Now, let’s say you want to compare the traffic coming to your blog versus your marketing site over the last hour. You can do this in one query, asking for the number of requests to each zone:

{
  viewer {
    zones(filter: {zoneTag_in: [$zoneTag1, $zoneTag2]}) {
      httpRequests1hGroups(limit: 2, filter: {datetime_geq: "2019-09-16T20:00:00Z", datetime_lt: "2019-09-16T21:00:00Z"}) {
        sum {
          requests
        }
      }
    }
  }
}

Here is the response:

[Screenshot: response showing request totals for each of the two zones]

Finally, let’s say you’re seeing an increase in error responses. Could this be correlated to an attack? You can look at error codes and firewall events over the last 15 minutes, for example:

{
  viewer {
    zones(filter: {zoneTag: $zoneTag}) {
      httpRequests1mGroups(limit: 100, filter: {datetime_geq: "2019-09-16T21:00:00Z", datetime_lt: "2019-09-16T21:15:00Z"}) {
        sum {
          responseStatusMap {
            edgeResponseStatus
            requests
          }
        }
      }
      firewallEventsAdaptiveGroups(limit: 100, filter: {datetime_geq: "2019-09-16T21:00:00Z", datetime_lt: "2019-09-16T21:15:00Z"}) {
        dimensions {
          action
        }
        count
      }
    }
  }
}

Notice that, in this query, we’re looking at multiple datasets at once, using a common zone identifier to “join” them. Here are the results:

[Screenshot: response showing edge response status counts alongside firewall event counts]

By examining both data sets in parallel, we can see a correlation: 31 requests were “dropped” or blocked by the Firewall, which is exactly the same as the number of “403” responses. So, the 403 responses were a result of Firewall actions.

Try it today

To learn more about the GraphQL Analytics API and start exploring your Cloudflare data, follow the “Getting started” guide in our developer documentation, which also has details regarding the current data sets and time periods available. We’ll be adding more data sets over time, so take advantage of the introspection feature to see the latest available.

Finally, to make way for the new API, the Zone Analytics API is now deprecated and will be sunset on May 31, 2020. The data that Zone Analytics provides is available from the GraphQL Analytics API. If you’re currently using the API directly, please follow our migration guide to change your API calls. If you get your analytics using the Cloudflare dashboard or our Datadog integration, you don’t need to take any action.

One more thing….

In the API examples above, if you find it helpful to get analytics aggregated for all the domains under your account, we have something else you may like: a brand new Analytics dashboard (in beta) that provides this same information. If your account has many zones, the dashboard is helpful for knowing summary information on metrics such as requests, bandwidth, cache rate, and error rate. Give it a try and let us know what you think using the feedback link above the new dashboard.

What Programming Languages Do You Need to Work in Data Science?

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/at-work/tech-careers/what-skills-do-you-need-to-work-in-data-science

Data scientists and software engineers who work with big data are in high demand. Thinknum Media called this field the hottest profession in 2019. Job search site Indeed earlier this year reported that job listings for data scientists jumped 31 percent between 2017 and 2018, while searches only increased 14 percent.

But what skills do you need to fill this lucrative niche?

Indeed set out to answer that question by looking at 500 tech skill terms related to data science that appeared in tech jobs posted on the site during the past five years. The analysis determined that, while Python dominates, Spark is on the fastest growth path and demand for engineers familiar with the statistical programming language R is also growing fast. Also on the radar: Hadoop, Tableau, SAS, Matlab, Redshift, and TensorFlow. [See graph, below, which omits Python because demand is literally off the charts, and because it is not strictly a data science skill.]

In terms of exactly how these skills are being applied, Indeed looked at four fields that require data scientists. Machine learning came out on top—and is growing the fastest—followed by artificial intelligence, deep learning, and natural language processing. [See graph, below.]

[$] Buffered I/O without page-cache thrashing

Post Syndicated from corbet original https://lwn.net/Articles/806980/rss

Linux offers two modes for file I/O: buffered and direct. Buffered I/O passes through the kernel’s page cache; it is relatively easy to use and can yield significant performance benefits for data that is accessed multiple times. Direct I/O, instead, goes straight between a user-space buffer and the storage device. It can be much faster for situations where caching by the operating system isn’t necessary, but it is complex to use and contains traps for the unwary. Now, it seems, Jens Axboe has come up with a way to get many of the benefits of direct I/O with a lot less bother.

Scaring People into Supporting Backdoors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/12/scaring_people_.html

Back in 1998, Tim May warned us of the “Four Horsemen of the Infocalypse”: “terrorists, pedophiles, drug dealers, and money launderers.” I tended to cast it slightly differently. This is me from 2005:

Beware the Four Horsemen of the Information Apocalypse: terrorists, drug dealers, kidnappers, and child pornographers. Seems like you can scare any public into allowing the government to do anything with those four.

Which particular horseman is in vogue depends on time and circumstance. Since the terrorist attacks of 9/11, the US government has been pushing the terrorist scare story. Recently, it seems to have switched to pedophiles and child exploitation. It began in September, with a long New York Times story on child sex abuse, which included this dig at encryption:

And when tech companies cooperate fully, encryption and anonymization can create digital hiding places for perpetrators. Facebook announced in March plans to encrypt Messenger, which last year was responsible for nearly 12 million of the 18.4 million worldwide reports of child sexual abuse material, according to people familiar with the reports. Reports to the authorities typically contain more than one image, and last year encompassed the record 45 million photos and videos, according to the National Center for Missing and Exploited Children.

(That’s wrong, by the way. Facebook Messenger already has an encrypted option. It’s just not turned on by default, like it is in WhatsApp.)

That was followed up by a conference by the US Department of Justice: “Lawless Spaces: Warrant Proof Encryption and its Impact on Child Exploitation Cases.” US Attorney General William Barr gave a speech on the subject. Then came an open letter to Facebook from Barr and others from the UK and Australia, using “protecting children” as the basis for their demand that the company not implement strong end-to-end encryption. (I signed on to another open letter in response.) Then, the FBI tried to get Interpol to publish a statement denouncing end-to-end encryption.

This week, the Senate Judiciary Committee held a hearing on backdoors: “Encryption and Lawful Access: Evaluating Benefits and Risks to Public Safety and Privacy.” Video, and written testimonies, are available at the link. Eric Neuenschwander from Apple was there to support strong encryption, but the other witnesses were all against it. New York District Attorney Cyrus Vance was true to form:

In fact, we were never able to view the contents of his phone because of this gift to sex traffickers that came, not from God, but from Apple.

It was a disturbing hearing. The Senators asked technical questions of people who couldn’t answer them. The result was that an adjunct law professor was able to frame the issue of strong encryption as an externality caused by corporate liability dumping, and another example of Silicon Valley’s anti-regulation stance.

Let me be clear. None of us who favor strong encryption is saying that child exploitation isn’t a serious crime, or a worldwide problem. We’re not saying that about kidnapping, international drug cartels, money laundering, or terrorism. We are saying three things. One, that strong encryption is necessary for personal and national security. Two, that weakening encryption does more harm than good. And three, law enforcement has other avenues for criminal investigation than eavesdropping on communications and stored devices (this is just one example).

So let’s have reasoned policy debates about encryption — debates that are informed by technology. And let’s stop it with the scare stories.

Really, really awesome Raspberry Pi NeoPixel LED mirror

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/awesome-neopixel-led-mirror/

Check out Super Make Something’s awesome NeoPixel LED mirror: a 576-LED RGB display that uses the Raspberry Pi Camera Module and a Raspberry Pi 3B+ to turn captured images into a pixelated light show.

Neopixel LED Mirror (Python, Raspberry Pi, Arduino, 3D Printing, Laser Cutting!) DIY How To

Time to pull out all the stops for the biggest Super Make Something project to date! Using 3D printing, laser cutting, a Raspberry Pi, computer vision, Python, and nearly 600 Neopixel LEDs, I build a low resolution LED mirror that displays your reflection on a massive 3 foot by 3 foot grid made from an array of 24 by 24 RGB LEDs!

Mechanical mirrors

If you’re into cool uses of tech, you may be aware of Daniel Rozin, the creative artist building mechanical mirrors out of wooden panels, trash, and…penguins, to name but a few of his wonderful builds.

A woman standing in front of a mechanical mirror made of toy penguins

Yup, this is a mechanical mirror made of toy penguins.

A digital mechanical mirror?

Inspired by Daniel Rozin’s work, Alex, the person behind Super Make Something, put an RGB LED spin on the concept, producing this stunning mirror that thoroughly impressed visitors at Cleveland Maker Faire last month.

“Inspired by Danny Rozin’s mechanical mirrors, this 3 foot by 3 foot mirror is powered by a Raspberry Pi, and uses Python and OpenCV computer vision libraries to process captured images in real time to light up 576 individual RGB LEDs!” Alex explains on Instagram. “Also onboard are nearly 600 3D-printed squares to diffuse the light from each NeoPixel, as well as 16 laser-cut panels to hold everything in place!”
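For a feel of the image-processing side, here’s a rough, hypothetical sketch of the capture-downsample-display loop; the serpentine index math and the commented-out rpi_ws281x calls are our assumptions, not Alex’s actual code:

```python
import cv2  # OpenCV, the computer vision library the project uses

GRID = 24  # a 24 x 24 grid = 576 NeoPixels

cap = cv2.VideoCapture(0)  # Raspberry Pi camera exposed through V4L2
ok, frame = cap.read()
if ok:
    # Shrink the full camera frame down to one pixel per LED.
    small = cv2.resize(frame, (GRID, GRID), interpolation=cv2.INTER_AREA)
    for y in range(GRID):
        for x in range(GRID):
            b, g, r = small[y, x]  # OpenCV stores pixels as BGR
            # Assumed serpentine wiring: odd rows run right-to-left.
            idx = y * GRID + (x if y % 2 == 0 else GRID - 1 - x)
            # strip.setPixelColor(idx, Color(r, g, b))  # rpi_ws281x call
    # strip.show()  # push the new frame to the LEDs
cap.release()
```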

The video above gives a brilliantly detailed explanation of how Alex made the mirror, so we highly recommend giving it a watch if you’re feeling inspired to make your own.

Seriously, we really want to make one of these for Raspberry Pi Towers!

As always, be sure to subscribe to Super Make Something on YouTube and leave a comment on the video if, like us, you love the project. Most online makers are producing content such as this with very little return on their investment, so every like and subscriber really does make a difference.

The post Really, really awesome Raspberry Pi NeoPixel LED mirror appeared first on Raspberry Pi.

Apple Hits Encryption Key With DMCA Notice, Panic Shuts Down the Jailbreak Sub-Reddit

Post Syndicated from Andy original https://torrentfreak.com/apple-hits-encryption-key-with-dmca-notice-panic-shuts-down-the-jailbreak-sub-reddit-191212/


To most users, mobile computing devices such as phones and tablets exist to be used however their owner sees fit. However, the majority are restricted to prevent the adventurous from doing whatever they like with their own hardware.

To bypass these restrictions, users can utilize a so-called jailbreak tool. These unlock the digital handcuffs deployed on a device and grant additional freedoms that aren’t available as standard. As such, they are popular with modders who enjoy customizing their hardware with new features that otherwise wouldn’t exist.

Since Apple is viewed as one of the most restrictive manufacturers, its hardware and software face almost continual ‘attacks’ from people wanting to jailbreak its devices. There are many communities online dedicated to this scene, including Reddit’s 462,000-member /r/jailbreak forum.

Yesterday, however, chaos reigned after Reddit’s legal team received multiple DMCA notices targeting a number of threads detailing a pair of prominent jailbreak tools, checkra1n and unc0ver.

“Reddit Legal have removed 5 posts (all release posts) for checkra1n and unc0ver. We don’t know what exactly was the copyright about. Admins never told us, we just saw their actions in our mod log,” a moderator explained.

Perhaps unsurprisingly, many linked the issues facing /r/jailbreak to an earlier drama on Twitter, in which a tweet by iOS hacker S1guza containing an Apple encryption key was taken down following a DMCA notice. It took a few hours, but the tweet was ultimately reinstated last evening. No specific reasons were given for taking it down, and none were provided for putting it back up.

The Twitter takedown notice was sent by Kilpatrick Townsend & Stockton LLP, a law firm that has acted on Apple’s behalf in the past. The notice itself, published on the Lumen Database thanks to Twitter, also provides no useful details as to why the tweet was targeted.

Since Apple was behind the takedown on Twitter and the most obvious culprit in respect of the DMCA takedowns on Reddit, many fingers were pointed towards the Cupertino-based company. However, despite the best efforts of the moderators on /r/jailbreak, Reddit’s admins would not provide the necessary information to identify who filed the DMCA notices or on what grounds.

With uncertainty apparently the order of the day, moderators of the discussion forum took the drastic decision to put their platform into lockdown.

“Locking down the subreddit to prevent new threads is one of the ‘standard’ responses moderators take to show the admins that the mod team isn’t playing, and that they are serious and ready to remedy the issue,” a post from the mods reads.

“Too many DMCA notices eventually end up with a warn and a ban (or just a ban) from the admins to whatever subreddit these notices are being sent to.”

While the DMCA notices themselves are clearly the biggest issue here, Reddit, unlike Twitter and Google, does not routinely share the notices it receives with an external database such as Lumen. If it did, the additional transparency would perhaps help to shine some light on the topic and prevent heavy self-imposed actions, such as the voluntary lockdown of the jailbreak sub.

Moderators report that Reddit’s admins were initially unresponsive to requests for information and that a database that tracks DMCA notices sent to Reddit didn’t provide any helpful details on the sender of the notices.

Last evening, however, one of the affected jailbreak developers, ‘qwertyoruiopz’, announced on Twitter that things were on their way to being resolved and that the sub had been taken out of ‘lockdown mode’.

Soon after, a welcome response from Reddit’s admins was published, effectively signaling the all-clear.

While the message was well-received, /r/jailbreak shouldn’t have been obliged to take such serious action to preserve its existence. The jailbreaking of iOS devices is considered legal in the US and the DMCA notices filed against Reddit clearly caught everyone by surprise.

It is still unknown whether they were indeed sent by Apple, so the possibility remains that an imposter of some kind sent them to unsettle the community. Nevertheless, it is good news that, according to Reddit’s admins, all complaints have been lifted because the claims were invalid.

Without transparency from Reddit, however, the true nature of what happened is likely to remain a mystery. That being said, the moderators of /r/jailbreak deserve a big pat on the back for taking decisive action quickly. Things could easily have spiraled out of control, but by showing good intent early on, the situation was brought back into line in relatively short order.

Now, let’s see those notices to determine who sent them, and why.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

[$] Working toward securing PyPI downloads

Post Syndicated from jake original https://lwn.net/Articles/806986/rss

An effort to protect package downloads from the Python Package Index (PyPI) has resulted in a Python Enhancement Proposal (PEP) and, perhaps belatedly, some discussion in the wider community. The basic idea is to use The Update Framework (TUF) to protect PyPI users from some malicious actors who are aiming to interfere with the installation and update of Python modules. But the name of the PEP and its wording, coupled with some recent typosquatting problems on PyPI, caused some confusion along the way. There are some competing interests and different cultures coming together over this PEP; the process has not run as smoothly as anyone might want, though that seems to be resolving itself at this point.
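For a flavor of what a TUF-protected download looks like, here is a rough sketch using the python-tuf reference client’s ngclient API; the URLs and target name are placeholders, and PyPI’s eventual integration may look quite different:

```python
from tuf.ngclient import Updater  # python-tuf reference implementation

# Placeholder endpoints; PyPI's real deployment would define its own.
updater = Updater(
    metadata_dir="/tmp/tuf-metadata",  # must already hold a trusted root.json
    metadata_base_url="https://example.org/metadata/",
    target_base_url="https://example.org/targets/",
    target_dir="/tmp/tuf-targets",
)

updater.refresh()  # fetch and cryptographically verify the signed metadata
info = updater.get_targetinfo("somepackage-1.0.tar.gz")
if info is not None:
    # The download succeeds only if length and hashes match the signed
    # metadata, so a compromised mirror cannot silently swap in a bad file.
    path = updater.download_target(info)
```

The appeal of the design is that mirrors and CDNs need not be trusted at all; any tampering surfaces as a verification failure on the client.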
