Tag Archives: HoC

Traveling “Kodi Repair Men” Are Apparently a Thing Now

Post Syndicated from Andy original https://torrentfreak.com/traveling-kodi-repair-men-are-apparently-a-thing-now-170625/

Earlier this month, third-party Kodi add-on ZemTV and the TVAddons library were sued in a federal court in Texas.

The complaint, filed by American satellite and broadcast provider Dish Network, accused the pair of copyright infringement and demanded $150,000 for each offense.

With that case continuing, there has been significant fallout. Not only has the TVAddons repository disappeared, but addon developers have been falling like dominoes.

Of course, there are large numbers of people out there who are able to acquire and install new addons to restore performance to their faltering setups. These enthusiasts can weather the storms, with most understanding that such setbacks are all part of the piracy experience.

However, unlike most other types of Internet piracy, the world of augmented Kodi setups has a somewhat unusual characteristic.

Although numbers are impossible to come by, it’s likely that the majority of users have no idea how the software in their ‘pirate’ box actually works. This is because, through convenience or a lack of knowledge, they bought their device already set up. So what can these people do?

Well, for some it’s a case of trawling the Internet for help and advice to learn how to reprogram the hardware themselves. It may take time, but those with the patience will be glad they did since it will help them deal with similar problems in the future.

For others, it’s taking the misguided route of trying to get the entirely legal (and probably sick-to-the-teeth) official Kodi team to solve their problems on Twitter. Pro tip: Don’t bother, they’re not interested.

Kodi.tv are not interested in piracy problems

It’s likely that the remainder will take their device back to where they bought it, complain like crazy, and then get things fixed for a small fee. But for those running out of options, never fear – there’s another innovative solution available.

In a local pub this week I overheard a discussion about “everybody’s Kodi going off” which wasn’t a big shock given recent developments. However, what did surprise me was the revelation that a local guy is now touring pubs in the area doing on-site “Kodi repairs.”

To put things back in working order using a laptop, he’s charging $25/£20/€23 or, for those with an Amazon Firestick, a $50/£40 trade-in for a new, fully-loaded stick. Apparently, the whole thing takes about 15 to 20 minutes and is conveniently carried out while having a drink. The service is obviously illegal, but it’s amazing how quickly opportunists step in to make a few bucks.

That being said, the notion of ‘Kodi repair men’ appearing in the flesh is perhaps not such a surprise after all. Countless millions of these devices have been sold, and they invariably go wrong when pirate sources have issues. In reality, it would be more of a surprise if repairers didn’t exist because there’s clearly a lot of demand.

But exist they do and some are even doing home visits. One, who offers to assist people “for a small call out charge” via his Facebook page, has been receiving glowing reviews, like the one shown below.

Thanks for the help KodiMan

In many cases, these “repair men” are actually the same people selling the pre-configured boxes in the first place. Like pirate DVD sellers, PlayStation modders, and similar characters before them, they’re heroes to many people, particularly those in cash-deprived areas. They’re seen as Robin Hoods who can cut subscription TV prices by 95% and ensure sporting events keep flowing for next to nothing.

What remains to be seen though is how busy these people will be in the future. When people’s devices stop working there’s obviously a lot of bad feeling, so paying each time for “repairs” could eventually become tiresome. That’s certainly what copyright holders are hoping for, so expect further action against more addon providers in the future.

But in the meantime, and despite the trouble, ‘pirate’ Kodi devices are still selling like hot cakes. Contrary to some suggestions, they’re easily purchased from sites like eBay, and plenty of local publications are carrying ads. For those prepared to do the work themselves, everything is a lot cheaper and easier to fix when it goes wrong.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

How to Create an AMI Builder with AWS CodeBuild and HashiCorp Packer – Part 2

Post Syndicated from Heitor Lessa original https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer-part-2/

Written by AWS Solutions Architects Jason Barto and Heitor Lessa

 
In Part 1 of this post, we described how AWS CodeBuild, AWS CodeCommit, and HashiCorp Packer can be used to build an Amazon Machine Image (AMI) from the latest version of Amazon Linux. In this post, we show how to use AWS CodePipeline, AWS CloudFormation, and Amazon CloudWatch Events to continuously ship new AMIs. We use Ansible by Red Hat to harden the OS on the AMIs through a well-known set of security controls outlined by the Center for Internet Security in its CIS Amazon Linux Benchmark.

You’ll find the source code for this post in our GitHub repo.

At the end of this post, we will have the architecture described in the sections that follow: AWS CodePipeline orchestrating AWS CodeBuild, Packer, and Ansible to continuously produce hardened AMIs, with CloudWatch Events and Amazon SNS handling notifications.

Requirements

 
To follow along, you will need Git and a text editor. Make sure Git is configured to work with AWS CodeCommit, as described in Part 1.

Technologies

 
In addition to the services and products used in Part 1 of this post, we also use these AWS services and third-party software:

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Amazon CloudWatch Events enables you to react selectively to events in the cloud and in your applications. Specifically, you can create CloudWatch Events rules that match event patterns, and take actions in response to those patterns.

AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on release process models you define.

Amazon SNS is a fast, flexible, fully managed push notification service that lets you send individual messages or fan out messages to large numbers of recipients. Amazon SNS makes it simple and cost-effective to send push notifications to mobile device users or email recipients. The service can even send messages to other distributed services.

Ansible is a simple IT automation system that handles configuration management, application deployment, cloud provisioning, ad-hoc task-execution, and multinode orchestration.

Getting Started

 
We use CloudFormation to bootstrap the following infrastructure:

  • AWS CodeCommit repository: Git repository where the AMI builder code is stored.
  • S3 bucket: Build artifact repository used by AWS CodePipeline and AWS CodeBuild.
  • AWS CodeBuild project: Executes the AWS CodeBuild instructions contained in the build specification file.
  • AWS CodePipeline pipeline: Orchestrates the AMI build process, triggered by new changes in the AWS CodeCommit repository.
  • SNS topic: Notifies subscribed email addresses when an AMI build is complete.
  • CloudWatch Events rule: Defines how the AMI builder should send a custom event to notify an SNS topic.

The AMI Builder launch template is available in N. Virginia (us-east-1) and Ireland (eu-west-1); the launch links are provided in the original post.

After launching the CloudFormation template, we will have a pipeline in the AWS CodePipeline console. (A Failed status at this stage simply means we don’t have any data in our newly created AWS CodeCommit Git repository.)

Next, we will clone the newly created AWS CodeCommit repository.

If this is your first time connecting to an AWS CodeCommit repository, please see the instructions in our documentation on Setup steps for HTTPS Connections to AWS CodeCommit Repositories.

To clone the AWS CodeCommit repository (console)

  1. From the AWS Management Console, open the AWS CloudFormation console.
  2. Choose the AMI-Builder-Blogpost stack, and then choose Outputs.
  3. Make a note of the Git repository URL.
  4. Use git to clone the repository.

For example: git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/AMI-Builder_repo

To clone the AWS CodeCommit repository (CLI)

# Retrieve CodeCommit repo URL
git_repo=$(aws cloudformation describe-stacks --query 'Stacks[0].Outputs[?OutputKey==`GitRepository`].OutputValue' --output text --stack-name "AMI-Builder-Blogpost")

# Clone repository locally
git clone ${git_repo}

Bootstrap the Repo with the AMI Builder Structure

 
Now that our infrastructure is ready, download all the files and templates required to build the AMI from the GitHub repo mentioned at the beginning of this post.

Your local Git repo should have the following structure:

.
├── ami_builder_event.json
├── ansible
├── buildspec.yml
├── cloudformation
├── packer_cis.json

Next, push these changes to AWS CodeCommit, and then let AWS CodePipeline orchestrate the creation of the AMI:

git add .
git commit -m "My first AMI"
git push origin master

AWS CodeBuild Implementation Details

 
While we wait for the AMI to be created, let’s see what’s changed in our AWS CodeBuild buildspec.yml file:

...
phases:
  ...
  build:
    commands:
      ...
      - ./packer build -color=false packer_cis.json | tee build.log
  post_build:
    commands:
      - egrep "${AWS_REGION}\:\sami\-" build.log | cut -d' ' -f2 > ami_id.txt
      # Packer doesn't return non-zero status; we must do that if Packer build failed
      - test -s ami_id.txt || exit 1
      - sed -i.bak "s/<<AMI-ID>>/$(cat ami_id.txt)/g" ami_builder_event.json
      - aws events put-events --entries file://ami_builder_event.json
      ...
artifacts:
  files:
    - ami_builder_event.json
    - build.log
  discard-paths: yes

In the build phase, we capture Packer output into a file named build.log. In the post_build phase, we take the following actions:

  1. Look up the AMI ID created by Packer and save it to a temporary file (ami_id.txt).
  2. Force AWS CodeBuild to fail if the AMI ID (ami_id.txt) is not found. This is required because Packer doesn’t return a non-zero exit code if something goes wrong during the AMI creation process, so we have to tell AWS CodeBuild to stop by signaling that an error occurred.
  3. If an AMI ID is found, we update the ami_builder_event.json file and then notify CloudWatch Events that the AMI creation process is complete.
  4. CloudWatch Events publishes a message to an SNS topic. Anyone subscribed to the topic will be notified in email that an AMI has been created.

Lastly, the new artifacts phase instructs AWS CodeBuild to upload files built during the build process (ami_builder_event.json and build.log) to the S3 bucket specified in the Outputs section of the CloudFormation template. These artifacts can then be used as an input artifact in any later stage in AWS CodePipeline.

For information about customizing the artifacts sequence of the buildspec.yml, see the Build Specification Reference for AWS CodeBuild.

CloudWatch Events Implementation Details

 
CloudWatch Events allows you to extend the AMI builder so that it not only sends email after the AMI has been created, but also hooks up any of the supported targets to react to the AMI builder event. Publishing this event means you can decouple the actions you take after AMI completion from Packer itself and plug in other actions as you see fit.

For more information about targets in CloudWatch Events, see the CloudWatch Events API Reference.
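
As an illustration of that decoupling, here is a minimal sketch that attaches a second target to the event rule using the AWS CLI. The rule name and Lambda function ARN below are placeholders, not values from this post; look up the actual rule created by the CloudFormation stack before trying it.

# Hypothetical rule name and Lambda ARN -- substitute the rule created by
# your CloudFormation stack and a function that exists in your account.
RULE_NAME="AMIBuilderCustomEvent"
LAMBDA_ARN="arn:aws:lambda:eu-west-1:111122223333:function:post-ami-action"

# Attach the Lambda function as an additional target of the existing rule,
# alongside the SNS topic already configured by CloudFormation.
aws events put-targets \
  --rule "$RULE_NAME" \
  --targets "Id"="post-ami-action","Arn"="$LAMBDA_ARN"

For a Lambda target you would also need to grant CloudWatch Events permission to invoke the function (for example, with aws lambda add-permission).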

In this case, CloudWatch Events should receive the following event, match it with a rule we created through CloudFormation, and publish a message to SNS so that you can receive an email.

Example CloudWatch custom event

[
        {
            "Source": "com.ami.builder",
            "DetailType": "AmiBuilder",
            "Detail": "{ \"AmiStatus\": \"Created\"}",
            "Resources": [ "ami-12cd5guf" ]
        }
]

CloudWatch Events rule

{
  "detail-type": [
    "AmiBuilder"
  ],
  "source": [
    "com.ami.builder"
  ],
  "detail": {
    "AmiStatus": [
      "Created"
    ]
  }
}

Example SNS message sent in email

{
    "version": "0",
    "id": "f8bdede0-b9d7...",
    "detail-type": "AmiBuilder",
    "source": "com.ami.builder",
    "account": "<<aws_account_number>>",
    "time": "2017-04-28T17:56:40Z",
    "region": "eu-west-1",
    "resources": ["ami-112cd5guf "],
    "detail": {
        "AmiStatus": "Created"
    }
}

Packer Implementation Details

 
In addition to the changes in the build specification file, there are differences between the current version of the HashiCorp Packer template (packer_cis.json) and the one used in Part 1.

Variables

  "variables": {
    "vpc": "{{env `BUILD_VPC_ID`}}",
    "subnet": "{{env `BUILD_SUBNET_ID`}}",
    "ami_name": "Prod-CIS-Latest-AMZN-{{isotime \"02-Jan-06 03_04_05\"}}"
  },
  • ami_name: Prefixes a name used by Packer to tag resources during the Builders sequence.
  • vpc and subnet: Environment variables defined by the CloudFormation stack parameters.

We no longer assume a default VPC is present and instead use the VPC and subnet specified in the CloudFormation parameters. CloudFormation configures the AWS CodeBuild project to use these values as environment variables. They are made available throughout the build process.

That allows for more flexibility should you need to change which VPC and subnet will be used by Packer to launch temporary resources.
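
As a quick local sanity check (a sketch only, not part of the pipeline), you can export the same environment variables that AWS CodeBuild provides and ask Packer to validate the template before pushing changes. The VPC and subnet IDs below are placeholders.

# Placeholders -- use a VPC and subnet that actually exist in your account.
export BUILD_VPC_ID="vpc-0123456789abcdef0"
export BUILD_SUBNET_ID="subnet-0123456789abcdef0"

# Validate the template locally; the env-backed user variables resolve the
# same way they do inside the AWS CodeBuild container.
packer validate packer_cis.json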

Builders

  "builders": [{
    ...
    "ami_name": “{{user `ami_name`| clean_ami_name}}”,
    "tags": {
      "Name": “{{user `ami_name`}}”,
    },
    "run_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "run_volume_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "snapshot_tags": {
      "Name": “{{user `ami_name`}}",
    },
    ...
    "vpc_id": "{{user `vpc` }}",
    "subnet_id": "{{user `subnet` }}"
  }],

We now have the new tag properties (tags, run_tags, run_volume_tags, and snapshot_tags), a new function (clean_ami_name), and we launch temporary resources in the VPC and subnet specified in the environment variables. AMI names can only contain a certain set of ASCII characters. If the supplied name deviates from the expected characters (for example, it includes whitespace or slashes), Packer’s clean_ami_name function will fix it.

For more information, see functions on the HashiCorp Packer website.

Provisioners

  "provisioners": [
    {
        "type": "shell",
        "inline": [
            "sudo pip install ansible"
        ]
    }, 
    {
        "type": "ansible-local",
        "playbook_file": "ansible/playbook.yaml",
        "role_paths": [
            "ansible/roles/common"
        ],
        "playbook_dir": "ansible",
        "galaxy_file": "ansible/requirements.yaml"
    },
    {
      "type": "shell",
      "inline": [
        "rm .ssh/authorized_keys ; sudo rm /root/.ssh/authorized_keys"
      ]
    }
  ]

We used the shell provisioner to apply OS patches in Part 1. Now, we use shell to install Ansible on the target machine and ansible-local to import, install, and execute Ansible roles that bring the target machine in line with our standards.

Finally, Packer uses shell again to remove the temporary keys before it creates an AMI from the temporary target EC2 instance.

Ansible Implementation Details

 
Ansible provides OS patching through a custom Common role that can be easily customized for other tasks.

The CIS Benchmark and CloudWatch Logs are implemented through two third-party Ansible roles that are defined in ansible/requirements.yaml, as seen in the Packer template.

The Ansible provisioner uses Ansible Galaxy to download these roles onto the target machine and execute them as instructed by ansible/playbook.yaml.
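
If you want to exercise the roles outside of Packer first, a minimal local sketch looks like this. It assumes Ansible is installed on your workstation and that the repository layout shown earlier is in place; during a real build, the ansible-local provisioner’s galaxy_file setting performs the role installation on the temporary instance automatically.

# Download the third-party roles listed in ansible/requirements.yaml into
# the roles path used by the playbook.
ansible-galaxy install -r ansible/requirements.yaml -p ansible/roles

# Dry-run the playbook in check mode (no changes are made) to confirm that
# the roles and variables resolve as expected; some CIS tasks may still
# require root even in check mode.
ansible-playbook -i "localhost," -c local --check ansible/playbook.yaml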

For information about how these components are organized, see the Playbook Roles and Include Statements in the Ansible documentation.

The following Ansible playbook (ansible/playbook.yaml) controls the execution order and custom properties:

---
- hosts: localhost
  connection: local
  gather_facts: true    # gather OS info that is made available for tasks/roles
  become: yes           # majority of CIS tasks require root
  vars:
    # CIS Controls whitepaper:  http://bit.ly/2mGAmUc
    # AWS CIS Whitepaper:       http://bit.ly/2m2Ovrh
    cis_level_1_exclusions:
    # 3.4.2 and 3.4.3 effectively blocks access to all ports to the machine
    ## This can break automation; ignoring it as there are stronger mechanisms than that
      - 3.4.2 
      - 3.4.3
    # CloudWatch Logs will be used instead of Rsyslog/Syslog-ng
    ## Same would be true if any other software doesn't support Rsyslog/Syslog-ng mechanisms
      - 4.2.1.4
      - 4.2.2.4
      - 4.2.2.5
    # Autofs is not installed in newer versions, let's ignore
      - 1.1.19
    # Cloudwatch Logs role configuration
    logs:
      - file: /var/log/messages
        group_name: "system_logs"
  roles:
    - common
    - anthcourtney.cis-amazon-linux
    - dharrisio.aws-cloudwatch-logs-agent

Both third-party Ansible roles can be easily configured through variables (vars). We use Ansible playbook variables to exclude CIS controls that don’t apply to our case and to instruct the CloudWatch Logs agent to stream the /var/log/messages log file to CloudWatch Logs.

If you need to add more OS or application logs, you can easily duplicate the playbook and make changes. The CloudWatch Logs agent will ship configured log messages to CloudWatch Logs.

For more information about parameters you can use to further customize the third-party roles, download the Ansible roles for the CloudWatch Logs agent and CIS Amazon Linux from the Galaxy website.

Committing Changes

 
Now that Ansible and CloudWatch Events are configured as part of the build process, committing any changes to the AWS CodeCommit Git repository will trigger a new AMI build process that can be followed through the AWS CodePipeline console.

When the build is complete, an email will be sent to the email address you provided as a part of the CloudFormation stack deployment. The email serves as notification that an AMI has been built and is ready for use.
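
If you later want to notify additional recipients beyond the address supplied to CloudFormation, one option is to subscribe them to the topic directly with the AWS CLI. This is a sketch; the topic ARN below is a placeholder for the SNS topic the stack actually created.

# Placeholder ARN -- replace with the SNS topic created by the stack.
TOPIC_ARN="arn:aws:sns:eu-west-1:111122223333:AmiBuilderNotificationTopic"

# Add another email subscriber; the recipient must confirm the subscription
# before messages are delivered.
aws sns subscribe \
  --topic-arn "$TOPIC_ARN" \
  --protocol email \
  --notification-endpoint ops-team@example.com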

Summary

 
We used AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Packer, and Ansible to build a pipeline that continuously builds new, hardened CIS AMIs. We used Amazon SNS so that email addresses subscribed to an SNS topic are notified upon completion of the AMI build.

By treating our AMI creation process as code, we can iterate and track changes over time. In this way, it’s no different from a software development workflow. With that in mind, software patches, OS configuration, and logs that need to be shipped to a central location are only a git commit away.

Next Steps

 
Here are some ideas to extend this AMI builder:

  • Hook up a Lambda function in CloudWatch Events to update EC2 Auto Scaling configuration upon completion of the AMI build.
  • Use AWS CodePipeline parallel steps to build multiple Packer images.
  • Add a commit ID as a tag for the AMI you created (see the sketch after this list).
  • Create a scheduled Lambda function through CloudWatch Events to clean up old AMIs based on timestamp (name or additional tag).
  • Implement Windows support for the AMI builder.
  • Create a cross-account or cross-region AMI build.
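
For the commit ID idea above, a minimal sketch of an extra post_build command is shown below. It assumes it runs inside AWS CodeBuild, where CODEBUILD_RESOLVED_SOURCE_VERSION holds the commit that triggered the build and ami_id.txt was produced by the earlier egrep step; it is not part of the original buildspec.

# Hypothetical addition to the post_build phase of buildspec.yml: tag the
# freshly created AMI with the Git commit that produced it.
aws ec2 create-tags \
  --resources "$(cat ami_id.txt)" \
  --tags Key=CommitId,Value="${CODEBUILD_RESOLVED_SOURCE_VERSION}"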

CloudWatch Events allows the AMI builder to decouple AMI configuration from AMI creation, so you can easily add your own logic using targets (AWS Lambda, Amazon SQS, Amazon SNS) to act on new events or recycle EC2 instances with the new AMI.

If you have questions or other feedback, feel free to leave it in the comments or contribute to the AMI Builder repo on GitHub.

The Pirate Bay Isn’t Affected By Adverse Court Rulings – Everyone Else Is

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-isnt-affected-by-adverse-court-rulings-everyone-else-is-170618/

For more than a decade The Pirate Bay has been the world’s most controversial site. Delivering huge quantities of copyrighted content to the masses, the platform is revered and reviled across the copyright spectrum.

Its reputation is one of a defiant Internet swashbuckler, but due to changes in how the site has been run in more recent times, its current philosophy is more difficult to gauge. What has never been in doubt, however, is the site’s original intent to be as provocative as possible.

Through endless publicity stunts, some real, some just for the ‘lulz’, The Pirate Bay managed to attract a massive audience, all while incurring the wrath of every major copyright holder in the world.

Make no mistake, they all queued up to strike back, but every subsequent rightsholder action was met by a Pirate Bay middle finger, two fingers, or chin flick, depending on the mood of the day. This only served to further delight the masses, who happily spread the word while keeping their torrents flowing.

This vicious circle of being targeted by the entertainment industries, mocking them, and then reaping the traffic benefits, developed into the cheapest long-term marketing campaign the Internet had ever seen. But nothing is ever truly for free and there have been consequences.

After taunting Hollywood and the music industry with its refusals to capitulate, endless legal action that the site would have ordinarily been forced to participate in largely took place without The Pirate Bay being present. It doesn’t take a law degree to work out what happened in each and every one of those cases, whatever complex route they took through the legal system. No defense, no win.

For example, the web-blocking phenomenon across the UK, Europe, Asia and Australia was driven by the site’s absolute resilience and although there would clearly have been other scapegoats had The Pirate Bay disappeared, the site was the ideal bogeyman the copyright lobby required to move forward.

Filing blocking lawsuits and bringing hosts, advertisers, and ISPs on board for anti-piracy initiatives were also made easier with the ‘evil’ Pirate Bay still online. Immune from every anti-piracy technique under the sun, the existence of the platform in the face of all onslaughts only strengthened the cases of those arguing for even more drastic measures.

Over a decade, this has meant a significant tightening of the sharing and streaming climate. Without any big legislative changes but plenty of case law against The Pirate Bay, web-blocking is now a walk in the park, ad hoc domain seizures are a fairly regular occurrence, and few companies want to host sharing sites. Advertisers and brands are also hesitant over where they place their ads. It’s a very different world to the one of 10 years ago.

While it would be wrong to attribute every tightening of the noose to the actions of The Pirate Bay, there’s little doubt that the site and its chaotic image played a huge role in where copyright enforcement is today. The platform set out to provoke and succeeded in every way possible, gaining supporters in their millions. It could also be argued it kicked a hole in a hornets’ nest, releasing the hell inside.

But perhaps the site’s most amazing achievement is the way it has managed to stay online, despite all the turmoil.

This week yet another ruling, this time from the powerful European Court of Justice, found that by offering links in the manner it does, The Pirate Bay and other sites are liable for communicating copyright works to the public. Of course, this prompted the usual swathe of articles claiming that this could be the final nail in the site’s coffin.

Wrong.

In common with every ruling, legal defeat, and legislative restriction put in place due to the site’s activities, this week’s decision from the ECJ will have zero effect on the Pirate Bay’s availability. For right or wrong, the site was breaking the law long before this ruling and will continue to do so until it decides otherwise.

What we have instead is a further tightened legal landscape that will have a lasting effect on everything BUT the site, including weaker torrent sites, Internet users, and user-uploaded content sites such as YouTube.

With The Pirate Bay carrying on regardless, that is nothing short of remarkable.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Mysterious Group Lands Denuvo Anti-Piracy Body Blow

Post Syndicated from Andy original https://torrentfreak.com/mysterious-group-lands-denuvo-anti-piracy-body-blow-170607/

While there’s always excitement in piracy land over the release of a new movie or TV show, video gaming fans really know how to party when a previously uncracked game appears online.

When that game is protected by the infamous Denuvo anti-piracy system, champagne corks explode.

There’s been a lot of activity in this area during recent months but more recently there’s been a noticeable crescendo. As more groups have become involved in trying to defeat the system, Denuvo has looked increasingly vulnerable. Over the past 24 hours, it’s looked in serious danger.

The latest drama surrounds DISHONORED.2-STEAMPUNKS, which is a pirate release of the previously uncracked action adventure game Dishonored 2. The game uses Denuvo protection and, at the rate titles have been falling to pirates lately, its appearance wasn’t a surprise. However, the manner in which the release landed online has sent shockwaves through the scene.

The cracking scene is relatively open these days, in that people tend to have a rough idea of who the major players are. Their real-life identities are less obvious, of course, but names like CPY, Voksi, and Baldman regularly appear in discussions.

The same cannot be said about SteamPunks. With their topsite presence, they appear to be a proper ‘Scene’ group but up until yesterday, they were an unknown entity.

It’s fair to say that this dramatic appearance from nowhere raised quite a few eyebrows among the more suspicious crack aficionados. That being said, SteamPunks absolutely delivered – and then some.

Rather than simply pre-cracking Dishonored 2 (removing the protection) and then delivering it to the public, the SteamPunks release appears to contain code that enables the user to generate Denuvo licenses on a machine-by-machine basis.

If that hasn’t sunk in, the theory is that the ‘key generator’ might be able to do the same with all Denuvo-protected releases in future, blowing the system out of the water.

While that enormous feat remains to be seen, there is an unusual amount of excitement surrounding this release and the emergence of the previously unknown SteamPunks. In the words of one Reddit user, the group has delivered the cracking equivalent of The Holy Hand Grenade of Antioch, yet no one appears to have had any knowledge of them before yesterday.

Only adding to the mystery is the lack of knowledge relating to how their tool works. Perhaps ironically, perhaps importantly, SteamPunks have chosen to protect their code with VMProtect, the software system that Denuvo itself previously deployed to stop people reverse-engineering its own code.

This raises two issues. One, people could have difficulty finding out how the license generator works and two, it could potentially contain something nefarious besides the means to play Dishonored 2 for free.

With the latter in mind, a number of people in the cracking community have been testing the release but thus far, no one has found anything untoward. That doesn’t guarantee that it’s entirely clean but it does help to calm nerves. Indeed, cracking something as difficult as Denuvo in order to put out some malware seems a lot of effort when the same could be achieved much more easily.

“There is no need to break into Fort Knox to give out flyers for your pyramid scheme,” one user’s great analogy reads.

That being said, people with experience are still urging caution, which should be the case for anyone running a cracked game, no matter who released it.

Finally, another twist in the Denuvo saga arrived yesterday courtesy of VMProtect. As widely reported, someone from the company previously indicated that Denuvo had been using its VMProtect system without securing an appropriate license.

The source said that legal action was on the horizon but an announcement from VMProtect yesterday suggests that the companies are now seeing eye to eye.

“We were informed that there are open questions and some uncertainty about the use of our software by DENUVO GmbH,” VMProtect said.

“Referring to this circumstance we want to clarify that DENUVO GmbH had the right to use our software in the past and has the right to use it currently as well as in the future. In summary, no open issues exist between DENUVO GmbH and VMProtect Software for which reason you may ignore any other divergent information.”

While the above tends to imply there’s never been an issue, a little more information from VMProtect dev Ivan Permyakov may indicate that an old dispute has since been settled.

“Information about our relationship with Denuvo Software has long been outdated and irrelevant,” he said.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Popular Kodi Add-ons Quit Following Prominent Piracy Lawsuit

Post Syndicated from Ernesto original https://torrentfreak.com/popular-kodi-add-ons-quit-following-prominent-piracy-lawsuit-170607/

On Monday we broke the news that third-party Kodi add-on ZemTV and the TVAddons library were being sued in a federal court in Texas.

In a complaint filed by American satellite and broadcast provider Dish Network, both stand accused of copyright infringement, facing up to $150,000 for each offense.

The news came as a shock to many add-on developers, most of whom release their software as a hobby, with no financial motive. A potential lawsuit that can run to hundreds of thousands of dollars in damages clearly takes away most of the fun.

This could very well explain why several add-ons have shut down over the past 48 hours. While the lawsuit isn’t specifically named in most cases, there appears to be a direct connection.

One of the main add-ons that has thrown in the towel is Phoenix, which offered access to a wide range of channels, broadcasts, movies and TV shows.

“In light of current events we have decided to close down Phoenix. This is not something that was easy for us to do; we have all formed a bond that cannot be broken as a team and have a HUGE support base that we are thankful of,” Phoenix developer Cosmix writes.

“I can speak for myself when I say thank you to everybody that has ever been involved in Phoenix and it will always be one of my fondest memories,” he adds.

Cosmix’s announcement

Developer One242415, known for his work on Navi-X, Phoenix and later his own add-on, took a similar decision. He announced the news directly from his add-on which will be closed in a few days.

“I am removing my addon for good. It was a hell of a ride for me. First starting off with Navi-X, then with Mashup, then with Phoenix, and for two months with my own add-on.”

In a similar vein, developer Echo Coder also announced that all his addons will be shut down. Again, without naming a specific reason. On Twitter, he did say, however, that the recent spike in popularity of third-party add-ons was not beneficial to the community.

“The reality is we did say the growth of third party popularity would hinder us. Unfortunately, now it looks like an implosion,” he tweeted yesterday.

A few hours later this message was followed up with a note that he had pulled his own add-ons offline.

“Thank you for the last year. My addons are now off-line. Its been emotional. Take care,” Echo Coder wrote.

Echo Coder’s announcement

The above is just the tip of the iceberg. Several other third-party projects and add-ons have also shut down, announced a temporary hiatus, or other changes.

Various Kodi community websites, including Kodi Geeks, are trying to keep up with all the add-ons that are toppling, and uncertainty remains. The community is in a state of turmoil, and it will take several more days to see what the exact fallout will be.

Assuming that the Dish lawsuit is indeed the main trigger for the recent uproar, it is clear that many developers prefer to stay out of trouble. And with Kodi-related piracy in the spotlight of copyright holders, legal pressure is likely to increase.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

New “Out of Control” Denuvo Piracy Protection Cracked

Post Syndicated from Andy original https://torrentfreak.com/new-control-denuvo-piracy-protection-cracked-170602/

Like many games in recent times, indie title RiME uses Denuvo anti-piracy technology to keep the swashbucklers away. It won’t stay that way for long.

Earlier this week, RiME developer Tequila Works grabbed a few headlines after stating it would remove the Denuvo protection from its game, should it fall to crackers.

“I have seen some conversations about our use of Denuvo anti-tamper, and I wanted to take a moment to address it,” RiME community manager Dariuas wrote on Steam forums.

“RiME is a very personal experience told through both sight and sound. When a game is cracked, it runs the risk of creating issues with both of those items, and we want to do everything we can to preserve this quality in RiME.”

Dariuas concluded that a Denuvo-free version of RiME would be released if the game was cracked. Within days of the announcement and right on cue, pirates struck.

In a fanfare of celebrations, rising cracking star Baldman announced that he had defeated the latest v4+ iteration of Denuvo and dumped a cracked copy of RiME online. While encouraging people to buy what he describes as a “super nice” game, Baldman was less complimentary about Denuvo.

Labeling the anti-tamper technology a “huge abomination,” the cracker said that Denuvo’s creators had really upped their efforts this time out. People like Baldman who work on cracking Denuvo talk of the protection calling on code ‘triggers.’ For RiME, things were reportedly amped up to 11.

“In Rime that ugly creature went out of control – how do you like three fucking hundreds of THOUSANDS calls to ‘triggers’ during initial game launch and savegame loading? Did you wonder why game loading times are so long – here is the answer,” Baldman explained.

“In previous games like Sniper: Ghost Warrior 3, NieR Automata, Prey there were only about 1000 ‘triggers’ called, so we have x300 here.”

But according to the cracker, the 300,000 calls to triggers was a mere “warmup” for Denuvo. After just 30 minutes of gameplay, the count rose to two million, a figure he delivered with shocked expletives.

One of the main points of criticism for protections like Denuvo is that they take a toll on both game performance and gaming hardware. Baldman, who speaks English as a second language, reports that in RiME things have got massively out of hand, which negatively affects the game.

“Protection now calls about 10-30 triggers every second during actual gameplay, slowing game down. In previous games like Sniper: Ghost Warrior 3, NieR Automata, Prey there were only about 1-2 ‘triggers’ called every several minutes during gameplay, so do the math.”

Only making matters worse, the cracker says, is the fact the triggers are heavily obfuscated under a virtual machine, which further affects performance. However, thanks to RiME’s developers making good on their word, any protection-related problems will soon be a thing of the past.

“Today, we got word that there was a crack which would bypass Denuvo,” Dariuas wrote last night.

“Upon receiving this news, we worked to test this and verify that it was, in fact, the case. We have now confirmed that it is. As such, we at [publisher] Team Grey Box are following through on our promise from earlier this week that we will be replacing the current build of RiME with one that does not contain Denuvo.”

So while gamers wait for Denuvo to get stripped from RiME and pirates celebrate, the company behind the anti-piracy technology will be considering its options. If what Baldman claims is true, it sounds like more than just a little desperation is in the air.

Worryingly for Denuvo, not even throwing the kitchen sink at the problem has had much effect.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Torrent Sites See Traffic Boost After ExtraTorrent Shutdown

Post Syndicated from Ernesto original https://torrentfreak.com/torrent-sites-see-traffic-boost-after-extratorrent-shutdown-170528/

When ExtraTorrent shut down last week, millions of people were left without their favorite spot to snatch torrents.

This meant that after the demise of KickassTorrents and Torrentz last summer, another major exodus commenced.

The search for alternative torrent sites is nicely illustrated by Google Trends. Immediately after ExtraTorrent shut down, worldwide searches for “torrent sites” shot through the roof, as seen below.

“Torrent sites” searches (30 days)

As is often the case, most users spread across sites that are already well-known to the file-sharing public.

TorrentFreak spoke to several people connected to top torrent sites who all confirmed that they had witnessed a significant visitor boost over the past week and a half. As the largest torrent site around, many see The Pirate Bay as the prime alternative.

And indeed, a TPB staffer confirms that they have seen a big wave of new visitors coming in, to the extent that it was causing “gateway errors,” making the site temporarily unreachable.

Thus far the new visitors remain rather passive though. The Pirate Bay hasn’t seen a large uptick in registrations and participation in the forum remains normal as well.

“Registrations haven’t suddenly increased or anything like that, and visitor numbers to the forum are about the same as usual,” TPB staff member Spud17 informs TorrentFreak.

Another popular torrent site, which prefers not to be named, reported a surge in traffic too. For a few days in a row, this site handled 100,000 extra unique visitors. A serious number, but the operator estimates that he only received about ten percent of ET’s total traffic.

More than 40% of these new visitors come from India, where ExtraTorrent was relatively popular. The site operator further notes that about two-thirds have an adblocker, adding that this makes the new traffic pretty much useless for those who are looking to make money.

That brings us to the last category of site owners, the opportunist copycats, who are actively trying to pull estranged ExtraTorrent visitors on board.

Earlier this week we wrote about the attempts of ExtraTorrent.cd, which falsely claims to have a copy of the ET database, to lure users. In reality, however, it’s nothing more than a Pirate Bay mirror with an ExtraTorrent skin.

And then there are the copycats over at ExtraTorrent.ag. These are the same people who successfully hijacked the EZTV and YIFY/YTS brands earlier. With ExtraTorrent.ag they now hope to expand their portfolio.

Over the past few days, we received several emails from other ExtraTorrent “copies”, all trying to get a piece of the action. Not unexpected, but pretty bold, particularly considering the fact that ExtraTorrent operator SaM specifically warned people not to fall for these fakes and clones.

With millions of people moving to new sites, it’s safe to say that the torrent ‘community’ is in turmoil once again, trying to find a new status quo. But this probably won’t last for very long.

While some of the die-hard ExtraTorrent fans will continue to mourn the loss of their home, history has told us that, in general, the torrent community is quick to adapt. Until the next site goes down…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

DevOps Cafe Episode 71 – Courtney Kissler

Post Syndicated from DevOpsCafeAdmin original http://devopscafe.org/show/2017/5/25/devops-cafe-episode-71-courtney-kissler.html

Ordering Up Some Transformation

John and Damon pick Courtney Kissler’s brain on the techniques that enable her to be a hands-on technology leader with a track record for getting teams to find and fix what is getting in the way. 


Direct download

Follow John Willis on Twitter: @botchagalupe
Follow Damon Edwards on Twitter: @damonedwards 
Follow Courtney Kissler on Twitter: @ladyhock

Notes:

 

Please tweet or leave comments or questions below and we’ll read them on the show!


No, ExtraTorrent Has Not Been Resurrected

Post Syndicated from Ernesto original https://torrentfreak.com/no-extratorrent-has-not-been-resurected-170524/

Last week the torrent community entered a state of shock when another major torrent site closed its doors.

Having served torrents to the masses for over a decade, ExtraTorrent decided to throw in the towel, without providing any detail or an apparent motive.

The only strong message sent out by ExtraTorrent’s operator was to “stay away from fake ExtraTorrent websites and clones.”

Fast forward a few days and the first copycats have indeed appeared online. While this was expected, it’s always disappointing to see “news” sites, including the likes of Forbes and The Inquirer, giving them exposure without doing thorough research.

“We are a group of uploaders and admins from ExtraTorrent. As you know, SAM from ExtraTorrent pulled the plug yesterday and took all data offline under pressure from authorities. We were in deep shock and have been working hard to get it back online with all previous data,” the email, sent out to several news outlets read.

What followed was a flurry of ‘ExtraTorrent is back’ articles and thanks to those, a lot of people now think that Extratorrent.cd is a true resurrection operated by the site’s former staffers and fans.

However, aside from its appearance, the site has absolutely nothing to do with ET.

The site is an imposter operated by the same people who also launched Kickass.cd when KAT went offline last summer. In fact, the content on both sites doesn’t come from the defunct sites they try to replace, but from The Pirate Bay.

Yes indeed, ExtraTorrent.cd is nothing more than a Pirate Bay mirror with an ExtraTorrent skin.

There are several signs clearly showing that the torrents come from The Pirate Bay. Easiest to spot, perhaps, is a comparison of search results, which are identical on both sites.

Chaparall search on ExtraTorrent.cd

The ExtraTorrent “resurrection” even lists TPB’s oldest active torrent from March 2004, which was apparently uploaded long before the original ExtraTorrent was launched.

Chaparall search on TPB

TorrentFreak is in touch with proper ex-staffers of ExtraTorrent who agree that the site is indeed a copycat. Some ex-staffers are considering the launch of a new ET version, just like the KAT admins did in the past, but if that happens, it will take a lot more time.

“At the moment we are all figuring out how to go about getting it back up and running in a proper fashion, but as you can imagine there a lot of obstacles and arguments, lol,” ex-ET admin Soup informed us.

So, for now, there is no real resurrection. ExtraTorrent.cd sells itself as much more than it is, as it did with Kickass.cd. While the site doesn’t have any malicious intent, aside from luring old ET members under false pretenses, people have the right to know what it really is.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Join Us at the 10th Annual Hadoop Summit / DataWorks Summit, San Jose (Jun 13-15)

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/160966148886


We’re excited to co-host the 10th Annual Hadoop Summit, the leading conference for the Apache Hadoop community, taking place on June 13 – 15 at the San Jose Convention Center. In the last few years, the Hadoop Summit has expanded to cover all things data beyond just Apache Hadoop – such as data science, cloud and operations, IoT and applications – and has been aptly renamed the DataWorks Summit. The three-day program is bursting at the seams! Here are just a few of the reasons why you cannot miss this must-attend event:

  • Familiarize yourself with the cutting edge in Apache project developments from the committers
  • Learn from your peers and industry experts about innovative and real-world use cases, development and administration tips and tricks, success stories and best practices to leverage all your data – on-premise and in the cloud – to drive predictive analytics, distributed deep-learning and artificial intelligence initiatives
  • Attend one of our more than 170 technical deep dive breakout sessions from nearly 200 speakers across eight tracks
  • Check out our keynotes, meetups, trainings, technical crash courses, birds-of-a-feather sessions, Women in Big Data and more
  • Attend the community showcase where you can network with sponsors and industry experts, including a host of startups and large companies like Microsoft, IBM, Oracle, HP, Dell EMC and Teradata

Similar to previous years, we look forward to continuing Yahoo’s decade-long tradition of thought leadership at this year’s summit. Join us for an in-depth look at Yahoo’s Hadoop culture and for the latest in technologies such as Apache Tez, HBase, Hive, Data Highway Rainbow, Mail Data Warehouse and Distributed Deep Learning at the breakout sessions below. Or, stop by Yahoo kiosk #700 at the community showcase.

Also, as a co-host of the event, Yahoo is pleased to offer a 20% discount for the summit with the code MSPO20. Register here for Hadoop Summit, San Jose, California!


DAY 1. TUESDAY June 13, 2017


12:20 – 1:00 P.M. TensorFlowOnSpark – Scalable TensorFlow Learning On Spark Clusters

Andy Feng – VP Architecture, Big Data and Machine Learning

Lee Yang – Sr. Principal Engineer

In this talk, we will introduce a new framework, TensorFlowOnSpark, for scalable TensorFlow learning, that was open sourced in Q1 2017. This new framework enables easy experimentation for algorithm designs, and supports scalable training & inferencing on Spark clusters. It supports all TensorFlow functionalities including synchronous & asynchronous learning, model & data parallelism, and TensorBoard. It provides architectural flexibility for data ingestion to TensorFlow and network protocols for server-to-server communication. With a few lines of code changes, an existing TensorFlow algorithm can be transformed into a scalable application.

2:10 – 2:50 P.M. Handling Kernel Upgrades at Scale – The Dirty Cow Story

Samy Gawande – Sr. Operations Engineer

Savitha Ravikrishnan – Site Reliability Engineer

Apache Hadoop at Yahoo is a massive platform with 36 different clusters spread across YARN, Apache HBase, and Apache Storm deployments, totaling 60,000 servers made up of 100s of different hardware configurations accumulated over generations, presenting unique operational challenges and a variety of unforeseen corner cases. In this talk, we will share methods, tips and tricks to deal with large scale kernel upgrade on heterogeneous platforms within tight timeframes with 100% uptime and no service or data loss through the Dirty COW use case (privilege escalation vulnerability found in the Linux Kernel in late 2016).

5:00 – 5:40 P.M. Data Highway Rainbow –  Petabyte Scale Event Collection, Transport, and Delivery at Yahoo

Nilam Sharma – Sr. Software Engineer

Huibing Yin – Sr. Software Engineer

This talk presents the architecture and features of Data Highway Rainbow, Yahoo’s hosted multi-tenant infrastructure which offers event collection, transport and aggregated delivery as a service. Data Highway supports collection from multiple data centers & aggregated delivery in primary Yahoo data centers which provide a big data computing cluster. From a delivery perspective, Data Highway supports endpoints/sinks such as HDFS, Storm and Kafka; with Storm & Kafka endpoints tailored towards latency sensitive consumers.


DAY 2. WEDNESDAY June 14, 2017


9:05 – 9:15 A.M. Yahoo General Session – Shaping Data Platform for Lasting Value

Sumeet Singh  – Sr. Director, Products

With a long history of open innovation with Hadoop, Yahoo continues to invest in and expand the platform capabilities by pushing the boundaries of what the platform can accomplish for the entire organization. In the last 11 years (yes, it is that old!), the Hadoop platform has shown no signs of giving up or giving in. In this talk, we explore what makes the shared multi-tenant Hadoop platform so special at Yahoo.

12:20 – 1:00 P.M. CaffeOnSpark Update – Recent Enhancements and Use Cases

Mridul Jain – Sr. Principal Engineer

Jun Shi – Principal Engineer

By combining salient features from the deep learning framework Caffe and the big-data frameworks Apache Spark and Apache Hadoop, CaffeOnSpark enables distributed deep learning on a cluster of GPU and CPU servers. We released CaffeOnSpark as an open source project in early 2016, and shared its architecture design and basic usage at Hadoop Summit 2016. In this talk, we will update audiences about the recent development of CaffeOnSpark. We will highlight new features and capabilities: a unified data layer which supports multi-label datasets, distributed LSTM training, interleaved testing with training, a monitoring/profiling framework, and Docker deployment.

12:20 – 1:00 P.M. Tez Shuffle Handler – Shuffling at Scale with Apache Hadoop

Jon Eagles – Principal Engineer  

Kuhu Shukla – Software Engineer

In this talk we introduce a new Shuffle Handler for Tez, a YARN Auxiliary Service, that addresses the shortcomings and performance bottlenecks of the legacy MapReduce Shuffle Handler, the default shuffle service in Apache Tez. The Apache Tez Shuffle Handler adds composite fetch which has support for multi-partition fetch to mitigate performance slow down and provides deletion APIs to reduce disk usage for long running Tez sessions. As an emerging technology we will outline future roadmap for the Apache Tez Shuffle Handler and provide performance evaluation results from real world jobs at scale.

2:10 – 2:50 P.M. Achieving HBase Multi-Tenancy with RegionServer Groups and Favored Nodes

Thiruvel Thirumoolan – Principal Engineer

Francis Liu – Sr. Principal Engineer

At Yahoo! HBase has been running as a hosted multi-tenant service since 2013. In a single HBase cluster we have around 30 tenants running various types of workloads (ie batch, near real-time, ad-hoc, etc). We will walk through multi-tenancy features explaining our motivation, how they work as well as our experiences running these multi-tenant clusters. These features will be available in Apache HBase 2.0.

2:10 – 2:50 P.M. Data Driving Yahoo Mail Growth and Evolution with a 50 PB Hadoop Warehouse

Nick Huang – Director, Data Engineering, Yahoo Mail  

Saurabh Dixit – Sr. Principal Engineer, Yahoo Mail

Since 2014, the Yahoo Mail Data Engineering team took on the task of revamping the Mail data warehouse and analytics infrastructure in order to drive the continued growth and evolution of Yahoo Mail. Along the way we have built a 50 PB Hadoop warehouse, and surrounding analytics and machine learning programs that have transformed the way data plays in Yahoo Mail. In this session we will share our experience from this 3 year journey, from the system architecture, analytics systems built, to the learnings from development and drive for adoption.

DAY 3. THURSDAY June 15, 2017


2:10 – 2:50 P.M. OracleStore – A Highly Performant RawStore Implementation for Hive Metastore

Chris Drome – Sr. Principal Engineer  

Jin Sun – Principal Engineer

Today, Yahoo uses Hive in many different spaces, from ETL pipelines to adhoc user queries. Increasingly, we are investigating the practicality of applying Hive to real-time queries, such as those generated by interactive BI reporting systems. In order for Hive to succeed in this space, it must be performant in all aspects of query execution, from query compilation to job execution. One such component is the interaction with the underlying database at the core of the Metastore. As an alternative to ObjectStore, we created OracleStore as a proof-of-concept. Freed of the restrictions imposed by DataNucleus, we were able to design a more performant database schema that better met our needs. Then, we implemented OracleStore with specific goals built-in from the start, such as ensuring the deduplication of data. In this talk we will discuss the details behind OracleStore and the gains that were realized with this alternative implementation. These include a reduction of 97%+ in the storage footprint of multiple tables, as well as query performance that is 13x faster than ObjectStore with DirectSQL and 46x faster than ObjectStore without DirectSQL.

3:00 P.M. – 3:40 P.M. Bullet – A Real Time Data Query Engine

Akshai Sarma – Sr. Software Engineer

Michael Natkovich – Director, Engineering

Bullet is an open sourced, lightweight, pluggable querying system for streaming data without a persistence layer implemented on top of Storm. It allows you to filter, project, and aggregate on data in transit. It includes a UI and WS. Instead of running queries on a finite set of data that arrived and was persisted or running a static query defined at the startup of the stream, our queries can be executed against an arbitrary set of data arriving after the query is submitted. In other words, it is a look-forward system. Bullet is a multi-tenant system that scales independently of the data consumed and the number of simultaneous queries. Bullet is pluggable into any streaming data source. It can be configured to read from systems such as Storm, Kafka, Spark, Flume, etc. Bullet leverages Sketches to perform its aggregate operations such as distinct, count distinct, sum, count, min, max, and average.

3:00 P.M. – 3:40 P.M. Yahoo – Moving Beyond Running 100% of Apache Pig Jobs on Apache Tez

Rohini Palaniswamy – Sr. Principal Engineer

Last year at Yahoo, we spent great effort in scaling, stabilizing and making Pig on Tez production ready and by the end of the year retired running Pig jobs on Mapreduce. This talk will detail the performance and resource utilization improvements Yahoo achieved after migrating all Pig jobs to run on Tez. After successful migration and the improved performance we shifted our focus to addressing some of the bottlenecks we identified and new optimization ideas that we came up with to make it go even faster. We will go over the new features and work done in Tez to make that happen like custom YARN ShuffleHandler, reworking DAG scheduling order, serialization changes, etc. We will also cover exciting new features that were added to Pig for performance such as bloom join and byte code generation.

4:10 P.M. – 4:50 P.M. Leveraging Docker for Hadoop Build Automation and Big Data Stack Provisioning

Evans Ye,  Software Engineer

Apache Bigtop, as an open source Hadoop distribution, focuses on developing packaging, testing and deployment solutions that help infrastructure engineers build their own customized big data platform as easily as possible. However, packages deployed in production require a solid CI testing framework to ensure their quality, and the many Hadoop components must be verified to work together as well. In this presentation, we’ll talk about how Bigtop delivers its containerized CI framework, which can be directly replicated by Bigtop users. At the core are the newly developed Docker Provisioner, which leverages Docker for Hadoop deployment, and the Docker Sandbox, which lets developers quickly start a big data stack. The content of this talk includes the containerized CI framework, technical details of the Docker Provisioner and Docker Sandbox, the hierarchy of Docker images we designed, and several components we developed, such as the Bigtop Toolchain, to achieve build automation.

Register here for Hadoop Summit, San Jose, California with a 20% discount code MSPO20

Questions? Feel free to reach out to us at [email protected] Hope to see you there!

Copyright Troll Attorney John Steele Disbarred by Illinois Supreme Court

Post Syndicated from Ernesto original https://torrentfreak.com/copyright-troll-attorney-john-steele-disbarred-by-illinois-supreme-court-170522/

Over the years, copyright trolls have been accused of involvement in various dubious schemes and actions, but there’s one group that has gone above and beyond.

Prenda Law grabbed dozens of headlines, mostly surrounding negative court rulings over identity theft, misrepresentation and even deception.

Most controversial was the shocking revelation that Prenda uploaded their own torrents to The Pirate Bay, creating a honeypot for the people they later sued over pirated downloads.

The allegations also raised the interest of the US Department of Justice, which indicted Prenda principals John Steele and Paul Hansmeier late last year. The two stand accused of running a multi-million dollar fraud and extortion operation.

A few weeks ago Steele pleaded guilty, admitting among other things that they did indeed use The Pirate Bay to operate a honeypot for online pirates.

Following the guilty plea the Illinois Supreme Court, which started looking into the case long before the indictment, has now decided to disbar the attorney. This means that Steele no longer has the right to practice law.

The decision doesn’t really come as a surprise. Steele has admitted to two of the 18 counts listed in the indictment, including some of the allegations that were also listed by the Supreme Court.

In its conclusion, the Court lists a variety of misconduct including “conduct involving dishonesty, fraud, deceit, or misrepresentation, by conduct including filing lawsuits without supporting facts, under the names of entities like Ingenuity 13 and AF Holdings, which were created by Movant for purposes of exacting settlements.”

Also, Steele’s trolling operation was “using means that had no substantial purpose other than to embarrass or burden a third person, or using methods of obtaining evidence that violates the legal rights of such a person…,” the Supreme Court writes.

Steele was disbarred “on consent,” according to Cook County Record, which means that he agreed to have his Illinois law practice license revoked.

The disbarment is not unexpected considering Steele’s guilty plea. However, victims of the Prenda trolling scheme may still welcome it as a form of justice. Meanwhile, Steele has bigger problems to worry about.

The former Prenda attorney is still awaiting his sentencing in the criminal case. In theory, he faces a statutory maximum sentence of 40 years in prison as well as a criminal fine of hundreds of thousands of dollars. However, by signing a plea agreement, he likely gets a reduced sentence.

The Illinois Supreme Court conclusions are available here (pdf), courtesy of Fight Copyright Trolls.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

ExtraTorrent’s Distribution Groups ettv and EtHD Keep Going

Post Syndicated from Ernesto original https://torrentfreak.com/extratorrents-distribution-groups-ettv-and-ethd-keep-going-170519/

This week the torrent community entered a state of shock when another major torrent site closed its doors.

Having served torrents to the masses for over a decade, ExtraTorrent decided to throw in the towel, without providing any detail or an apparent motive.

ExtraTorrent operator SaM simply informed us that “it’s time we say goodbye.”

Now that a few days have passed the dust is slowly beginning to settle. Frequent ExtraTorrent users have started to flock to alternatives such as The Pirate Bay, Torrentz2 and RARBG, which have all noticed a clear uptick in users.

What has also become clear is that ExtraTorrent hasn't disappeared without leaving its mark. The site was home to several prominent uploaders and groups, and some feared that these would go down with it. However, it looks like that won't be the case for them all.

On Thursday, shortly after the site was closed, ExtraTorrent operator SaM said that the movie torrent distribution group ETRG would disappear, but that there was hope for others.

“Ettv and Ethd could remain operational if they get enough donations to sustain the expenses and if the people handling it [are] ready to keep going,” SaM said.

Indeed, both TV groups are keeping the ET spirit alive as dozens of fresh torrents have appeared over the past few days. While they’re no longer on ExtraTorrent, the accounts on The Pirate Bay remain very active, as can be seen below.

ettv’s recent releases

Another well-known uploader, DDR, will continue to release torrents as well. TorrentFreak was informed that the uploader will use the ‘SaM’ accounts at The Pirate Bay and 1337x to continue his work.

And ExtraTorrent’s name lives on elsewhere too. The image hosting site Extraimage, which was regularly used by torrent uploaders to feature samples, is still up and running as well.

There is another major casualty of the ExtraTorrent closure though. TorrentFreak is informed that ET's in-house encoder FUM, known for regular high-quality TV releases, will stop.

Over the coming weeks we will see what the real impact of the surprise shutdown will be. A community was destroyed this week, and many uploaders lost their home, but as we've seen with KickassTorrents, Torrentz, and other sites before them, the torrent ecosystem isn't easily disrupted.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Weaponising a teddy bear

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/weaponising-teddy-bear/

At primary school, I loved my Tamagotchi: it moved, it beeped, it was almost like I could talk to it! Nowadays, kids can actually have conversations with their toys, and some toys are IoT devices, capable of accessing online services or of interacting with people via the Internet. And so to one of this week’s news stories: using a Raspberry Pi, an eleven-year-old has demonstrated how to weaponise a teddy bear. This has garnered lots of attention, because he did it at a cybersecurity conference in The Hague, and he used the Bluetooth devices of the assembled experts to do it.

AFP news agency on Twitter

Eleven-year-old “cyber ninja” stuns security experts by hacking into their bluetooth devices to manipulate teddy bear #InternetofThings https://t.co/bx9kTbNUcT

Reuben Paul, from Texas, used a Raspberry Pi together with his laptop to download the numbers of audience members’ smartphones. He then proceeded to use a Python program to manipulate his bear, Bob, using one of the numbers he’d accessed, making him blink one of his lights and record an audio message from the audience.

Reuben has quite a bit of digital making experience, and he’s very concerned about the safety risks of IoT devices. “IoT home appliances, things that can be used in our everyday lives, our cars, lights, refrigerators, everything like this that is connected can be used and weaponised to spy on us or harm us,” he told AFP.

Apparently even his father, software security expert Mano Paul, was unaware of just how unsafe IoT toys can be until Reuben “shocked” him by hacking a toy car.

Reuben is using his computer skills for good: he has already founded an organisation to educate children and adults about cybersecurity. Considering that he is also the youngest Shaolin Kung Fu black belt in the US and reportedly has excellent gymnastics skills, I’m getting serious superhero vibes from this kid!


And to think that the toys that were around when I was Reuben’s age could be used for nothing more devious than distracting me from class…

The post Weaponising a teddy bear appeared first on Raspberry Pi.

Build a Visualization and Monitoring Dashboard for IoT Data with Amazon Kinesis Analytics and Amazon QuickSight

Post Syndicated from Karan Desai original https://aws.amazon.com/blogs/big-data/build-a-visualization-and-monitoring-dashboard-for-iot-data-with-amazon-kinesis-analytics-and-amazon-quicksight/

Customers across the world are increasingly building innovative Internet of Things (IoT) workloads on AWS. With AWS, they can handle the constant stream of data coming from millions of new, internet-connected devices. This data can be a valuable source of information if it can be processed, analyzed, and visualized quickly in a scalable, cost-efficient manner. Engineers and developers can monitor performance and troubleshoot issues, while sales and marketing teams can track usage patterns and statistics on which to base business decisions.

In this post, I demonstrate a sample solution to build a quick and easy monitoring and visualization dashboard for your IoT data using AWS serverless and managed services. There’s no need for purchasing any additional software or hardware. If you are already using AWS IoT, you can build this dashboard to tap into your existing device data. If you are new to AWS IoT, you can be up and running in minutes using sample data. Later, you can customize it to your needs, as your business grows to millions of devices and messages.

Architecture

The following is a high-level architecture diagram showing the serverless setup to configure.

 

AWS service overview

AWS IoT is a managed cloud platform that lets connected devices interact easily and securely with cloud applications and other devices. AWS IoT can process and route billions of messages to AWS endpoints and to other devices reliably and securely.

Amazon Kinesis Firehose is the easiest way to capture, transform, and load streaming data continuously into AWS from thousands of data sources, such as IoT devices. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.

Amazon Kinesis Analytics allows you to process streaming data coming from IoT devices in real time with standard SQL, without having to learn new programming languages or processing frameworks, providing actionable insights promptly.

The processed data is fed into Amazon QuickSight, which is a fast, cloud-powered business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from the data.

The most popular way for Internet-connected devices to send data is using MQTT messages. The AWS IoT gateway receives these messages from registered IoT devices. The solution in this post uses device data from AWS Simple Beer Service (SBS), a series of internet-connected kegerators sending sensor outputs such as temperature, humidity, and sound levels in a JSON payload. You can use any existing IoT data source that you may have.

The AWS IoT rules engine allows selecting data from message payloads, processing it, and sending it to other services. You forward the data to a Firehose delivery stream to consolidate the continuous data stream into batches for further processing. The batched data is also stored temporarily in an Amazon S3 bucket for later retrieval and can be set for deletion after a specified time using S3 Lifecycle Management rules.
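As an aside, the lifecycle rule mentioned above can also be applied programmatically. The following is a minimal boto3 sketch, not part of the original walkthrough; the bucket name, prefix, and seven-day retention period are illustrative assumptions.

import boto3

s3 = boto3.client("s3")

# Expire the temporarily stored raw batches after a week (assumed values throughout).
s3.put_bucket_lifecycle_configuration(
    Bucket="your-unique-name-kinesis",            # assumed bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-raw-iot-batches",
                "Filter": {"Prefix": "source/"},  # prefix used for the raw data later in this post
                "Status": "Enabled",
                "Expiration": {"Days": 7},
            }
        ]
    },
)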

The incoming data from the Firehose delivery stream is fed into an Analytics application that provides an easy way to process the data in real time using standard SQL queries. Analytics allows writing standard SQL queries to extract specific components from the incoming data stream and perform real-time ETL on it. In this post, you use this feature to aggregate minimum and maximum temperature values from the sensors per minute. You load it in Amazon QuickSight to create a monitoring dashboard and check if the devices are over-heating or cooling down during use. You also extract every device’s location, parameters such as temperature, sound levels, humidity, and the time stamp in Analytics to use on the visualization dashboard.

The processed data from the two queries is fed into two Firehose delivery streams, both of which batch the data into CSV files every minute and store it in S3. The batching time interval is configurable between 1 and 15 minutes in 1-second intervals.

Finally, you use Amazon QuickSight to ingest the processed CSV files from S3 as a data source to build visualizations. Amazon QuickSight’s super-fast, parallel, in-memory calculation engine (SPICE) parses the ingested data and allows you to create a variety of visualizations with different graph types. You can also use the Amazon QuickSight built-in Story feature to combine visualizations into business dashboards that can be shared in a secure manner.

Implementation

AWS IoT, Amazon Kinesis, and Amazon QuickSight are all fully managed services, which means you can complete the entire setup in just a few steps using the AWS Management Console, with no underlying hardware to set up and no additional software to install. Let's get started.

Step 1. Set up your AWS IoT data source

Do you currently use AWS IoT? If you have an existing IoT thing set up and running on AWS IoT, you can skip to Step 2.

If you have an AWS IoT button or other IoT devices that can publish MQTT messages and would like to use that for the setup, follow the Getting Started with AWS IoT topic to connect your thing to AWS IoT. Continue to Step 2.

If you do not have an existing IoT device, you can generate simulated device data using a script on your local machine and have it publish to AWS IoT. The following script lets you set up your AWS IoT environment and publish simulated data that mimics device data from Simple Beer Service.

Generate sample Data

Running the sbs.py Python script generates fictitious AWS IoT messages from multiple SBS devices. The IoT rule sends the message to Firehose for further processing.

The script requires AWS credentials (as configured for the AWS CLI) and the boto3 library to be installed on the machine running it. Download and run the following Python script:

https://github.com/awslabs/sbs-iot-data-generator/blob/master/sbs.py

The script generates random data that looks like the following:

{"deviceParameter": "Temperature", "deviceValue": 33, "deviceId": "SBS01", "dateTime": "2017-02-03 11:29:37"}
{"deviceParameter": "Sound", "deviceValue": 140, "deviceId": "SBS03", "dateTime": "2017-02-03 11:29:38"}
{"deviceParameter": "Humidity", "deviceValue": 63, "deviceId": "SBS01", "dateTime": "2017-02-03 11:29:39"}
{"deviceParameter": "Flow", "deviceValue": 80, "deviceId": "SBS04", "dateTime": "2017-02-03 11:29:41"}

Run the script and keep it running for the duration of the project to generate sufficient data.

Tip: If you encounter any issues running the script from your local machine, launch an EC2 instance and run the script there as a root user. Remember to assign an appropriate IAM role to your instance at the time of launch that allows it to access AWS IoT.
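If you just want to see the moving parts rather than run the full script, the minimal boto3 sketch below publishes a single simulated SBS-style reading to AWS IoT. It is not the actual sbs.py; the topic, region, and payload values are assumptions chosen to match the sample output above and the topic filter used by the rule in Step 3.

import datetime
import json
import random

import boto3

# The iot-data client is the data plane used to publish MQTT messages through AWS IoT.
iot = boto3.client("iot-data", region_name="us-east-1")  # region is an assumption

reading = {
    "deviceParameter": "Temperature",
    "deviceValue": random.randint(20, 40),
    "deviceId": "SBS01",
    "dateTime": datetime.datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S"),
}

# The topic is chosen to match the /sbs/devicedata/# filter configured in Step 3.
iot.publish(
    topic="/sbs/devicedata/temperature",
    qos=1,
    payload=json.dumps(reading),
)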

Step 2. Create three Firehose delivery streams

For this post, you require three Firehose delivery streams:  one to batch raw data from AWS IoT, and two to batch output device data and aggregated data from Analytics.

  1. In the console, choose Firehose.
  2. Create all three Firehose delivery streams using the following field values.

Delivery stream 1:

Name: IoT-Source-Stream
S3 bucket: <your unique name>-kinesis
S3 prefix: source/

Delivery stream 2:

Name: IoT-Destination-Data-Stream
S3 bucket: <your unique name>-kinesis
S3 prefix: data/

Delivery stream 3:

Name: IoT-Destination-Aggregate-Stream
S3 bucket: <your unique name>-kinesis
S3 prefix: aggregate/
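If you prefer scripting this step, the three delivery streams above can also be created with boto3. The sketch below is a minimal equivalent; it assumes the S3 bucket already exists and that you have an IAM role Firehose can assume to write to it (the role ARN shown is a placeholder), and it sets one-minute buffering to match the batching behavior described later in this post.

import boto3

firehose = boto3.client("firehose")

BUCKET_ARN = "arn:aws:s3:::your-unique-name-kinesis"        # assumed bucket name
ROLE_ARN = "arn:aws:iam::123456789012:role/firehose-to-s3"  # placeholder delivery role

streams = {
    "IoT-Source-Stream": "source/",
    "IoT-Destination-Data-Stream": "data/",
    "IoT-Destination-Aggregate-Stream": "aggregate/",
}

for name, prefix in streams.items():
    firehose.create_delivery_stream(
        DeliveryStreamName=name,
        DeliveryStreamType="DirectPut",
        ExtendedS3DestinationConfiguration={
            "RoleARN": ROLE_ARN,
            "BucketARN": BUCKET_ARN,
            "Prefix": prefix,
            # Batch into S3 roughly every minute or every 5 MB, whichever comes first.
            "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 5},
        },
    )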

Step 3. Set up AWS IoT to receive and forward incoming data

  1. In the console, choose IoT.
  2. Create a new AWS IoT rule with the following field values.
Name: IoT_to_Firehose
Attribute: *
Topic Filter: /sbs/devicedata/#
Add Action: Send messages to an Amazon Kinesis Firehose stream (select IoT-Source-Stream from the dropdown)
Select Separator: “\n (newline)”
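For completeness, the same rule can be created with boto3's create_topic_rule. The sketch below mirrors the console settings above; the role ARN is a placeholder for a role that allows AWS IoT to call firehose:PutRecord on IoT-Source-Stream.

import boto3

iot = boto3.client("iot")

iot.create_topic_rule(
    ruleName="IoT_to_Firehose",
    topicRulePayload={
        # Select every attribute from messages under the /sbs/devicedata/ hierarchy.
        "sql": "SELECT * FROM '/sbs/devicedata/#'",
        "ruleDisabled": False,
        "actions": [
            {
                "firehose": {
                    "roleArn": "arn:aws:iam::123456789012:role/iot-to-firehose",  # placeholder
                    "deliveryStreamName": "IoT-Source-Stream",
                    "separator": "\n",  # newline separator, as selected in the console
                }
            }
        ],
    },
)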

A quick check before proceeding further: make sure that you have run the script to generate simulated IoT data or that your IoT Thing is running and delivering data. If not, set it up now. The Amazon Kinesis Analytics application you set up in the next step needs the data to process it further.

Step 4: Create an Analytics application to process data

  1. In the console, choose Kinesis.
  2. Create a new application.
  3. Enter a name of your choice, for example, SBS-IoT-Data.
  4. For the source, choose IoT-Source-Stream.

Analytics auto-discovers the schema on the data by sampling records from the input stream. It also includes an in-built SQL editor that allows you to write standard SQL queries to transform incoming data.

Tip: If Analytics is unable to discover your incoming data, it may be missing the appropriate IAM permissions. In the IAM console, select the role that you assigned to your IoT rule in Step 3 and make sure that it lists the ARN of the IoT-Source-Stream Firehose delivery stream in the firehose:PutRecord section.

Here is a sample SQL query that generates two output streams:

  • DESTINATION_SQL_BASIC_STREAM contains the device ID, device parameter, its value, and the time stamp from the incoming stream.
  • DESTINATION_SQL_AGGREGATE_STREAM aggregates the maximum and minimum values of temperatures from the sensors over a one-minute period from the incoming data.
-- Create an output stream with four columns, which is used to send IoT data to the destination
CREATE OR REPLACE STREAM "DESTINATION_SQL_BASIC_STREAM" (dateTime TIMESTAMP, deviceId VARCHAR(8), deviceParameter VARCHAR(16), deviceValue INTEGER);

-- Create a pump that continuously selects from the source stream and inserts it into the output data stream
CREATE OR REPLACE PUMP "STREAM_PUMP_1" AS INSERT INTO "DESTINATION_SQL_BASIC_STREAM"

-- Filter specific columns from the source stream
SELECT STREAM "dateTime", "deviceId", "deviceParameter", "deviceValue" FROM "SOURCE_SQL_STREAM_001";

-- Create a second output stream with three columns, which is used to send aggregated min/max data to the destination
CREATE OR REPLACE STREAM "DESTINATION_SQL_AGGREGATE_STREAM" (dateTime TIMESTAMP, highestTemp SMALLINT, lowestTemp SMALLINT);

-- Create a pump that continuously selects from a source stream 
CREATE OR REPLACE PUMP "STREAM_PUMP_2" AS INSERT INTO "DESTINATION_SQL_AGGREGATE_STREAM"

-- Extract time in minutes, plus the highest and lowest value of device temperature in that minute, into the destination aggregate stream, aggregated per minute
SELECT STREAM FLOOR("SOURCE_SQL_STREAM_001".ROWTIME TO MINUTE) AS "dateTime", MAX("deviceValue") AS "highestTemp", MIN("deviceValue") AS "lowestTemp" FROM "SOURCE_SQL_STREAM_001" WHERE "deviceParameter"='Temperature' GROUP BY FLOOR("SOURCE_SQL_STREAM_001".ROWTIME TO MINUTE);

Real-time analytics shows the results of the SQL query. If everything is working correctly, you see three streams listed, similar to the following screenshot.

Step 5: Connect the Analytics application to output Firehose delivery streams

You create two destinations for the two delivery streams that you created in the previous step. A single Analytics application can have multiple destinations defined; however, this needs to be set up using the AWS CLI, not from the console. If you do not already have it, install the AWS CLI on your local machine and configure it with your credentials.

Tip: If you are running the IoT script from an EC2 instance, it comes pre-installed with the AWS CLI.

Create the first destination delivery stream 

The AWS CLI command to create a new output Firehose delivery stream is as follows:

aws kinesisanalytics add-application-output --application-name <Name of Analytics Application> --current-application-version-id <number> --application-output 'Name=DESTINATION_SQL_BASIC_STREAM,KinesisFirehoseOutput={ResourceARN=<ARN of IoT-Destination-Data-Stream>,RoleARN=<Role ARN of Analytics application>},DestinationSchema={RecordFormatType=CSV}'

Do not copy this into the CLI just yet! Before entering this command, make the following four changes to personalize it:

  • For Name of Analytics Application, enter the value from Step 4, or from the Analytics console.
  • For current-application-version-ID, run the following command:
aws kinesisanalytics describe-application --application-name <application name from above> | grep ApplicationVersionId
  • For ResourceARN, run the following command:
aws firehose describe-delivery-stream --delivery-stream-name IoT-Destination-Data-Stream | grep DeliveryStreamARN
  • For RoleARN, run the following command:
aws kinesisanalytics describe-application --application-name <application name from above> | grep RoleARN

Now, paste the complete command in the AWS CLI and press Enter. If there are any errors, the response provides details. If everything goes well, a new destination delivery stream is created to send the first query (DESTINATION_SQL_BASIC_STREAM) to IoT-Destination-Data-Stream.

Create the second destination delivery stream

Following similar steps as above, create a second destination Firehose delivery stream with the following changes:

  • For Name of Analytics Application, enter the same name as the first delivery stream.
  • For current-application-version-ID, increment by 1 from the previous value (unless you made other changes in between these steps). If unsure, run the same command as above to get it again.
  • For ResourceARN, get the value by running the following CLI command:
aws firehose describe-delivery-stream --delivery-stream-name IoT-Destination-Aggregate-Stream | grep DeliveryStreamARN
  • For RoleARN, enter the same value as for the first stream.

Run the aws kinesisanalytics CLI command, similar to the previous step but with the new parameters substituted. This creates the second output Firehose destination delivery stream.
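If the placeholder substitution above feels error-prone, the boto3 sketch below performs both add-application-output calls and looks up the current application version and delivery stream ARNs for you. The application name matches Step 4; the role ARN is a placeholder for the value found with the describe-application command above.

import boto3

analytics = boto3.client("kinesisanalytics")
firehose = boto3.client("firehose")

APP_NAME = "SBS-IoT-Data"  # the Analytics application name from Step 4
ROLE_ARN = "arn:aws:iam::123456789012:role/analytics-role"  # placeholder: the RoleARN found above

outputs = {
    "DESTINATION_SQL_BASIC_STREAM": "IoT-Destination-Data-Stream",
    "DESTINATION_SQL_AGGREGATE_STREAM": "IoT-Destination-Aggregate-Stream",
}

for in_app_stream, delivery_stream in outputs.items():
    # Re-read the version id before each call; it increments after every change.
    version = analytics.describe_application(ApplicationName=APP_NAME)[
        "ApplicationDetail"]["ApplicationVersionId"]
    stream_arn = firehose.describe_delivery_stream(DeliveryStreamName=delivery_stream)[
        "DeliveryStreamDescription"]["DeliveryStreamARN"]

    analytics.add_application_output(
        ApplicationName=APP_NAME,
        CurrentApplicationVersionId=version,
        Output={
            "Name": in_app_stream,
            "KinesisFirehoseOutput": {"ResourceARN": stream_arn, "RoleARN": ROLE_ARN},
            "DestinationSchema": {"RecordFormatType": "CSV"},
        },
    )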

Update the IAM role for Analytics to allow writing to both output streams.

  1. In the console, choose IAM, Roles.
  2. Select the role that you created with Analytics in Step 4.
  3. Choose Policy, JSON, and Edit.
  4. Find “Sid”: “WriteOutputFirehose” in the JSON document, go to the “Resource” section and make sure that it includes Resource ARNs of both streams that you found in the previous step.
  5. If it has only one ARN, add the second ARN and choose Save.

This completes the Amazon Kinesis setup. The incoming IoT data is processed by Analytics and delivered, using two output delivery streams, to two separate folders in your S3 bucket.

Step 6: Set up Amazon QuickSight to analyze the data

To build the visualization dashboard, ingest the processed CSV files from the S3 bucket into Amazon QuickSight.

  1. In the console, choose QuickSight.
  2. If this is your first time using Amazon QuickSight, you are asked to create a new account. Follow the prompts.
  3. When you are logged in to your account, choose New Analysis and enter a name of your choice.
  4. Choose New data set for the analysis or, if you have previously imported your data set, select one from the available data sets.
  5. You import two data sets: one with general device parameters information, and the other with aggregates of maximum and minimum temperatures for monitoring. For the first data set, choose S3 from the list of available data sources and enter a name, for example, IoT Device Data.
  6. The location of the S3 bucket and the objects to use are provided to Amazon QuickSight as a manifest file. Create a new manifest file following the supported formats for Amazon S3 manifest files.
  7. In the URIPrefixes section, provide your appropriate S3 bucket and folder location for the general device data. Hint: it should include <your unique name>-kinesis/data/.

Your manifest file should look similar to the following:

{
    "fileLocations": [
        {"URIPrefixes": ["https://s3.amazonaws.com/<YOUR_BUCKET_NAME>/data/<YEAR>/<MONTH>/<DATE>/<HOUR>/"]}
    ],
    "globalUploadSettings": {
        "format": "CSV",
        "delimiter": ","
    }
}

Amazon QuickSight imports and parses the data set, and provides available data fields that can be used for making graphs. The Edit/Preview data button allows you to format and transform the data, change data types, and filter or join your data. Make sure that the columns have the correct titles. If not, you can edit them and then save.

Tip: choose the downward arrow on the top right and unselect Files include headers to give each column appropriate headers. Choose Save. This takes you back to the data sets page.

Follow the same steps as above to import the second data set. This time, your manifest should include your aggregate data set folder on S3, which is named <your unique name>-kinesis/aggregate/. Update headers if necessary and choose Save & visualize.
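Both imports above go through the console. If you script your environment instead, the QuickSight API can register the same manifest-backed S3 data source, assuming the API is available to your account and the manifest file has been uploaded to S3. A minimal sketch with placeholder account ID, data source ID, bucket, and key:

import boto3

quicksight = boto3.client("quicksight")

quicksight.create_data_source(
    AwsAccountId="123456789012",         # placeholder account ID
    DataSourceId="iot-device-data",      # placeholder; must be unique within the account
    Name="IoT Device Data",
    Type="S3",
    DataSourceParameters={
        "S3Parameters": {
            "ManifestFileLocation": {
                "Bucket": "your-unique-name-kinesis",          # assumed bucket
                "Key": "manifests/device-data-manifest.json",  # assumed manifest key
            }
        }
    },
)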

Build an analysis

The visualization screen shows the data set that you last imported, which in this case is the aggregate data. To include the general device data as well, for Fields on the top left, choose Edit analysis data sets. Choose Add data set and select the other data set that you saved earlier.

Now both data sets are available on the analysis screen. For Visual Types at bottom left, select the type of graph to make. For Fields, select the fields to visualize. For example, drag Device ID, Device Parameter, and Value to Field wells, as shown in the screenshot below, to generate a visualization of average parameter values compared across devices.

You can create another visual by choosing +Add. This time, select a line graph from the aggregate data set to monitor the maximum temperature recorded by the sensors in each minute.

If you would like to create an interactive story to present to your team or organization, you can choose the Story option on the left panel. Create a dashboard with multiple visualizations, to save and share securely with the intended audience. An example of a story is shown below.

Conclusion

Any data is valuable only when it can actually be put to use. In this post, you’ve seen how to quickly build a simple Analytics application to ingest, process, and visualize IoT data in near real time, entirely using AWS managed services. This solution is scalable and reliable, and it costs a fraction of what comparable business intelligence solutions do. It is easy enough that anyone with an AWS account can build and use it without any special training.

If you have any questions or suggestions, please comment below.


About the Author

Karan Desai is a Solutions Architect with Amazon Web Services. He works with startups and small businesses in the US, helping them adopt cloud technology to build scalable and secure solutions using AWS. In his spare time, he likes to build personal IoT projects, travel to offbeat places and write about it.

 

 

