
TheFatRat – Massive Exploitation Tool

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/A3ozOmH4BDE/

TheFatRat is an easy-to-use exploitation tool that can help you generate backdoors and carry out post-exploitation attacks such as browser attacks and DLL files. This tool compiles malware with popular payloads, and the compiled malware can then be executed on Windows, Linux, Mac OS X and Android. The malware that is created with this tool also has […]


Read the full post at darknet.org.uk

The Secret Code of Beatrix Potter

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/the_secret_code.html

Interesting:

As codes go, Potter’s wasn’t inordinately complicated. As Wiltshire explains, it was a “mono-alphabetic substitution cipher code,” in which each letter of the alphabet was replaced by a symbol — the kind of thing they teach you in Cub Scouts. The real trouble was Potter’s own fluency with it. She quickly learned to write the code so fast that each sheet looked, even to Linder’s trained eye, like a maze of scribbles.

Banning VPNs and Proxies is Dangerous, IT Experts Warn

Post Syndicated from Andy original https://torrentfreak.com/banning-vpns-and-proxies-is-dangerous-it-experts-warn-170623/

In April, draft legislation was developed to crack down on systems and software that allow Russian Internet users to bypass website blockades approved by telecoms watchdog Roskomnadzor.

Earlier this month the draft bill was submitted to the State Duma, the lower house of the Russian parliament. If passed, the law will make it illegal for services to circumvent web blockades by “routing traffic of Russian Internet users through foreign servers, anonymous proxy servers, virtual private networks and other means.”

As the plans currently stand, anonymization services that fail to restrict access to sites listed by telecoms watchdog Roskomnadzor face being blocked themselves. Sites offering circumvention software for download also face potential blacklisting.

This week the State Duma discussed the proposals with experts from the local Internet industry. In addition to the head of Roskomnadzor, representatives from service providers, search engines and even anonymization services were in attendance. Novaya Gazeta has published comments (Russian) from some of the key people at the meeting and it’s fair to say there’s not a lot of support.

VimpelCom, the sixth largest mobile network operator in the world with more than 240 million subscribers, sent along its Director for Relations with Government, Sergey Malyanov. He wondered where all this blocking would end up.

“First we banned certain information. Then this information was blocked with the responsibility placed on both owners of resources and services. Now there are blocks on top of blocks – so we already have a triple effort,” he said.

“It is now possible that there will be a fourth iteration: the block on the block to block those that were not blocked. And with that, we have significantly complicated the law and the activities of all the people affected by it.”

Malyanov said that these kinds of actions have the potential to close down the entire Internet by ruining what was once an open network running standard protocols. But amid all of this, will it even be effective?

“The question is not even about the losses that will be incurred by network operators, the owners of the resources and the search engines. The question is whether this bill addresses the goal its creators have set for themselves. In my opinion, it will not.”

Group-IB, one of the world’s leading cyber-security and threat intelligence providers, was represented by CEO Ilya Sachkov. He told parliament that “ordinary respectable people” who use the Internet should always use a VPN for security. Nevertheless, he also believes that such services should be forced to filter sites deemed illegal by the state.

Speaking about blocks in general, however, he warned that people who want to circumvent them will always be one step ahead.

“We have to understand that by the time the law is adopted the perpetrators will already find it very easy to circumvent,” he said.

Mobile operator giant MTS, which turns over billions of dollars and employs 50,000+ people, had its Vice-President of Corporate and Legal Affairs in attendance. Ruslan Ibragimov said that in dealing with a problem, the government should be careful not to cause more problems, including disruption of the growing VPN market.

“We have an understanding that evil must be fought, but it’s not necessary to create a new evil, even more so – for those who are involved in this struggle,” he said.

“Broad wording of this law may pose a threat to our network, which could be affected by the new restrictive measures, as well as the VPN market, which we are currently developing, and whose potential market is estimated at 50 billion rubles a year.”

In its goal to maintain control of the Internet, it’s clear that Russia is determined to press ahead with legislative change. Unfortunately, it’s far from clear that there’s a technical solution to the problem, but if one is pursued regardless, there could be serious fallout.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

[$] ProofMode: a camera app for verifiable photography

Post Syndicated from corbet original https://lwn.net/Articles/726142/rss

The default apps on a mobile platform like Android are familiar targets for replacement, especially for developers concerned about security. But while messaging and voice apps (which can be replaced by Signal and Ostel, for instance) may be the best known examples, the non-profit Guardian Project has taken up the cause of improving the security features of the camera app. Its latest such project is ProofMode, an app to let users take photos and videos that can be verified as authentic by third parties.

Kotlin and Groovy JVM Languages with AWS Lambda

Post Syndicated from Juan Villa original https://aws.amazon.com/blogs/compute/kotlin-and-groovy-jvm-languages-with-aws-lambda/


Juan Villa – Partner Solutions Architect

 

When most people hear “Java” they think of Java the programming language. Java is a lot more than a programming language; it also implies a larger ecosystem including the Java Virtual Machine (JVM). Java, the programming language, is just one of the many languages that can be compiled to run on the JVM. Some of the most popular JVM languages, other than Java, are Clojure, Groovy, Scala, Kotlin, JRuby, and Jython (see this link for a list of more JVM languages).

Did you know that you can compile and subsequently run all these languages on AWS Lambda?

AWS Lambda supports the Java 8 runtime, but this does not mean you are limited to the Java language. The Java 8 runtime is capable of running JVM languages such as Kotlin and Groovy once they have been compiled and packaged as a “fat” JAR (a JAR file containing all necessary dependencies and classes bundled in).

In this blog post, we’ll work through building AWS Lambda functions in both the Kotlin and Groovy programming languages. To compile and package our projects, we will use the Gradle build tool.

To follow along, please clone the Git repository available on GitHub here. Also, I recommend using an Integrated Development Environment (IDE) such as JetBrains’ IntelliJ IDEA; this is the IDE I used while working on these projects.

Kotlin

Kotlin is a statically typed JVM language designed and developed by JetBrains (one of our Amazon Partner Network Technology partners) and the open source community. Compared to Java the programming language, Kotlin has additional powerful language features such as Data Classes, Default Arguments, Extensions, the Elvis Operator, and Destructuring Declarations. This is just a short list of Kotlin’s powerful language features; for a more thorough list of features, and how to use them, refer to the full documentation of the Kotlin language.
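To give a flavour of a few of these features before we get to the Lambda code, here is a tiny standalone Kotlin snippet (not part of the sample project) that combines a data class, a default argument, a destructuring declaration, and the Elvis operator:

// A small illustration of some of the Kotlin features mentioned above.
data class User(val name: String, val email: String? = null) // data class with a default argument

fun main(args: Array<String>) {
    val user = User("Alice")                  // email falls back to its default (null)
    val (name, email) = user                  // destructuring declaration
    val contact = email ?: "no email given"   // Elvis operator supplies a fallback value
    println("$name / $contact")               // prints "Alice / no email given"
}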

Let’s jump right into the code and see what an AWS Lambda function looks like in Kotlin.

package com.aws.blog.jvmlangs.kotlin

import java.io.*
import com.fasterxml.jackson.module.kotlin.*

data class HandlerInput(val who: String)
data class HandlerOutput(val message: String)

class Main {
    val mapper = jacksonObjectMapper()

    fun handler(input: InputStream, output: OutputStream): Unit {
        val inputObj = mapper.readValue<HandlerInput>(input)
        mapper.writeValue(output, HandlerOutput("Hello ${inputObj.who}"))
    }
}

The above example is a very simple Hello World application that accepts as input a JSON object containing a key called “who” and returns a JSON object containing a key called “message” with a value of “Hello {who}”.

AWS Lambda does not support deserializing JSON directly into Kotlin data classes, but don’t worry! AWS Lambda supports passing an input object as a Stream, and also supports an output Stream for returning a result (see this link for more information). Combined with the Input/Output Stream form of the handler function, we use the Jackson library with a Kotlin extension function to handle serialization and deserialization of Kotlin data class types.
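If you want to sanity-check the stream-based handler outside of Lambda, you can drive it locally with in-memory streams. The following is only a rough sketch that assumes the Main class and data classes above are on the classpath; the payload value is purely illustrative:

package com.aws.blog.jvmlangs.kotlin

import java.io.ByteArrayInputStream
import java.io.ByteArrayOutputStream

// Local smoke test: feed the handler a JSON payload and print the JSON it writes back.
fun main(args: Array<String>) {
    val input = ByteArrayInputStream("""{"who": "AWS Fan"}""".toByteArray())
    val output = ByteArrayOutputStream()

    Main().handler(input, output)

    // Expected output: {"message":"Hello AWS Fan"}
    println(output.toString("UTF-8"))
}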

To get started with this example, let’s first compile and package the Kotlin project.

git clone https://github.com/awslabs/lambda-kotlin-groovy-example
cd lambda-kotlin-groovy-example/kotlin
./gradlew shadowJar

Once packaged, a JAR file containing all necessary dependencies will be available at “build/libs/jvmlangs-kotlin-1.0-SNAPSHOT-all.jar”. Now let’s deploy this package to AWS Lambda.

To deploy the lambda function, we will be using the AWS Command Line Interface (CLI). You can find information on how to set up the AWS CLI here. This tool allows you to set up and manage AWS services via the command line.

aws lambda create-function --region us-east-1 --function-name kotlin-hello \
--zip-file fileb://build/libs/jvmlangs-kotlin-1.0-SNAPSHOT-all.jar \
--role arn:aws:iam::<account_id>:role/lambda_basic_execution \
--handler com.aws.blog.jvmlangs.kotlin.Main::handler --runtime java8 \
--timeout 15 --memory-size 128

Once deployed, we can test the function by invoking the lambda function from the CLI.

aws lambda invoke --function-name kotlin-hello --payload '{"who": "AWS Fan"}' output.txt
cat output.txt

If successful, you’ll see an output of “{"message":"Hello AWS Fan"}”.

Groovy

Groovy is an optionally typed JVM language with both dynamic and static typing capabilities. Groovy is currently supported by the Apache Software Foundation. Like Kotlin, Groovy also packs a lot of powerful features such as Closures, Dynamic Typing, Collection Literals, String Interpolation, and the Elvis Operator. This is just a short list; see the full documentation for a list of features and how to use them.

Once again, let’s jump right into the code.

package com.aws.blog.jvmlangs.groovy

class HandlerInput {
    String who
}
class HandlerOutput {
    String message
}

class Main {
    def handler(HandlerInput input) {
        return new HandlerOutput(message: "Hello ${input.who}")
    }
}

Just like the Kotlin example, we have defined a function that takes a simple JSON object containing a “who” key and builds a response containing a “message” key. Note that in this case we are not using the Input/Output Stream form of the handler function; instead, we are letting AWS Lambda serialize the input JSON object into the type HandlerInput. To accomplish this, AWS Lambda uses the Jackson library and handles the serialization for us.
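For comparison, if you wanted the same POJO-style handler in Kotlin rather than the stream form shown earlier, the input and output types would need no-argument constructors and mutable properties so that Lambda’s built-in Jackson deserialization can populate them. A rough sketch of that variant (not part of the sample repository) might look like this:

package com.aws.blog.jvmlangs.kotlin

// Plain classes with default property values give Jackson a no-arg constructor to work with.
class PojoInput {
    var who: String = ""
}

class PojoOutput {
    var message: String = ""
}

class PojoMain {
    // Lambda deserializes the input JSON into PojoInput and serializes the
    // returned PojoOutput back into JSON, just as in the Groovy example above.
    fun handler(input: PojoInput): PojoOutput {
        return PojoOutput().apply { message = "Hello ${input.who}" }
    }
}

The handler setting for such a function would then point at this class (for example, com.aws.blog.jvmlangs.kotlin.PojoMain::handler); again, this is only a sketch and is not included in the sample repository.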

Let’s go ahead and compile and package this Groovy example.

git clone https://github.com/awslabs/lambda-kotlin-groovy-example
cd lambda-kotlin-groovy-example/groovy
./gradlew shadowJar

Once packaged, a JAR file containing all necessary dependencies will be available at “build/libs/jvmlangs-groovy-1.0-SNAPSHOT-all.jar”. Now let’s deploy this package to AWS Lambda.

aws lambda create-function --region us-east-1 --function-name groovy-hello \
--zip-file fileb://build/libs/jvmlangs-groovy-1.0-SNAPSHOT-all.jar \
--role arn:aws:iam::<account_id>:role/lambda_basic_execution \
--handler com.aws.blog.jvmlangs.groovy.Main::handler --runtime java8 \
--timeout 15 --memory-size 128

Once deployed, we can test the function by invoking the lambda function from the CLI.

aws lambda invoke --function-name groovy-hello --payload '{"who": "AWS Fan"}' output.txt
cat output.txt

If successful, you’ll see an output of “{"message":"Hello AWS Fan"}”.

Gradle Build Tool

Finally, let’s touch on how we built the JAR packages from the Kotlin and Groovy sources above. To build the JARs, we used the Gradle build tool. Gradle builds a project by reading instructions from a file called “build.gradle”, which is written in Gradle’s Groovy Domain Specific Language (DSL). You can find more information on the Gradle build file by looking at the Gradle documentation. Let’s take a look at the Gradle build files we used for this post.

For the Kotlin example, this is the build file we used.

buildscript {
    repositories {
        mavenCentral()
        jcenter()
    }
    dependencies {
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
        classpath "com.github.jengelman.gradle.plugins:shadow:1.2.3"
    }
}

group 'com.aws.blog.jvmlangs.kotlin'
version '1.0-SNAPSHOT'

apply plugin: 'kotlin'
apply plugin: 'com.github.johnrengelman.shadow'

repositories {
    mavenCentral()
}

dependencies {
    compile "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
    compile "com.fasterxml.jackson.module:jackson-module-kotlin:2.8.2"
}

For the Groovy example this is the build file we used.

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.github.jengelman.gradle.plugins:shadow:1.2.3'
    }
}

group 'com.aws.blog.jvmlangs.groovy'
version '1.0-SNAPSHOT'

apply plugin: 'groovy'
apply plugin: 'com.github.johnrengelman.shadow'

repositories {
    mavenCentral()
}

dependencies {
    compile 'org.codehaus.groovy:groovy-all:2.3.11'
    testCompile group: 'junit', name: 'junit', version: '4.11'
}

As you can see, the build files for the Kotlin and Groovy projects are very similar. For the Kotlin project, we define a dependency on the Jackson Kotlin module. Also, for each respective language we include the language support libraries (kotlin-stdlib and groovy-all, respectively).

In addition, you will notice that we are using a plugin called “shadow”. We use this plugin to package all the project dependencies into one JAR by using the Gradle task “shadowJar”. You can find more information on Shadow in their documentation.

Final Words

Don’t stop here, though! Take a look at other JVM languages and get them running on AWS Lambda with the Java 8 runtime. Maybe start with Clojure or Scala?

Also, take a look at the AWS Lambda Java libraries provided by AWS. They provide interfaces and models that make it easier to handle events from event sources.

Court Suspends Ban on Roku Sales in Mexico

Post Syndicated from Ernesto original https://torrentfreak.com/court-suspends-ban-on-roku-sales-in-mexico-170623/

Last week, news broke that the Superior Court of Justice of the City of Mexico had issued a ban on Roku sales.

The order prohibited stores such as Amazon, Liverpool, El Palacio de Hierro, and Sears from importing and selling the devices. In addition, several banks were told to stop processing payments from accounts linked to pirated services on Roku.

While Roku itself does not offer any pirated content, there is a market for third-party pirate channels outside the Roku Channel Store, which turn the boxes into pirate tools. Cablevision filed a complaint about this unauthorized use, which eventually resulted in the ban.

The news generated headlines all over the world and was opposed immediately by several of the parties involved. Yesterday, a federal judge decided to suspend the import and sales ban, at least temporarily.

As a result, local vendors can resume their sales of the popular media player.

“Roku is pleased with today’s court decision, which paves the way for sales of Roku devices to resume in Mexico,” Roku’s General Counsel Steve Kay informed TorrentFreak after he heard the news.

Roku

TorrentFreak has not been able to get a copy of the suspension order, but it’s likely that the court wants to review the case in more detail before a final decision is made.

While streaming player piracy is seen as one of the greatest threats the entertainment industry faces today, the Roku ban went quite far. In a way, it would be similar to banning the Chrome browser because certain add-ons and sites allow users to stream pirated movies.

Roku, meanwhile, says it will continue to work with rightsholders and other stakeholders to prevent piracy on its platform, to the best of its ability.

“Piracy is a problem the industry at large is facing,” Kay tells TorrentFreak.

“We prohibit copyright infringement of any kind on the Roku platform. We actively work to prevent third-parties from using our platform to distribute copyright infringing content. Moreover, we have been actively working with other industry stakeholders on a wide range of anti-piracy initiatives.”

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Amazon Patents Measures to Prevent In-Store Comparison Shopping

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/amazon_patents_.html

Amazon has been issued a patent on security measures that prevent people from comparison shopping while in the store. It’s not a particularly sophisticated patent — it basically detects when you’re using the in-store Wi-Fi to visit a competitor’s site and then blocks access — but it is an indication of how retail has changed in recent years.

What’s interesting is that Amazon is on the other side of this arms race. As an online retailer, it wants people to walk into stores and then comparison shop on its site. Yes, I know it’s buying Whole Foods, but it’s still predominantly an online retailer. Maybe it patented this to prevent stores from implementing the technology.

It’s probably not nearly that strategic. It’s hard to build a business strategy around a security measure that can be defeated with cellular access.

A Raspbian desktop update with some new programming tools

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/a-raspbian-desktop-update-with-some-new-programming-tools/

Today we’ve released another update to the Raspbian desktop. In addition to the usual small tweaks and bug fixes, the big new changes are the inclusion of an offline version of Scratch 2.0, and of Thonny (a user-friendly IDE for Python which is excellent for beginners). We’ll look at all the changes in this post, but let’s start with the biggest…

Scratch 2.0 for Raspbian

Scratch is one of the most popular pieces of software on Raspberry Pi. This is largely due to the way it makes programming accessible – while it is simple to learn, it covers many of the concepts that are used in more advanced languages. Scratch really does provide a great introduction to programming for all ages.

Raspbian ships with the original version of Scratch, which is now at version 1.4. A few years ago, though, the Scratch team at the MIT Media Lab introduced the new and improved Scratch version 2.0, and ever since we’ve had numerous requests to offer it on the Pi.

There was, however, a problem with this. The original version of Scratch was written in a language called Squeak, which could run on the Pi in a Squeak interpreter. Scratch 2.0, however, was written in Flash, and was designed to run from a remote site in a web browser. While this made Scratch 2.0 a cross-platform application, which you could run without installing any Scratch software, it also meant that you had to be able to run Flash on your computer, and that you needed to be connected to the internet to program in Scratch.

We worked with Adobe to include the Pepper Flash plugin in Raspbian, which enables Flash sites to run in the Chromium browser. This addressed the first of these problems, so the Scratch 2.0 website has been available on Pi for a while. However, it still needed an internet connection to run, which wasn’t ideal in many circumstances. We’ve been working with the Scratch team to get an offline version of Scratch 2.0 running on Pi.

Screenshot of Scratch on Raspbian

The Scratch team had created a website to enable developers to create hardware and software extensions for Scratch 2.0; this provided a version of the Flash code for the Scratch editor which could be modified to run locally rather than over the internet. We combined this with a program called Electron, which effectively wraps up a local web page into a standalone application. We ended up with the Scratch 2.0 application that you can find in the Programming section of the main menu.

Physical computing with Scratch 2.0

We didn’t stop there though. We know that people want to use Scratch for physical computing, and it has always been a bit awkward to access GPIO pins from Scratch. In our Scratch 2.0 application, therefore, there is a custom extension which allows the user to control the Pi’s GPIO pins without difficulty. Simply click on ‘More Blocks’, choose ‘Add an Extension’, and select ‘Pi GPIO’. This loads two new blocks, one to read and one to write the state of a GPIO pin.

Screenshot of new Raspbian iteration of Scratch 2, featuring GPIO pin control blocks.

The Scratch team kindly allowed us to include all the sprites, backdrops, and sounds from the online version of Scratch 2.0. You can also use the Raspberry Pi Camera Module to create new sprites and backgrounds.

This first release works well, although it can be slow for some operations; this is largely unavoidable for Flash code running under Electron. Bear in mind that you will need to have the Pepper Flash plugin installed (which it is by default on standard Raspbian images). As Pepper Flash is only compatible with the processor in the Pi 2 and Pi 3, it is unfortunately not possible to run Scratch 2.0 on the Pi Zero or the original models of the Pi.

We hope that this makes Scratch 2.0 a more practical proposition for many users than it has been to date. Do let us know if you hit any problems, though!

Thonny: a more user-friendly IDE for Python

One of the paths from Scratch to ‘real’ programming is through Python. We know that the transition can be awkward, and this isn’t helped by the tools available for learning Python. It’s fair to say that IDLE, the Python IDE, isn’t the most popular piece of software ever written…

Earlier this year, we reviewed every Python IDE that we could find that would run on a Raspberry Pi, in an attempt to see if there was something better out there than IDLE. We wanted to find something that was easier for beginners to use but still useful for experienced Python programmers. We found one program, Thonny, which stood head and shoulders above all the rest. It’s a really user-friendly IDE, which still offers useful professional features like single-stepping of code and inspection of variables.

Screenshot of Thonny IDE in Raspbian

Thonny was created at the University of Tartu in Estonia; we’ve been working with Aivar Annamaa, the lead developer, on getting it into Raspbian. The original version of Thonny works well on the Pi, but because the GUI is written using Python’s default GUI toolkit, Tkinter, the appearance clashes with the rest of the Raspbian desktop, most of which is written using the GTK toolkit. We made some changes to bring things like fonts and graphics into line with the appearance of our other apps, and Aivar very kindly took that work and converted it into a theme package that could be applied to Thonny.

Due to the limitations of working within Tkinter, the result isn’t exactly like a native GTK application, but it’s pretty close. It’s probably good enough for anyone who isn’t a picky UI obsessive like me, anyway! Have a look at the Thonny webpage to see some more details of all the cool features it offers. We hope that having a more usable environment will help to ease the transition from graphical languages like Scratch into ‘proper’ languages like Python.

New icons

Other than these two new packages, this release is mostly bug fixes and small version bumps. One thing you might notice, though, is that we’ve made some tweaks to our custom icon set. We wondered if the icons might look better with slightly thinner outlines. We tried it, and they did: we hope you prefer them too.

Downloading the new image

You can either download a new image from the Downloads page, or you can use apt to update:

sudo apt-get update
sudo apt-get dist-upgrade

To install Scratch 2.0:

sudo apt-get install scratch2

To install Thonny:

sudo apt-get install python3-thonny

One more thing…

Before Christmas, we released an experimental version of the desktop running on Debian for x86-based computers. We were slightly taken aback by how popular it turned out to be! This made us realise that this was something we were going to need to support going forward. We’ve decided we’re going to try to make all new desktop releases for both Pi and x86 from now on.

The version of this we released last year was a live image that could run from a USB stick. Many people asked if we could make it permanently installable, so this version includes an installer. This uses the standard Debian install process, so it ought to work on most machines. I should stress, though, that we haven’t been able to test on every type of hardware, so there may be issues on some computers. Please be sure to back up your hard drive before installing it. Unlike the live image, this will erase and reformat your hard drive, and you will lose anything that is already on it!

You can still boot the image as a live image if you don’t want to install it, and it will create a persistence partition on the USB stick so you can save data. Just select ‘Run with persistence’ from the boot menu. To install, choose either ‘Install’ or ‘Graphical install’ from the same menu. The Debian installer will then walk you through the install process.

You can download the latest x86 image (which includes both Scratch 2.0 and Thonny) from here or here for a torrent file.

One final thing

This version of the desktop is based on Debian Jessie. Some of you will be aware that a new stable version of Debian (called Stretch) was released last week. Rest assured – we have been working on porting everything across to Stretch for some time now, and we will have a Stretch release ready some time over the summer.

The post A Raspbian desktop update with some new programming tools appeared first on Raspberry Pi.

Sci-Hub Ordered to Pay $15 Million in Piracy Damages

Post Syndicated from Ernesto original https://torrentfreak.com/sci-hub-ordered-to-pay-15-million-in-piracy-damages-170623/

Two years ago, academic publisher Elsevier filed a complaint against Sci-Hub and several related “pirate” sites.

It accused the websites of making academic papers widely available to the public, without permission.

While Sci-Hub is nothing like the average pirate site, it is just as illegal according to Elsevier’s legal team, who obtained a preliminary injunction from a New York District Court last fall.

The injunction ordered Sci-Hub’s founder Alexandra Elbakyan to quit offering access to any Elsevier content. However, this didn’t happen.

Instead of taking Sci-Hub down, the lawsuit achieved the opposite. Sci-Hub grew bigger and bigger up to a point where its users were downloading hundreds of thousands of papers per day.

Although Elbakyan sent a letter to the court earlier, she opted not to engage in the US lawsuit any further. The same is true for her fellow defendants, associated with Libgen. As a result, Elsevier asked the court for a default judgment and a permanent injunction, which were issued this week.

Following a hearing on Wednesday, the Court awarded Elsevier $15,000,000 in damages, the maximum statutory amount for the 100 copyrighted works that were listed in the complaint. In addition, the injunction, through which Sci-Hub and LibGen lost several domain names, was made permanent.

Sci-Hub founder Alexandra Elbakyan says that even if she wanted to pay the millions of dollars in damages, she doesn’t have the money to do so.

“The money project received and spent in about six years of its operation do not add up to 15 million,” Elbakyan tells TorrentFreak.

“More interesting, Elsevier says: the Sci-Hub activity ‘causes irreparable injury to Elsevier, its customers and the public’ and US court agreed. That feels like a perfect crime. If you want to cause an irreparable injury to American public, what do you have to do? Now we know the answer: establish a website where they can read research articles for free,” she adds.

Previously, Elbakyan already confirmed to us that, lawsuit or not, the site is not going anywhere.

“The Sci-Hub will continue as usual. In case of problems with the domain names, users can rely on TOR scihub22266oqcxt.onion,” Elbakyan added.

Sci-Hub is regularly referred to as the “Pirate Bay for science,” and based on the site’s resilience and its response to legal threats, it can certainly live up to this claim.

The Association of American Publishers (AAP) is happy with the outcome of the case.

“As the final judgment shows, the Court has not mistaken illegal activity for a public good,” AAP President and CEO Maria A. Pallante says.

“On the contrary, it has recognized the defendants’ operation for the flagrant and sweeping infringement that it really is and affirmed the critical role of copyright law in furthering scientific research and the public interest.”

Matt McKay, a spokesperson for the International Association of Scientific, Technical and Medical Publishers (STM) in Oxford, went even further, telling Nature that the site doesn’t offer any value to the scientific community.

“Sci-Hub does not add any value to the scholarly community. It neither fosters scientific advancement nor does it value researchers’ achievements. It is simply a place for someone to go to download stolen content and then leave.”

Hundreds of thousands of academics, who regularly use the site to download papers, might contest this though.

With no real prospect of recouping the damages and an ever-resilient Elbakyan, Elsevier’s legal battle could just be a win on paper. Sci-Hub and Libgen are not going anywhere, it seems, and the lawsuit has made them more popular than ever before.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS Bill Simplification – Consolidated CloudWatch Charges

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-bill-simplification-consolidated-cloudwatch-charges/

The bill that you receive for your use of AWS in July will include a change in the way that Amazon CloudWatch charges are presented. The CloudWatch team made this change in order to make your bill simpler and easier to understand.

Consolidating Charges
In the past, charges for your usage of CloudWatch were split between two sections of your bill. For historical reasons, the charges for CloudWatch Alarms, CloudWatch Metrics, and calls to the CloudWatch API were reported in the Elastic Compute Cloud (EC2) detail section, while charges for CloudWatch Logs and CloudWatch Dashboards were reported in the CloudWatch detail section, like this:

We have received feedback that splitting the charges across two sections of the bill made it difficult to locate and understand the entire set of monitoring charges. In order to address this issue, we are moving the charges that were formerly listed in the Elastic Compute Cloud (EC2) detail section to the CloudWatch detail section. We are making the same change to the detailed billing report, moving the affected charges from the AmazonEC2 product code to the AmazonCloudWatch product code and changing to the AmazonCloudWatch product name. This change does not affect your overall bill; it simply consolidates all of the charges for the use of CloudWatch in one section.

Billing Metric
The CloudWatch billing metric named Estimated Charges can be viewed as a Total Estimated Charge, or broken down By Service:

The total will not change. However, as noted above, the charges that formerly had AmazonEC2 as the ServiceName dimension will now have it set to AmazonCloudWatch:

You may need to adjust thresholds on your billing alarms as a result:

Once again, your total AWS bill will not change. You will begin to see the consolidated charges for CloudWatch in your AWS bill for July 2017.

Jeff;

 

From Idea to Launch: Getting Your First Customers

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-get-your-first-customers/

line outside of Apple

After deciding to build an unlimited backup service and developing our own storage platform, the next step was to get customers and feedback. Not all customers are created equal. Let’s talk about the types, and when and how to attract them.

How to Get Your First Customers

First Step – Don’t Launch Publicly
Launch when you’re ready for the judgments of people who don’t know you at all. Until then, don’t launch. Sign up users and customers that you know, those you can trust to cut you some slack (while providing you feedback), or at a minimum those for whom you can set expectations. For months the Backblaze website was a single page with no ability to get the product and minimal info on what it would be. This is not to counter the Lean Startup ‘iterate quickly with customer feedback’ advice. Rather, it is an acknowledgement that there are different types of feedback required based on your development stage.

Sign Up Your Friends
We knew all of our first customers; they were friends, family, and previous co-workers. Many knew what we were up to and were excited to help us. No magic marketing or tech savviness was required to reach them – we just asked that they try the service. We asked them to provide us feedback on their experience and collected it through email and conversations. While the feedback wasn’t unbiased, it was nonetheless wide-ranging, real, and often insightful. These people were willing to spend time carefully thinking about their feedback and delving deeper into the conversations.

Broaden to Beta
Unless you’re famous or your service costs $1 million per customer, you’ll probably need to expand quickly beyond your friends to build a business – and to get broader feedback. Our next step was to broaden the customer base to beta users.

Opening up the service in beta provides three benefits:

  1. Air cover for the early warts. There are going to be issues, bugs, unnecessarily complicated user flows, and poorly worded text. Beta tells people, “We don’t consider the product ‘done’ and you should expect some of these issues. Please be patient with us.”
  2. A request for feedback. Some people always provide feedback, but beta communicates that you want it.
  3. An awareness opportunity. Opening up in beta provides an early (but not only) opportunity to have an announcement and build awareness.

Pitching Beta to Press
Not all press cares about, or is even willing to cover, beta products. Much of the mainstream press wants to write about services that are fully live, have scale, and are important in the marketplace. However, there are a number of sites that like to cover the leading edge – and that means covering betas. TechCrunch, Ars Technica, and SimpleHelp covered our initial private beta launch. I’ll go into the details of how to work with the press to cover your announcements in a post next month.

Private vs. Public Beta
Both private and public beta provide all three of the benefits above. The difference between the two is that private betas are much more controlled, whereas public ones bring in more users. But this isn’t an either/or – I recommend doing both.

Private Beta
For our original beta in 2008, we decided that we were comfortable with about 1,000 users subscribing to our service. That would provide us with a healthy amount of feedback and get some early adoption, while not overwhelming us or our server capacity, and equally important not causing cash flow issues from having to buy more equipment. So we decided to limit the sign-up to only the first 1,000 people who signed up; then we would shut off sign-ups for a while.

But how do you even get 1,000 people to sign up for your service? In our case, get some major publications to write about our beta. (Note: In a future post I’ll explain exactly how to find and reach out to writers. Sign up to receive all of the entrepreneurial posts in this series.)

Public Beta
For our original service (computer backup), we did not have a public beta; but when we launched Backblaze B2, we had a private and then a public beta. The private beta allowed us to work out early kinks, while the public beta brought us a more varied set of use cases. In public beta, there is no cap on the number of users that may try the service.

While this is a first-class problem to have, if your service is flooded and stops working, it’s still a problem. Think through what you will do if that happens. In our early days, when our system could get overwhelmed by volume, we had a static web page hosted with a different registrar that wouldn’t let customers sign up but would tell them when our service would be open again. When we reached a critical volume level we would redirect to it in order to at least provide status for when we could accept more customers.

Collect Feedback
Since one of the goals of betas is to get feedback, we made sure that we had our email addresses clearly presented on the site so users could send us thoughts. We were most interested in broad qualitative feedback on users’ experience, so all emails went to an internal mailing list that would be read by everyone at Backblaze.

For our B2 public and private betas, we also added an optional short survey to the sign-up process. In order to be considered for the private beta you had to fill out the survey, though we found that 80% of users continued to fill out the survey even when it was not required. This survey had both closed-ended questions (“How much data do you have?”) and open-ended ones (“What do you want to use cloud storage for?”).

BTW, despite us getting a lot of feedback now via our support team, Twitter, and marketing surveys, we are always open to more – you can email me directly at gleb.budman {at} backblaze.com.

Don’t Throw Away Users
Initially our backup service was available only on Windows, but we had an email sign-up list for people who wanted it for their Mac. This provided us with a sense of market demand and a ready list of folks who could be beta users and early adopters when we had a Mac version. Have a service targeted at doctors but lawyers are expressing interest? Capture that.

Product Launch

When
The first question is “when” to launch. Presuming your service is in ‘public beta’, what is the advantage of moving out of beta and into a “version 1.0”, “gold”, or “public availability”? That depends on your service and customer base. Some services fly through public beta. Gmail, on the other hand, was (in)famous for being in beta for 5 years, despite having over 100 million users.

The term beta says to users, “give us some leeway, but feel free to use the service”. That’s fine for many consumer apps and will have near zero impact on them. However, services aimed at businesses and government will often not be adopted with a beta label as the enterprise customers want to know the company feels the service is ‘ready’. While Backblaze started out as a purely consumer service, because it was a data backup service, it was important for customers to trust that the service was ready.

No product is bug-free. But from a product readiness perspective, the nomenclature should also be a reflection of the quality of the product. You can launch a product with one feature that works well out of beta. But a product with fifty features on which half the users will bump into problems should likely stay in beta. The customer feedback, surveys, and your own internal testing should guide you in determining this quality during the beta. Be careful about “we’ve only seen that one time” or “I haven’t been able to reproduce that on my machine”; those issues are likely to scale with customers when you launch.

How
Launching out of beta can be as simple as removing the beta label from the website/product. However, this can be a great time to reach out to press, write a blog post, and send an email announcement to your customers.

Consider thanking your beta testers somehow; can they get some feature turned on for free, an extension of their trial, or premium support? If nothing else, remember to thank them for their feedback. Users that signed up during your beta are likely the ones who will propel your service. They had the need and interest to both be early adopters and deal with bugs. They are likely the key to getting 1,000 true fans.

The Beginning
The title of this post was “Getting your first customers”, because getting to launch may feel like the peak of your journey when you’re pre-launch, but it really is just the beginning. It’s a step along the journey of building your business. If your launch is wildly successful, enjoy it, work to build on the momentum, but don’t lose track of building your business. If your launch is a dud, go out for a coffee with your team, say “well that sucks”, and then get back to building your business. You can learn a tremendous amount from your early customers, and they can become your biggest fans, but the success of your business will depend on what you continue to do in the months and years after your launch.

The post From Idea to Launch: Getting Your First Customers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Kim Dotcom Opposes US’s “Fugitive” Claims at Supreme Court

Post Syndicated from Ernesto original https://torrentfreak.com/kim-dotcom-opposes-uss-fugitive-claims-supreme-court-170622/

When Megaupload and Kim Dotcom were raided five years ago, the authorities seized millions of dollars in cash and other property.

The US government claimed the assets were obtained through copyright crimes, so it went after the bank accounts, cars, and other seized possessions of the Megaupload defendants.

Kim Dotcom and his colleagues were branded as “fugitives” and the Government won its case. Dotcom’s legal team quickly appealed this verdict, but lost once more at the Fourth Circuit appeals court.

A few weeks ago Dotcom and his former colleagues petitioned the Supreme Court to take on the case.

They don’t see themselves as “fugitives” and want the assets returned. The US Government opposed the request, but according to a new reply filed by Megaupload’s legal team, the US Government ignores critical questions.

The Government has a “vested financial stake” in maintaining the current situation, they write, which allows the authorities to use their “fugitive” claims as an offensive weapon.

“Far from being directed towards persons who have fled or avoided our country while claiming assets in it, fugitive disentitlement is being used offensively to strip foreigners of their assets abroad,” the reply brief (pdf) reads.

According to Dotcom’s lawyers there are several conflicting opinions from lower courts, which should be clarified by the Supreme Court. That Dotcom and his colleagues have decided to fight their extradition in New Zealand doesn’t warrant the seizure of their assets.

“Absent review, forfeiture of tens of millions of dollars will be a fait accompli without the merits being reached,” they write, adding that this is all the more concerning because the US Government’s criminal case may not be as strong as claimed.

“This is especially disconcerting because the Government’s criminal case is so dubious. When the Government characterizes Petitioners as ‘designing and profiting from a system that facilitated wide-scale copyright infringement,’ it continues to paint a portrait of secondary copyright infringement, which is not a crime.”

The defense team cites several issues that warrant review and urges the Supreme Court to hear the case. If not, the Government will effectively be able to use asset seizures as a pressure tool to urge foreign defendants to come to the US.

“If this stands, the Government can weaponize fugitive disentitlement in order to claim assets abroad,” the reply brief reads.

“It is time for the Court to speak to the Questions Presented. Over the past two decades it has never had a better vehicle to do so, nor is any such vehicle elsewhere in sight,” Dotcom’s lawyers add.

Whether the Supreme Court accepts or denies the case will likely be decided in the weeks to come.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

CoderDojo Coolest Projects 2017

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/coderdojo-coolest-projects-2017/

When I heard we were merging with CoderDojo, I was delighted. CoderDojo is a wonderful organisation with a spectacular community, and it’s going to be great to join forces with the team and work towards our common goal: making a difference to the lives of young people by making technology accessible to them.

You may remember that last year Philip and I went along to Coolest Projects, CoderDojo’s annual event at which their global community showcase their best makes. It was awesome! This year a whole bunch of us from the Raspberry Pi Foundation attended Coolest Projects with our new Irish colleagues, and as expected, the projects on show were as cool as can be.

Coolest Projects 2017 attendee

Crowd at Coolest Projects 2017

This year’s coolest projects!

Young maker Benjamin demoed his brilliant RGB LED table tennis ball display for us, and showed off his brilliant project tutorial website codemakerbuddy.com, which he built with Python and Flask. [Click on any of the images to enlarge them.]

Coolest Projects 2017 LED ping-pong ball display
Coolest Projects 2017 Benjamin and Oly

Next up, Aimee showed us a recipes app she’d made with the MIT App Inventor. It was a really impressive and well thought-out project.

Coolest Projects 2017 Aimee's cook book
Coolest Projects 2017 Aimee's setup

This very successful OpenCV face detection program with hardware installed in a teddy bear was great as well:

Coolest Projects 2017 face detection bear
Coolest Projects 2017 face detection interface
Coolest Projects 2017 face detection database

Helen’s and Oly’s favourite project involved…live bees!

Coolest Projects 2017 live bees

BEEEEEEEEEEES!

Its creator, 12-year-old Amy, said she wanted to do something to help the Earth. Her project uses various sensors to record data on the bee population in the hive. An adjacent monitor displays the data in a web interface:

Coolest Projects 2017 Aimee's bees

Coolest robots

I enjoyed seeing lots of GPIO Zero projects out in the wild, including this robotic lawnmower made by Kevin and Zach:

Raspberry Pi Lawnmower

Kevin and Zach’s Raspberry Pi lawnmower project with Python and GPIO Zero, showed at CoderDojo Coolest Projects 2017

Philip’s favourite make was a Pi-powered robot you can control with your mind! According to the maker, Laura, it worked really well with Philip because he has no hair.

Philip Colligan on Twitter

This is extraordinary. Laura from @CoderDojo Romania has programmed a mind controlled robot using @Raspberry_Pi @coolestprojects

And here are some pictures of even more cool robots we saw:

Coolest Projects 2017 coolest robot no.1
Coolest Projects 2017 coolest robot no.2
Coolest Projects 2017 coolest robot no.3

Games, toys, activities

Oly and I were massively impressed with the work of Mogamad, Daniel, and Basheerah, who programmed a (borrowed) Amazon Echo to make a voice-controlled text-adventure game using Java and the Alexa API. They’ve inspired me to try something similar using the AIY projects kit and adventurelib!

Coolest Projects 2017 Mogamad, Daniel, Basheerah, Oly
Coolest Projects 2017 Alexa text-based game

Christopher Hill did a brilliant job with his Home Alone LEGO house. He used sensors to trigger lights and sounds to make it look like someone’s at home, like in the film. I should have taken a video – seeing it in action was great!

Coolest Projects 2017 Lego home alone house
Coolest Projects 2017 Lego home alone innards
Coolest Projects 2017 Lego home alone innards closeup

Meanwhile, the Northern Ireland Raspberry Jam group ran a DOTS board activity, which turned their area into a conductive paint hazard zone.

Coolest Projects 2017 NI Jam DOTS activity 1
Coolest Projects 2017 NI Jam DOTS activity 2
Coolest Projects 2017 NI Jam DOTS activity 3
Coolest Projects 2017 NI Jam DOTS activity 4
Coolest Projects 2017 NI Jam DOTS activity 5
Coolest Projects 2017 NI Jam DOTS activity 6

Creativity and ingenuity

We really enjoyed seeing so many young people collaborating, experimenting, and taking full advantage of the opportunity to make real projects. And we loved how huge the range of technologies in use was: people employed all manner of hardware and software to bring their ideas to life.

Philip Colligan on Twitter

Wow! Look at that room full of awesome young people. @coolestprojects #coolestprojects @CoderDojo

Congratulations to the Coolest Projects 2017 prize winners, and to all participants. Here are some of the teams that won in the different categories:

Coolest Projects 2017 winning team 1
Coolest Projects 2017 winning team 2
Coolest Projects 2017 winning team 3

Take a look at the gallery of all winners over on Flickr.

The wow factor

Raspberry Pi co-founder and Foundation trustee Pete Lomas came along to the event as well. Here’s what he had to say:

It’s hard to describe the scale of the event, and photos just don’t do it justice. The first thing that hit me was the sheer excitement of the CoderDojo ninjas [the children attending Dojos]. Everyone was setting up for their time with the project judges, and their pure delight at being able to show off their creations was evident in both halls. Time and time again I saw the ninjas apply their creativity to help save the planet or make someone’s life better, and it’s truly exciting that we are going to help that continue and expand.

Even after 8 hours, enthusiasm wasn’t flagging – the awards ceremony was just brilliant, with ninjas high-fiving the winners on the way to the stage. This speaks volumes about the ethos and vision of the CoderDojo founders, where everyone is a winner just by being part of a community of worldwide friends. It was a brilliant introduction, and if this weekend was anything to go by, our merger certainly is a marriage made in Heaven.

Join this awesome community!

If all this inspires you as much as it did us, consider looking for a CoderDojo near you – and sign up as a volunteer! There’s plenty of time for young people to build up skills and start working on a project for next year’s event. Check out coolestprojects.com for more information.

The post CoderDojo Coolest Projects 2017 appeared first on Raspberry Pi.

NSA Insider Security Post-Snowden

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/nsa_insider_sec.html

According to a recently declassified report obtained under FOIA, the NSA’s attempts to protect itself against insider attacks aren’t going very well:

The N.S.A. failed to consistently lock racks of servers storing highly classified data and to secure data center machine rooms, according to the report, an investigation by the Defense Department’s inspector general completed in 2016.

[…]

The agency also failed to meaningfully reduce the number of officials and contractors who were empowered to download and transfer data classified as top secret, as well as the number of “privileged” users, who have greater power to access the N.S.A.’s most sensitive computer systems. And it did not fully implement software to monitor what those users were doing.

In all, the report concluded, while the post-Snowden initiative — called “Secure the Net” by the N.S.A. — had some successes, it “did not fully meet the intent of decreasing the risk of insider threats to N.S.A. operations and the ability of insiders to exfiltrate data.”

Marcy Wheeler comments:

The IG report examined seven of the most important out of 40 “Secure the Net” initiatives rolled out since Snowden began leaking classified information. Two of the initiatives aspired to reduce the number of people who had the kind of access Snowden did: those who have privileged access to maintain, configure, and operate the NSA’s computer systems (what the report calls PRIVACs), and those who are authorized to use removable media to transfer data to or from an NSA system (what the report calls DTAs).

But when DOD’s inspectors went to assess whether NSA had succeeded in doing this, they found something disturbing. In both cases, the NSA did not have solid documentation about how many such users existed at the time of the Snowden leak. With respect to PRIVACs, in June 2013 (the start of the Snowden leak), “NSA officials stated that they used a manually kept spreadsheet, which they no longer had, to identify the initial number of privileged users.” The report offered no explanation for how NSA came to no longer have that spreadsheet just as an investigation into the biggest breach thus far at NSA started. With respect to DTAs, “NSA did not know how many DTAs it had because the manually kept list was corrupted during the months leading up to the security breach.”

There seem to be two possible explanations for the fact that the NSA couldn’t track who had the same kind of access that Snowden exploited to steal so many documents. Either the dog ate their homework: Someone at NSA made the documents unavailable (or they never really existed). Or someone fed the dog their homework: Some adversary made these lists unusable. The former would suggest the NSA had something to hide as it prepared to explain why Snowden had been able to walk away with NSA’s crown jewels. The latter would suggest that someone deliberately obscured who else in the building might walk away with the crown jewels. Obscuring that list would be of particular value if you were a foreign adversary planning on walking away with a bunch of files, such as the set of hacking tools the Shadow Brokers have since released, which are believed to have originated at NSA.

Read the whole thing. Securing against insiders, especially those with technical access, is difficult, but I had assumed the NSA did more post-Snowden.

How to Create an AMI Builder with AWS CodeBuild and HashiCorp Packer – Part 2

Post Syndicated from Heitor Lessa original https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer-part-2/

Written by AWS Solutions Architects Jason Barto and Heitor Lessa

 
In Part 1 of this post, we described how AWS CodeBuild, AWS CodeCommit, and HashiCorp Packer can be used to build an Amazon Machine Image (AMI) from the latest version of Amazon Linux. In this post, we show how to use AWS CodePipeline, AWS CloudFormation, and Amazon CloudWatch Events to continuously ship new AMIs. We use Ansible by Red Hat to harden the OS on the AMIs through a well-known set of security controls outlined by the Center for Internet Security in its CIS Amazon Linux Benchmark.

You’ll find the source code for this post in our GitHub repo.

At the end of this post, we will have the following architecture:

Requirements

 
To follow along, you will need Git and a text editor. Make sure Git is configured to work with AWS CodeCommit, as described in Part 1.

Technologies

 
In addition to the services and products used in Part 1 of this post, we also use these AWS services and third-party software:

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Amazon CloudWatch Events enables you to react selectively to events in the cloud and in your applications. Specifically, you can create CloudWatch Events rules that match event patterns, and take actions in response to those patterns.

AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on release process models you define.

Amazon SNS is a fast, flexible, fully managed push notification service that lets you send individual messages or fan out messages to large numbers of recipients. Amazon SNS makes it simple and cost-effective to send push notifications to mobile device users or email recipients. The service can even send messages to other distributed services.

Ansible is a simple IT automation system that handles configuration management, application deployment, cloud provisioning, ad-hoc task-execution, and multinode orchestration.

Getting Started

 
We use CloudFormation to bootstrap the following infrastructure:

Component and purpose:

  • AWS CodeCommit repository – Git repository where the AMI builder code is stored.
  • S3 bucket – Build artifact repository used by AWS CodePipeline and AWS CodeBuild.
  • AWS CodeBuild project – Executes the AWS CodeBuild instructions contained in the build specification file.
  • AWS CodePipeline pipeline – Orchestrates the AMI build process, triggered by new changes in the AWS CodeCommit repository.
  • SNS topic – Notifies subscribed email addresses when an AMI build is complete.
  • CloudWatch Events rule – Defines how the AMI builder should send a custom event to notify an SNS topic.

AMI Builder launch templates are provided for the N. Virginia (us-east-1) and Ireland (eu-west-1) Regions.
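If you prefer the command line to the console launch buttons, the same stack can be created with the AWS CLI. This is only a sketch: the template URL and the NotificationEmailAddress parameter key are placeholders, not values taken from the launch templates above.

# Sketch: create the AMI builder stack from the CLI.
# The template URL and parameter key below are placeholders.
aws cloudformation create-stack \
  --stack-name "AMI-Builder-Blogpost" \
  --template-url "https://s3.amazonaws.com/<your-bucket>/ami-builder.yaml" \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=NotificationEmailAddress,ParameterValue=you@example.com

# Wait for the stack to finish before cloning the repository
aws cloudformation wait stack-create-complete --stack-name "AMI-Builder-Blogpost"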

After launching the CloudFormation template linked here, we will have a pipeline in the AWS CodePipeline console. (A Failed status at this stage simply means we don’t have any data in our newly created AWS CodeCommit Git repository yet.)

Next, we will clone the newly created AWS CodeCommit repository.

If this is your first time connecting to an AWS CodeCommit repository, please see the instructions in our documentation on Setup steps for HTTPS Connections to AWS CodeCommit Repositories.
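If you have not configured the credential helper yet, the following is a minimal sketch of the usual setup for HTTPS access to AWS CodeCommit (it assumes the AWS CLI is already installed and configured with credentials that can reach the repository):

# Use the AWS CLI as the Git credential helper for CodeCommit over HTTPS
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true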

To clone the AWS CodeCommit repository (console)

  1. From the AWS Management Console, open the AWS CloudFormation console.
  2. Choose the AMI-Builder-Blogpost stack, and then choose Output.
  3. Make a note of the Git repository URL.
  4. Use git to clone the repository.

For example: git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/AMI-Builder_repo

To clone the AWS CodeCommit repository (CLI)

# Retrieve CodeCommit repo URL
git_repo=$(aws cloudformation describe-stacks --query 'Stacks[0].Outputs[?OutputKey==`GitRepository`].OutputValue' --output text --stack-name "AMI-Builder-Blogpost")

# Clone repository locally
git clone ${git_repo}

Bootstrap the Repo with the AMI Builder Structure

 
Now that our infrastructure is ready, download all the files and templates required to build the AMI.

Your local Git repo should have the following structure:

.
├── ami_builder_event.json
├── ansible
├── buildspec.yml
├── cloudformation
├── packer_cis.json

Next, push these changes to AWS CodeCommit, and then let AWS CodePipeline orchestrate the creation of the AMI:

git add .
git commit -m "My first AMI"
git push origin master

AWS CodeBuild Implementation Details

 
While we wait for the AMI to be created, let’s see what’s changed in our AWS CodeBuild buildspec.yml file:

...
phases:
  ...
  build:
    commands:
      ...
      - ./packer build -color=false packer_cis.json | tee build.log
  post_build:
    commands:
      - egrep "${AWS_REGION}\:\sami\-" build.log | cut -d' ' -f2 > ami_id.txt
      # Packer doesn't return non-zero status; we must do that if Packer build failed
      - test -s ami_id.txt || exit 1
      - sed -i.bak "s/<<AMI-ID>>/$(cat ami_id.txt)/g" ami_builder_event.json
      - aws events put-events --entries file://ami_builder_event.json
      ...
artifacts:
  files:
    - ami_builder_event.json
    - build.log
  discard-paths: yes

In the build phase, we capture Packer output into a file named build.log. In the post_build phase, we take the following actions:

  1. Look up the AMI ID created by Packer and save its findings to a temporary file (ami_id.txt).
  2. Force AWS CodeBuild to fail if the AMI ID (ami_id.txt) is not found. This is required because Packer doesn’t return a non-zero exit status if something goes wrong during the AMI creation process, so we have to tell AWS CodeBuild to stop by signaling the error ourselves.
  3. If an AMI ID is found, we update the ami_builder_event.json file and then notify CloudWatch Events that the AMI creation process is complete.
  4. CloudWatch Events publishes a message to an SNS topic. Anyone subscribed to the topic will be notified in email that an AMI has been created.

Lastly, the new artifacts phase instructs AWS CodeBuild to upload files built during the build process (ami_builder_event.json and build.log) to the S3 bucket specified in the Outputs section of the CloudFormation template. These artifacts can then be used as an input artifact in any later stage in AWS CodePipeline.

For information about customizing the artifacts sequence of the buildspec.yml, see the Build Specification Reference for AWS CodeBuild.

CloudWatch Events Implementation Details

 
CloudWatch Events allows you to extend the AMI builder so that it not only sends email after the AMI has been created, but can also hook up any of the supported targets to react to the AMI builder event. Publishing this event decouples whatever actions you take after AMI completion from Packer itself, so you can plug in other actions as you see fit.

For more information about targets in CloudWatch Events, see the CloudWatch Events API Reference.

In this case, CloudWatch Events should receive the following event, match it with a rule we created through CloudFormation, and publish a message to SNS so that you can receive an email.

Example CloudWatch custom event

[
        {
            "Source": "com.ami.builder",
            "DetailType": "AmiBuilder",
            "Detail": "{ \"AmiStatus\": \"Created\"}",
            "Resources": [ "ami-12cd5guf" ]
        }
]

CloudWatch Events rule

{
  "detail-type": [
    "AmiBuilder"
  ],
  "source": [
    "com.ami.builder"
  ],
  "detail": {
    "AmiStatus": [
      "Created"
    ]
  }
}

Example SNS message sent in email

{
    "version": "0",
    "id": "f8bdede0-b9d7...",
    "detail-type": "AmiBuilder",
    "source": "com.ami.builder",
    "account": "<<aws_account_number>>",
    "time": "2017-04-28T17:56:40Z",
    "region": "eu-west-1",
    "resources": ["ami-112cd5guf "],
    "detail": {
        "AmiStatus": "Created"
    }
}
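The CloudFormation stack subscribes the email address you provide as a stack parameter, but you can add more recipients to the same topic. A minimal sketch, assuming the topic ARN is exposed as a stack output named NotificationTopic (the actual output key in the template may differ):

# Look up the SNS topic created by the stack (the output key is an assumption)
topic_arn=$(aws cloudformation describe-stacks \
  --stack-name "AMI-Builder-Blogpost" \
  --query 'Stacks[0].Outputs[?OutputKey==`NotificationTopic`].OutputValue' \
  --output text)

# Subscribe another address; the recipient must confirm the subscription email
aws sns subscribe \
  --topic-arn "${topic_arn}" \
  --protocol email \
  --notification-endpoint "ops-team@example.com"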

Packer Implementation Details

 
In addition to the build specification file, there are differences between the current version of the HashiCorp Packer template (packer_cis.json) and the one used in Part 1.

Variables

  "variables": {
    "vpc": "{{env `BUILD_VPC_ID`}}",
    "subnet": "{{env `BUILD_SUBNET_ID`}}",
         “ami_name”: “Prod-CIS-Latest-AMZN-{{isotime \”02-Jan-06 03_04_05\”}}”
  },
  • ami_name: Prefixes a name used by Packer to tag resources during the Builders sequence.
  • vpc and subnet: Environment variables defined by the CloudFormation stack parameters.

We no longer assume a default VPC is present and instead use the VPC and subnet specified in the CloudFormation parameters. CloudFormation configures the AWS CodeBuild project to use these values as environment variables. They are made available throughout the build process.

That allows for more flexibility should you need to change which VPC and subnet will be used by Packer to launch temporary resources.
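Because the template reads the VPC and subnet from environment variables, you can exercise it from your workstation before pushing changes. A minimal sketch, assuming Packer is installed locally; the IDs below are placeholders:

# Packer reads these variables the same way it does inside AWS CodeBuild
export BUILD_VPC_ID="vpc-0123456789abcdef0"
export BUILD_SUBNET_ID="subnet-0123456789abcdef0"

# Check template syntax and variable interpolation without building anything
packer validate packer_cis.json

# Optionally run the full build locally instead of waiting for the pipeline
# packer build -color=false packer_cis.json | tee build.log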

Builders

  "builders": [{
    ...
    "ami_name": “{{user `ami_name`| clean_ami_name}}”,
    "tags": {
      "Name": “{{user `ami_name`}}”,
    },
    "run_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "run_volume_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "snapshot_tags": {
      "Name": “{{user `ami_name`}}",
    },
    ...
    "vpc_id": "{{user `vpc` }}",
    "subnet_id": "{{user `subnet` }}"
  }],

We now have new tag properties (*_tags), a new function (clean_ami_name), and temporary resources launched in the VPC and subnet specified by the environment variables. AMI names can only contain a certain set of ASCII characters. If the project’s input deviates from the expected characters (for example, it includes whitespace or slashes), Packer’s clean_ami_name function will fix it.

For more information, see functions on the HashiCorp Packer website.

Provisioners

  "provisioners": [
    {
        "type": "shell",
        "inline": [
            "sudo pip install ansible"
        ]
    }, 
    {
        "type": "ansible-local",
        "playbook_file": "ansible/playbook.yaml",
        "role_paths": [
            "ansible/roles/common"
        ],
        "playbook_dir": "ansible",
        "galaxy_file": "ansible/requirements.yaml"
    },
    {
      "type": "shell",
      "inline": [
        "rm .ssh/authorized_keys ; sudo rm /root/.ssh/authorized_keys"
      ]
    }
  ],

We used the shell provisioner to apply OS patches in Part 1. Now, we use shell to install Ansible on the target machine and ansible-local to import, install, and execute Ansible roles so that our target machine conforms to our standards.

Finally, Packer uses shell again to remove the temporary SSH keys before it creates an AMI from the target (temporary) EC2 instance.

Ansible Implementation Details

 
Ansible provides OS patching through a custom Common role that can be easily customized for other tasks.

The CIS Benchmark and CloudWatch Logs are implemented through two third-party Ansible roles that are defined in ansible/requirements.yaml, as seen in the Packer template.

The Ansible provisioner uses Ansible Galaxy to download these roles onto the target machine and execute them as instructed by ansible/playbook.yaml.
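You can do the same thing locally if you want to inspect the roles before they run on the build instance. A minimal sketch, using the repository layout shown earlier:

# Download the third-party roles listed in the requirements file into the
# repository's roles directory
ansible-galaxy install -r ansible/requirements.yaml -p ansible/roles

# Optionally dry-run the playbook on a disposable Amazon Linux instance;
# most CIS tasks require root, so expect privilege escalation outside EC2
# ansible-playbook ansible/playbook.yaml --check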

For information about how these components are organized, see the Playbook Roles and Include Statements in the Ansible documentation.

The following Ansible playbook (ansible/playbook.yaml) controls the execution order and custom properties:

---
- hosts: localhost
  connection: local
  gather_facts: true    # gather OS info that is made available for tasks/roles
  become: yes           # majority of CIS tasks require root
  vars:
    # CIS Controls whitepaper:  http://bit.ly/2mGAmUc
    # AWS CIS Whitepaper:       http://bit.ly/2m2Ovrh
    cis_level_1_exclusions:
    # 3.4.2 and 3.4.3 effectively blocks access to all ports to the machine
    ## This can break automation; ignoring it as there are stronger mechanisms than that
      - 3.4.2 
      - 3.4.3
    # CloudWatch Logs will be used instead of Rsyslog/Syslog-ng
    ## Same would be true if any other software doesn't support Rsyslog/Syslog-ng mechanisms
      - 4.2.1.4
      - 4.2.2.4
      - 4.2.2.5
    # Autofs is not installed in newer versions, let's ignore
      - 1.1.19
    # Cloudwatch Logs role configuration
    logs:
      - file: /var/log/messages
        group_name: "system_logs"
  roles:
    - common
    - anthcourtney.cis-amazon-linux
    - dharrisio.aws-cloudwatch-logs-agent

Both third-party Ansible roles can be easily configured through variables (vars). We use Ansible playbook variables to exclude CIS controls that don’t apply to our case and to instruct the CloudWatch Logs agent to stream the /var/log/messages log file to CloudWatch Logs.

If you need to add more OS or application logs, you can easily duplicate the playbook and make changes. The CloudWatch Logs agent will ship configured log messages to CloudWatch Logs.

For more information about parameters you can use to further customize third-party roles, download Ansible roles for the Cloudwatch Logs Agent and CIS Amazon Linux from the Galaxy website.

Committing Changes

 
Now that Ansible and CloudWatch Events are configured as a part of the build process, committing any changes to the AWS CodeCommit Git repository will trigger a new AMI build process that can be followed through the AWS CodePipeline console.

When the build is complete, an email will be sent to the email address you provided as a part of the CloudFormation stack deployment. The email serves as notification that an AMI has been built and is ready for use.
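If you prefer the terminal to the console, the build can also be followed with the AWS CLI. A rough sketch; replace the pipeline name with the one created by your stack:

# Find the pipeline created by the CloudFormation stack
aws codepipeline list-pipelines

# Show the state of each stage and action for that pipeline
aws codepipeline get-pipeline-state --name "<your-ami-builder-pipeline>"

# Once the build completes, the new AMI can be located by its name prefix
aws ec2 describe-images --owners self \
  --filters "Name=name,Values=Prod-CIS-Latest-AMZN-*" \
  --query 'Images[].[ImageId,Name,CreationDate]' --output table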

Summary

 
We used AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Packer, and Ansible to build a pipeline that continuously builds new, hardened CIS AMIs. We used Amazon SNS so that email addresses subscribed to an SNS topic are notified upon completion of the AMI build.

By treating our AMI creation process as code, we can iterate and track changes over time. In this way, it’s no different from a software development workflow. With that in mind, software patches, OS configuration, and logs that need to be shipped to a central location are only a git commit away.

Next Steps

 
Here are some ideas to extend this AMI builder:

  • Hook up a Lambda function in CloudWatch Events to update EC2 Auto Scaling configuration upon completion of the AMI build.
  • Use AWS CodePipeline parallel steps to build multiple Packer images.
  • Add a commit ID as a tag for the AMI you created.
  • Create a scheduled Lambda function through CloudWatch Events to clean up old AMIs based on timestamp (name or additional tag); a rough CLI sketch of that cleanup logic follows this list.
  • Implement Windows support for the AMI builder.
  • Create a cross-account or cross-region AMI build.
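As a starting point for the cleanup idea above, here is a hedged sketch of the logic such a scheduled function could implement; the 30-day window, name prefix, and GNU date usage are assumptions you would adapt to your own tagging scheme:

# Deregister AMIs older than 30 days that match the builder's naming prefix.
# Illustrative only: a real cleanup should also delete associated snapshots
# and skip AMIs still referenced by launch configurations.
cutoff=$(date -d '30 days ago' +%Y-%m-%dT%H:%M:%S)   # GNU date

aws ec2 describe-images --owners self \
  --filters "Name=name,Values=Prod-CIS-Latest-AMZN-*" \
  --query 'Images[].[ImageId,CreationDate]' --output text |
while read -r ami_id created; do
  if [[ "${created}" < "${cutoff}" ]]; then
    echo "Would deregister ${ami_id} (created ${created})"
    # aws ec2 deregister-image --image-id "${ami_id}"
  fi
done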

CloudWatch Events allows the AMI builder to decouple AMI configuration and creation, so that you can easily add your own logic using targets (AWS Lambda, Amazon SQS, Amazon SNS) to react to the build event or recycle EC2 instances with the new AMI.

If you have questions or other feedback, feel free to leave it in the comments or contribute to the AMI Builder repo on GitHub.

Three Men Sentenced Following £2.5m Internet Piracy Case

Post Syndicated from Andy original https://torrentfreak.com/three-men-sentenced-following-2-5m-internet-piracy-case-170622/

While legal action against low-level individual file-sharers is extremely rare in the UK, the country continues to pose a risk for those engaged in larger-scale infringement.

That is largely due to the activities of the Police Intellectual Property Crime Unit and private anti-piracy outfits such as the Federation Against Copyright Theft (FACT). Investigations are often a joint effort which can take many years to complete, but the outcomes can often involve criminal sentences.

That was the profile of another Internet piracy case that concluded in London this week. It involved three men from the UK, Eric Brooks, 43, from Bolton, Mark Valentine, 44, from Manchester, and Craig Lloyd, 33, from Wolverhampton.

The case began when FACT became aware of potentially infringing activity back in February 2011. The anti-piracy group then investigated for more than a year before handing the case to police in March 2012.

On July 4, 2012, officers from City of London Police arrested Eric Brooks at his home in Bolton following a joint raid with FACT. Computer equipment was seized containing evidence that Brooks had been running a Netherlands-based server hosting more than £100,000 worth of pirated films, music, games, software and ebooks.

According to police, a spreadsheet on Brooks’ computer revealed he had hundreds of paying customers, all recruited from online forums. Each paid via PayPal or bank transfer for access to the server. Police mentioned no group or site names in information released this week.

“Enquiries with PayPal later revealed that [Brooks] had made in excess of £500,000 in the last eight years from his criminal business and had in turn defrauded the film and TV industry alone of more than £2.5 million,” police said.

“As his criminal enterprise affected not only the film and TV but the wider entertainment industry including music, games, books and software it is thought that he cost the wider industry an amount much higher than £2.5 million.”

On the same day police arrested Brooks, Mark Valentine’s home in Manchester had a similar unwelcome visit. A day later, Craig Lloyd’s home in Wolverhampton became the third target for police.

Computer equipment was seized from both addresses which revealed that the pair had been paying for access to Brooks’ servers in order to service their own customers.

“They too had used PayPal as a means of taking payment and had earned thousands of pounds from their criminal actions; Valentine gaining £34,000 and Lloyd making over £70,000,” police revealed.

But after raiding the trio in 2012, it took more than four years to charge the men. In a feature common to many FACT cases, all three were charged with Conspiracy to Defraud rather than copyright infringement offenses. All three men pleaded guilty before trial.

On Monday, the men were sentenced at Inner London Crown Court. Brooks was sentenced to 24 months in prison, suspended for 12 months and ordered to complete 140 hours of unpaid work.

Valentine and Lloyd were each given 18 months in prison, suspended for 12 months. Each was ordered to complete 80 hours unpaid work.

Detective Constable Chris Glover, who led the investigation for the City of London Police, welcomed the sentencing.

“The success of this investigation is a result of co-ordinated joint working between the City of London Police and FACT. Brooks, Valentine and Lloyd all thought that they were operating under the radar and doing something which they thought was beyond the controls of law enforcement,” Glover said.

“Brooks, Valentine and Lloyd will now have time in prison to reflect on their actions and the result should act as deterrent for anyone else who is enticed by abusing the internet to the detriment of the entertainment industry.”

While even suspended sentences are a serious matter, none of the men will see the inside of a cell if they meet the conditions of their sentence for the next 12 months. For a case lasting four years involving such large sums of money, that is probably a disappointing result for FACT and the police.

Nevertheless, the men won’t be allowed to enjoy the financial proceeds of their piracy, if indeed any money is left. City of London Police say the trio will be subject to a future confiscation hearing to seize any proceeds of crime.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

DynamoDB Accelerator (DAX) Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/dynamodb-accelerator-dax-now-generally-available/

Earlier this year I told you about Amazon DynamoDB Accelerator (DAX), a fully-managed caching service that sits in front of (logically speaking) your Amazon DynamoDB tables. DAX returns cached responses in microseconds, making it a great fit for eventually-consistent read-intensive workloads. DAX supports the DynamoDB API, and is seamless and easy to use. As a managed service, you simply create your DAX cluster and use it as the target for your existing reads and writes. You don’t have to worry about patching, cluster maintenance, replication, or fault management.

Now Generally Available
Today I am pleased to announce that DAX is now generally available. We have expanded DAX into additional AWS Regions and used the preview time to fine-tune performance and availability:

Now in Five Regions – DAX is now available in the US East (Northern Virginia), EU (Ireland), US West (Oregon), Asia Pacific (Tokyo), and US West (Northern California) Regions.

In Production – Our preview customers report that they are using DAX in production, that they loved how easy it was to add DAX to their applications, and that their apps are now running 10x faster.

Getting Started with DAX
As I outlined in my earlier post, it is easy to use DAX to accelerate your existing DynamoDB applications. You simply create a DAX cluster in the desired region, update your application to reference the DAX SDK for Java (the calls are the same; this is a drop-in replacement), and configure the SDK to use the endpoint to your cluster. As a read-through/write-through cache, DAX seamlessly handles all of the DynamoDB read/write APIs.
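If you want to script the cluster creation rather than use the console, the AWS CLI includes DAX commands. A minimal sketch; the role ARN, subnet group, and sizing below are placeholders you would replace with your own values:

# Create a three-node DAX cluster (all values are placeholders)
aws dax create-cluster \
  --cluster-name my-dax-cluster \
  --node-type dax.r3.large \
  --replication-factor 3 \
  --iam-role-arn arn:aws:iam::123456789012:role/MyDaxRole \
  --subnet-group-name my-dax-subnet-group

# Retrieve the cluster discovery endpoint to configure in the DAX SDK for Java
aws dax describe-clusters --cluster-names my-dax-cluster \
  --query 'Clusters[0].ClusterDiscoveryEndpoint'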

We are working on SDK support for other languages, and I will share additional information as it becomes available.

DAX Pricing
You pay for each node in the cluster (see the DynamoDB Pricing page for more information) on a per-hour basis, with prices starting at $0.269 per hour in the US East (Northern Virginia) and US West (Oregon) regions. With DAX, each of the nodes in your cluster serves as a read target and as a failover target for high availability. The DAX SDK is cluster aware and will issue round-robin requests to all nodes in the cluster so that you get to make full use of the cluster’s cache resources.

Because DAX can easily handle sudden spikes in read traffic, you may be able to reduce the amount of provisioned throughput for your tables, resulting in an overall cost savings while still returning results in microseconds.

Jeff;

 

Is Continuing to Patch Windows XP a Mistake?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/is_continuing_t.html

Last week, Microsoft issued a security patch for Windows XP, a 16-year-old operating system that Microsoft officially no longer supports. Last month, Microsoft issued a Windows XP patch for the vulnerability used in WannaCry.

Is this a good idea? This 2014 essay argues that it’s not:

The zero-day flaw and its exploitation is unfortunate, and Microsoft is likely smarting from government calls for people to stop using Internet Explorer. The company had three ways it could respond. It could have done nothing — stuck to its guns, maintained that the end of support means the end of support, and encouraged people to move to a different platform. It could also have relented entirely, extended Windows XP’s support life cycle for another few years and waited for attrition to shrink Windows XP’s userbase to irrelevant levels. Or it could have claimed that this case is somehow “special,” releasing a patch while still claiming that Windows XP isn’t supported.

None of these options is perfect. A hard-line approach to the end-of-life means that there are people being exploited that Microsoft refuses to help. A complete about-turn means that Windows XP will take even longer to flush out of the market, making it a continued headache for developers and administrators alike.

But the option Microsoft took is the worst of all worlds. It undermines efforts by IT staff to ditch the ancient operating system and undermines Microsoft’s assertion that Windows XP isn’t supported, while doing nothing to meaningfully improve the security of Windows XP users. The upside? It buys those users at best a few extra days of improved security. It’s hard to say how that was possibly worth it.

This is a hard trade-off, and it’s going to get much worse with the Internet of Things. Here’s me:

The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn’t true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.

At least Microsoft has security engineers on staff that can write a patch for Windows XP. There will be no one able to write patches for your 16-year-old thermostat and refrigerator, even assuming those devices can accept security patches.

Opus 1.2 released

Post Syndicated from ris original https://lwn.net/Articles/726134/rss

Version 1.2 of the Opus audio codec has been released. “For music encoding Opus has already been shown to out-perform other audio codecs at both 64 kb/s and 96 kb/s. We originally thought that 64 kb/s was near the lowest bitrate at which Opus could be useful for streaming stereo music. However, with variable bitrate (VBR) improvements in Opus 1.1, suddenly 48 kb/s became a realistic target. Opus 1.2 continues on the path to lowering the bitrate limit. Music at 48 kb/s is now quite usable and while the artefacts are generally audible, they are rarely annoying. Even more, we’ve actually been pushing all the way to fullband stereo at just 32 kb/s!

Most of the music encoding quality improvements in 1.2 don’t come from big new features (like tonality analysis that got added to version 1.1), but from many small changes that all add up.”

Protect Web Sites & Services Using Rate-Based Rules for AWS WAF

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/protect-web-sites-services-using-rate-based-rules-for-aws-waf/

AWS WAF (Web Application Firewall) helps to protect your application from many different types of application-layer attacks that involve requests that are malicious or malformed. As I showed you when I first wrote about this service (New – AWS WAF), you can define rules that match cross-site scripting, IP address, SQL injection, size, or content constraints:

When incoming requests match rules, actions are invoked. Actions can either allow, block, or simply count matches.

The existing rule model is powerful and gives you the ability to detect and respond to many different types of attacks. It does not, however, allow you to respond to attacks that simply consist of a large number of otherwise valid requests from a particular IP address. These requests might be a web-layer DDoS attack, a brute-force login attempt, or even a partner integration gone awry.

New Rate-Based Rules
Today we are adding Rate-based Rules to WAF, giving you control of when IP addresses are added to and removed from a blacklist, along with the flexibility to handle exceptions and special cases:

Blacklisting IP Addresses – You can blacklist IP addresses that make requests at a rate that exceeds a configured threshold rate.

IP Address Tracking – You can see which IP addresses are currently blacklisted.

IP Address Removal – IP addresses that have been blacklisted are automatically removed when they no longer make requests at a rate above the configured threshold.

IP Address Exemption – You can exempt certain IP addresses from blacklisting by using an IP address whitelist inside of a rate-based rule. For example, you might want to allow trusted partners to access your site at a higher rate.

Monitoring & Alarming – You can watch and alarm on CloudWatch metrics that are published for each rule.

You can combine new Rate-based Rules with WAF Conditions to implement sophisticated rate-limiting strategies. For example, you could use a Rate-based Rule and a WAF Condition that matches your login pages. This would allow you to impose a modest threshold on your login pages (to avoid brute-force password attacks) and allow a more generous one on your marketing or system status pages.

Thresholds are defined in terms of the number of incoming requests from a single IP address within a 5 minute period. Once this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold.

Using Rate-Based Rules
Here’s how you would define a Rate-based Rule that protects the /login portion of your site. Start by defining a WAF condition that matches the desired string in the URI of the page:

Then use this condition to define a Rate-based Rule (the rate limit is expressed in terms of requests within a 5 minute interval, but the blacklisting goes into effect as soon as the limit is breached):

With the condition and the rule in place, create a Web ACL (ProtectLoginACL) to bring it all together and to attach it to the AWS resource (a CloudFront distribution in this case):

Then attach the rule (ProtectLogin) to the Web ACL:

The resource is now protected in accordance with the rule and the web ACL. You can monitor the associated CloudWatch metrics (ProtectLogin and ProtectLoginACL in this case). You could even create CloudWatch Alarms and use them to fire Lambda functions when a protection threshold is breached. The code could examine the offending IP address and make a complex, business-driven decision, perhaps adding a whitelisting rule that gives an extra-generous allowance to a trusted partner or to a user with a special payment plan.
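The same walkthrough can be scripted with the AWS CLI. A rough sketch of the rule-creation step only; the 2,000-request threshold is the minimum the service accepted at launch, and the string-match condition and web ACL association follow the same change-token pattern and are omitted here:

# Classic WAF operations need a fresh change token for every mutating call
token=$(aws waf get-change-token --query ChangeToken --output text)

# Create a rate-based rule keyed on source IP with a 5-minute threshold
aws waf create-rate-based-rule \
  --name "ProtectLogin" \
  --metric-name "ProtectLogin" \
  --rate-key IP \
  --rate-limit 2000 \
  --change-token "${token}"

# The /login ByteMatch condition is then attached with
# 'aws waf update-rate-based-rule', and the rule is added to a web ACL
# with 'aws waf create-web-acl' / 'aws waf update-web-acl'.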

Available Now
The new, Rate-based Rules are available now and you can start using them today! Rate-based rules are priced the same as Regular rules; see the WAF Pricing page for more info.

Jeff;