Watching Pirate Streams Isn’t Illegal, EU Commission Argues

Post Syndicated from Ernesto original https://torrentfreak.com/watching-pirate-streams-isnt-illegal-eu-commission-argues-161001/

Online streaming continues to gain in popularity, both from authorized and pirate sources.

Unlike traditional forms of downloading, however, in many countries the legality of viewing unauthorized streams remains unclear.

In the European Union this may change in the near future. This week the European Court of Justice held a hearing during which it reviewed several questions related to pirate streaming.

The questions were raised in a case between Dutch anti-piracy group BREIN and the Filmspeler.nl store, which sells “piracy configured” media players. While these devices don’t ‘host’ any infringing content, they ship with add-ons that make it very easy to watch infringing content.

The Dutch District Court previously referred the case to the EU Court of Justice, where several questions were discussed in a hearing this week. In addition to BREIN and Filmspeler, the European Commission and Spain weighed in on the issue as well.

The first main question that the Court will try to answer is rather specific. It asks whether selling pre-programmed media players with links to pirate sources, through add-ons for example, is permitted.

Not surprisingly, Filmspeler.nl believes that it should be allowed. They argued that there is no communication to the public or a crucial intervention from their side, since these pirate add-ons are already publicly available.

The European Commission doesn’t classify selling pre-loaded boxes as infringing either, and notes that rightholders have other options to go after intermediaries, such as blocking requests.

BREIN, which covered the hearing in detail, countered this argument noting that Filmspeler willingly provides access to illegal content for profit. Spain sided with BREIN and argued that willingly including pirate plugins should not be allowed.

The second question is more crucial for the general public as it asks whether it is illegal for consumers to stream pirated content from websites or services.

“Is it lawful under EU law to temporarily reproduce content through streaming if the content originates from a third-party website where it’s made available without permission?”

Spain argued that streaming pirated content should not be allowed in any way. BREIN agreed with this position and argued that streaming should be on par with unauthorized downloading, which is illegal under EU case law.

Interestingly, the European Commission doesn’t believe that consumers who watch pirate streams are infringing. From the user’s perspective they equate streaming to watching, which is legitimate.

Based on the hearing, the Advocate General will issue a recommendation later this year, which will be followed by a final verdict from the EU Court of Justice sometime in early 2017.


Deploy an App to an AWS OpsWorks Layer Using AWS CodePipeline

Post Syndicated from Daniel Huesch original http://blogs.aws.amazon.com/application-management/post/Tx2WKWC9RIY0RD8/Deploy-an-App-to-an-AWS-OpsWorks-Layer-Using-AWS-CodePipeline


AWS CodePipeline lets you create continuous delivery pipelines that automatically track code changes from sources such as AWS CodeCommit, Amazon S3, or GitHub. Now, you can use AWS CodePipeline as a code change-management solution for apps, Chef cookbooks, and recipes that you want to deploy with AWS OpsWorks.

This blog post demonstrates how you can create an automated pipeline for a simple Node.js app by using AWS CodePipeline and AWS OpsWorks. After you configure your pipeline, every time you update your Node.js app, AWS CodePipeline passes the updated version to AWS OpsWorks. AWS OpsWorks then deploys the updated app to your fleet of instances, leaving you to focus on improving your application. AWS makes sure that the latest version of your app is deployed.

Step 1: Upload app code to an Amazon S3 bucket

The Amazon S3 bucket must be in the same region in which you later create your pipeline in AWS CodePipeline. For now, AWS CodePipeline supports the AWS OpsWorks provider in the us-east-1 region only; all resources in this blog post should be created in the US East (N. Virginia) region. The bucket must also be versioned, because AWS CodePipeline requires a versioned source. For more information, see Using Versioning.

Upload your app to an Amazon S3 bucket

  1. Download a ZIP file of the AWS OpsWorks sample Node.js app and save it to a convenient location on your local computer: https://s3.amazonaws.com/opsworks-codepipeline-demo/opsworks-nodejs-demo-app.zip.
  2. Open the Amazon S3 console at https://console.aws.amazon.com/s3/. Choose Create Bucket. Be sure to enable versioning.
  3. Choose the bucket that you created and upload the ZIP file that you saved in step 1.

  4. In the Properties pane for the uploaded ZIP file, make a note of the S3 link to the file. You will need the bucket name and the ZIP file name portion of this link to create your pipeline.
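If you prefer to script this step, the console actions above map roughly onto the AWS CLI as sketched below. This is not part of the original walkthrough, and the bucket name is a placeholder you would replace with your own globally unique name.

# Placeholder bucket name -- S3 bucket names must be globally unique.
BUCKET="my-opsworks-codepipeline-demo"

# Create the bucket in US East (N. Virginia) and enable versioning,
# which AWS CodePipeline requires for an S3 source.
aws s3api create-bucket --bucket "$BUCKET" --region us-east-1
aws s3api put-bucket-versioning --bucket "$BUCKET" \
    --versioning-configuration Status=Enabled

# Upload the sample app and note the resulting S3 path for later steps.
aws s3 cp opsworks-nodejs-demo-app.zip "s3://$BUCKET/opsworks-nodejs-demo-app.zip"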

Step 2: Create an AWS OpsWorks to Amazon EC2 service role

  1. Go to the Identity and Access Management (IAM) service console, and choose Roles.
  2. Choose Create Role, and name it aws-opsworks-ec2-role-with-s3.
  3. In the AWS Service Roles section, choose Amazon EC2, and then choose the policy called AmazonS3ReadOnlyAccess.
  4. The new role should appear in the Roles dashboard.
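For readers who script their IAM setup, here is a hedged CLI sketch of the same role; the trust-policy file name is just an example, and note that the console normally creates the matching instance profile for you.

# Trust policy that lets EC2 instances assume the role.
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name aws-opsworks-ec2-role-with-s3 \
    --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name aws-opsworks-ec2-role-with-s3 \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# From the CLI you also create and populate the instance profile yourself
# (the console does this automatically).
aws iam create-instance-profile --instance-profile-name aws-opsworks-ec2-role-with-s3
aws iam add-role-to-instance-profile \
    --instance-profile-name aws-opsworks-ec2-role-with-s3 \
    --role-name aws-opsworks-ec2-role-with-s3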

Step 3: Create an AWS OpsWorks Chef 12 Linux stack

To use AWS OpsWorks as a provider for a pipeline, you must first have an AWS OpsWorks stack, a layer, and at least one instance in the layer. As a reminder, the Amazon S3 bucket to which you uploaded your app must be in the same region in which you later create your AWS OpsWorks stack and pipeline, US East (N. Virginia).

  1. In the OpsWorks console, choose Add Stack, and then choose a Chef 12 stack.
  2. Set the stack’s name to CodePipeline Demo and make sure the Default operating system is set to Linux.
  3. Enable Use custom Chef cookbooks.
  4. For Repository type, choose HTTP Archive, and then use the following cookbook repository on S3: https://s3.amazonaws.com/opsworks-codepipeline-demo/opsworks-nodejs-demo-cookbook.zip. This repository contains a set of Chef cookbooks that include Chef recipes you’ll use to install the Node.js package and its dependencies on your instance. You will use these Chef recipes to deploy the Node.js app that you prepared in step 1.1.
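An equivalent stack can also be created from the CLI. The sketch below is an approximation of the console steps, and the two role ARNs are placeholders: substitute your own account ID and the OpsWorks service role that the console normally creates for you.

# Placeholders -- replace 123456789012 with your AWS account ID.
SERVICE_ROLE_ARN="arn:aws:iam::123456789012:role/aws-opsworks-service-role"
INSTANCE_PROFILE_ARN="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role-with-s3"

aws opsworks create-stack --region us-east-1 \
    --name "CodePipeline Demo" \
    --stack-region us-east-1 \
    --service-role-arn "$SERVICE_ROLE_ARN" \
    --default-instance-profile-arn "$INSTANCE_PROFILE_ARN" \
    --configuration-manager Name=Chef,Version=12 \
    --use-custom-cookbooks \
    --custom-cookbooks-source Type=archive,Url=https://s3.amazonaws.com/opsworks-codepipeline-demo/opsworks-nodejs-demo-cookbook.zip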

 Step 4: Create and configure an AWS OpsWorks layer

Now that you’ve created an AWS OpsWorks stack called CodePipeline Demo, you can create an OpsWorks layer.

  1. Choose Layers, and then choose Add Layer in the AWS OpsWorks stack view.
  2. Name the layer Node.js App Server. For Short Name, type app1, and then choose Add Layer.
  3. After you create the layer, open the layer’s Recipes tab. In the Deploy lifecycle event, type nodejs_demo. Later, you will link this to a Chef recipe that is part of the Chef cookbook you referenced when you created the stack in step 3.4. This Chef recipe runs every time a new version of your application is deployed.

  4. Now, open the Security tab, choose Edit, and choose AWS-OpsWorks-WebApp from the Security groups drop-down list. You will also need to set the EC2 Instance Profile to use the service role you created in step 2.2 (aws-opsworks-ec2-role-with-s3).
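A rough CLI equivalent of the layer configuration is shown below; the stack ID and instance profile ARN are placeholders, and the security group attachment from step 4.4 is easiest to complete in the console.

STACK_ID="your-stack-id-here"   # from the create-stack output or the stack settings page
INSTANCE_PROFILE_ARN="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role-with-s3"

aws opsworks create-layer --region us-east-1 \
    --stack-id "$STACK_ID" \
    --type custom \
    --name "Node.js App Server" \
    --shortname app1 \
    --custom-instance-profile-arn "$INSTANCE_PROFILE_ARN" \
    --custom-recipes '{"Deploy":["nodejs_demo"]}'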

Step 5: Add your App to AWS OpsWorks

Now that your layer is configured, add the Node.js demo app to your AWS OpsWorks stack. When you create the pipeline, you’ll be required to reference this demo Node.js app.

  1. Have the Amazon S3 link from step 1.4 ready. You will need the link to the bucket in which you stored your test app.
  2. In AWS OpsWorks, open the stack you created (CodePipeline Demo), and in the navigation pane, choose Apps.
  3. Choose Add App.
  4. Provide a name for your demo app (for example, Node.js Demo App), and set the Repository type to S3 Archive. Paste your S3 bucket link (s3://bucket-name/file name) from step 1.4.
  5. Now that your app appears in the list on the Apps page, add an instance to your OpsWorks layer.
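As an alternative to the console, the app can be registered with a CLI call along these lines; the stack ID and archive URL are placeholders, and the app type shown here is an assumption that suits a Chef 12 stack deployed by custom recipes.

STACK_ID="your-stack-id-here"   # placeholder
APP_URL="https://s3.amazonaws.com/my-opsworks-codepipeline-demo/opsworks-nodejs-demo-app.zip"   # your bucket/key from step 1

aws opsworks create-app --region us-east-1 \
    --stack-id "$STACK_ID" \
    --name "Node.js Demo App" \
    --type other \
    --app-source Type=s3,Url="$APP_URL"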

 Step 6: Add an instance to your AWS OpsWorks layer

Before you create a pipeline in AWS CodePipeline, set up at least one instance within the layer you defined in step 4.

  1. Open the stack that you created (CodePipeline Demo), and in the navigation pane, choose Instances.
  2. Choose +Instance, and accept the default settings, including the hostname, size, and subnet. Choose Add Instance.

  3. By default, the instance is in a stopped state. Choose Start to start the instance.
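The same instance can be created and started from the CLI as sketched here; the IDs are placeholders and the instance type is an assumption standing in for the console default.

STACK_ID="your-stack-id-here"   # placeholder
LAYER_ID="your-layer-id-here"   # placeholder

aws opsworks create-instance --region us-east-1 \
    --stack-id "$STACK_ID" \
    --layer-ids "$LAYER_ID" \
    --instance-type c4.large

# Note the InstanceId in the output, then start the instance and watch its
# status until it reaches "online".
aws opsworks start-instance --region us-east-1 --instance-id "your-instance-id-here"
aws opsworks describe-instances --region us-east-1 --stack-id "$STACK_ID" \
    --query 'Instances[].[Hostname,Status]' --output table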

Step 7: Create a pipeline in AWS CodePipeline

Now that you have a stack and an app configured in AWS OpsWorks, create a pipeline with AWS OpsWorks as the provider to deploy your app to your specified layer. If you update your app or your Chef deployment recipes, the pipeline runs again automatically, triggering the deployment recipe to run and deploy your updated app.

This procedure creates a simple pipeline that includes only one Source and one Deploy stage. However, you can create more complex pipelines that use AWS OpsWorks as a provider.

To create a pipeline

  1. Open the AWS CodePipeline console in the U.S. East (N. Virginia) region.
  2. Choose Create pipeline.
  3. On the Getting started with AWS CodePipeline page, type MyOpsWorksPipeline, or a pipeline name of your choice, and then choose Next step.
  4. On the Source Location page, choose Amazon S3 from the Source provider drop-down list.
  5. In the Amazon S3 details area, type the Amazon S3 bucket path to your application, in the format s3://bucket-name/file name. Refer to the link you noted in step 1.4. Choose Next step.
  6. On the Build page, choose No Build from the drop-down list, and then choose Next step.
  7. On the Deploy page, choose AWS OpsWorks as the deployment provider.

  8. Specify the names of the stack, layer, and app that you created earlier, then choose Next step.
  9. On the AWS Service Role page, choose Create Role. On the IAM console page that opens, you will see the role that will be created for you (AWS-CodePipeline-Service). From the Policy Name drop-down list, choose Create new policy. Be sure the policy document has the following content, and then choose Allow.
    For more information about the service role and its policy statement, see Attach or Edit a Policy for an IAM Service Role.

  10. On the Review your pipeline page, confirm the choices shown on the page, and then choose Create pipeline.

The pipeline should now start deploying your app to your OpsWorks layer on its own.  Wait for deployment to finish; you’ll know it’s finished when Succeeded is displayed in both the Source and Deploy stages.
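You can also keep an eye on the stages from the command line; the sketch below assumes you kept the MyOpsWorksPipeline name from earlier in this step.

aws codepipeline get-pipeline-state --region us-east-1 --name MyOpsWorksPipeline \
    --query 'stageStates[].[stageName,latestExecution.status]' --output table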

Step 8: Verify the app deployment

To verify that AWS CodePipeline deployed the Node.js app to your layer, sign in to the instance you created in step 6. You should be able to see and use the Node.js web app.

  1. On the AWS OpsWorks dashboard, choose the stack and the layer to which you just deployed your app.
  2. In the navigation pane, choose Instances, and then choose the public IP address of your instance to view the web app. The running app will be displayed in a new browser tab.

  3. To test the app, on the app’s web page, in the Leave a comment text box, type a comment, and then choose Send. The app adds your comment to the web page. You can add more comments to the page, if you like.
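If you would rather check from a terminal than a browser, here is a quick hedged sketch: look up the instance’s public IP and request the page, assuming the demo app serves plain HTTP on port 80 as the console view suggests.

STACK_ID="your-stack-id-here"   # placeholder
IP=$(aws opsworks describe-instances --region us-east-1 --stack-id "$STACK_ID" \
     --query 'Instances[0].PublicIp' --output text)
curl -sI "http://$IP/" | head -n 1   # expect an HTTP 200 from the Node.js demo app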

Wrap up

You now have a working and fully automated pipeline. As soon as you make changes to your application’s code and update the S3 bucket with the new version of your app, AWS CodePipeline automatically collects the artifact and uses AWS OpsWorks to deploy it to your instance, by running the OpsWorks deployment Chef recipe that you defined on your layer. The deployment recipe starts all of the operations on your instance that are required to support a new version of your artifact.

To learn more about Chef cookbooks and recipes: https://docs.chef.io/cookbooks.html

To learn more about the AWS OpsWorks and AWS CodePipeline integration: https://docs.aws.amazon.com/opsworks/latest/userguide/other-services-cp.html

Varda: The Mysterious Fiber Bomb Problem: A Debugging Story

Post Syndicated from jake original http://lwn.net/Articles/702321/rss

Over at the Sandstorm Blog, project founder Kenton Varda relates a debugging war story. Sandstorm web servers would mysteriously peg the CPU around once a week, slowing request processing to a crawl, seemingly at random.
Obviously, we needed to take a CPU profile while the bug was in progress. Of course, the bug only reproduced in production, therefore we’d have to take our profile in production. This ruled out any profiling technology that would harm performance at other times – so, no instrumented binaries. We’d need a sampling profiler that could run on an existing process on-demand. And it would have to understand both C++ and V8 Javascript. (This last requirement ruled out my personal favorite profiler, pprof from google-perftools.)

Luckily, it turns out there is a correct modern answer: Linux’s “perf” tool. This is a sampling profiler that relies on Linux kernel APIs, thus not requiring loading any code into the target binary at all, at least for C/C++. And for Javascript, it turns out V8 has built-in support for generating a “perf map”, which tells the tool how to map JITed code locations back to Javascript source: just pass the --perf_basic_prof_only_functions flag on the Node command-line. This flag is safe in production – it writes some data to disk over time, but we rebuild all our VMs weekly, so the files never get large enough to be a problem.
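For readers who want to try the same approach on their own Node.js service, the workflow described above looks roughly like this (the script name and sampling parameters are illustrative, not taken from the post):

# Start Node with the V8 flag so JITed JavaScript frames can be symbolized later.
node --perf_basic_prof_only_functions server.js &

# When the CPU spike hits, sample the live process with call graphs for 60 seconds.
perf record -F 99 -g -p "$(pgrep -f server.js)" -- sleep 60

# Inspect where the time went, C++ and JavaScript frames alike.
perf report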

Popular YouTuber Experiments With WebTorrent to Beat Censorship

Post Syndicated from Andy original https://torrentfreak.com/popular-youtuber-experiments-with-webtorrent-to-beat-censorship-160930/

When discussing the most influential websites on the planet, there can be little doubt that YouTube is a true giant. The video-hosting platform is the second most popular site on the Internet, behind only Google.com, the flagship of its owner, Google.

YouTube attracts well over a billion visitors every month, with many flocking to the platform to view the original content uploaded by its army of contributors. However, with great power comes great responsibility and for YouTube that means pleasing advertisers.

As a result, YouTube has rules in place over what kind of content can be monetized, something which caused a huge backlash recently.

In a nutshell, if you don’t produce content that is almost entirely “appropriate for all audiences,” (without references to drugs, violence, and sex, for example), your content is at risk of making no money. But YouTube goes further still, by flagging “controversial or sensitive subjects and events, including subjects related to war, political conflicts, natural disasters and tragedies.” Awkward.

Many YouTubers view this refusal to monetize content as a form of censorship but recognize that as long as they’re in bed with the company, they’re going to have to play by its rules. For some, this means assessing alternatives.

Popular YouTuber Connor Hill (Bluedrake42 – 186,600 subscribers) is no stranger to YouTube flagging his videos. As a result, he’s decided to take matters into his own hands by experimenting with WebTorrent.


As previously reported, WebTorrent brings torrents to the web. Instead of using standalone applications it allows people to share files directly from their browser, without having to configure or install anything.

Early on, WebTorrent creator Feross Aboukhadijeh identified “people-powered websites” as a revolutionary application for WebTorrent.

“Imagine a video site like YouTube, where visitors help to host the site’s content. The more people that use a WebTorrent-powered website, the faster and more resilient it becomes,” he told TF.

It is exactly this application for the technology that has excited Bluedrake. By taking his content, embedding it in his website, and using his own fans for distribution, Bluedrake says he can take back control.

“This solution does not require torrent clients, this solution does not require torrent files, this is a seamless video-player hosted solution, with a completely decentralized database, supported by the people watching the content itself,” Bluedrake says in a new video. “And it works…REALLY well.”

Of course, all torrents need seeds to ensure that older content is always available, so Bluedrake says that the servers already funded by his community will have backup copies of all videos ready to seed, whenever that’s necessary.

“That’s literally the best of both worlds. A CDN and a TVDN – a Torrent Video Distribution Network – at the same time. It will be community-funded and community supported…and then we’ll have truly censorship-free, entirely impervious video content, in a network. That gives me chills,” Bluedrake adds.

But while this solution offers the opportunity to avoid censorship, there is no intention to break the law. Bluedrake insists that the freedom of peer-to-peer will only be used for speech, not to infringe copyright.

“All I want is a site where people can say what they want. I want a site where people can operate their business without having somebody else step in and take away their content when they say something they don’t like. We’re going to host our own content distribution network within a peer-to-peer, web-socketed torrent service,” he says.

The development has excited WebTorrent creator Feross Aboukhadijeh.

“This is just one of the extremely creative uses for WebTorrent that I’ve heard about. I’m continually amazed at what WebTorrent users are building with the open source torrent engine,” Feross informs TF.

“When a video site uses WebTorrent, visitors help to host the site’s content. The more people that use a WebTorrent-powered website, the faster and more resilient it becomes. I think that’s pretty cool. It’s something that traditional CDNs cannot offer.

“The magic of WebTorrent is that people can use it however they like. It’s not just a desktop torrent app but it’s a JavaScript library that anyone can use anywhere on the web.”

Of course, one YouTuber using the technology is a modest start but the potential is there for this to get much bigger if Bluedrake can make a success of it.

“The way that we get P2P technology to go mainstream is simple: make it easy, make it better,” Feross says.

“This is part of a larger trend of decentralized protocols replacing centralized services, as we’ve seen with Bitcoin and blockchain apps.”


Prepare for re:Invent 2016 – Attend our Upcoming Webinars!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/prepare-for-reinvent-2016-attend-our-upcoming-webinars/

We are 60 days away from AWS re:Invent 2016! I’m already working on a big pile of blog posts and my colleagues are working night and day to make sure that you have plenty of opportunities to learn about our services. With two new venues, hands-on labs, certification exams, a full day of additional content, the re:Source Mini Con (full-day technical deep dives), and twice as many breakout sessions as last year, I am confident that you will have plenty to do during the day. After that, we have some great after-hours experiences on the agenda, including the famous wing eating contest, an expanded Pub Crawl, a Harley Ride, the re:Play party, and the re:Invent 5k (that’s kilometers, not kilobytes).

In order to make sure that you are fully prepared before you arrive in Las Vegas, we have put together three webinars. These are free and optional, but I’d recommend attending in order to make sure that you are positioned to make the most of your trip.

Here’s what we have:

How to Reserve Your Seats – We expect to have over 450 breakout sessions spread across 20 or so tracks. This year, based on your feedback, you will be able to reserve your seats ahead of time. That way you can make better plans and remove some uncertainty. In the webinar, you’ll learn how to reserve seats and build your calendar using our online tool. Here are the webinar sessions (all times are PT):

Deep Dive on AWS re:Invent 2016 Breakout Sessions – Each of the tracks that I mentioned above will contain sessions at the introductory, advanced, and expert levels. This webinar will help you to learn more about the tracks and the sessions so that you can start to prepare your schedule. Again, the sessions, with times in PT:

Know Before You Go – The webinar will provide you with the big picture: keynotes, breakout sessions, training and certification, re:Invent Central, networking opportunities, and our popular after-hours activities.


Jeff;

 

PS – As we get closer to the big event, I know that many past attendees and AWS Partners will be publishing how-to guides of their own. Time permitting (hah!) I’ll put together a round-up of these posts.

 

 

Friday’s security advisories

Post Syndicated from jake original http://lwn.net/Articles/702310/rss

Arch Linux has updated c-ares (code execution) and wordpress (multiple vulnerabilities).

CentOS has updated python-twisted-web (C7; C6: HTTP proxy redirect).

Debian has updated wordpress (multiple vulnerabilities).

Debian-LTS has updated chicken (two vulnerabilities), firefox-esr (regression in previous security update), icedove (multiple vulnerabilities), and ruby-activesupport-3.2 (access restriction bypass).

Fedora has updated curl (F23: code execution) and php-adodb (F24; F23: SQL injection).

openSUSE has updated libgcrypt (42.1: flawed random number generation), openjpeg (42.1: denial of service), and postgresql93 (13.2: two vulnerabilities).

Oracle has updated python-twisted-web (OL7; OL6: HTTP proxy redirect).

Red Hat has updated python-twisted-web (RHEL7&6: HTTP proxy redirect).

SUSE has updated pidgin (SLE11: multiple vulnerabilities) and postgresql94 (SLE11: two vulnerabilities).

J.J. Abrams Can’t Stop Copyright Lawsuit Against Star Trek Fan-Film

Post Syndicated from Ernesto original https://torrentfreak.com/j-j-abrams-cant-stop-copyright-lawsuit-star-trek-fan-film-160930/

Earlier this year Paramount Pictures and CBS Studios filed a lawsuit against the makers of a Star Trek inspired fan film, accusing them of copyright infringement.

The dispute centers around the well-received short film Star Trek: Prelude to Axanar and the planned follow-up feature film Axanar.

Among other things, the Star Trek rightsholders claim ownership over various Star Trek related settings, characters, species, clothing, colors, shapes, words, short phrases and even the Klingon language.

A few months after the complaint was filed it appeared that the movie studios and the Axanar team had found a way to resolve their issues. During a Star Trek fan event director J.J. Abrams announced that the case would be over soon, citing discussions with Star Trek Beyond director Justin Lin.

“We started talking about this realizing that this is not an appropriate way to deal with the fans. The fans should be celebrating this thing,” Abrams said. “So Justin went to the studio and pushed them to stop this lawsuit and now, within the next few weeks, it will be announced this is going away.”

However, as time passed it became apparent that the director had spoken too soon, or had perhaps made the entire claim off the cuff. The case didn’t “go away” at all, and this week it became clear that Paramount and CBS Studios see J.J. Abrams’ comments as irrelevant.

Both parties are currently in the discovery phase, where they hope to gather evidence from the other side to back up their claims. Axanar was particularly interested in obtaining any communications the studios had with Justin Lin and J.J. Abrams, which it believes would favor its claims.

However, through their lawyers, CBS and Paramount refused to hand anything over, arguing that this information is irrelevant, if it exists at all. “We objected to your requests for communications with Justin Lin and J.J. Abrams as irrelevant, and did not agree to produce those documents,” they wrote in an email earlier this month.


To resolve this and other outstanding discovery disputes, the parties now ask the court what information should be handed over, and what can remain confidential.

In the joint motion (pdf) CBS and Paramount reiterate that the comments J.J. Abrams made are “not relevant” to any party’s claim. The directors are not authorized to speak on behalf of the movie studios and their comments have no impact on the damages amount, they argue.

“J.J. Abrams is a producer/director of certain Star Trek Copyrighted Works and Justin Lin was the director of Star Trek Beyond. Neither Mr. Abrams nor Mr. Lin is an authorized representative of either of the Plaintiffs,” the studios claim.

“A third party’s statement about the merits of this lawsuit has absolutely no bearing on the amount of money Defendants’ obtained by their infringing conduct, nor does it bear on any other aspect of damages,” they add.

Axanar disagrees with this assessment. They claim that Abrams’ statements about dropping the “ridiculous” lawsuit in the interest of fans are central to a possible fair use claim and to damages.

“Statements that Star Trek belongs to all of us and that the lawsuit is ridiculous and was going to be ‘dropped’ is relevant to the impact on the market prong of the fair use analysis, and Plaintiffs utter lack of damages,” Axanar claims.

The court will now have to decide what information CBS and Paramount must share. It’s clear, however, that J.J. Abrams spoke way too soon and that the movie studios are not ready to drop their lawsuit without putting up a fight.

While Abrams may not have realized it at the time, his comments are a blessing for the fan-film. They offer Axanar great leverage in potential settlement discussions and will reflect badly on CBS and Paramount if the case heads to trial.


Open Sourcing a Deep Learning Solution for Detecting NSFW Images

Post Syndicated from davglass original https://yahooeng.tumblr.com/post/151148689421

By Jay Mahadeokar and Gerry Pesavento

Automatically identifying that an image is not suitable/safe for work (NSFW), including offensive and adult images, is an important problem which researchers have been trying to tackle for decades. Since images and user-generated content dominate the Internet today, filtering NSFW images becomes an essential component of Web and mobile applications. With the evolution of computer vision, improved training data, and deep learning algorithms, computers are now able to automatically classify NSFW image content with greater precision.

Defining NSFW material is subjective and the task of identifying these images is non-trivial. Moreover, what may be objectionable in one context can be suitable in another. For this reason, the model we describe below focuses only on one type of NSFW content: pornographic images. The identification of NSFW sketches, cartoons, text, images of graphic violence, or other types of unsuitable content is not addressed with this model.

To the best of our knowledge, there is no open source model or algorithm for identifying NSFW images. In the spirit of collaboration and with the hope of advancing this endeavor, we are releasing our deep learning model that will allow developers to experiment with a classifier for NSFW detection, and provide feedback to us on ways to improve the classifier.

Our general purpose Caffe deep neural network model (Github code) takes an image as input and outputs a probability (i.e., a score between 0 and 1) which can be used to detect and filter NSFW images. Developers can use this score to filter images below a certain suitable threshold based on an ROC curve for specific use-cases, or use this signal to rank images in search results.

Convolutional Neural Network (CNN) architectures and tradeoffs

In recent years, CNNs have become very successful in image classification problems [1] [5] [6]. Since 2012, new CNN architectures have continuously improved the accuracy of the standard ImageNet classification challenge. Some of the major breakthroughs include AlexNet (2012) [6], GoogLeNet [5], VGG (2014) [2] and Residual Networks (2015) [1]. These networks have different tradeoffs in terms of runtime, memory requirements, and accuracy. The main indicators for runtime and memory requirements are:

  1. Flops or connections – The number of connections in a neural network determines the number of compute operations during a forward pass, which is proportional to the runtime of the network while classifying an image.
  2. Parameters – The number of parameters in a neural network determines the amount of memory needed to load the network.

Ideally we want a network with minimum flops and minimum parameters, which would achieve maximum accuracy.

Training a deep neural network for NSFW classification

We train the models using a dataset of positive (i.e. NSFW) images and negative (i.e. SFW – suitable/safe for work) images. We are not releasing the training images or other details due to the nature of the data, but instead we open source the output model which can be used for classification by a developer.

We use the Caffe deep learning library and CaffeOnSpark; the latter is a powerful open source framework for distributed learning that brings Caffe deep learning to Hadoop and Spark clusters for training models (Big shout out to Yahoo’s CaffeOnSpark team!).

While training, the images were resized to 256×256 pixels, horizontally flipped for data augmentation, and randomly cropped to 224×224 pixels, and were then fed to the network. For training residual networks, we used scale augmentation as described in the ResNet paper [1], to avoid overfitting. We evaluated various architectures to experiment with tradeoffs of runtime vs accuracy.

  1. MS_CTC [4] – This architecture was proposed in Microsoft’s constrained time cost paper. It improves on AlexNet in terms of speed and accuracy while maintaining a combination of convolutional and fully-connected layers.
  2. Squeezenet [3] – This architecture introduces the fire module, which contains layers to squeeze and then expand the input data blob. This helps reduce the number of parameters while keeping ImageNet accuracy as good as AlexNet’s, with a memory requirement of only 6MB.
  3. VGG [2] – This architecture has 13 conv layers and 3 FC layers.
  4. GoogLeNet [5] – GoogLeNet introduces inception modules and has 20 convolutional layer stages. It also uses auxiliary loss functions in intermediate layers to tackle the problem of diminishing gradients in deep networks.
  5. ResNet-50 [1] – ResNets use shortcut connections to solve the problem of diminishing gradients. We used the 50-layer residual network released by the authors.
  6. ResNet-50-thin – The model was generated using our pynetbuilder tool and replicates the Residual Network paper’s 50-layer network (with half the number of filters in each layer). You can find more details on how the model was generated and trained here.

Tradeoffs of different architectures: accuracy vs number of flops vs number of params in network.

The deep models were first pre-trained on the ImageNet 1000-class dataset. For each network, we replace the last layer (FC1000) with a 2-node fully-connected layer. Then we fine-tune the weights on the NSFW dataset. Note that we keep the learning rate multiplier for the last FC layer at 5 times the multiplier of the other layers, which are being fine-tuned. We also tune the hyperparameters (step size, base learning rate) to optimize the performance.

We observe that the performance of the models on NSFW classification tasks is related to the performance of the pre-trained model on ImageNet classification tasks, so if we have a better pretrained model, it helps in fine-tuned classification tasks. The graph below shows the relative performance on our held-out NSFW evaluation set. Please note that the false positive rate (FPR) at a fixed false negative rate (FNR) shown in the graph is specific to our evaluation dataset, and is shown here for illustrative purposes. To use the models for NSFW filtering, we suggest that you plot the ROC curve using your dataset and pick a suitable threshold.

Comparison of performance of models on Imagenet and their counterparts fine-tuned on NSFW dataset.

We are releasing the thin ResNet 50 model, since it provides a good tradeoff in terms of accuracy, and the model is lightweight in terms of runtime (takes < 0.5 sec on CPU) and memory (~23 MB). Please refer to our git repository for instructions and usage of our model. We encourage developers to try the model for their NSFW filtering use cases. For any questions or feedback about the performance of the model, we encourage creating an issue and we will respond ASAP.

Results can be improved by fine-tuning the model for your dataset or use case. If you achieve improved performance or you have trained an NSFW model with a different architecture, we encourage contributing to the model or sharing the link on our description page.

Disclaimer: The definition of NSFW is subjective and contextual. This model is a general purpose reference model, which can be used for the preliminary filtering of pornographic images. We do not provide guarantees of accuracy of output, rather we make this available for developers to explore and enhance as an open source project.

We would like to thank Sachin Farfade, Amar Ramesh Kamat, Armin Kappeler, and Shraddha Advani for their contributions in this work.

References:

[1] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. “Deep residual learning for image recognition.” arXiv preprint arXiv:1512.03385 (2015).

[2] Simonyan, Karen, and Andrew Zisserman. “Very deep convolutional networks for large-scale image recognition.” arXiv preprint arXiv:1409.1556 (2014).

[3] Iandola, Forrest N., Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size.” arXiv preprint arXiv:1602.07360 (2016).

[4] He, Kaiming, and Jian Sun. “Convolutional neural networks at constrained time cost.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5353-5360. 2015.

[5] Szegedy, Christian, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. “Going deeper with convolutions.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9. 2015.

[6] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. “ImageNet classification with deep convolutional neural networks.” In Advances in Neural Information Processing Systems, pp. 1097-1105. 2012.

mimikittenz – Extract Plain-Text Passwords From Memory

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/93eG03hh3EE/

mimikittenz is a post-exploitation PowerShell tool that utilizes the Windows function ReadProcessMemory() in order to extract plain-text passwords from various target processes. The aim of mimikittenz is to provide user-level (non-admin privileged) sensitive data extraction in order to maximise post-exploitation efforts and increase value of…

Read the full post at darknet.org.uk

Personalized Group Recommendations Are Here | code.flickr.com

Post Syndicated from davglass original https://yahooeng.tumblr.com/post/151144204266

Personalized Group Recommendations Are Here | code.flickr.com:

There are two primary paradigms for the discovery of digital content. First is the search paradigm, in which the user is actively looking for specific content using search terms and filters (e.g., Google web search, Flickr image search, Yelp restaurant search, etc.). Second is a passive approach, in which the user browses content presented to them (e.g., NYTimes news, Flickr Explore, and Twitter trending topics). Personalization benefits both approaches by providing relevant content that is tailored to users’ tastes (e.g., Google News, Netflix homepage, LinkedIn job search, etc.). We believe personalization can improve the user experience at Flickr by guiding both new as well as more experienced members as they explore photography. Today, we’re excited to bring you personalized group recommendations.

Read more over at code.flickr.com

Doorjam – play your own theme music

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/doorjam-create-your-own-theme-music/

Have you ever dreamed about having your own theme music? That perfect song that reflects your mood as you enter a room, drawing the attention of others towards you?

I know I have. Though that might be due to my desire to live in a Disney movie, or maybe just because I spent three years studying drama and live in a constant state of theatrical bliss.

Whatever the reason, it’s fair to say that Doorjam is an awesome build.

Doorjam

Walk into your theme song. Powered by Spotify. http://doorjam.in

Using a WiFi dongle, repurposed as an iBeacon, the Doorjam mobile phone app allows you to select your theme song from Spotify and play it via a boombox when you are in range.

Stick-figure diagram showing the way Doorjam lets you choose your theme music and plays it when you're within range

The team at redpepper have made the build code available publicly, taking makers through a step-by-step tutorial on their website.

So while we work on our own Doorjam build, why don’t you tell us what your ultimate theme music would be?

And for inspiration, I’ll hand over to Joseph…

(500) Days of Summer – “You Make My Dreams Come True” by Hall & Oates [HD VIDEO CLIP]

I know this feeling very well.

 

The post Doorjam – play your own theme music appeared first on Raspberry Pi.

The Hacking of Yahoo

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/the_hacking_of_.html

Last week, Yahoo! announced that it was hacked pretty massively in 2014. Over half a billion usernames and passwords were affected, making this the largest data breach of all time.

Yahoo! claimed it was a government that did it:

A recent investigation by Yahoo! Inc. has confirmed that a copy of certain user account information was stolen from the company’s network in late 2014 by what it believes is a state-sponsored actor.

I did a bunch of press interviews after the hack, and repeatedly said that “state-sponsored actor” is often code for “please don’t blame us for our shoddy security because it was a really sophisticated attacker and we can’t be expected to defend ourselves against that.”

Well, it turns out that Yahoo! had shoddy security and it was a bunch of criminals that hacked them. The first story is from the New York Times, and outlines the many ways Yahoo! ignored security issues.

But when it came time to commit meaningful dollars to improve Yahoo’s security infrastructure, Ms. Mayer repeatedly clashed with Mr. Stamos, according to the current and former employees. She denied Yahoo’s security team financial resources and put off proactive security defenses, including intrusion-detection mechanisms for Yahoo’s production systems.

The second story is from the Wall Street Journal:

InfoArmor said the hackers, whom it calls “Group E,” have sold the entire Yahoo database at least three times, including one sale to a state-sponsored actor. But the hackers are engaged in a moneymaking enterprise and have “a significant criminal track record,” selling data to other criminals for spam or to affiliate marketers who aren’t acting on behalf of any government, said Andrew Komarov, chief intelligence officer with InfoArmor Inc.

That is not the profile of a state-sponsored hacker, Mr. Komarov said. “We don’t see any reason to say that it’s state sponsored,” he said. “Their clients are state sponsored, but not the actual hackers.”

Man Who Leaked The Revenant Online Fined $1.1m

Post Syndicated from Andy original https://torrentfreak.com/man-leaked-revenant-online-fined-1-1m-160930/

In December 2015, many so-called ‘screener’ copies of the latest movies leaked online. Among them was a near-perfect copy of Alejandro G. Iñárritu’s ‘The Revenant’.

Starring Leonardo DiCaprio and slated for a Christmas day release, in a matter of hours the tale of vengeance clocked up tens of thousands of illegal downloads.

With such a high-profile leak, it was inevitable that the authorities would attempt to track down the individual responsible. It didn’t take them long.

Following an FBI investigation, former studio worker William Kyle Morarity was discovered as the culprit. Known online by the username “clutchit,” the 31-year-old had uploaded The Revenant and The Peanuts Movie to private torrent tracker Pass The Popcorn.


Uploading a copyrighted work being prepared for commercial distribution is a felony that carries a maximum penalty of three years in prison, so his sentencing always had the potential to be punishing for the Lancaster man, despite his early guilty plea.

This week Morarity was sentenced in federal court for criminal copyright infringement after admitting uploading screener copies of both movies to the Internet.

The Revenant was posted online six days in advance of its theatrical release and was estimated to have been downloaded at least a million times during a six-week period, causing Twentieth Century Fox Film Corporation to suffer losses of “well over $1 million.”

United States District Court Judge Stephen V. Wilson ordered Morarity to pay $1.12 million in restitution to Twentieth Century Fox. He also sentenced the 31-year-old to eight months’ home detention and 24 months’ probation.

According to court documents, Morarity obtained the screeners and copied them to a portable hard drive. He then uploaded the movies to Pass The Popcorn on December 17 and December 19.

“The film industry creates thousands of jobs in Southern California,” said United States Attorney Eileen M. Decker commenting on the sentencing.

“The defendant’s illegal conduct caused significant harm to the victim movie studio. The fact that the defendant stole these films while working on the lot of a movie studio makes his crime more egregious.”

Deirdre Fike, the Assistant Director in Charge of the FBI’s Los Angeles Field Office, said that Morarity had abused his position of trust to obtain copies of the movies and then used them in a way that caused Fox to incur huge losses.

“The theft of intellectual property – in this case, major motion pictures – discourages creative incentive and affects the average American making ends meet in the entertainment industry,” Fike said.

As part of his punishment, Morarity also agreed to assist the FBI to produce a public service announcement aimed at educating the public about the harms of copyright infringement and the illegal uploading of movies to the Internet.


New P2 Instance Type for Amazon EC2 – Up to 16 GPUs

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-p2-instance-type-for-amazon-ec2-up-to-16-gpus/

I like to watch long-term technology and business trends and watch as they shape the products and services that I get to use and to write about. As I was preparing to write today’s post, three such trends came to mind:

  • Moore’s Law – Coined in 1965, Moore’s Law postulates that the number of transistors on a chip doubles every year.
  • Mass Market / Mass Production – Because all of the technologies that we produce, use, and enjoy every day consume vast numbers of chips, there’s a huge market for them.
  • Specialization  – Due to the previous trend, even niche markets can be large enough to be addressed by purpose-built products.

As the industry pushes forward in accord with these trends, a couple of interesting challenges have surfaced over the past decade or so. Again, here’s a quick list (yes, I do think in bullet points):

  • Speed of Light – Even as transistor density increases, the speed of light imposes scaling limits (as computer pioneer Grace Hopper liked to point out, electricity can travel slightly less than 1 foot in a nanosecond).
  • Semiconductor Physics – Fundamental limits in the switching time (on/off) of a transistor ultimately determine the minimum achievable cycle time for a CPU.
  • Memory Bottlenecks – The well-known von Neumann Bottleneck imposes limits on the value of additional CPU power.

The GPU (Graphics Processing Unit) was born of these trends, and addresses many of the challenges! Processors have reached the upper bound on clock rates, but Moore’s Law gives designers more and more transistors to work with. Those transistors can be used to add more cache and more memory to a traditional architecture, but the von Neumann Bottleneck limits the value of doing so. On the other hand, we now have large markets for specialized hardware (gaming comes to mind as one of the early drivers for GPU consumption). Putting all of this together, the GPU scales out (more processors and parallel banks of memory) instead of up (faster processors and bottlenecked memory). Net-net: the GPU is an effective way to use lots of transistors to provide massive amounts of compute power!

With all of this as background, I would like to tell you about the newest EC2 instance type, the P2. These instances were designed to chew through tough, large-scale machine learning, deep learning, computational fluid dynamics (CFD), seismic analysis, molecular modeling, genomics, and computational finance workloads.

New P2 Instance Type
This new instance type incorporates up to 8 NVIDIA Tesla K80 Accelerators, each running a pair of NVIDIA GK210 GPUs. Each GPU provides 12 GB of memory (accessible via 240 GB/second of memory bandwidth), and 2,496 parallel processing cores. They also include ECC memory protection, allowing them to fix single-bit errors and to detect double-bit errors. The combination of ECC memory protection and double precision floating point operations makes these instances a great fit for all of the workloads that I mentioned above.

Here are the instance specs:

Instance Name  GPU Count  vCPU Count  Memory   Parallel Processing Cores  GPU Memory  Network Performance
p2.xlarge      1          4           61 GiB   2,496                      12 GB       High
p2.8xlarge     8          32          488 GiB  19,968                     96 GB       10 Gigabit
p2.16xlarge    16         64          732 GiB  39,936                     192 GB      20 Gigabit

All of the instances are powered by an AWS-Specific version of Intel’s Broadwell processor, running at 2.7 GHz. The p2.16xlarge gives you control over C-states and P-states, and can turbo boost up to 3.0 GHz when running on 1 or 2 cores.

The GPUs support CUDA 7.5 and above, OpenCL 1.2, and the GPU Compute APIs. The GPUs on the p2.8xlarge and the p2.16xlarge are connected via a common PCI fabric. This allows for low-latency, peer to peer GPU to GPU transfers.

All of the instances make use of our new Enhanced Network Adapter (ENA – read Elastic Network Adapter – High Performance Network Interface for Amazon EC2 to learn more) and can, per the table above, support up to 20 Gbps of low-latency networking when used within a Placement Group.

Having a powerful multi-vCPU processor and multiple, well-connected GPUs on a single instance, along with low-latency access to other instances with the same features creates a very impressive hierarchy for scale-out processing:

  • One vCPU
  • Multiple vCPUs
  • One GPU
  • Multiple GPUs in an instance
  • Multiple GPUs in multiple instances within a Placement Group

P2 instances are VPC only, require the use of 64-bit, HVM-style, EBS-backed AMIs, and you can launch them today in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) regions as On-Demand Instances, Spot Instances, Reserved Instances, or Dedicated Hosts.
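As a hedged sketch of what a placement-group launch might look like from the CLI (the AMI, key pair, and subnet IDs below are placeholders, not recommendations):

# Cluster placement group for low-latency, 20 Gbps networking between P2 instances.
aws ec2 create-placement-group --region us-east-1 \
    --group-name p2-cluster --strategy cluster

aws ec2 run-instances --region us-east-1 \
    --image-id ami-xxxxxxxx \
    --instance-type p2.16xlarge \
    --count 2 \
    --key-name my-key \
    --subnet-id subnet-xxxxxxxx \
    --placement GroupName=p2-cluster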

Here’s how I installed the NVIDIA drivers and the CUDA toolkit on my P2 instance, after first creating, formatting, attaching, and mounting (to /ebs) an EBS volume that had enough room for the CUDA toolkit and the associated samples (10 GiB is more than enough):

$ cd /ebs
$ sudo yum update -y
$ sudo yum groupinstall -y "Development tools"
$ sudo yum install -y kernel-devel-`uname -r`
$ wget http://us.download.nvidia.com/XFree86/Linux-x86_64/352.99/NVIDIA-Linux-x86_64-352.99.run
$ wget http://developer.download.nvidia.com/compute/cuda/7.5/Prod/local_installers/cuda_7.5.18_linux.run
$ chmod +x NVIDIA-Linux-x86_64-352.99.run
$ sudo ./NVIDIA-Linux-x86_64-352.99.run
$ chmod +x cuda_7.5.18_linux.run
$ sudo ./cuda_7.5.18_linux.run   # Don't install driver, just install CUDA and sample
$ sudo nvidia-smi -pm 1
$ sudo nvidia-smi -acp 0
$ sudo nvidia-smi --auto-boost-permission=0
$ sudo nvidia-smi -ac 2505,875

Note that NVIDIA-Linux-x86_64-352.99.run and cuda_7.5.18_linux.run are interactive programs; you need to accept the license agreements, choose some options, and enter some paths. Here’s how I set up the CUDA toolkit and the samples when I ran cuda_7.5.18_linux.run:

P2 and OpenCL in Action
With everything set up, I took this Gist and compiled it on a p2.8xlarge instance:

[ec2-user@ip-10-0-0-242 ~]$ gcc test.c -I /usr/local/cuda/include/ -L /usr/local/cuda-7.5/lib64/ -lOpenCL -o test

Here’s what it reported:

[ec2-user@ip-10-0-0-242 ~]$ ./test
1. Device: Tesla K80
 1.1 Hardware version: OpenCL 1.2 CUDA
 1.2 Software version: 352.99
 1.3 OpenCL C version: OpenCL C 1.2
 1.4 Parallel compute units: 13
2. Device: Tesla K80
 2.1 Hardware version: OpenCL 1.2 CUDA
 2.2 Software version: 352.99
 2.3 OpenCL C version: OpenCL C 1.2
 2.4 Parallel compute units: 13
3. Device: Tesla K80
 3.1 Hardware version: OpenCL 1.2 CUDA
 3.2 Software version: 352.99
 3.3 OpenCL C version: OpenCL C 1.2
 3.4 Parallel compute units: 13
4. Device: Tesla K80
 4.1 Hardware version: OpenCL 1.2 CUDA
 4.2 Software version: 352.99
 4.3 OpenCL C version: OpenCL C 1.2
 4.4 Parallel compute units: 13
5. Device: Tesla K80
 5.1 Hardware version: OpenCL 1.2 CUDA
 5.2 Software version: 352.99
 5.3 OpenCL C version: OpenCL C 1.2
 5.4 Parallel compute units: 13
6. Device: Tesla K80
 6.1 Hardware version: OpenCL 1.2 CUDA
 6.2 Software version: 352.99
 6.3 OpenCL C version: OpenCL C 1.2
 6.4 Parallel compute units: 13
7. Device: Tesla K80
 7.1 Hardware version: OpenCL 1.2 CUDA
 7.2 Software version: 352.99
 7.3 OpenCL C version: OpenCL C 1.2
 7.4 Parallel compute units: 13
8. Device: Tesla K80
 8.1 Hardware version: OpenCL 1.2 CUDA
 8.2 Software version: 352.99
 8.3 OpenCL C version: OpenCL C 1.2
 8.4 Parallel compute units: 13

As you can see, I have a ridiculous amount of compute power available at my fingertips!

New Deep Learning AMI
As I said at the beginning, these instances are a great fit for machine learning, deep learning, computational fluid dynamics (CFD), seismic analysis, molecular modeling, genomics, and computational finance workloads.

In order to help you to make great use of one or more P2 instances, we are launching a Deep Learning AMI today. Deep learning has the potential to generate predictions (also known as scores or inferences) that are more reliable than those produced by less sophisticated machine learning, at the cost of a more complex and more computationally intensive training process. Fortunately, the newest generations of deep learning tools are able to distribute the training work across multiple GPUs on a single instance as well as across multiple instances, each containing multiple GPUs.

The new AMI contains the following frameworks, each installed, configured, and tested against the popular MNIST database:

MXNet – This is a flexible, portable, and efficient library for deep learning. It supports declarative and imperative programming models across a wide variety of programming languages including C++, Python, R, Scala, Julia, Matlab, and JavaScript.

Caffe – This deep learning framework was designed with  expression, speed, and modularity in mind. It was developed at the Berkeley Vision and Learning Center (BVLC) with assistance from many community contributors.

Theano – This Python library allows you to define, optimize, and evaluate mathematical expressions that involve multi-dimensional arrays.

TensorFlow – This is an open source library for numerical calculation using data flow graphs (each node in the graph represents a mathematical operation; each edge represents multidimensional data communicated between them).

Torch – This is a GPU-oriented scientific computing framework with support for machine learning algorithms, all accessible via LuaJIT.

Consult the README file in ~ec2-user/src to learn more about these frameworks.

AMIs from NVIDIA
You may also find the following AMIs to be of interest:


Jeff;

EC2 Reserved Instance Update – Convertible RIs and Regional Benefit

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-reserved-instance-update-convertible-ris-and-regional-benefit/

We launched EC2 Reserved Instances almost eight years ago. The model that we originated in 2009 provides you with two separate benefits: capacity reservations and a significant discount on the use of specific instances in an Availability Zone. Over time, based on customer feedback, we have refined the model and made additional options available including Scheduled Reserved Instances, the ability to Modify Reserved Instances Reservations, and the ability to buy and sell Reserved Instances (RIs) on the Reserved Instance Marketplace.

Today we are enhancing the Reserved Instance model once again. Here’s what we are launching:

Regional Benefit – Many customers have told us that the discount is more important than the capacity reservation, and that they would be willing to trade it for increased flexibility. Starting today, you can choose to waive the capacity reservation associated with a Standard RI, run your instance in any AZ in the Region, and have your RI discount automatically applied.

Convertible Reserved Instances – Convertible RIs give you even more flexibility and offer a significant discount (typically 45% compared to On-Demand). They allow you to change the instance family and other parameters associated with a Reserved Instance at any time. For example, you can convert C3 RIs to C4 RIs to take advantage of a newer instance type, or convert C4 RIs to M4 RIs if your application turns out to need more memory. You can also use Convertible RIs to take advantage of EC2 price reductions over time.

Let’s take a closer look…

Regional Benefit
Reserved Instances (either Standard or Convertible) can now be set to automatically apply across all Availability Zones in a region. The regional benefit automatically applies your RIs to instances across all Availability Zones in a region, broadening the application of your RI discounts. When this benefit is used, capacity is not reserved since the selection of an Availability Zone is required to provide a capacity reservation. In dynamic environments where you frequently launch, use, and then terminate instances this new benefit will expand your options and reduce the amount of time you spend seeking optimal alignment between your RIs and your instances. In horizontally scaled architectures using instances launched via Auto Scaling and connected via Elastic Load Balancing, this new benefit can be of considerable value.

After you click on Purchase Reserved Instances in the AWS Management Console, clicking on Search will display RIs that have this new benefit:

You can check Only show offerings that reserve capacity if you want to shop for RIs that apply to a single Availability Zone and also reserve capacity:

Convertible RIs
Perhaps you, like many of our customers, purchase RIs to get the best pricing for your workloads. However, if you don’t have a good understanding of your long-term requirements, you may be able to make use of our new Convertible RI. If your needs change, you simply exchange your Convertible Reserved Instances for other ones. You can change into Convertible RIs that have a new instance type, operating system, or tenancy without resetting the term. Also, there’s no fee for making an exchange and you can do so as often as you like.

When you make the exchange, you must acquire new RIs that are of equal or greater value than those you started with; in some cases you’ll need to make a true-up payment in order to balance the books. The exchange process is based on the list value of each Convertible RI; this value is simply the sum of all payments you’ll make over the remaining term of the original RI.

You can shop for a Convertible RI by setting the Offering Class to Convertible before clicking on Search:

The Convertible RIs offer capacity assurance, are typically priced at a 45% discount when compared to On-Demand, and are available for all current EC2 instance types on a three year term. All three payment options (No Upfront, Partial Upfront, and All Upfront) are available.
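If you do your shopping from the CLI rather than the console, a hedged sketch of the equivalent query looks like this (the instance type and platform are examples):

aws ec2 describe-reserved-instances-offerings --region us-east-1 \
    --offering-class convertible \
    --instance-type c4.xlarge \
    --product-description "Linux/UNIX" \
    --query 'ReservedInstancesOfferings[].[ReservedInstancesOfferingId,OfferingType,Duration,FixedPrice]' \
    --output table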

Available Now
All of the purchasing and exchange options that I described above can be accessed from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or the Reserved Instance APIs (DescribeReservedInstances, PurchaseReservedInstances, ModifyReservedInstances, and so forth).

Convertible RIs and the regional benefit are available in all public AWS Regions, excluding AWS GovCloud (US) and China (Beijing), which are coming soon.


Jeff;

 

Security updates for Thursday

Post Syndicated from jake original http://lwn.net/Articles/702214/rss

CentOS has updated bind (C7; C6; C5: denial of service), bind97 (C5: denial of service), kvm (C5: two vulnerabilities), and openssl (C7; C6: multiple vulnerabilities).

Fedora has updated vfrnav (F24: unspecified).

Oracle has updated bind (OL7; OL6; OL5: denial of service) and bind97 (OL5: denial of service).

Scientific Linux has updated bind (denial of service), bind97 (SL5: denial of service), kvm (SL5: two vulnerabilities), and openssl (SL7&6: multiple vulnerabilities).

SUSE has updated postgresql93 (SLE12: two vulnerabilities) and postgresql94 (SLE12: two vulnerabilities).

Ubuntu has updated clamav (16.04, 14.04, 12.04: three code execution flaws), samba (16.04, 14.04: crypto downgrade), and systemd (16.04: denial of service).
