Tag Archives: research

Source Code for IoT Botnet ‘Mirai’ Released

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/10/source-code-for-iot-botnet-mirai-released/

The source code that powers the “Internet of Things” (IoT) botnet responsible for launching the historically large distributed denial-of-service (DDoS) attack against KrebsOnSecurity last month has been publicly released, virtually guaranteeing that the Internet will soon be flooded with attacks from many new botnets powered by insecure routers, IP cameras, digital video recorders and other easily hackable devices.

The leak of the source code was announced Friday on the English-language hacking community Hackforums. The malware, dubbed “Mirai,” spreads to vulnerable devices by continuously scanning the Internet for IoT systems protected by factory default or hard-coded usernames and passwords.

The Hackforums post that includes links to the Mirai source code.

Vulnerable devices are then seeded with malicious software that turns them into “bots,” forcing them to report to a central control server that can be used as a staging ground for launching powerful DDoS attacks designed to knock Web sites offline.
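
Mirai’s propagation step is conceptually simple, which is part of what makes it so effective. Below is a minimal defensive sketch of the kind of default-credential audit a device owner could run against hardware they control; the host and credential pairs are illustrative placeholders (not Mirai’s actual dictionary), and the prompt parsing is a rough heuristic.

```python
# Minimal sketch: audit a device YOU OWN for factory-default Telnet
# credentials, the weakness Mirai's scanner exploits at Internet scale.
# The host and credential pairs are placeholders, not Mirai's dictionary.
# telnetlib ships with Python 3.12 and earlier.
import telnetlib

DEFAULT_CREDS = [("admin", "admin"), ("root", "root"), ("root", "12345")]

def check_default_creds(host, port=23, timeout=5):
    for user, password in DEFAULT_CREDS:
        try:
            tn = telnetlib.Telnet(host, port, timeout)
            tn.read_until(b"login:", timeout)
            tn.write(user.encode() + b"\r\n")
            tn.read_until(b"Password:", timeout)
            tn.write(password.encode() + b"\r\n")
            reply = tn.read_some()
            tn.close()
            # Heuristic: another login prompt means the pair was rejected.
            if b"login" not in reply.lower():
                return (user, password)
        except OSError:
            continue
    return None

if __name__ == "__main__":
    hit = check_default_creds("192.0.2.10")  # an IP address you control
    print("accepted default credentials:", hit)
```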

The Hackforums user who released the code, using the nickname “Anna-senpai,” told forum members the source code was being released in response to increased scrutiny from the security industry.

“When I first go in DDoS industry, I wasn’t planning on staying in it long,” Anna-senpai wrote. “I made my money, there’s lots of eyes looking at IOT now, so it’s time to GTFO [link added]. So today, I have an amazing release for you. With Mirai, I usually pull max 380k bots from telnet alone. However, after the Kreb [sic] DDoS, ISPs been slowly shutting down and cleaning up their act. Today, max pull is about 300k bots, and dropping.”

Sources tell KrebsOnSecurity that Mirai is one of at least two malware families that are currently being used to quickly assemble very large IoT-based DDoS armies. The other dominant strain of IoT malware, dubbed “Bashlight,” functions similarly to Mirai in that it also infects systems via default usernames and passwords on IoT devices.

According to research from security firm Level3 Communications, the Bashlight botnet currently is responsible for enslaving nearly a million IoT devices and is in direct competition with botnets based on Mirai.

“Both [are] going after the same IoT device exposure and, in a lot of cases, the same devices,” said Dale Drew, Level3’s chief security officer.

Infected systems can be cleaned up by simply rebooting them — thus wiping the malicious code from memory. But experts say the scanning for vulnerable systems is so constant that IoT devices can be re-infected within minutes of a reboot. Only changing the default password protects them from being rapidly re-infected.

In the days since the record 620 Gbps DDoS on KrebsOnSecurity.com, this author has been able to confirm that the attack was launched by a Mirai botnet. As I wrote last month, preliminary analysis of the attack traffic suggested that perhaps the biggest chunk of the attack came in the form of traffic designed to look like it was generic routing encapsulation (GRE) data packets, a communication protocol used to establish a direct, point-to-point connection between network nodes. GRE lets two peers share data they wouldn’t be able to share over the public network itself.

One security expert who asked to remain anonymous said he examined the Mirai source code following its publication online and confirmed that it includes a section responsible for coordinating GRE attacks.

It’s an open question why anna-senpai released the source code for Mirai, but it’s unlikely to have been an altruistic gesture: Miscreants who develop malicious software often dump their source code publicly when law enforcement investigators and security firms start sniffing around a little too close to home. Publishing the code online for all to see and download ensures that the code’s original authors aren’t the only ones found possessing it if and when the authorities come knocking with search warrants.

My guess is that (if it’s not already happening) there will soon be many Internet users complaining to their ISPs about slow Internet speeds as a result of hacked IoT devices on their network hogging all the bandwidth. On the bright side, if that happens it may help to lessen the number of vulnerable systems.

On the not-so-cheerful side, there are plenty of new, default-insecure IoT devices being plugged into the Internet each day. Gartner Inc. forecasts that 6.4 billion connected things will be in use worldwide in 2016, up 30 percent from 2015, and will reach 20.8 billion by 2020. In 2016, 5.5 million new things will get connected each day, Gartner estimates.

For more on what we can and must do about the dawning IoT nightmare, see the second half of this week’s story, The Democratization of Censorship. In the meantime, this post from Sucuri Inc. points to some of the hardware makers whose default-insecure products are powering this IoT mess.

Open Sourcing a Deep Learning Solution for Detecting NSFW Images

Post Syndicated from davglass original https://yahooeng.tumblr.com/post/151148689421

By Jay Mahadeokar and Gerry Pesavento

Automatically identifying that an image is not suitable/safe for work (NSFW), including offensive and adult images, is an important problem which researchers have been trying to tackle for decades. Since images and user-generated content dominate the Internet today, filtering NSFW images becomes an essential component of Web and mobile applications. With the evolution of computer vision, improved training data, and deep learning algorithms, computers are now able to automatically classify NSFW image content with greater precision.

Defining NSFW material is subjective and the task of identifying these images is non-trivial. Moreover, what may be objectionable in one context can be suitable in another. For this reason, the model we describe below focuses only on one type of NSFW content: pornographic images. The identification of NSFW sketches, cartoons, text, images of graphic violence, or other types of unsuitable content is not addressed with this model.

To the best of our knowledge, there is no open source model or algorithm for identifying NSFW images. In the spirit of collaboration and with the hope of advancing this endeavor, we are releasing our deep learning model that will allow developers to experiment with a classifier for NSFW detection, and provide feedback to us on ways to improve the classifier.

Our general purpose Caffe deep neural network model (Github code) takes an image as input and outputs a probability (i.e., a score between 0 and 1) which can be used to detect and filter NSFW images. Developers can use this score to filter images below a certain suitable threshold based on an ROC curve for specific use-cases, or use this signal to rank images in search results.
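
As a rough illustration, here is a sketch of how the released model might be scored from Python and the score thresholded. The file names, the ‘prob’ output blob, and the preprocessing constants are assumptions based on common Caffe conventions; verify them against the repository.

```python
# Sketch: score one image with the open_nsfw Caffe model and apply a
# filtering threshold. File names, the 'prob' blob name and the mean
# values are assumptions based on common Caffe conventions -- verify
# them against the repository.
import numpy as np
import caffe

net = caffe.Net('deploy.prototxt', 'resnet_50_1by2_nsfw.caffemodel', caffe.TEST)

# Standard Caffe preprocessing: float HWC image -> mean-subtracted CHW BGR.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))             # HWC -> CHW
transformer.set_mean('data', np.array([104, 117, 123]))  # BGR ImageNet mean
transformer.set_raw_scale('data', 255)                   # [0,1] -> [0,255]
transformer.set_channel_swap('data', (2, 1, 0))          # RGB -> BGR

image = caffe.io.load_image('photo.jpg')
net.blobs['data'].data[...] = transformer.preprocess('data', image)
nsfw_score = net.forward()['prob'][0][1]  # probability of the NSFW class

THRESHOLD = 0.8  # choose per use case from an ROC curve on your own data
print('score=%.3f -> %s' % (nsfw_score, 'filter' if nsfw_score > THRESHOLD else 'allow'))
```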

Convolutional Neural Network (CNN) architectures and tradeoffs

In recent years, CNNs have become very successful in image classification problems [1] [5] [6]. Since 2012, new CNN architectures have continuously improved the accuracy of the standard ImageNet classification challenge. Some of the major breakthroughs include AlexNet (2012) [6], GoogLeNet (2014) [5], VGG (2014) [2] and Residual Networks (2015) [1]. These networks have different tradeoffs in terms of runtime, memory requirements, and accuracy. The main indicators for runtime and memory requirements are:

  1. Flops or connections – The number of connections in a neural network determines the number of compute operations during a forward pass, which is proportional to the runtime of the network while classifying an image.
  2. Parameters – The number of parameters in a neural network determines the amount of memory needed to load the network.

Ideally we want a network with minimum flops and minimum parameters, which would achieve maximum accuracy.
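
A quick back-of-the-envelope calculation makes these two indicators concrete for a single convolutional layer; the formulas are standard, and the example layer is hypothetical:

```python
# Back-of-the-envelope cost of a single conv layer: parameter count drives
# the memory needed to load the network; multiply-accumulates (~flops)
# scale with the output spatial size and dominate forward-pass runtime.
def conv_layer_cost(c_in, c_out, k, h_out, w_out):
    params = (k * k * c_in + 1) * c_out          # weights + biases
    macs = k * k * c_in * c_out * h_out * w_out  # multiply-accumulates
    return params, macs

# Hypothetical example: a 3x3, 3->64 channel layer with 224x224 output.
params, macs = conv_layer_cost(c_in=3, c_out=64, k=3, h_out=224, w_out=224)
print('%.1fK params, %.1fM MACs' % (params / 1e3, macs / 1e6))
```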

Training a deep neural network for NSFW classification

We train the models using a dataset of positive (i.e. NSFW) images and negative (i.e. SFW – suitable/safe for work) images. We are not releasing the training images or other details due to the nature of the data, but instead we are open-sourcing the output model, which can be used for classification by a developer.

We use the Caffe deep learning library and CaffeOnSpark; the latter is a powerful open source framework for distributed learning that brings Caffe deep learning to Hadoop and Spark clusters for training models (Big shout out to Yahoo’s CaffeOnSpark team!).

While training, the images were resized to 256×256 pixels, horizontally flipped for data augmentation, and randomly cropped to 224×224 pixels, and were then fed to the network (a sketch of this pipeline follows the architecture list below). For training residual networks, we used scale augmentation as described in the ResNet paper [1] to avoid overfitting. We evaluated various architectures to experiment with the tradeoffs of runtime vs. accuracy.

  1. MS_CTC [4] – This architecture was proposed in Microsoft’s constrained time cost paper. It improves on AlexNet in terms of speed and accuracy while maintaining a combination of convolutional and fully-connected layers.
  2. Squeezenet [3] – This architecture introduces the fire module, which contains layers that squeeze and then expand the input data blob. This reduces the number of parameters while keeping ImageNet accuracy as good as AlexNet, with a memory requirement of only 6 MB.
  3. VGG [2] – This architecture has 13 conv layers and 3 FC layers.
  4. GoogLeNet [5] – GoogLeNet introduces inception modules and has 20 convolutional layer stages. It also uses auxiliary loss functions attached to intermediate layers to tackle the problem of diminishing gradients in deep networks.
  5. ResNet-50 [1] – ResNets use shortcut connections to solve the problem of diminishing gradients. We used the 50-layer residual network released by the authors.
  6. ResNet-50-thin – The model was generated using our pynetbuilder tool and replicates the Residual Network paper’s 50-layer network (with half the number of filters in each layer). You can find more details on how the model was generated and trained here.
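
As referenced above, here is a rough sketch of the training-time augmentation pipeline (resize to 256×256, random 224×224 crop, random horizontal flip) using Pillow and NumPy; the interpolation mode and exact crop policy are assumptions, not the authors’ settings.

```python
# Rough sketch of the augmentation described above: resize to 256x256,
# random 224x224 crop, random horizontal flip. Interpolation mode and
# crop policy are assumptions.
import random
import numpy as np
from PIL import Image

def augment(path, resize=256, crop=224):
    img = Image.open(path).convert('RGB').resize((resize, resize), Image.BILINEAR)
    x = random.randint(0, resize - crop)
    y = random.randint(0, resize - crop)
    img = img.crop((x, y, x + crop, y + crop))
    if random.random() < 0.5:
        img = img.transpose(Image.FLIP_LEFT_RIGHT)  # horizontal flip
    return np.asarray(img, dtype=np.float32)        # HWC, pre mean-subtraction

batch = np.stack([augment('photo.jpg') for _ in range(4)])
print(batch.shape)  # (4, 224, 224, 3)
```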

Tradeoffs of different architectures: accuracy vs number of flops vs number of params in network.

The deep models were first pre-trained on the ImageNet 1000-class dataset. For each network, we replace the last layer (FC1000) with a 2-node fully-connected layer. Then we fine-tune the weights on the NSFW dataset. Note that we keep the learning rate multiplier for the last FC layer at 5 times the multiplier of the other layers, which are being fine-tuned. We also tune the hyperparameters (step size, base learning rate) to optimize the performance.
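
A hedged sketch of what this fine-tuning step looks like in Caffe’s Python interface follows; the file names are placeholders, and the renamed 2-node layer and its 5x lr_mult would be declared in the training prototxt.

```python
# Sketch of fine-tuning in Caffe. copy_from() matches layers by name, so
# giving the new 2-node FC layer a fresh name (with its lr_mult set 5x
# higher in the prototxt) leaves it randomly initialized while all other
# layers start from ImageNet-pretrained weights. File names are placeholders.
import caffe

caffe.set_mode_gpu()
solver = caffe.SGDSolver('solver_nsfw.prototxt')        # base lr, step size, etc.
solver.net.copy_from('imagenet_pretrained.caffemodel')  # by-name weight copy
solver.solve()                                          # fine-tune on NSFW data
```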

We observe that the performance of the models on NSFW classification tasks is related to the performance of the pre-trained model on ImageNet classification tasks, so a better pretrained model yields better fine-tuned classification results. The graph below shows the relative performance on our held-out NSFW evaluation set. Please note that the false positive rate (FPR) at a fixed false negative rate (FNR) shown in the graph is specific to our evaluation dataset, and is shown here for illustrative purposes. To use the models for NSFW filtering, we suggest that you plot the ROC curve using your dataset and pick a suitable threshold.

Comparison of performance of models on Imagenet and their counterparts fine-tuned on NSFW dataset.
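
Picking that threshold can be as simple as sweeping the ROC curve on your own labeled data. A small sketch with scikit-learn (the labels and scores are toy values):

```python
# Sweep the ROC curve on your own labeled data and pick the threshold
# that maximizes recall within a false-positive budget. Toy values here;
# y_true: 1 = NSFW, 0 = SFW; y_score: the model's output probability.
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.90])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
budget = fpr <= 0.05           # tolerate at most 5% false positives
best = np.argmax(tpr[budget])  # highest recall within the budget
print('threshold=%.2f fpr=%.2f tpr=%.2f'
      % (thresholds[budget][best], fpr[budget][best], tpr[budget][best]))
```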

We are releasing the thin ResNet 50 model, since it provides a good tradeoff in terms of accuracy, and the model is lightweight in terms of runtime (takes < 0.5 sec on CPU) and memory (~23 MB). Please refer to our GitHub repository for instructions and usage of our model. We encourage developers to try the model for their NSFW filtering use cases. For any questions or feedback about the performance of the model, we encourage creating an issue and we will respond ASAP.

Results can be improved by fine-tuning the model for your dataset or use case. If you achieve improved performance or you have trained an NSFW model with a different architecture, we encourage you to contribute to the model or share the link on our description page.

Disclaimer: The definition of NSFW is subjective and contextual. This model is a general purpose reference model, which can be used for the preliminary filtering of pornographic images. We do not provide guarantees of accuracy of output, rather we make this available for developers to explore and enhance as an open source project.

We would like to thank Sachin Farfade, Amar Ramesh Kamat, Armin Kappeler, and Shraddha Advani for their contributions in this work.

References:

[1] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. “Deep residual learning for image recognition.” arXiv preprint arXiv:1512.03385 (2015).

[2] Simonyan, Karen, and Andrew Zisserman. “Very deep convolutional networks for large-scale image recognition.” arXiv preprint arXiv:1409.1556 (2014).

[3] Iandola, Forrest N., Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size.” arXiv preprint arXiv:1602.07360 (2016).

[4] He, Kaiming, and Jian Sun. “Convolutional neural networks at constrained time cost.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5353-5360. 2015.

[5] Szegedy, Christian, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. “Going deeper with convolutions.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9. 2015.

[6] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. “ImageNet classification with deep convolutional neural networks.” In Advances in Neural Information Processing Systems, pp. 1097-1105. 2012.

Research: Movie Piracy Hurts Sales, But Not Always

Post Syndicated from Ernesto original https://torrentfreak.com/research-movie-piracy-hurts-sales-but-not-always-160929/

Research into online piracy comes in all shapes and sizes, often with equally mixed results. The main question is usually whether piracy hurts sales.

New research conducted by economists from the European Commission’s Directorate-General for Internal Market, Industry, Entrepreneurship and SMEs tries to find answers for the movie industry.

For a new paper titled “Movie Piracy and Displaced Sales in Europe,” the researchers conducted a large-scale survey among 30,000 respondents from six countries, documenting their movie consumption patterns.

Using statistical models and longitudinal data, they were able to estimate how piracy affects legal sales and if this differs from country to country.

Perhaps unsurprisingly, the findings show that not every pirated movie is a lost sale. Instead, for every hundred films that are first viewed from a pirated source, 37 paid viewings are ‘lost’.

This works out to a displacement rate of 0.37, which is of course still a high number, also compared to previous research.

It’s worth noting that in some cases piracy actually has a beneficial effect. This is true for movies that people have seen more than twice.

“Interestingly, we found evidence of a sampling effect: for movies that are seen more than twice, first unpaid consumption slightly increases paid second consumption,” the researchers write.

However, the sampling effect doesn’t outweigh the loss in sales. Overall the researchers estimate that online piracy leads to a significant loss in revenue for the movie industry.

“Using a back-of-the-envelope calculation, we show that this implies that unpaid movie viewings reduced movie sales in Europe by about 4.4% during the sample period,” they write.

This negative effect is driven by a relatively small group of consumers. Roughly 20% of the respondents with the highest movie consumption are responsible for 94% of lost movie sales. Or put differently, the most avid film fans pirate the most.

Interestingly, there are large between-country differences too. In Germany, online movie piracy results in ‘only’ a 1.65% loss, while for Spain the figure is 10.41%. The UK (2.89%), France (5.73%), Poland (7.21%) and Sweden (7.65%) rank somewhere in between.

According to the researchers, their findings can help policymakers decide on the most effective anti-piracy enforcement strategies. In addition, the differences between countries could help to evaluate existing and future measures and inspire future research.

“The estimates that we provide can help policy makers to asses the efficient use of public resources to be spent on copyright enforcement of movies.”

“In particular, since we find that virtually all the lost sales of movies are due to a very small group of individuals, most damages of movie piracy could therefore potentially be prevented with well targeted policies,” the researchers conclude.

The Cost of Cyberattacks Is Less than You Might Think

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/the_cost_of_cyb.html

Interesting research from Sasha Romanosky at RAND:

Abstract: In 2013, the US President signed an executive order designed to help secure the nation’s critical infrastructure from cyberattacks. As part of that order, he directed the National Institute for Standards and Technology (NIST) to develop a framework that would become an authoritative source for information security best practices. Because adoption of the framework is voluntary, it faces the challenge of incentivizing firms to follow along. Will frameworks such as that proposed by NIST really induce firms to adopt better security controls? And if not, why? This research seeks to examine the composition and costs of cyber events, and attempts to address whether or not there exist incentives for firms to improve their security practices and reduce the risk of attack. Specifically, we examine a sample of over 12 000 cyber events that include data breaches, security incidents, privacy violations, and phishing crimes. First, we analyze the characteristics of these breaches (such as causes and types of information compromised). We then examine the breach and litigation rate, by industry, and identify the industries that incur the greatest costs from cyber events. We then compare these costs to bad debts and fraud within other industries. The findings suggest that public concerns regarding the increasing rates of breaches and legal actions may be excessive compared to the relatively modest financial impact to firms that suffer these events. Public concerns regarding the increasing rates of breaches and legal actions, conflict, however, with our findings that show a much smaller financial impact to firms that suffer these events. Specifically, we find that the cost of a typical cyber incident in our sample is less than $200 000 (about the same as the firm’s annual IT security budget), and that this represents only 0.4% of their estimated annual revenues.

The result is that it often makes business sense to underspend on cybersecurity and just pay the costs of breaches:

Romanosky analyzed 12,000 incident reports and found that typically they only account for 0.4 per cent of a company’s annual revenues. That compares to billing fraud, which averages at 5 per cent, or retail shrinkage (ie, shoplifting and insider theft), which accounts for 1.3 per cent of revenues.
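
Put side by side, the comparison is easier to see. A trivial worked example, using a placeholder revenue figure chosen so that 0.4% lines up with the ~$200,000 typical incident cost cited above:

```python
# The paper's comparison as back-of-the-envelope arithmetic: the same
# rates applied to a hypothetical firm. The $50M revenue is a placeholder
# chosen so that 0.4% matches the ~$200,000 typical incident cost.
annual_revenue = 50_000_000
rates = {"cyber incident": 0.004, "retail shrinkage": 0.013, "billing fraud": 0.05}
for name, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{name:<16} {rate:>6.1%} of revenue -> ${annual_revenue * rate:,.0f}")
```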

As for reputational damage, Romanosky found that it was almost impossible to quantify. He spoke to many executives and none of them could give a reliable metric for how to measure the PR cost of a public failure of IT security systems.

He also noted that the effects of a data incident typically don’t have many ramifications on the stock price of a company in the long term. Under the circumstances, it doesn’t make a lot of sense to invest too much in cyber security.

What’s being left out of these costs are the externalities. Yes, the costs to a company of a cyberattack are low to them, but there are often substantial additional costs borne by other people. The way to look at this is not to conclude that cybersecurity isn’t really a problem, but instead that there is a significant market failure that governments need to address.

AWS Week in Review – September 19, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-september-19-2016/

Eighteen (18) external and internal contributors worked together to create this edition of the AWS Week in Review. If you would like to join the party (with the possibility of a free lunch at re:Invent), please visit the AWS Week in Review on GitHub.

Monday

September 19

Tuesday

September 20

Wednesday

September 21

Thursday

September 22

Friday

September 23

Saturday

September 24

Sunday

September 25

New & Notable Open Source

  • ecs-refarch-cloudformation is a reference architecture for deploying Microservices with Amazon ECS, AWS CloudFormation (YAML), and an Application Load Balancer.
  • rclone syncs files and directories to and from S3 and many other cloud storage providers.
  • Syncany is an open source cloud storage and filesharing application.
  • chalice-transmogrify is an AWS Lambda Python Microservice that transforms arbitrary XML/RSS to JSON.
  • amp-validator is a serverless AMP HTML Validator Microservice for AWS Lambda.
  • ecs-pilot is a simple tool for managing AWS ECS.
  • vman is an object version manager for AWS S3 buckets.
  • aws-codedeploy-linux is a demo of how to use CodeDeploy and CodePipeline with AWS.
  • autospotting is a tool for automatically replacing EC2 instances in AWS AutoScaling groups with compatible instances requested on the EC2 Spot Market.
  • shep is a framework for building APIs using AWS API Gateway and Lambda.

New SlideShare Presentations

New Customer Success Stories

  • NetSeer significantly reduces costs, improves the reliability of its real-time ad-bidding cluster, and delivers 100-millisecond response times using AWS. The company offers online solutions that help advertisers and publishers match search queries and web content to relevant ads. NetSeer runs its bidding cluster on AWS, taking advantage of Amazon EC2 Spot Fleet Instances.
  • New York Public Library revamped its fractured IT environment—replacing older technology and legacy computing—with a modernized platform on AWS. The New York Public Library provides free books, information, ideas, and education to more than 17 million patrons a year. Using Amazon EC2, Elastic Load Balancing, Amazon RDS and Auto Scaling, NYPL is able to build scalable, repeatable systems quickly at a fraction of the cost.
  • MakerBot uses AWS to understand what its customers need, and to go to market faster with new and innovative products. MakerBot is a desktop 3-D printing company with more than 100,000 customers using its 3-D printers. MakerBot uses Matillion ETL for Amazon Redshift to process data from a variety of sources in a fast and cost-effective way.
  • University of Maryland, College Park uses the AWS cloud to create a stable, secure and modern technical environment for its students and staff while ensuring compliance. The University of Maryland is a public research university located in College Park, Maryland, and is the flagship institution of the University System of Maryland. The university is using AWS to migrate all of its data centers to the cloud, as well as Amazon WorkSpaces to give students access to software anytime, anywhere, and on any device.

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Using Neural Networks to Identify Blurred Faces

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/using_neural_ne.html

Neural networks are good at identifying faces, even if they’re blurry:

In a paper released earlier this month, researchers at UT Austin and Cornell University demonstrate that faces and objects obscured by blurring, pixelation, and a recently-proposed privacy system called P3 can be successfully identified by a neural network trained on image datasets — in some cases at a more consistent rate than humans.

“We argue that humans may no longer be the ‘gold standard’ for extracting information from visual data,” the researchers write. “Recent advances in machine learning based on artificial neural networks have led to dramatic improvements in the state of the art for automated image recognition. Trained machine learning models now outperform humans on tasks such as object recognition and determining the geographic location of an image.”

Research paper

YouTube-MP3 Ripping Site Sued By IFPI, RIAA and BPI

Post Syndicated from Andy original https://torrentfreak.com/youtube-mp3-ripping-site-sued-by-ifpi-riaa-and-bpi-160926/

Two weeks ago, the International Federation of the Phonographic Industry published research which claimed that half of 16 to 24-year-olds use stream-ripping tools to copy music from sites like YouTube.

The industry group said that the problem of stream-ripping has become so serious that in volume terms it had overtaken downloading from ‘pirate’ sites. Given today’s breaking news, the timing of the report was no coincidence.

Earlier today in a California District Court, a huge coalition of recording labels sued the world’s largest YouTube ripping site. UMG Recordings, Capitol Records, Warner Bros, Sony Music, Arista Records, Atlantic Records and several others claim that YouTube-MP3 (YTMP3), owner Philip Matesanz, and Does 1-10 have infringed their rights.

“YTMP3 rapidly and seamlessly removes the audio tracks contained in videos streamed from YouTube that YTMP3’s users access, converts those audio tracks to an MP3 format, copies and stores them on YTMP3’s servers, and then distributes copies of the MP3 audio files from its servers to its users in the United States, enabling its users to download those MP3 files to their computers, tablets, or smartphones,” the complaint reads.

The labels allege that YouTube-MP3 is one of the most popular sites in the entire world and as a result its owner, German-based company PMD Technologies UG, is profiting handsomely from their intellectual property.

“Defendants are depriving Plaintiffs and their recording artists of the fruits of their labor, Defendants are profiting from the operation of the YTMP3 website. Through the promise of illicit delivery of free music, Defendants have attracted millions of users to the YTMP3 website, which in turn generates advertising revenues for Defendants,” the labels add.

And it’s very clear that the labels mean business. YouTube-MP3 is being sued for direct, contributory, vicarious and inducement of copyright infringement, plus circumvention of technological measures.

Among other things, the labels are also demanding a preliminary and permanent injunction forbidding the Defendants from further infringing their rights. They also want YouTube-MP3’s domain name to be surrendered.

“This is a coordinated action to protect the rights of artists and labels from the blatant infringements of YouTube-mp3, the world’s single-largest ‘stream ripping’ site,” says IFPI Chief Executive Frances Moore.

“Music companies and digital services today offer fans more options than ever before to listen to music legally, when and where they want to do so – over hundreds of services with scores of millions of tracks – all while compensating artists and labels. Stream ripping sites should not be allowed [to] jeopardize this.”

Cary Sherman, the Chairman and CEO of the Recording Industry Association of America (RIAA) says that YouTube-MP3 is making money on the back of their business and needs to be stopped.

“This site is raking in millions on the backs of artists, songwriters and labels. We are doing our part, but everyone in the music ecosystem who says they believe that artists should be compensated for their work has a role to play,” Sherman says.

“It should not be so easy to engage in this activity in the first place, and no stream ripping site should appear at the top of any search result or app chart.”

BPI Chief Executive Geoff Taylor says that it’s time for web services and related companies to stop supporting similar operations.

“It’s time to stop illegal sites like this building huge fortunes by ripping off artists and labels. Fans have access now to a fantastic range of legal music streaming services, but they can only exist if we take action to tackle the online black market,” Taylor says.

“We hope that responsible advertisers, search engines and hosting providers will also reflect on the ethics of supporting sites that enrich themselves by defrauding creators.”

TorrentFreak contacted YouTube-MP3 owner Philip Matesanz for comment but at the time of publication we were yet to receive a response.

The Democratization of Censorship

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/09/the-democratization-of-censorship/

John Gilmore, an American entrepreneur and civil libertarian, once famously quipped that “the Internet interprets censorship as damage and routes around it.” This notion undoubtedly rings true for those who see national governments as the principal threats to free speech.

However, events of the past week have convinced me that one of the fastest-growing censorship threats on the Internet today comes not from nation-states, but from super-empowered individuals who have been quietly building extremely potent cyber weapons with transnational reach.

More than 20 years after Gilmore first coined that turn of phrase, his most notable quotable has effectively been inverted — “Censorship can in fact route around the Internet.” The Internet can’t route around censorship when the censorship is all-pervasive and armed with, for all practical purposes, near-infinite reach and capacity. I call this rather unwelcome and hostile development “The Democratization of Censorship.”

Allow me to explain how I arrived at this unsettling conclusion. As many of you know, my site was taken offline for the better part of this week. The outage came in the wake of a historically large distributed denial-of-service (DDoS) attack which hurled so much junk traffic at Krebsonsecurity.com that my DDoS protection provider Akamai chose to unmoor my site from its protective harbor.

Let me be clear: I do not fault Akamai for their decision. I was a pro bono customer from the start, and Akamai and its sister company Prolexic have stood by me through countless attacks over the past four years. It just so happened that this last siege was nearly twice the size of the next-largest attack they had ever seen before. Once it became evident that the assault was beginning to cause problems for the company’s paying customers, they explained that the choice to let my site go was a business decision, pure and simple.

Nevertheless, Akamai rather abruptly informed me I had until 6 p.m. that very same day — roughly two hours later — to make arrangements for migrating off their network. My main concern at the time was making sure my hosting provider wasn’t going to bear the brunt of the attack when the shields fell. To ensure that absolutely would not happen, I asked Akamai to redirect my site to 127.0.0.1 — effectively relegating all traffic destined for KrebsOnSecurity.com into a giant black hole.

Today, I am happy to report that the site is back up — this time under Project Shield, a free program run by Google to help protect journalists from online censorship. And make no mistake, DDoS attacks — particularly those the size of the assault that hit my site this week — are uniquely effective weapons for stomping on free speech, for reasons I’ll explore in this post.

Google’s Project Shield is now protecting KrebsOnSecurity.com

Why do I speak of DDoS attacks as a form of censorship? Quite simply because the economics of mitigating large-scale DDoS attacks do not bode well for protecting the individual user, to say nothing of independent journalists.

In an interview with The Boston Globe, Akamai executives said the attack — if sustained — likely would have cost the company millions of dollars. In the hours and days following my site going offline, I spoke with multiple DDoS mitigation firms. One offered to host KrebsOnSecurity for two weeks at no charge, but after that they said the same kind of protection I had under Akamai would cost between $150,000 and $200,000 per year.

Ask yourself how many independent journalists could possibly afford that kind of protection money? A number of other providers offered to help, but it was clear that they did not have the muscle to be able to withstand such massive attacks.

I’ve been toying with the idea of forming a 501(c)3 non-profit organization — ‘The Center for the Defense of Internet Journalism’, if you will — to assist Internet journalists with obtaining the kind of protection they may need when they become the targets of attacks like the one that hit my site.  Maybe a Kickstarter campaign, along with donations from well-known charitable organizations, could get the ball rolling.  It’s food for thought.

CALIBRATING THE CANNONS

Earlier this month, noted cryptologist and security blogger Bruce Schneier penned an unusually alarmist column titled, “Someone Is Learning How to Take Down the Internet.” Citing unnamed sources, Schneier warned that there was strong evidence indicating that nation-state actors were actively and aggressively probing the Internet for weak spots that could allow them to bring the entire Web to a virtual standstill.

“Someone is extensively testing the core defensive capabilities of the companies that provide critical Internet services,” Schneier wrote. “Who would do this? It doesn’t seem like something an activist, criminal, or researcher would do. Profiling core infrastructure is common practice in espionage and intelligence gathering. It’s not normal for companies to do that.”

Schneier continued:

“Furthermore, the size and scale of these probes — and especially their persistence — points to state actors. It feels like a nation’s military cyber command trying to calibrate its weaponry in the case of cyberwar. It reminds me of the US’s Cold War program of flying high-altitude planes over the Soviet Union to force their air-defense systems to turn on, to map their capabilities.”

Whether Schneier’s sources were accurate in their assessment of the actors referenced in his blog post is unknown. But as my friend and mentor Roland Dobbins at Arbor Networks eloquently put it, “When it comes to DDoS attacks, nation-states are just another player.”

“Today’s reality is that DDoS attacks have become the Great Equalizer between private actors & nation-states,” Dobbins quipped.

UM…YOUR RERUNS OF ‘SEINFELD’ JUST ATTACKED ME

What exactly was it that generated the record-smashing DDoS of 620 Gbps against my site this week? Was it a space-based weapon of mass disruption built and tested by a rogue nation-state, or an arch villain like SPECTRE from the James Bond series of novels and films? If only the enemy here was that black-and-white.

No, as I reported in the last blog post before my site was unplugged, the enemy in this case was far less sexy. There is every indication that this attack was launched with the help of a botnet that has enslaved a large number of hacked so-called “Internet of Things,” (IoT) devices — mainly routers, IP cameras and digital video recorders (DVRs) that are exposed to the Internet and protected with weak or hard-coded passwords. Most of these devices are available for sale on retail store shelves for less than $100, or — in the case of routers — are shipped by ISPs to their customers.

Some readers on Twitter have asked why the attackers would have “burned” so many compromised systems with such an overwhelming force against my little site. After all, they reasoned, the attackers showed their hand in this assault, exposing the Internet addresses of a huge number of compromised devices that might otherwise be used for actual money-making cybercriminal activities, such as hosting malware or relaying spam. Surely, network providers would take that list of hacked devices and begin blocking them from launching attacks going forward, the thinking goes.

As KrebsOnSecurity reader Rob Wright commented on Twitter, “the DDoS attack on @briankrebs feels like testing the Death Star on the Millennium Falcon instead of Alderaan.” I replied that this maybe wasn’t the most apt analogy. The reality is that there are currently millions — if not tens of millions — of insecure or poorly secured IoT devices that are ripe for being enlisted in these attacks at any given time. And we’re adding millions more each year.

I suggested to Mr. Wright perhaps a better comparison was that ne’er-do-wells now have a virtually limitless supply of Stormtrooper clones that can be conscripted into an attack at a moment’s notice.

A scene from the 1977 movie Star Wars, in which the Death Star tests its firepower by blowing up a planet.

SHAMING THE SPOOFERS

The problem of DDoS conscripts goes well beyond the millions of IoT devices that are shipped insecure by default: Countless hosting providers and ISPs do nothing to prevent devices on their networks from being used by miscreants to “spoof” the source of DDoS attacks.

As I noted in a November 2015 story, The Lingering Mess from Default Insecurity, one basic step that many ISPs can but are not taking to blunt these attacks involves a network security standard that was developed and released more than a dozen years ago. Known as BCP38, its use prevents insecure resources on an ISP’s network (hacked servers, computers, routers, DVRs, etc.) from being leveraged in such powerful denial-of-service attacks.

Using a technique called traffic amplification and reflection, the attacker can reflect his traffic from one or more third-party machines toward the intended target. In this type of assault, the attacker sends a message to a third party, while spoofing the Internet address of the victim. When the third party replies to the message, the reply is sent to the victim — and the reply is much larger than the original message, thereby amplifying the size of the attack.

BCP38 is designed to filter such spoofed traffic, so that it never even traverses the network of an ISP that’s adopted the anti-spoofing measures. However, there are non-trivial economic reasons that many ISPs fail to adopt this best practice. This blog post from the Internet Society does a good job of explaining why many ISPs ultimately decide not to implement BCP38.
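
Conceptually, the check BCP38 asks an ISP’s edge to perform is trivial. Here is the logic as a few lines of Python; the prefixes are documentation-range placeholders, and real deployments implement this in router ACLs or uRPF, not application code.

```python
# What BCP38 ingress filtering amounts to at the ISP edge: forward a
# packet from a customer port only if its source address falls within
# that customer's assigned prefixes. Prefixes below are placeholders.
import ipaddress

CUSTOMER_PREFIXES = [ipaddress.ip_network('198.51.100.0/24'),
                     ipaddress.ip_network('203.0.113.0/25')]

def permit_source(src_ip):
    ip = ipaddress.ip_address(src_ip)
    return any(ip in net for net in CUSTOMER_PREFIXES)

print(permit_source('198.51.100.7'))  # True: legitimate customer source
print(permit_source('8.8.8.8'))       # False: spoofed source, dropped at the edge
```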

Fortunately, there are efforts afoot to gather information about which networks and ISPs have neglected to filter out spoofed traffic leaving their networks. The idea is that by “naming and shaming” the providers who aren’t doing said filtering, the Internet community might pressure some of these actors into doing the right thing (or perhaps even offer preferential treatment to those providers who do conduct this basic network hygiene).

A research experiment by the Center for Applied Internet Data Analysis (CAIDA) called the “Spoofer Project” is slowly collecting this data, but it relies on users voluntarily running CAIDA’s software client to gather that intel. Unfortunately, a huge percentage of the networks that allow spoofing are hosting providers that offer extremely low-cost, virtual private servers (VPS). And these companies will never voluntarily run CAIDA’s spoof-testing tools.

CAIDA’s Spoofer Project page.

As a result, the biggest offenders will continue to fly under the radar of public attention unless and until more pressure is applied by hardware and software makers, as well as ISPs that are doing the right thing.

How might we gain a more complete picture of which network providers aren’t blocking spoofed traffic — without relying solely on voluntary reporting? That would likely require a concerted effort by a coalition of major hardware makers, operating system manufacturers and cloud providers, including Amazon, Apple, Google, Microsoft and the entities which maintain the major Web server products (e.g., Apache, Nginx), as well as the major Linux and Unix operating systems.

The coalition could decide that they will unilaterally build such instrumentation into their products. At that point, it would become difficult for hosting providers or their myriad resellers to hide the fact that they’re allowing systems on their networks to be leveraged in large-scale DDoS attacks.

To address the threat from the mass-proliferation of hardware devices such as Internet routers, DVRs and IP cameras that ship with default-insecure settings, we probably need an industry security association, with published standards that all members adhere to and are audited against periodically.

The wholesalers and retailers of these devices might then be encouraged to shift their focus toward buying and promoting connected devices which have this industry security association seal of approval. Consumers also would need to be educated to look for that seal of approval. Something like Underwriters Laboratories (UL), but for the Internet, perhaps.

THE BLEAK VS. THE BRIGHT FUTURE

As much as I believe such efforts could help dramatically limit the firepower available to today’s attackers, I’m not holding my breath that such a coalition will materialize anytime soon. But it’s probably worth mentioning that there are several precedents for this type of cross-industry collaboration to fight global cyber threats.

In 2008, the United States Computer Emergency Readiness Team (CERT) announced that researcher Dan Kaminsky had discovered a fundamental flaw in DNS that could allow anyone to intercept and manipulate most Internet-based communications, including email and e-commerce applications. A diverse community of software and hardware makers came together to fix the vulnerability and to coordinate the disclosure and patching of the design flaw.

In 2009, Microsoft heralded the formation of an industry group to collaboratively counter Conficker, a malware threat that infected tens of millions of Windows PCs and held the threat of allowing cybercriminals to amass a stupendous army of botted systems virtually overnight. A group of software and security firms, dubbed the Conficker Cabal, hashed out and executed a plan for corralling infected systems and halting the spread of Conficker.

In 2011, a diverse group of industry players and law enforcement organizations came together to eradicate the threat from the DNS Changer Trojan, a malware strain that infected millions of Microsoft Windows systems and enslaved them in a botnet that was used for large-scale cyber fraud schemes.

These examples provide useful templates for a solution to the DDoS problem going forward. What appears to be missing is any sense of urgency to address the DDoS threat on a coordinated, global scale.

That’s probably because at least for now, the criminals at the helm of these huge DDoS crime machines are content to use them to launch petty yet costly attacks against targets that suit their interests or whims.

For example, the massive 620 Gbps attack that hit my site this week was an apparent retaliation for a story I wrote exposing two Israeli men who were arrested shortly after that story ran for allegedly operating vDOS — until recently the most popular DDoS-for-hire network. The traffic hurled at my site in that massive attack included the text string “freeapplej4ck,” a reference to the hacker nickname used by one of vDOS’s alleged co-founders.

Most of the time, ne’er-do-wells like Applej4ck and others are content to use their huge DDoS armies to attack gaming sites and services. But the crooks maintaining these large crime machines haven’t just been targeting gaming sites. OVH, a major Web hosting provider based in France, said in a post on Twitter this week that it was recently the victim of an even more massive attack than hit my site. According to a Tweet from OVH founder Octave Klaba, that attack was launched by a botnet consisting of more than 145,000 compromised IP cameras and DVRs.

I don’t know what it will take to wake the larger Internet community out of its slumber to address this growing threat to free speech and ecommerce. My guess is it will take an attack that endangers human lives, shuts down critical national infrastructure systems, or disrupts national elections.

But what we’re allowing by our inaction is for individual actors to build the instrumentality of tyranny. And to be clear, these weapons can be wielded by anyone — with any motivation — who’s willing to expend a modicum of time and effort to learn the most basic principles of their operation.

The sad truth these days is that it’s a lot easier to censor the digital media on the Internet than it is to censor printed books and newspapers in the physical world. On the Internet, anyone with an axe to grind and the willingness to learn a bit about the technology can become an instant, self-appointed global censor.

I sincerely hope we can address this problem before it’s too late. And I’m deeply grateful for the overwhelming outpouring of support and solidarity that I’ve seen and heard from so many readers over the past few days. Thank you.

Mexican Police Target Popular KickassTorrents ‘Clone,’ Seize Domain

Post Syndicated from Ernesto original https://torrentfreak.com/mexican-police-target-popular-kickasstorrents-clone-seize-domain-160923/

Two months ago KickassTorrents (KAT) was shut down by the U.S. Government, following the arrest of the site’s alleged owner.

Soon after the official site went offline various mirrors and clones launched to take its place, to the pleasure of hundreds of thousands of users.

One of the most popular mirrors started as KAT.am. While this domain name was swiftly seized, and later picked up by scammers, the initial site continued to operate from kickass.cd and kickass.mx.

However, this week the site got in trouble again. Without prior notice the .MX domain name was taken out of circulation by the registry, following an intervention from Mexico’s federal police.

The authorities say they were tipped off by copyright holders and wasted no time in containing the threat.

“This action took place after various distribution companies reported intellectual property infringements. In response, staff at the Center for Prevention of Electronic Crimes started a cyber intelligence operation to locate the source where this crime was committed,” the federal police reported.

“Currently the website is out of service, and our research continues to locate the administrators,” they added.

Although there is no doubt that Kickass.mx is offline, in a rather confusing press release police keep referring to kickass.com.mx, which appears to be an unrelated website.

TorrentFreak reached out to the operator of the Kickass.mx “clone,” which is really just a Pirate Bay mirror with a KickassTorrents skin, who was surprised by the domain seizure.

“The suspension of the MX TLD was very unexpected and came as a shock to us because we used EasyDNS to register the domain name,” the Kickass.mx operator says.

EasyDNS has a track record of standing up against domain seizures and suspensions that are requested without a proper court order. However, in this case EasyDNS was bypassed as the police went directly to the MX domain registry.

“Their team is trying to get into touch with the Mexican registry to get the domain back though any positive development in this regard seems unlikely,” the operator adds.

For now, the KAT-themed site remains available from the Kickass.cd domain and more backup domains are expected to follow in the near future, probably without Mexican ties.

“We already have three more TLDs and plan to set up mirror sites on them to increase resilience,” he concludes.

KrebsOnSecurity Hit With Record DDoS

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/09/krebsonsecurity-hit-with-record-ddos/

On Tuesday evening, KrebsOnSecurity.com was the target of an extremely large and unusual distributed denial-of-service (DDoS) attack designed to knock the site offline. The attack did not succeed thanks to the hard work of the engineers at Akamai, the company that protects my site from such digital sieges. But according to Akamai, it was nearly double the size of the largest attack they’d seen previously, and was among the biggest assaults the Internet has ever witnessed.

The attack began around 8 p.m. ET on Sept. 20, and initial reports put it at approximately 665 Gigabits of traffic per second. Additional analysis on the attack traffic suggests the assault was closer to 620 Gbps in size, but in any case this is many orders of magnitude more traffic than is typically needed to knock most sites offline.

Martin McKeay, Akamai’s senior security advocate, said the largest attack the company had seen previously clocked in earlier this year at 363 Gbps. But he said there was a major difference between last night’s DDoS and the previous record holder: The 363 Gbps attack is thought to have been generated by a botnet of compromised systems using well-known techniques allowing them to “amplify” a relatively small attack into a much larger one.

In contrast, the huge assault this week on my site appears to have been launched almost exclusively by a very large botnet of hacked devices.

The largest DDoS attacks on record tend to be the result of a tried-and-true method known as a DNS reflection attack. In such assaults, the perpetrators are able to leverage unmanaged DNS servers on the Web to create huge traffic floods.

Ideally, DNS servers only provide services to machines within a trusted domain. But DNS reflection attacks rely on consumer and business routers and other devices equipped with DNS servers that are (mis)configured to accept queries from anywhere on the Web. Attackers can send spoofed DNS queries to these so-called “open recursive” DNS servers, forging the request so that it appears to come from the target’s network. That way, when the DNS servers respond, they reply to the spoofed (target) address.
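
A small diagnostic sketch of how an operator might test their own resolver for open recursion, using the dnspython library; the resolver IP is a placeholder.

```python
# Diagnostic sketch: test whether a resolver YOU operate answers recursive
# queries from arbitrary clients -- the "open recursive" misconfiguration
# that reflection attacks abuse. Requires dnspython >= 2.0; the resolver
# IP is a placeholder.
import dns.exception
import dns.resolver

def is_open_recursive(ip, timeout=3.0):
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [ip]
    r.lifetime = timeout
    try:
        r.resolve('example.com', 'A')  # only succeeds if recursion is allowed
        return True
    except dns.exception.DNSException:
        return False

print(is_open_recursive('192.0.2.53'))
```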

The bad guys also can amplify a reflective attack by crafting DNS queries so that the responses are much bigger than the requests. They do this by taking advantage of an extension to the DNS protocol that enables large DNS messages. For example, an attacker could compose a DNS request of less than 100 bytes, prompting a response that is 60-70 times as large. This “amplification” effect is especially pronounced if the perpetrators query dozens of DNS servers with these spoofed requests simultaneously.
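
The arithmetic behind that amplification is straightforward. A toy calculation using the figures above (the attacker’s uplink is a placeholder):

```python
# Toy amplification arithmetic: a small spoofed query that elicits a much
# larger response multiplies the bandwidth the attacker actually controls.
# The uplink figure is a placeholder.
query_bytes = 90            # spoofed DNS request, under 100 bytes
amplification = 65          # response 60-70x larger (mid-range)
attacker_uplink_gbps = 1.0  # bandwidth the attacker really has

victim_gbps = attacker_uplink_gbps * amplification
print(f'{query_bytes}-byte queries sent at {attacker_uplink_gbps:g} Gbps '
      f'arrive at the victim as ~{victim_gbps:g} Gbps')
```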

But according to Akamai, none of the attack methods employed in Tuesday night’s assault on KrebsOnSecurity relied on amplification or reflection. Rather, many were garbage Web attack methods that require a legitimate connection between the attacking host and the target, including SYN, GET and POST floods.

That is, with the exception of one attack method: Preliminary analysis of the attack traffic suggests that perhaps the biggest chunk of the attack came in the form of traffic designed to look like it was generic routing encapsulation (GRE) data packets, a communication protocol used to establish a direct, point-to-point connection between network nodes. GRE lets two peers share data they wouldn’t be able to share over the public network itself.

“Seeing that much attack coming from GRE is really unusual,” Akamai’s McKeay said. “We’ve only started seeing that recently, but seeing it at this volume is very new.”

McKeay explained that the source of GRE traffic can’t be spoofed or faked the same way DDoS attackers can spoof DNS traffic. Nor can the sources of junk Web-based DDoS attacks like those mentioned above. That suggests the attackers behind this record assault launched it from quite a large collection of hacked systems — possibly hundreds of thousands of systems.

“Someone has a botnet with capabilities we haven’t seen before,” McKeay said. “We looked at the traffic coming from the attacking systems, and they weren’t just from one region of the world or from a small subset of networks — they were everywhere.”

There are some indications that this attack was launched with the help of a botnet that has enslaved a large number of hacked so-called “Internet of Things,” (IoT) devices — routers, IP cameras and digital video recorders (DVRs) that are exposed to the Internet and protected with weak or hard-coded passwords.

As noted in a recent report from Flashpoint and Level 3 Threat Research Labs, the threat from IoT-based botnets is powered by malware that goes by many names, including “Lizkebab,” “BASHLITE,” “Torlus” and “gafgyt.” According to that report, the source code for this malware was leaked in early 2015 and has been spun off into more than a dozen variants.

“Each botnet spreads to new hosts by scanning for vulnerable devices in order to install the malware,” the report notes. “Two primary models for scanning exist. The first instructs bots to port scan for telnet servers and attempts to brute force the username and password to gain access to the device.”

Their analysis continues:

“The other model, which is becoming increasingly common, uses external scanners to find and harvest new bots, in some cases scanning from the [botnet control] servers themselves. The latter model adds a wide variety of infection methods, including brute forcing login credentials on SSH servers and exploiting known security weaknesses in other services.”

I’ll address some of the challenges of minimizing the threat from large-scale DDoS attacks in a future post. But for now it seems likely that we can expect such monster attacks to soon become the new norm.

Many readers have been asking whether this attack was in retaliation for my recent series on the takedown of the DDoS-for-hire service vDOS, which coincided with the arrests of two young men named in my original report as founders of the service.

I can’t say for sure, but it seems likely related: Some of the POST request attacks that came in last night as part of this 620 Gbps attack included the string “freeapplej4ck,” a reference to the nickname used by one of the vDOS co-owners.

Update Sept. 22, 8:33 a.m. ET: Corrected the maximum previous DDoS seen by Akamai. It was 363, not 336 as stated earlier.

Stop Piracy? Legal Alternatives Beat Legal Threats, Research Shows

Post Syndicated from Ernesto original https://torrentfreak.com/stop-piracy-legal-alternatives-beat-legal-threats-research-shows-160921/

Yesterday the RIAA announced the biggest growth in recorded music sales since the late 1990s, a healthy 8.1% increase compared to the year before.

The record numbers were achieved despite the widespread availability of pirated music. So what happened here? Did all those pirates suddenly grow a conscience?

The answer to this question is partly given by new research published in the journal Risk Analysis.

Researchers from the University of East Anglia, Lancaster University, and Newcastle University found that perceived risk has very little effect on people’s piracy habits. This means that stricter punishments or tough copyright laws are not the answer.

Instead, unauthorized file-sharing (UFS) is best predicted by the supposed benefits of piracy. As such, the researchers note that better legal alternatives are the best way to stop piracy.

The results are based on a psychological study among hundreds of music and ebook consumers. They were subjected to a set of questions regarding their file-sharing habits, perceived risk, industry trust, and online anonymity.

By analyzing the data, the researchers found that the perceived benefits of piracy, such as quality, flexibility of use and cost, are the real drivers of piracy. An increase in legal risk was not directly associated with any statistically significant decrease in self-reported file-sharing.

“Given that we observe a much more powerful predictor of behavior in perceived benefit, changes to legal frameworks may not be the most effective route to change behaviour,” lead author Dr Steven Watson says.

“Specifically, one strategy to combat unlawful file-sharing would be to provide easy access to information about the benefits of legal purchases or services, in an environment in which the specific benefits UFS offers are met by these legal alternatives.”

Alternatively, there is a more indirect route to influence piracy, by increasing the “trust” people have in regulators. This could increase risk perception and also lower the perceived benefits of piracy. However, the researchers note that this isn’t the most efficient option.

In their paper, the researchers mention subscription services such as Spotify as the most compelling alternatives.

This brings us back to the record revenue the RIAA reported yesterday, which can be attributed to the growth of legal services. The RIAA notes that with the introduction of Tidal and Apple Music, subscription service revenues doubled compared to last year.

So it’s legal options that drive the recent revenue growth, not anti-piracy enforcement.

Of course, the idea that subscription services can compete with piracy isn’t new. When Spotify launched its first beta in the fall of 2008, we billed it as “an alternative to music piracy,” and various reports have shown that pirates gladly switch over to good legal services.

The UK researchers also conclude that legal alternatives are a viable option to decrease piracy, one that’s preferred over legal threats.

“It is perhaps no surprise that legal interventions regarding UFS have a limited and possibly short-term effect, while legal services that compete with UFS have attracted significant numbers of consumers,” says co-author Dr Piers Fleming.

Techdirt’s Mike Masnick, who last year published a report titled “The Carrot or the Stick,” notes that the findings are in line with their conclusions.

According to Masnick, there is now ample evidence showing that enforcement is not the answer to piracy, but thus far the relevant stakeholders continue to bury their heads in the sand.

“And yet, politicians, regulators and legacy industry folks still insist that ratcheting up enforcement is the way to go. What will it take for them to actually follow what the evidence says, rather than continuing with faith-based copyright policies?” Masnick writes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Tesla Model S Hack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/tesla_model_s_h.html

Impressive remote hack of the Tesla Model S: http://www.pcworld.com/article/3121999/security/researchers-demonstrate-remote-attack-against-tesla-model-s.html

Details. Video.

The vulnerability is fixed.

Remember, a modern car isn’t an automobile with a computer in it. It’s a computer with four wheels and an engine. Actually, it’s a distributed 20-400-computer system with four wheels and an engine.

Malicious Torrent Network Tool Revealed By Security Company

Post Syndicated from Andy original https://torrentfreak.com/malicious-torrent-network-tool-revealed-by-security-company-160921/

More than three decades after 15-year-old high school student Rich Skrenta created the first publicly spread virus, millions of pieces of malware are being spread around the world.

Attackers’ motives are varied but these days they’re often working for financial gain. As a result, popular websites and their users are regularly targeted. Security company InfoArmor has just published a report detailing a particularly interesting threat which homes in on torrent site users.

“InfoArmor has identified a special tool used by cybercriminals to distribute malware by packaging it with the most popular torrent files on the Internet,” the company reports.

InfoArmor says the so-called “RAUM” tool is being offered via “underground affiliate networks” with attackers being financially incentivized to spread the malicious software through infected torrent files.

“Members of these networks are invited by special invitation only, with strict verification of each new member,” the company reports.

InfoArmor says that the attackers’ infrastructure has a monitoring system in place which allows them to track the latest trends in downloading, presumably so that attacks can reach the greatest numbers of victims.

“The bad actors have analyzed trends on video, audio, software and other digital content downloads from around the globe and have created seeds on famous torrent trackers using weaponized torrents packaged with malicious code,” they explain.

RAUM instances were associated with a range of malware, including the ransomware families CryptXXX, CTB-Locker and Cerber, the online-banking Trojan Dridex, and the password-stealing spyware Pony.

“We have identified in excess of 1,639,000 records collected in the past few months from the infected victims with various credentials to online-services, gaming, social media, corporate resources and exfiltrated data from the uncovered network,” InfoArmor reveals.

What is perhaps most interesting about InfoArmor’s research is how it shines light on the operation of RAUM behind the scenes. The company has published a screenshot which it claims shows the system’s dashboard, featuring infected torrents on several sites, including a ‘fake’ Pirate Bay site.

[Screenshot: the RAUM dashboard listing infected torrents]

“Threat actors were systematically monitoring the status of the created malicious seeds on famous torrent trackers such as The Pirate Bay, ExtraTorrent and many others,” the researchers write.

“In some cases, they were specifically looking for compromised accounts of other users on these online communities that were extracted from botnet logs in order to use them for new seeds on behalf of the affected victims without their knowledge, thus increasing the reputation of the uploaded files.”


According to InfoArmor, the malware was initially spread using uTorrent, although any client could have done the job. More recently, however, new seeds have been served through online servers and some hacked devices.

In some cases the malicious files continued to be seeded for more than 1.5 months. Tests by TF on the sample provided showed that most of the files listed have now been removed by the sites in question.

Unsurprisingly, people who use torrent sites to obtain software and games (as opposed to video and music files) are those most likely to come into contact with RAUM and associated malware. As the image below shows, Windows 7 and 10 packs and their activators feature prominently.

[Screenshot: weaponized torrents, dominated by Windows 7 and 10 packages and activators]

“All of the created malicious seeds were monitored by cybercriminals in order to prevent early detection by [anti-virus software] and had different statuses such as ‘closed,’ ‘alive,’ and ‘detected by antivirus.’ Some of the identified elements of their infrastructure were hosted in the TOR network,” InfoArmor explains.

The researchers say that RAUM is a tool used by an Eastern European organized crime group known as Black Team. They also report several URLs and IP addresses from which the team operates. We won’t publish them here, but it’s of some comfort to know that between Chrome, Firefox and Malwarebytes protection, all were successfully blocked on our test machine.

InfoArmor concludes by warning users to exercise extreme caution when downloading pirated digital content. We’d go a step further and advise people to be wary of installing all software from any untrusted sources, no matter where they’re found online.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Most Young Millennials Love Piracy and Ad-Blockers

Post Syndicated from Ernesto original https://torrentfreak.com/millennials-love-piracy-ad-blockers-160920/

Despite the availability of many legal services, piracy remains rampant among millennials in the United States.

This is one of the main conclusions of the “Millennials at the Gate” report, released by Anatomy Media. The report is based on a comprehensive survey of 2,700 young millennials between 18 and 24, and zooms in on piracy and ad-blocking preferences in this age group.

The results show that more than two thirds, a whopping 69%, admit to using at least one form of piracy to watch video.

Online streaming is by far the most popular choice among these pirates, whether it’s on the desktop (42%) or via mobile (41%). Torrenting, on the other hand, is on the decline and is stuck at 17% in this age group.

Piracy preferences


Streaming from unofficial sources is so dominant now that Anatomy Media decided to come up with a new word for those who engage in it: striminals. Whether they seriously considered the better fitting “striminalennials” is unclear.

“These streaming millennial criminals, or what we call ‘striminals,’ watch what they want, when they want, where they want, and they don’t pay for it,” the company explains.

Interestingly, 67% of all millennials believe that streaming unauthorized content is perfectly legal. Only 18% believe that it is wrong to stream content without paying for it.

It’s worth highlighting that it’s up for debate whether the term “criminal” accurately describes people who casually stream unauthorized videos. Previous attempts to make unauthorized streaming a felony have failed in the U.S. Congress.

In addition to online piracy, young millennials are quite fond of ad-blockers. The report shows that two out of three use a mobile or desktop ad-blocker, or both.

Ad-blocking preferences


Interestingly, there is a direct link between the use of ad-blockers and online piracy. Millennials who are into mobile piracy use mobile ad-blockers more often, while desktop pirates have a higher preference for desktop ad-blockers.

Anatomy Media suggests that piracy and ad-blocking might reinforce each other. Online pirates may be more likely to use ad-blockers because pirate sites are often ad-ridden, the company argues. However, this causal relationship was not tested in the study.

Piracy and ad-blocking


While the above paints a grim picture for media companies, not all is lost according to Anatomy Media. The company, which conveniently specializes in “creative advertising,” says that a better viewing experience could encourage millennials to move over to the right side.

“Young millennials’ dissatisfaction with their viewer experience and their overwhelming adoption of ad blockers is a call-to-action to improve the viewer experience and review the nature of the digital ad experience,” the report concludes.

“Millennials will accept advertising as long as it is restrained, targeted and relevant,” the company self-servingly adds.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

DDoS Mitigation Firm Has History of Hijacks

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/09/ddos-mitigation-firm-has-history-of-hijacks/

Last week, KrebsOnSecurity detailed how BackConnect Inc. — a company that defends victims against large-scale distributed denial-of-service (DDoS) attacks — admitted to hijacking hundreds of Internet addresses from a European Internet service provider in order to glean information about attackers who were targeting BackConnect. According to an exhaustive analysis of historic Internet records, BackConnect appears to have a history of such “hacking back” activity.

On Sept. 8, 2016, KrebsOnSecurity exposed the inner workings of vDOS, a DDoS-for-hire or “booter” service whose tens of thousands of paying customers used the service to launch attacks against hundreds of thousands of targets over the service’s four-year history in business.

vDOS as it existed on Sept. 8, 2016.

Within hours of that story running, the two alleged owners — 18-year-old Israeli men identified in the original report — were arrested in Israel in connection with an FBI investigation into the shady business, which earned well north of $600,000 for the two men.

In my follow-up report on their arrests, I noted that vDOS itself had gone offline, and that automated Twitter feeds which report on large-scale changes to the global Internet routing tables observed that vDOS’s provider — a Bulgarian host named Verdina[dot]net — had been briefly relieved of control over 255 Internet addresses (including those assigned to vDOS) as the direct result of an unusual counterattack by BackConnect.

Asked about the reason for the counterattack, BackConnect CEO Bryant Townsend confirmed to this author that it had executed what’s known as a “BGP hijack.” In short, the company had fraudulently “announced” to the rest of the world’s Internet service providers (ISPs) that it was the rightful owner of the range of those 255 Internet addresses at Verdina occupied by vDOS.

In a post on NANOG Sept. 13, BackConnect’s Townsend said his company took the extreme measure after coming under a sustained DDoS attack thought to have been launched by a botnet controlled by vDOS. Townsend explained that the hijack allowed his firm to “collect intelligence on the actors behind the botnet as well as identify the attack servers used by the booter service.”

Short for Border Gateway Protocol, BGP is a mechanism by which ISPs of the world share information about which providers are responsible for routing Internet traffic to specific addresses. However, like most components built into the modern Internet, BGP was never designed with security in mind, which leaves it vulnerable to exploitation by rogue actors.
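BGP’s weakness comes down to two facts: announcements are accepted largely on trust, and routers prefer the most specific matching prefix. A short sketch, using only Python’s standard ipaddress module and invented prefixes, shows why a bogus, more-specific announcement captures traffic even while the legitimate owner keeps announcing its larger block:

```python
import ipaddress

# Illustrative announcements, not real ones: the victim announces a /16,
# the hijacker announces a more specific /24 carved out of it.
announcements = [
    (ipaddress.ip_network("203.0.0.0/16"), "AS-victim"),
    (ipaddress.ip_network("203.0.113.0/24"), "AS-hijacker"),
]

def best_route(destination):
    dest = ipaddress.ip_address(destination)
    matches = [(net, asn) for net, asn in announcements if dest in net]
    # Longest-prefix match: the most specific prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(best_route("203.0.113.10"))  # -> AS-hijacker
print(best_route("203.0.50.10"))   # -> AS-victim (outside the hijacked /24)
```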

BackConnect’s BGP hijack of Verdina caused quite an uproar among many Internet technologists who discuss such matters at the mailing list of the North American Network Operators Group (NANOG).

BGP hijacks are hardly unprecedented, but when they are non-consensual they are either done accidentally or are the work of cyber criminals such as spammers looking to hijack address space for use in blasting out junk email. Many on the list wondered: if BackConnect’s hijacking of Verdina was an example of a DDoS mitigation firm “hacking back,” what would discourage others from doing the same?

“Once we let providers cross the line from legal to illegal actions, we’re no better than the crooks, and the Internet will descend into lawless chaos,” wrote Mel Beckman, owner of Beckman Software Engineering and a computer networking consultant in the Los Angeles area. “BackConnect’s illicit action undoubtedly injured innocent parties, so it’s not self defense, any more than shooting wildly into a crowd to stop an attacker would be self defense.”

A HISTORY OF HIJACKS

Townsend’s explanation seemed to produce more questions than answers among the NANOG crowd (read the entire “Defensive BGP Hijacking” thread here if you dare). I grew more curious to learn whether this was a pattern for BackConnect when I started looking deeper into the history of two young men who co-founded BackConnect (more on them in a bit).

To get a better picture of BackConnect’s history, I turned to BGP hijacking expert Doug Madory, director of Internet analysis at Dyn, a cloud-based Internet performance management company. Madory pulled historic BGP records for BackConnect, and sure enough a strange pattern began to emerge.

Madory was careful to caution up front that not all BGP hijacks are malicious. Indeed, my DDoS protection provider — a company called Prolexic Communications (now owned by Akamai Technologies) — practically invented the use of BGP hijacks as a DDoS mitigation method, he said.

In such a scenario, an organization under heavy DDoS attack might approach Prolexic and ask for assistance. With the customer’s permission, Prolexic would use BGP to announce to the rest of the world’s ISPs that it was now the rightful owner of the Internet addresses under attack. This would allow Prolexic to “scrub” the customer’s incoming Web traffic to drop data packets designed to knock the customer offline — and forward the legitimate traffic on to the customer’s site.
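The scrubbing step itself is conceptually simple: once the announcement pulls the victim’s traffic to the mitigation provider, each packet is classified and either dropped or forwarded. A toy sketch, with an invented packet type and an invented attack signature, just to illustrate the flow:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str      # source IP
    proto: str    # e.g. "TCP", "GRE"
    length: int   # payload size in bytes

def is_attack(pkt, blocklist):
    # Invented signature: known-bad sources, or oversized GRE floods of the
    # sort seen in the attack on this site.
    return pkt.src in blocklist or (pkt.proto == "GRE" and pkt.length > 1400)

def scrub(traffic, blocklist):
    """Yield only the packets that should be forwarded to the customer."""
    for pkt in traffic:
        if not is_attack(pkt, blocklist):
            yield pkt
```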

Given that BackConnect is also a DDoS mitigation company, I asked Madory how one could reasonably tell the difference between a BGP hijack that BackConnect had launched to protect a client versus one that might have been launched for other purposes — such as surreptitiously collecting intelligence on DDoS-based botnets and their owners?

Madory explained that in evaluating whether a BGP hijack is malicious or consensual, he looks at four qualities: The duration of the hijack; whether it was announced globally or just to the target ISP’s local peers; whether the hijacker took steps to obfuscate which ISP was doing the hijacking; and whether the hijacker and hijacked agreed upon the action.


For starters, malicious BGP attacks designed to gather information about an attacking host are likely to be very brief — often lasting just a few minutes. The brevity of such hijacks makes them somewhat ineffective at mitigating large-scale DDoS attacks, which often last for hours at a time. For example, the BGP hijack that BackConnect launched against Verdina lasted a fraction of an hour, and according to the company’s CEO was launched only after the DDoS attack subsided.

Second, if the party conducting the hijack is doing so for information-gathering purposes, that party may attempt to limit the number of ISPs that receive the new routing instructions. This might help an uninvited BGP hijacker achieve the end result of intercepting traffic to and from the target network without informing all of the world’s ISPs simultaneously.

“If a sizable portion of the Internet’s routers do not carry a route to a DDoS mitigation provider, then they won’t be sending DDoS traffic destined for the corresponding address space to the provider’s traffic scrubbing centers, thus limiting the efficacy of any mitigation,” Madory wrote in his own blog post about our joint investigation.

Thirdly, a BGP hijacker who is trying not to draw attention to himself can “forge” the BGP records so that it appears that the hijack was performed by another party. Madory said this forgery process often fools less experienced investigators, but that ultimately it is impossible to hide the true origin of forged BGP records.

Finally, in BGP hijacks that are consensual for DDoS mitigation purposes, the host under attack stops “announcing” to the world’s ISPs that it is the rightful owner of an address block under siege at about the same time the DDoS mitigation provider begins claiming it. When we see BGP hijacks in which both parties are claiming in the BGP records to be authoritative for a given swath of Internet addresses, Madory said, it’s less likely that the BGP hijack is consensual.
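Taken together, Madory’s four qualities amount to an informal classifier. Here is a hedged sketch, with invented weights and thresholds, of how one might score a hijack event on those criteria; real analysis obviously requires complete BGP routing data:

```python
def hijack_suspicion(duration_minutes, announced_globally,
                     forged_origin, victim_still_announcing):
    """Crude score: higher means more likely non-consensual."""
    score = 0
    if duration_minutes < 60:       # brief hijacks are poor DDoS mitigation
        score += 1
    if not announced_globally:      # limited propagation suits interception
        score += 1
    if forged_origin:               # forged records are the strongest signal
        score += 2
    if victim_still_announcing:     # consensual victims withdraw their route
        score += 1
    return score

# The admitted Verdina hijack: brief, announced globally, no forgery, victim
# still announcing -- suspicious on two of the four counts.
print(hijack_suspicion(45, True, False, True))  # -> 2
```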

Madory and KrebsOnSecurity spent several days reviewing historic records of BGP hijacks attributed to BackConnect over the past year, and at least three besides the admitted hijack against Verdina strongly suggest that the company has engaged in this type of intel-gathering activity previously. The strongest indicator of a malicious and non-consensual BGP hijack, Madory said, were the ones that included forged BGP records.

Working together, Madory and KrebsOnSecurity identified at least 17 incidents during that time frame that were possible BGP hijacks conducted by BackConnect. Of those, five included forged BGP records. One was an hours-long hijack against Ghostnet[dot]de, a hosting provider in Germany.

Two other BGP hijacks from BackConnect that included spoofed records were against Staminus Communications, a competing DDoS mitigation provider and a firm that employed BackConnect CEO Townsend for three years as senior vice president of business development until his departure from Staminus in December 2015.

“This hijack wasn’t conducted by Staminus. It was BackConnect posing as Staminus,” Dyn’s Madory concluded.

Two weeks after BackConnect hijacked the Staminus routes, Staminus was massively hacked. Unknown attackers, operating under the banner “Fuck ‘Em All,” reset all of the configurations on the company’s Internet routers, and then posted online Staminus’s customer credentials, support tickets, credit card numbers and other sensitive data. The intruders also posted to Pastebin a taunting note ridiculing the company’s security practices.

BackConnect’s apparent hijack of address space owned by Staminus Communications on Feb. 20, 2016. Image: Dyn.

POINTING FINGERS

I asked Townsend to comment on the BGP hijacks identified by KrebsOnSecurity and Dyn as having spoofed source information. Townsend replied that he could not provide any insight as to why these incidents occurred, noting that he and the company’s chief technology officer — 24-year-old Marshal Webb — only had access and visibility into the network after the company BackConnect Inc. was created on April 27, 2016.

According to Townsend, the current BackConnect Inc. is wholly separate from BackConnect Security LLC, which is a company started in 2014 by two young men: Webb and a 19-year-old security professional named Tucker Preston. In April 2016, Preston was voted out of the company by Webb and Townsend and forced to sell his share of the company, which was subsequently renamed BackConnect Inc.

“Before that, the original owner of BackConnect Security LLC was the only one that had the ability to access servers and perform any type of networking commands,” he explained. “We had never noticed these occurred until this last Saturday and the previous owner never communicated anything regarding these hijacks. Wish I could provide more insight, but Marshal and I do not know the reasons behind the previous owner’s decision to hijack those ranges or what he was trying to accomplish.”

In a phone interview, Preston told KrebsOnSecurity that Townsend had little to no understanding about the technical side of the business, and was merely “a sales guy” for BackConnect. He claims that Webb absolutely had and still has the ability to manipulate BackConnect’s BGP records and announcements.

Townsend countered that Preston was the only network engineer at the company.

“We had to self-learn how to do anything network related once the new company was founded and Tucker removed,” he said. “Marshal and myself didn’t even know how to use BGP until we were forced to learn it in order to bring on new clients. To clarify further, Marshal did not have a networking background and had only been working on our web panel and DDoS mitigation rules.”

L33T, LULZ, W00W00 AND CHIPPY

Preston said he first met Webb in 2013, after the latter admitted to launching DDoS attacks against one of Preston’s customers at the time. Webb already had a somewhat sketchy history, having been fingered as a low-skilled hacker who went by the nicknames “m_nerva” and “Chippy1337.”

Webb, whose Facebook alias is “lulznet,” was publicly accused in 2011 by the hacker group LulzSec of snitching on the activities of the group to the FBI, claiming that information he shared with law enforcement led to the arrest of a teen hacker in England associated with LulzSec. Webb has publicly denied being an informant for the FBI, but did not respond to requests for comment on this story.

LulzSec members claimed that Webb was behind the hacking of the Web site for the video game “Deus Ex.” As KrebsOnSecurity noted in a story about the Deus Ex hack, the intruder defaced the gaming site with the message “Owned by Chippy1337.”

The defacement message left on deusex.com.

I was introduced to Webb at the Defcon hacking convention in Las Vegas in 2014. Since then, I have come to know him a bit more as a participant of w00w00, an invite-only Slack chat channel populated mainly by information security professionals who work in the DDoS mitigation business. Webb chose the handle Chippy1337 for his account in that Slack channel.

At the time, Webb was trying to convince me to take another look at Voxility, a hosting provider that I’ve previously noted has a rather checkered history and one that BackConnect appears to rely upon exclusively for its own hosting.

In our examination of BGP hijacks attributed to BackConnect, Dyn and KrebsOnSecurity identified an unusual incident in late July 2016 in which BackConnect could be seen hijacking an address range previously announced by Datawagon, a hosting provider with a rather dodgy reputation for hosting spammers and DDoS-for-hire sites.

That address range previously announced by Datawagon included the Internet address 1.3.3.7, which is hacker “leet speak” for the word “leet,” or “elite.” Interestingly, on the w00w00 DDoS discussion Slack channel I observed Webb (Chippy1337) offering other participants in the channel vanity addresses and virtual private network (VPN) connections ending in 1.3.3.7. In the screen shot below, Webb demonstrates his access to the 1.3.3.7 address while logged into it on his mobile phone.

Webb, logged into the w00w00 DDoS discussion channel using his nickname “chippy1337,” demonstrating that his mobile phone connection was being routed through the Internet address 1.3.3.7, which BackConnect BGP hijacked in July 2016.

THE MONEY TEAM

The Internet address 1.3.3.7 currently does not respond to browser requests, but it previously routed to a page listing the core members of a hacker group calling itself the Money Team. Other sites also previously tied to that Internet address include numerous DDoS-for-hire services, such as nazistresser[dot]biz, exostress[dot]in, scriptkiddie[dot]eu, packeting[dot]eu, leet[dot]hu, booter[dot]in, vivostresser[dot]com, shockingbooter[dot]com and xboot[dot]info, among others.

The Money Team comprised online gaming enthusiasts of the massively popular game Counter-Strike, and the group’s members specialized in selling cheats and hacks for the game, as well as various booter services that could be used to knock rival gamers offline.

Datawagon’s founder is an 18-year-old American named CJ Sculti, whose 15 minutes of fame came last year in a cybersquatting dispute after he registered the domain dominos.pizza. A cached version of the Money Team’s home page saved by Archive.org lists CJ at the top of the member list, with “chippy1337” as the third member from the top.

The MoneyTeam’s roster as of November 2015. Image: Archive.org.

Asked why he chose to start a DDoS mitigation company with a kid who was into DDoS attacks, Preston said he got to know Webb over several years before teaming up with him to form BackConnect Security LLC.

“We were friends long before we ever started the company together,” Preston said. “I thought Marshal had turned over a new leaf and had moved away from all that black hat stuff. He seem to stay true to that until we split and he started getting involved with the Datawagon guys. I guess his lulz mentality came back in a really stupid way.”

Townsend said Webb was never an FBI informant, and was never arrested for involvement with LulzSec.

“Only a search warrant was executed at his residence,” Townsend said. “Chippy is not a unique handle to Marshal and it has been used by many people. Just because he uses that handle today doesn’t mean any past chippy actions are his doing. Marshal did not even go by Chippy when LulzSec was in the news. These claims are completely fabricated.”

As for the apparent Datawagon hijack, Townsend said Datawagon gave BackConnect permission to announce the company’s Internet address space but later decided not to become a customer.

“They were going to be a client and they gave us permission to announce that IP range via an LOA [letter of authorization]. They did not become a client and we removed the announcement. Also note that the date of the screen shot you present of Marshal talking about the 1.3.3.7. is not even the same as when we announced Datawagons IPs.”

SOMETHING SMELLS BAD

When vDOS was hacked, its entire user database was leaked to this author. Among the more active users of vDOS in 2016 was a user who went by the username “pp412” and who registered in February 2016 using the email address mn@gnu.so.

The information about who originally registered the gnu.so domain has long been hidden behind WHOIS privacy records. But for several months in 2015 and 2016 the registration records show it was registered to a Tucker Preston LLC. Preston denies that he ever registered the gnu.so domain, and claims that he never conducted any booter attacks via vDOS. However, Preston also was on the w00w00 Slack channel along with Webb, and registered there using the email address tucker@gnu.so.

But whoever owned that pp412 account at vDOS was active in attacking a large number of targets, including multiple assaults on networks belonging to the Free Software Foundation (FSF).

Logs from the hacked vDOS attack database show the user pp412 attacked the Free Software Foundation in May 2016.

Lisa Marie Maginnis, until very recently a senior system administrator at the FSF, said the foundation began evaluating DDoS mitigation providers in the months leading up to its LibrePlanet 2016 conference in the third week of March. The organization had never suffered any real DDoS attacks to speak of previously, but NSA whistleblower Edward Snowden was slated to speak at the conference, and the FSF was concerned that someone might launch a DDoS attack to disrupt the streaming of Snowden’s keynote.

“We were worried this might bring us some extra unwanted attention,” she said.

Maginnis said the FSF had looked at BackConnect and other providers, but that it ultimately decided it didn’t have time to do the testing and evaluation required to properly vet a provider prior to the conference. So the organization tabled that decision. As it happened, the Snowden keynote was a success, and the FSF’s fears of a massive DDoS never materialized.

But all that changed in the weeks following the conference.

“The first attack we got started off kind of small, and it came around 3:30 on a Friday morning,” Maginnis recalled. “The next Friday at about the same time we were hit again, and then the next and the next.”

The DDoS attacks grew bigger with each passing week, she said, peaking at more than 200 Gbps — more than enough to knock large hosting providers offline, let alone individual sites like the FSF’s. When the FSF’s Internet provider succeeded in blacklisting the addresses doing the attacking, the attackers switched targets and began going after larger-scale ISPs further upstream.

“That’s when our ISP told us we had to do something because the attacks were really starting to impact the ISP’s other customers,” Maginnis said. “Routing all of our traffic through another company wasn’t exactly an ideal situation for the FSF, but the other choice was we would just be disconnected and there would be no more FSF online.”

In August, the FSF announced that it had signed up with BackConnect for protection from DDoS attacks, in part because the foundation uses only free software to perform its work and BackConnect advertises “open source DDoS protection and security.” BackConnect also agreed to provide the service without charge.

The FSF declined to comment for this story. Maginnis said she can’t be sure whether the foundation will continue to work with BackConnect. But she said the timing of the attacks is suspicious.

“The whole thing just smells bad,” she said. “It does feel like there could be a connection between the DDoS and BackConnect’s timing to approach clients. On the other hand, I don’t think we received a single attack until Tucker [Preston] left BackConnect.”

DDoS attacks are rapidly growing in size, sophistication and disruptive impact, presenting a clear and present threat to online commerce and free speech alike. Since reporting about the hack of vDOS and the arrest of its proprietors nearly two weeks ago, KrebsOnSecurity.com has been under near-constant DDoS attack. One assault this past Sunday morning maxed out at more than 210 Gbps — the largest assault on this site to date.

Addressing the root causes that contribute to these attacks is a complex challenge that requires cooperation, courage and ingenuity from a broad array of constituencies — including ISPs, hosting providers, policymakers, hardware makers, and even end users.

In the meantime, some worry that as the disruption and chaos caused by DDoS attacks continues to worsen, network owners and providers may be increasingly tempted to take matters into their own hands and strike back at their assailants.

But this is almost never a good idea, said Rich Kulawiec, an anti-spam activist who is active on the NANOG mailing list.

“It’s tempting (and even trendy these days in portions of the security world which advocate striking back at putative attackers, never mind that attack attribution is almost entirely an unsolved problem in computing),” Kulawiec wrote. “It’s emotionally satisfying. It’s sometimes momentarily effective. But all it really does [is] open up still more attack vectors and accelerate the spiral to the bottom.”

KrebsOnSecurity would like to thank Dyn and Doug Madory for their assistance in researching the technical side of this story. For a deep dive into the BGP activity attributed to BackConnect, check out Madory’s post, BackConnect’s Suspicious Hijacks.

32 Security and Compliance Sessions Now Live in the re:Invent 2016 Session Catalog

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx3UX2WK7G84E5J/32-Security-and-Compliance-Sessions-Now-Live-in-the-re-Invent-2016-Session-Catal

AWS re:Invent 2016 begins November 28, and the live session catalog now includes 32 security and compliance sessions: 19 in the Security & Compliance track and 13 in the re:Source Mini Con for Security Services. All 32 titles and abstracts are included below.

Security & Compliance Track sessions

As in past years, the sessions in the Security & Compliance track will take place in The Venetian | Palazzo in Las Vegas. Here’s what you have to look forward to!

SAC201 – Lessons from a Chief Security Officer: Achieving Continuous Compliance in Elastic Environments

Does meeting stringent compliance requirements keep you up at night? Do you worry about having the right audit trails in place as proof? 
 
Cengage Learning’s Chief Security Officer, Robert Hotaling, shares his organization’s journey to AWS, and how they enabled continuous compliance for their dynamic environment with automation. When Cengage shifted from publishing to digital education and online learning, they needed a secure, elastic infrastructure for their data-intensive and cyclical business, and workload-layer security tools that would help them meet compliance requirements (e.g., PCI).
 
In this session, you will learn why building security in from the beginning saves you time (and painful retrofits) later, how to gather and retain audit evidence for instances that are only up for minutes or hours, and how Cengage used Trend Micro Deep Security to meet many compliance requirements and ensured instances were instantly protected as they came online in a hybrid cloud architecture. Session sponsored by Trend Micro, Inc.
  

SAC302 – Automating Security Event Response, from Idea to Code to Execution

With security-relevant services such as AWS Config, VPC Flow Logs, Amazon CloudWatch Events, and AWS Lambda, you now have the ability to programmatically wrangle security events that may occur within your AWS environment, including prevention, detection, response, and remediation. This session covers the process of automating security event response with various AWS building blocks, taking several ideas from drawing board to code, and gaining confidence in your coverage by proactively testing security monitoring and response effectiveness before anyone else does.
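As one concrete illustration of the pattern this session describes, here is a hedged sketch of an AWS Lambda function (Python, boto3), triggered by a CloudWatch Events rule on CloudTrail API activity, that reverts a security group change opening SSH to the world. The event shape follows CloudTrail-delivered-via-CloudWatch-Events, and the rule and permissions wiring is assumed to exist:

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    detail = event["detail"]
    if detail.get("eventName") != "AuthorizeSecurityGroupIngress":
        return
    params = detail["requestParameters"]
    group_id = params["groupId"]
    for perm in params["ipPermissions"]["items"]:
        for ip_range in perm.get("ipRanges", {}).get("items", []):
            # Remediate only the world-open SSH case in this sketch.
            if ip_range.get("cidrIp") == "0.0.0.0/0" and perm.get("fromPort") == 22:
                ec2.revoke_security_group_ingress(
                    GroupId=group_id,
                    IpProtocol=perm["ipProtocol"],
                    FromPort=22,
                    ToPort=22,
                    CidrIp="0.0.0.0/0",
                )
                print(f"Revoked world-open SSH on {group_id}")
```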
 
 

SAC303 – Become an AWS IAM Policy Ninja in 60 Minutes or Less

Are you interested in learning how to control access to your AWS resources? Have you ever wondered how to best scope down permissions to achieve least privilege permissions access control? If your answer to these questions is "yes," this session is for you. We take an in-depth look at the AWS Identity and Access Management (IAM) policy language. We start with the basics of the policy language and how to create and attach policies to IAM users, groups, and roles. As we dive deeper, we explore policy variables, conditions, and other tools to help you author least privilege policies. Throughout the session, we cover some common use cases, such as granting a user secure access to an Amazon S3 bucket or to launch an Amazon EC2 instance of a specific type. 
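For readers who want a preview of what least privilege looks like in practice, here is a small sketch using boto3: an inline policy granting read-only access to a single S3 bucket, attached to a group. The group and bucket names are placeholders:

```python
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",     # ListBucket applies to the bucket
            "arn:aws:s3:::example-bucket/*",   # GetObject applies to its objects
        ],
    }],
}

iam.put_group_policy(
    GroupName="analysts",                      # hypothetical group
    PolicyName="s3-read-example-bucket",
    PolicyDocument=json.dumps(policy_document),
)
```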
 

SAC304 – Predictive Security: Using Big Data to Fortify Your Defenses

In a rapidly changing IT environment, detecting and responding to new threats is more important than ever. This session shows you how to build a predictive analytics stack on AWS, which harnesses the power of Amazon Machine Learning in conjunction with Amazon Elasticsearch Service, AWS CloudTrail, and VPC Flow Logs to perform tasks such as anomaly detection and log analysis. We also demonstrate how you can use AWS Lambda to act on this information in an automated fashion, such as performing updates to AWS WAF and security groups, leading to an improved security posture and alleviating operational burden on your security teams.
 

SAC305 – Auditing a Cloud Environment in 2016: What Tools Can Internal and External Auditors Leverage to Maintain Compliance?

With the rapid increase of complexity in managing security for distributed IT and cloud computing, security and compliance managers can innovate to ensure a high level of security when managing AWS resources. In this session, Chad Woolf, director of compliance for AWS, discusses which AWS service features to leverage to achieve a high level of security assurance over AWS resources, giving you more control of the security of your data and preparing you for a wide range of audits. You can now implement point-in-time audits and continuous monitoring in system architecture. Internal and external auditors can learn about emerging tools for monitoring environments in real time. Follow use case examples and demonstrations of services like Amazon Inspector, Amazon CloudWatch Logs, AWS CloudTrail, and AWS Config. Learn firsthand what some AWS customers have accomplished by leveraging AWS features to meet specific industry compliance requirements.
 

SAC306 – Encryption: It Was the Best of Controls, It Was the Worst of Controls

Encryption is a favorite of security and compliance professionals everywhere. Many compliance frameworks actually mandate encryption. Though encryption is important, it is also treacherous. Cryptographic protocols are subtle, and researchers are constantly finding new and creative flaws in them. Using encryption correctly, especially over time, also is expensive because you have to stay up to date.
 
AWS wants to encrypt data. And our customers, including Amazon, want to encrypt data. In this talk, we look at some of the challenges with using encryption, how AWS thinks internally about encryption, and how that thinking has informed the services we have built, the features we have vended, and our own usage of AWS.
 

SAC307 – The Psychology of Security Automation

Historically, relationships between developers and security teams have been challenging. Security teams sometimes see developers as careless and ignorant of risk, while developers might see security teams as dogmatic barriers to productivity. Can technologies and approaches such as the cloud, APIs, and automation lead to happier developers and more secure systems? Netflix has had success pursuing this approach, by leaning into the fundamental cloud concept of self-service, the Netflix cultural value of transparency in decision making, and the engineering efficiency principle of facilitating a “paved road.”
 
This session explores how security teams can use thoughtful tools and automation to improve relationships with development teams while creating a more secure and manageable environment. Topics include Netflix’s approach to IAM entity management, Elastic Load Balancing and certificate management, and general security configuration monitoring.
 

SAC308 – Hackproof Your Cloud: Responding to 2016 Threats

In this session, CloudCheckr CTO Aaron Newman highlights effective strategies and tools that AWS users can employ to improve their security posture. Specific emphasis is placed upon leveraging native AWS services, and he includes concrete steps that users can begin employing immediately. Session sponsored by CloudCheckr.
 

SAC309 – You Can’t Protect What You Can’t See: AWS Security Monitoring & Compliance Validation from Adobe

Ensuring security and compliance across a globally distributed, large-scale AWS deployment requires a scalable process and a comprehensive set of technologies. In this session, Adobe will deep-dive into the AWS native monitoring and security services and some Splunk technologies leveraged globally to perform security monitoring across a large number of AWS accounts. You will learn about Adobe’s collection plumbing, including components of S3, Kinesis, CloudWatch, SNS, DynamoDB and Lambda, as well as the tooling and processes used at Adobe to deliver scalable monitoring without managing an unwieldy number of API keys and input stanzas.  Session sponsored by Splunk.
 

SAC310 – Securing Serverless Architectures, and API Filtering at Layer 7

AWS serverless architecture components such as Amazon S3, Amazon SQS, Amazon SNS, CloudWatch Logs, DynamoDB, Amazon Kinesis, and Lambda can be tightly constrained in their operation. However, it may still be possible to use some of them to propagate payloads that could be used to exploit vulnerabilities in some consuming endpoints or user-generated code. This session explores techniques for enhancing the security of these services, from assessing and tightening permissions in IAM to integrating tools and mechanisms for inline and out-of-band payload analysis that are more typically applied to traditional server-based architectures.
 

SAC311 – Evolving an Enterprise-level Compliance Framework with Amazon CloudWatch Events and AWS Lambda

Johnson & Johnson is in the process of doing a proof of concept to rewrite the compliance framework that they presented at re:Invent 2014. This framework leverages the newest AWS services and abandons the need for continual describes and master rules servers. Instead, Johnson & Johnson plans to use a distributed, event-based architecture that not only reduces costs but also assigns costs to the appropriate projects rather than central IT.
 

SAC312 – Architecting for End-to-End Security in the Enterprise

This session tells how our most mature, security-minded Fortune 500 customers adopt AWS while improving end-to-end protection of their sensitive data. Learn about the enterprise security architecture decisions made during actual sensitive workload deployments as told by the AWS professional services and the solution architecture team members who lived them. In this very prescriptive, technical walkthrough, we share lessons learned from the development of enterprise security strategy, security use-case development, security configuration decisions, and the creation of AWS security operations playbooks to support customer architectures.
 

SAC313 – Enterprise Patterns for Payment Card Industry Data Security Standard (PCI DSS)

AWS Professional Services has completed five deep PCI engagements with enterprise customers over the last year. Common patterns were identified and codified in various artifacts. This session introduces the patterns that help customers address PCI requirements in a standard manner that also meets AWS best practices. Hear customers speak about their side of the journey and the solutions that they used to deploy a PCI compliance workload.
 

SAC314 – GxP Compliance in the Cloud

GxP is an acronym that refers to the regulations and guidelines applicable to life sciences organizations that make food and medical products such as drugs, medical devices, and medical software applications. The overall intent of GxP requirements is to ensure that food and medical products are safe for consumers and to ensure the integrity of data used to make product-related safety decisions.
 
The term GxP encompasses a broad range of compliance-related activities such as Good Laboratory Practices (GLP), Good Clinical Practices (GCP), Good Manufacturing Practices (GMP), and others, each of which has product-specific requirements that life sciences organizations must implement based on the 1) type of products they make and 2) country in which their products are sold. When life sciences organizations use computerized systems to perform certain GxP activities, they must ensure that the computerized GxP system is developed, validated, and operated appropriately for the intended use of the system.
 
For this session, co-presented with Merck, services such as Amazon EC2, Amazon CloudWatch Logs, AWS CloudTrail, AWS CodeCommit, Amazon Simple Storage Service (S3), and AWS CodePipeline will be discussed with an emphasis on implementing GxP-compliant systems in the AWS Cloud.
 

SAC315 – Scaling Security Operations: Using AWS Services to Automate Governance of Security Controls and Remediate Violations

This session enables security operators to use data provided by AWS services such as AWS CloudTrail, AWS Config, Amazon CloudWatch Events, and VPC Flow Logs to reduce vulnerabilities, and when required, execute timely security actions that fix the violation or gather more information about the vulnerability and attacker. We look at security practices for compliance with PCI, CIS Security Controls, and HIPAA. We dive deep into an example from an AWS customer, Siemens AG, which has automated governance and implemented automated remediation using CloudTrail, AWS Config Rules, and AWS Lambda. A prerequisite for this session is knowledge of software development with Java, Python, or Node.
 

SAC316 – Security Automation: Spend Less Time Securing Your Applications

As attackers become more sophisticated, web application developers need to constantly update their security configurations. Static firewall rules are no longer good enough. Developers need a way to deploy automated security that can learn from the application behavior and identify bad traffic patterns to detect bad bots or bad actors on the Internet. This session showcases some of the real-world customer use cases that use machine learning and AWS WAF (a web application firewall) to automatically identify bad actors affecting multiplayer gaming applications. We also present tutorials and code samples that show how customers can analyze traffic patterns and deploy new AWS WAF rules on the fly.
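A hedged sketch of the final step in such a pipeline: once traffic analysis flags an abusive source, the address is inserted into an AWS WAF (classic API) IPSet referenced by a blocking rule. The IPSet ID and addresses are placeholders:

```python
import boto3

waf = boto3.client("waf")

def block_ip(ip_set_id, cidr):
    """Insert a CIDR into a WAF IPSet so the associated rule blocks it."""
    token = waf.get_change_token()["ChangeToken"]
    waf.update_ip_set(
        IPSetId=ip_set_id,
        ChangeToken=token,
        Updates=[{
            "Action": "INSERT",
            "IPSetDescriptor": {"Type": "IPV4", "Value": cidr},
        }],
    )

block_ip("example-ipset-id", "198.51.100.7/32")  # placeholder values
```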
 

SAC317 – IAM Best Practices to Live By

This session covers AWS Identity and Access Management (IAM) best practices that can help improve your security posture. We cover how to manage users and their security credentials. We also explain why you should delete your root access keys—or at the very least, rotate them regularly. Using common use cases, we demonstrate when to choose between using IAM users and IAM roles. Finally, we explore how to set permissions to grant least privilege access control in one or more of your AWS accounts.
 

SAC318 – Life Without SSH: Immutable Infrastructure in Production

This session covers what a real-world production deployment of a fully automated deployment pipeline looks like with instances that are deployed without SSH keys. By leveraging AWS CodeDeploy and Docker, we will show how we achieved semi-immutable and fully immutable infrastructures, and what the challenges and remediations were.
 

SAC401 – 5 Security Automation Improvements You Can Make by Using Amazon CloudWatch Events and AWS Config Rules

This session demonstrates 5 different security and compliance validation actions that you can perform using Amazon CloudWatch Events and AWS Config rules. This session focuses on the actual code for the various controls, actions, and remediation features, and how to use various AWS services and features to build them. The demos in this session include CIS Amazon Web Services Foundations validation; host-based AWS Config rules validation using AWS Lambda, SSH, and VPC-E; automatic creation and assigning of MFA tokens when new users are created; and automatic instance isolation based on SSH logons or VPC Flow Logs deny logs. This session focuses on code and live demos.
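As a taste of the “automatic instance isolation” demo mentioned above, here is a hedged sketch: swap a suspect instance’s security groups for an empty quarantine group, cutting it off from the network while preserving it for forensics. The IDs are placeholders, and the quarantine group is assumed to exist with no rules:

```python
import boto3

ec2 = boto3.client("ec2")

QUARANTINE_SG = "sg-0123456789abcdef0"  # hypothetical empty security group

def isolate_instance(instance_id):
    # Replacing all security groups severs traffic without terminating the
    # instance, so memory and disk remain available for investigation.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        Groups=[QUARANTINE_SG],
    )

isolate_instance("i-0123456789abcdef0")  # placeholder instance ID
```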
 
 
 

re:Source Mini Con for Security Services sessions

The re:Source Mini Con for Security Services offers you an opportunity to dive even deeper into security and compliance topics. Think of it as a one-day, fully immersive mini-conference. The Mini Con will take place in The Mirage in Las Vegas.

SEC301 – Audit Your AWS Account Against Industry Best Practices: The CIS AWS Benchmarks

Audit teams can consistently evaluate the security of an AWS account. Best practices greatly reduce complexity when managing risk and auditing the use of AWS for critical, audited, and regulated systems. You can integrate these security checks into your security and audit ecosystem. Center for Internet Security (CIS) benchmarks are incorporated into products developed by 20 security vendors, are referenced by PCI 3.1 and FedRAMP, and are included in the National Vulnerability Database (NVD) National Checklist Program (NCP). This session shows you how to implement foundational security measures in your AWS account. The prescribed best practices help make implementation of core AWS security measures more straightforward for security teams and AWS account owners.
 

SEC302 – WORKSHOP: Working with AWS Identity and Access Management (IAM) Policies and Configuring Network Security Using VPCs and Security Groups

In this 2.5-hour workshop, we will show you how to manage permissions by drafting AWS IAM policies that adhere to the principle of least privilege–granting the least permissions required to achieve a task. You will learn all the ins and outs of drafting and applying IAM policies appropriately to help secure your AWS resources.
 
In addition, we will show you how to configure network security using VPCs and security groups. 
 

SEC303 – Get the Most from AWS KMS: Architecting Applications for High Security

AWS Key Management Service provides an easy and cost-effective way to secure your data in AWS. In this session, you learn about leveraging the latest features of the service to minimize risk for your data. We also review the recently released Import Key feature that gives you more control over the encryption process by letting you bring your own keys to AWS.
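For orientation before the session, a brief sketch of the envelope-encryption pattern that underlies most KMS usage: generate a data key under a master key, encrypt locally with the plaintext key, and store only the encrypted copy of the key. The key alias is a placeholder, and the local cipher step is elided:

```python
import boto3

kms = boto3.client("kms")

resp = kms.generate_data_key(KeyId="alias/example-key", KeySpec="AES_256")
plaintext_key = resp["Plaintext"]        # use for local encryption, then discard
encrypted_key = resp["CiphertextBlob"]   # safe to store alongside the data

# Later, recover the plaintext data key in order to decrypt:
restored = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
assert restored == plaintext_key
```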
 

SEC304 – Reduce Your Blast Radius by Using Multiple AWS Accounts Per Region and Service

This session shows you how to reduce your blast radius by using multiple AWS accounts per region and service, which helps limit the impact of a critical event such as a security breach. Using multiple accounts helps you define boundaries and provides blast-radius isolation.
 

SEC305 – Scaling Security Resources for Your First 10 Million Customers

Cloud computing offers many advantages, such as the ability to scale your web applications or website on demand. But how do you scale your security and compliance infrastructure along with the business? Join this session to understand best practices for scaling your security resources as you grow from zero to millions of users. Specifically, you learn the following:
  • How to scale your security and compliance infrastructure to keep up with a rapidly expanding threat base.
  • The security implications of scaling for numbers of users and numbers of applications, and how to satisfy both needs.
  • How agile development with integrated security testing and validation leads to a secure environment.
  • Best practices and design patterns of a continuous delivery pipeline and the appropriate security-focused testing for each.
  • The necessity of treating your security as code, just as you would do with infrastructure.
The services covered in this session include AWS IAM, Auto Scaling, Amazon Inspector, AWS WAF, and Amazon Cognito.
 

SEC306 – WORKSHOP: How to Implement a General Solution for Federated API/CLI Access Using SAML 2.0

AWS supports identity federation using SAML (Security Assertion Markup Language) 2.0. Using SAML, you can configure your AWS accounts to integrate with your identity provider (IdP). Once configured, your federated users are authenticated and authorized by your organization’s IdP, and then can use single sign-on (SSO) to sign in to the AWS Management Console. This not only obviates the need for your users to remember yet another user name and password, but it also streamlines identity management for your administrators. This is great if your federated users want to access the AWS Management Console, but what if they want to use the AWS CLI or programmatically call AWS APIs?
 
In this 2.5-hour workshop, we will show you how you can implement federated API and CLI access for your users. The examples provided use the AWS Python SDK and some additional client-side integration code. If you have federated users that require this type of access, implementing this solution should earn you more than one high five on your next trip to the water cooler. 
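Since the abstract says the examples use the AWS Python SDK, here is a hedged sketch of the core exchange: trading a SAML assertion from your IdP for temporary credentials that work with the CLI and APIs. Obtaining the base64-encoded assertion from the IdP is the part the workshop fills in; the ARNs are placeholders:

```python
import boto3

sts = boto3.client("sts")

def saml_credentials(assertion_b64):
    """Exchange a base64-encoded SAML assertion for temporary credentials."""
    resp = sts.assume_role_with_saml(
        RoleArn="arn:aws:iam::123456789012:role/ExampleFederatedRole",
        PrincipalArn="arn:aws:iam::123456789012:saml-provider/ExampleIdP",
        SAMLAssertion=assertion_b64,
    )
    return resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken
```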
 

SEC307 – Microservices, Macro Security Needs: How Nike Uses a Multi-Layer, End-to-End Security Approach to Protect Microservice-Based Solutions at Scale

Microservice architectures provide numerous benefits but also have significant security challenges. This session presents how Nike uses layers of security to protect consumers and business. We show how network topology, network security primitives, identity and access management, traffic routing, secure network traffic, secrets management, and host-level security (antivirus, intrusion prevention system, intrusion detection system, file integrity monitoring) all combine to create a multilayer, end-to-end security solution for our microservice-based premium consumer experiences. Technologies to be covered include Amazon Virtual Private Cloud, access control lists, security groups, IAM roles and profiles, AWS KMS, NAT gateways, ELB load balancers, and Cerberus (our cloud-native secrets management solution).
 

SEC308 – Securing Enterprise Big Data Workloads on AWS

Security of big data workloads in a hybrid IT environment often comes as an afterthought. This session discusses how enterprises can architect securing big data workloads on AWS. We cover the application of authentication, authorization, encryption, and additional security principles and mechanisms to workloads leveraging Amazon Elastic MapReduce and Amazon Redshift.
 

SEC309 – Proactive Security Testing in AWS: From Early Implementation to Deployment Security Testing

Attend this session to learn about security testing your applications in AWS. Effective security testing is challenging, but multiple features and services within AWS make security testing easier. This session covers common approaches to testing, including how we think about testing within AWS, how to apply AWS services to your test setup, remediating findings, and automation.
 

SEC310 – Mitigating DDoS Attacks on AWS: Five Vectors and Four Use Cases

Distributed denial of service (DDoS) attack mitigation has traditionally been a challenge for those hosting on fixed infrastructure. In the cloud, users can build applications on elastic infrastructure that is capable of mitigating and absorbing DDoS attacks. What once required overprovisioning, additional infrastructure, or third-party services is now an inherent capability of many cloud-based applications. This session explains common DDoS attack vectors and how AWS customers with different use cases are addressing these challenges. As part of the session, we show you how to build applications that are resilient to DDoS and demonstrate how they work in practice.
 

SEC311 – How to Automate Policy Validation

Managing permissions across a growing number of identities and resources can be time consuming and complex. Testing, validating, and understanding permissions before and after policy changes are deployed is critical to ensuring that your users and systems have the appropriate level of access. This session walks through the tools that are available to test, validate, and understand the permissions in your account. We demonstrate how to use these tools and how to automate them to continually validate the permissions in your accounts. The tools demonstrated in this session help you answer common questions such as the following (a short example using the policy simulator appears after the list):
  • How does a policy change affect the overall permissions for a user, group, or role?
  • Who has access to perform powerful actions?
  • Which services can this role access?
  • Can a user access a specific Amazon S3 bucket?
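One way to answer questions like the last one programmatically is the IAM policy simulator, which is also exposed through the API. Below is a minimal sketch using the AWS Python SDK; it is one possible approach, not necessarily the exact tooling covered in the session, and the ARNs are placeholders.

```python
import boto3

# Minimal sketch: use the IAM policy simulator API to check whether a
# user can read from a specific S3 bucket. ARNs are placeholders.
iam = boto3.client("iam")

result = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:user/alice",
    ActionNames=["s3:GetObject", "s3:ListBucket"],
    ResourceArns=[
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*",
    ],
)

# Each evaluation reports "allowed", "explicitDeny", or "implicitDeny".
for evaluation in result["EvaluationResults"]:
    print(evaluation["EvalActionName"], "->", evaluation["EvalDecision"])
```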

SEC312 – State of the Union for re:Source Mini Con for Security Services

AWS CISO Steve Schmidt presents the state of the union for re:Source Mini Con for Security Services. He addresses the state of the security and compliance ecosystem; large enterprise customer additions in key industries; the vertical view: maturing spaces for AWS security assurance (GxP, IoT, CIS foundations); and the international view: data privacy protections and data sovereignty. The state of the union also addresses a number of new identity, directory, and access services, and closes by looking at what’s on the horizon.
 

SEC401 – Automated Formal Reasoning About AWS Systems

Automatic and semiautomatic mechanical theorem provers are now being used within AWS to find proofs in mathematical logic that establish desired properties of key AWS components. In this session, we outline these efforts and discuss how mechanical theorem provers are used to replay found proofs of desired properties when software artifacts or networks are modified, thus helping provide security throughout the lifetime of the AWS system. We consider these use cases:
  • Using constraint solving to show that VPCs have desired safety properties, and maintaining this continuously at each change to the VPC.
  • Using automatic mechanical theorem provers to prove that s2n’s HMAC is correct and maintaining this continuously at each change to the s2n source code.
  • Using semiautomatic mechanical theorem provers to prove desired safety properties of Sassy protocol.
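To make the first use case concrete, here is a toy sketch using the Z3 constraint solver (pip install z3-solver). This is not AWS's actual encoding, only an illustration of the general idea: model the rule set as constraints and ask the solver for a counterexample to the safety property; an unsat answer means the property holds.

```python
from z3 import Int, Solver, And, Or, unsat

# Toy model (not AWS's actual encoding): two ingress rules, and the
# safety property "SSH (port 22) is never reachable".
port = Int("port")

rules = Or(
    port == 443,                      # HTTPS rule
    And(port >= 1000, port <= 2000),  # ephemeral-range rule
)

s = Solver()
s.add(rules, port == 22)  # ask for a counterexample to the property

if s.check() == unsat:
    print("Safety property holds: no rule admits port 22.")
else:
    print("Counterexample:", s.model())
```

Re-running a check like this on every change to the rule set is what "maintaining this continuously" means in practice.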
 
– Craig

Anti-Piracy Outfits Caught Fabricating Takedown Notices

Post Syndicated from Ernesto original https://torrentfreak.com/anti-piracy-outfits-caught-fabricating-takedown-notices-160918/

Every hour of the day dozens of anti-piracy outfits scour the web to find copyright-infringing content, so they can target it with takedown notices.

Since they’re dealing with such a massive volume of often automated requests, it’s no surprise that every now and then an error is made.

In recent years we have frequently pointed out such mistakes, some more serious than others. In a few cases, however, reporting organizations appear to make very little effort to be correct.

In fact, we’ve discovered that some are deliberately and automatically fabricating links to broaden their scope.

A few weeks ago we reported that defunct torrent cache services were receiving takedown notices for files that never existed, but it appears that the problem is much broader than first thought. Various torrent proxy and clone sites, dead or alive, are also receiving similar treatment.

For example, take the website Torrentz2.eu, which is a clone of the original Torrentz.eu that shut down a few weeks ago. The site links to plenty of copyrighted content, drawing the attention of rightsholders including the anti-piracy department at NBC Universal.

Below is a takedown notice sent out by NBC recently, one of many that come in the same format.

One of NBCUniversal’s takedown notices


For most outsiders this may look like a proper notice. However, upon closer inspection it’s clear that the URL structure of the links differs from the format Torrentz2 uses. The notice in question lists this URL:

http://torrentz2.eu/dv/2012+dvdrip+battleship+mp4-q

On Torrentz2, however, the search “2012 dvdrip battleship mp4” generates the following URL, which is clearly different.

https://torrentz2.eu/search?f=2012+dvdrip+battleship+mp4

The link NBC Universal reports has never existed and simply returns a blank page. TorrentFreak reached out to the operator of the site who confirmed that they have never used this URL format.

This ‘mistake’ can be explained, though. The URL structure NBCUniversal uses comes from the original Torrentz site, meaning that NBC simply did a search-and-replace, swapping the old domain for the new one without checking whether the resulting URLs exist.

In other words, they fabricated these links.
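For what it’s worth, the sanity check the notice senders skipped is trivial to automate. The sketch below, based only on the two URL formats shown above, rejects the fabricated format and accepts the real one.

```python
import re

# Validate that a reported URL matches the target site's actual URL
# structure before putting it in a takedown notice. The pattern below
# reflects the real Torrentz2 search format shown above.
TORRENTZ2_SEARCH = re.compile(r"^https?://torrentz2\.eu/search\?f=[\w+%.-]+$")

def plausible_torrentz2_url(url: str) -> bool:
    """Return True only if the URL fits Torrentz2's real search format."""
    return bool(TORRENTZ2_SEARCH.match(url))

# The URL from NBCUniversal's notice fails the check...
print(plausible_torrentz2_url("http://torrentz2.eu/dv/2012+dvdrip+battleship+mp4-q"))
# ...while a URL the site actually generates passes.
print(plausible_torrentz2_url("https://torrentz2.eu/search?f=2012+dvdrip+battleship+mp4"))
```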

Further research reveals that this practice is rather common for clones and proxy sites. In the past we’ve already raised suspicions about long lists of URLs with the same structure, which appeared to be automatically generated.

Since most of these did link to actual content, it was hard to prove that they were being made up. However, when takedown notices are sent long after a site has gone offline, targeting content that didn’t exist when the site was still up, it becomes crystal clear what’s happening.

Take the domain name Extratorrent.space, for example. This was part of a ring of thousands of proxy sites that all shut down last year. However, anti-piracy groups are still targeting these URLs with new takedown requests.

In fact, there are many recent takedown requests that list content that wasn’t available at the time the site was operating. This is indisputable proof that these URLs are fabricated.

Ghost proxies


The screenshot above is just a random request that came in this week, seemingly targeting pirated copies of X-Men: Apocalypse. However, the proxy site domains in question have been offline for a long time, some close to a year.

Please note that these are not isolated or rare ‘mistakes.’ Tens of thousands, if not hundreds of thousands, of fabricated links to these proxy sites have been sent out over the past several months, inflating Google’s takedown numbers.

So why are these fabricated notices being sent? One reason might be laziness. Anti-piracy outfits discover the URL structure of a site and simply keep sending notices without checking if the sites are still up.

Another motivation for anti-piracy outfits could be to boost their numbers. Many get paid based on the volume of notices they send out, so more links means extra cash.

Whatever the case, the fabricated links above are just another example of the carelessness some rightsholders and reporting organizations bring to the DMCA takedown procedure, a carelessness that skews the actual takedown numbers.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Elsevier Wants CloudFlare to Expose Pirate Sites

Post Syndicated from Ernesto original https://torrentfreak.com/elsevier-wants-cloudflare-to-expose-pirate-sites-160917/

Elsevier is one of the largest academic publishers in the world.

Through its ScienceDirect portal the company controls access to millions of scientific articles spread out over thousands of journals, most of which are behind a paywall.

Not all academics are happy with these restrictions that hamper their work. As a result, hundreds of thousands of researchers are turning to ‘pirate’ sites such as Sci-Hub, Libgen and Bookfi to access papers for free.

Elsevier views these sites as a major threat to its business model and last year it filed a complaint at a New York District Court, accusing the sites’ operators of systematic copyright infringement.

The publisher managed to obtain a preliminary injunction to seize the sites’ domain names. However, the case is still ongoing and the three sites in question continue to operate from new domains.

Over the past several months a lot of media coverage focused on Sci-Hub and its operator Alexandra Elbakyan. However, Elsevier still has no clue who’s behind the other two sites. With help from Cloudflare, it hopes to fill in the gaps.

Earlier this week Elsevier submitted a motion for leave to take discovery (pdf), so it can demand logs and other personally identifiable data about the operators of Libgen and Bookfi from Cloudflare.

Both sites previously used Cloudflare’s CDN services and the publisher is hoping that they still have crucial information on file.

Elsevier already tried to obtain the host IP addresses of the sites through the “Trusted Reporter” program, but Cloudflare replied that it could not share this info for sites that are no longer active on its network.

In addition to contacting Cloudflare, the academic publisher also requested information from Whois Privacy Corp. – the domain registration anonymization service used by both Libgen.org and Bookfi.org – but the company hasn’t responded to these requests at all.

“Elsevier has used all of the tools at its disposal in its attempt to identify the operators of Libgen.org and Bookfi.org,” Elsevier informs the court.

“However, as a consequence of the Defendants’ use of various service providers to anonymize their identities, as well as the nonresponsiveness of those service providers to Elsevier’s requests to date, these efforts have thus far been fruitless.”

According to Elsevier, a court-ordered discovery subpoena is the only option to move the case forward and identify the defendants behind Libgen and Bookfi.

“As a result, Elsevier has exhausted all other reasonable options and now must now seek this Court’s intervention in order to obtain identifying information concerning John Doe Defendants […] from CloudFlare: a business which has had direct dealings with both Libgen.org and Bookfi.org,” Elsevier adds.

Since neither Libgen nor Bookfi currently uses Cloudflare’s services, it remains to be seen whether the company still has the sites’ old IP addresses and other information on file.

On Thursday the court granted Elsevier leave to take discovery, ordering CloudFlare to preserve all relevant logs until a final discovery decision is made. Before that happens, CloudFlare will have a chance to respond to the request.

To leave room for the possible discovery process, Elsevier previously asked for the pretrial hearing to be postponed. It will now take place in late October.

Meanwhile, the websites continue serving ‘pirated’ papers and books through their new domain names at golibgen.io, bookfi.net and sci-hub.cc.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Earth on AWS: A Home for Geospatial Data on AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/earth-on-aws-a-home-for-geospatial-data-on-aws/

My colleague Joe Flasher is part of our Open Data team. He wrote the guest post below in order to let you know about our new Earth on AWS project.


– Jeff;


 

In March 2015, we launched Landsat on AWS, a Public Dataset made up of imagery from the Landsat 8 satellite. Within the first year of launching Landsat on AWS, we logged over 1 billion requests for Landsat data and have been inspired by our customers’ innovative uses of the data. Landsat on AWS showed that sharing data in the cloud makes it possible for anyone to build planetary-scale applications without the bandwidth, storage, memory, and processing power limitations of conventional IT infrastructure.

Today, we are launching Earth on AWS and making more large geospatial datasets openly available in the cloud so you can bring your algorithms to the data instead of having to download the data to your local machine. But more than just making the data openly available, the Earth on AWS initiative will focus on providing resources to help you understand how to work with the data. We are also announcing an associated Call for Proposals for research utilizing the Earth on AWS datasets.

Making More Data Available
Earth on AWS currently contains the following data sets:

NAIP 1m Imagery
The National Agriculture Imagery Program (NAIP) acquires aerial imagery during the agricultural growing seasons in the continental U.S. Roughly 1 meter aerial imagery (Red, Green, Blue, NIR) is available on Amazon S3. Learn more about NAIP on AWS.

Terrain Tiles
Worldwide elevation data available in terrain vector tiles. Additionally, in the United States, 10 meter NED data now augments the earlier 3 meter NED and 30 meter SRTM data for crisper, more consistent mountain detail. Tiles are available via Amazon S3. Learn more about terrain tiles.

GDELT – A Global Database of Society
The GDELT Project monitors the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages and identifies the people, locations, organizations, counts, themes, sources, emotions, quotes, images, and events driving our global society every second of every day. Learn more about GDELT.

Landsat 8 Satellite Imagery
Landsat 8 data is available for anyone to use via Amazon Simple Storage Service (S3). All Landsat 8 scenes from 2015 are available along with a selection of cloud-free scenes from 2013 and 2014. All new Landsat 8 scenes are made available each day, often within hours of production. The satellite images the entire Earth every 16 days at a roughly 30 meter resolution. Learn more about Landsat on AWS.
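As a quick illustration of how easy it is to get started, here is a minimal sketch using the AWS Python SDK to list objects in the public Landsat bucket anonymously. The bucket name (landsat-pds), its us-west-2 region, and the L8/path/row key layout reflect the dataset as documented at the time of writing and may change.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous access to a public dataset: no AWS credentials required.
s3 = boto3.client(
    "s3",
    region_name="us-west-2",
    config=Config(signature_version=UNSIGNED),
)

# List a few objects under one Landsat 8 path/row prefix.
response = s3.list_objects_v2(
    Bucket="landsat-pds",
    Prefix="L8/139/045/",
    MaxKeys=5,
)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```

The same anonymous-client pattern works for the other public datasets listed here; only the bucket names and key layouts differ.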

NEXRAD Weather Radar
The Next Generation Weather Radar (NEXRAD) is a network of 160 high-resolution Doppler radar sites that detects precipitation and atmospheric movement and disseminates data in approximately 5 minute intervals from each site. NEXRAD enables severe storm prediction and is used by researchers and commercial enterprises to study and address the impact of weather across multiple sectors. Learn more about NEXRAD on AWS.

SpaceNet Machine Learning Corpus
SpaceNet is a corpus of very high-resolution DigitalGlobe satellite imagery and labeled training data for researchers to utilize to develop and train machine learning algorithms. The dataset is made up of roughly 1,990 square kilometers of imagery at 50 cm resolution and 220,594 corresponding building footprints. Learn more about the SpaceNet corpus.

NASA Earth Exchange
The NASA Earth Exchange (NEX) makes it easier and more efficient for researchers to access and process earth science data. NEX datasets available on Amazon S3 include downscaled climate projections (including newly available Localized Constructed Analogs), global MODIS vegetation indices, and Landsat Global Land Survey data. Learn more about the NASA Earth Exchange.

Beyond Opening Data
Open data is only useful when you understand what it is and how to use it for your own purposes. To that end, Earth on AWS features videos and articles of customers talking about how they use geospatial data within their own workflows. From using AWS Lambda to replace geospatial servers to studying migrating flocks of birds with radar data, there is a wealth of examples that you can learn from.

If you have an idea of how to use Earth on AWS data, we want to hear about it! There is an open Call for Proposals for research related to Earth on AWS datasets. Our goal with this Call for Proposals is to remove traditional barriers and allow students, educators and researchers to be key drivers of technological innovation and make new advances in their fields.

Thanks to Our Customers
We’d like to thank our customers at DigitalGlobe, Mapzen, Planet, and Unidata for working with us to make these datasets available on AWS.

We are always looking for new ways to work with large datasets and if you have ideas for new data we should be adding or ways in which we should be providing the data, please contact us.

Joe Flasher, Open Geospatial Data Lead, Amazon Web Services