Tag Archives: report

Scale Your Web Application — One Step at a Time

Post Syndicated from Saurabh Shrivastava original https://aws.amazon.com/blogs/architecture/scale-your-web-application-one-step-at-a-time/

I often encounter people experiencing frustration as they attempt to scale their e-commerce or WordPress site—particularly around the cost and complexity related to scaling. When I talk to customers about their scaling plans, they often mention phrases such as horizontal scaling and microservices, but usually people aren’t sure about how to dive in and effectively scale their sites.

Now let’s talk about different scaling options. For instance, if your current workload is in a traditional data center, you can leverage the cloud to augment your on-premises solution and scale with greater efficiency at lower cost. It’s not necessary to set up a whole powerhouse to light a few bulbs. If your workload is already in the cloud, you can use one of the available out-of-the-box options.

Designing your API in microservices and adding horizontal scaling might seem like the best choice, unless your web application is already running in an on-premises environment and you’ll need to quickly scale it because of unexpected large spikes in web traffic.

So how do you handle this situation? Take things one step at a time when scaling, and you may find that horizontal scaling isn’t the right choice after all.

For example, assume you have a tech news website where you published an early-look review of an upcoming—and highly anticipated—smartphone, and the post went viral. The review, a blog post on your website, includes both video and pictures. Comments are enabled for the post, and readers can also rate it. If your website is hosted on a traditional Linux server with a LAMP stack, you may find yourself with immediate scaling problems.

Let’s dig into the details of the current scenario:

  • Where are images and videos stored?
  • How many read/write requests are received per second? Per minute?
  • What is the level of security required?
  • Are these synchronous or asynchronous requests?

We’ll also want to consider the following if your website has a transactional load, like e-commerce or banking:

  • How is the website handling sessions?
  • Do you have any compliance requirements, such as the Payment Card Industry Data Security Standard (PCI DSS), if your website is using its own payment gateway?
  • How are you recording customer behavior data and fulfilling your analytics needs?
  • What are your load balancing considerations (scaling, caching, session maintenance, etc.)?

So, if we take this one step at a time:

Step 1: Ease server load. We need to quickly handle the spikes in traffic generated by activity on the blog post, so let’s reduce server load by moving images and video to a third-party content delivery network (CDN). AWS provides Amazon CloudFront as a CDN solution, which is highly scalable, with built-in security to verify origin access identity and handle DDoS attacks. CloudFront can direct traffic to your on-premises or cloud-hosted server with its 113 Points of Presence (102 Edge Locations and 11 Regional Edge Caches) in 56 cities across 24 countries, which provides efficient caching.
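
As a rough illustration of what that looks like in practice, here is a hedged boto3 sketch that puts a CloudFront distribution in front of an existing web origin. The origin domain and every setting below are placeholders, not values from this post:

```python
# A hedged sketch: put CloudFront in front of an existing origin.
# The origin domain and all settings are placeholders.
import time

import boto3

cloudfront = boto3.client("cloudfront")

dist = cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),  # any unique string
    "Comment": "CDN for blog images and video",
    "Enabled": True,
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "blog-origin",
        "DomainName": "www.example.com",  # your on-premises or cloud server
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "match-viewer",
        },
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "blog-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
        "TrustedSigners": {"Enabled": False, "Quantity": 0},
        "MinTTL": 3600,  # cache objects at the edge for at least an hour
    },
})
print(dist["Distribution"]["DomainName"])  # e.g. d1234abcd.cloudfront.net
```
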
Step 2: Reduce read load by adding more read replicas. MySQL provides mirror replication for databases, and Oracle offers its own replication tooling. Amazon RDS provides up to five read replicas, which can even span Regions, and Amazon Aurora supports up to 15 read replicas with Aurora Auto Scaling. If a workload is highly variable, consider Amazon Aurora Serverless to achieve high efficiency and reduced cost. While most mirror technologies use asynchronous replication, Amazon RDS can provide synchronous multi-AZ replication, which is good for disaster recovery but not for scalability. Asynchronous replication to a mirror instance means replica data can sometimes be stale if network bandwidth is low, so you need to plan and design your application accordingly.
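
For instance, adding a read replica to an existing RDS instance is a single API call. The identifiers, instance class, and Availability Zone in this sketch are assumed placeholders:

```python
# A hedged sketch, assuming an existing RDS MySQL primary instance.
import boto3

rds = boto3.client("rds")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="blog-db-replica-1",      # name of the new replica
    SourceDBInstanceIdentifier="blog-db-primary",  # existing source instance
    DBInstanceClass="db.r4.large",
    AvailabilityZone="us-east-1b",                 # spread replicas across AZs
)
```

Once the replica is available, point read-only traffic (reporting, comment listings) at its endpoint while writes continue to go to the primary.
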

I recommend that you always use a read replica for any reporting needs, and try to move non-critical GET services to a read replica to reduce the load on the master database. In this case, the comments associated with a blog post can be fetched from a read replica, since that workload can tolerate some delay if asynchronous replication lags.

Step 3: Reduce write requests. This can be achieved by introducing a queue to process asynchronous messages. Amazon Simple Queue Service (Amazon SQS) is a highly scalable queue that can handle any kind of work-message load. You can process data, like ratings and reviews, or calculate a Deal Quality Score (DQS) using batch processing via an SQS queue. If your workload is in AWS, I recommend using a job-observer pattern: set up Auto Scaling to automatically increase or decrease the number of batch servers, using the number of SQS messages, with Amazon CloudWatch, as the trigger. For on-premises workloads, you can use the SQS SDK to create an Amazon SQS queue that holds messages until they’re processed by your stack. Or you can use Amazon SNS to fan out your message processing in parallel for different purposes, like adding a watermark to an image, generating a thumbnail, etc.
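
A minimal sketch of that queue pattern, assuming a hypothetical ratings workload, might look like this with boto3; the queue URL and processing logic are placeholders:

```python
# Web tier enqueues ratings; a batch worker drains the queue later.
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ratings-queue"

def submit_rating(post_id, user_id, stars):
    """Web tier: accept the write and return to the user immediately."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"post": post_id, "user": user_id, "stars": stars}),
    )

def drain_queue():
    """Batch tier: Auto Scaling can add workers like this one when the
    queue's ApproximateNumberOfMessagesVisible metric grows in CloudWatch."""
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            rating = json.loads(msg["Body"])
            # ...apply `rating` to the database in bulk here...
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```
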

Step 4: Introduce a more robust caching engine. You can use Amazon ElastiCache for Memcached or Redis to further reduce the load on your database. Memcached and Redis have different use cases: if you can afford to lose your cache and recover it from your database, use Memcached; if you are looking for more robust data persistence and complex data structures, use Redis. In AWS, these are managed services, which means AWS takes care of the operational work for you. You can also deploy them on your on-premises instances or use a hybrid approach.
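
Here is a hedged cache-aside sketch with redis-py showing the idea; the ElastiCache endpoint, the TTL, and the stubbed replica query are assumptions for illustration:

```python
# Cache-aside: serve repeat reads from Redis instead of the database.
import json

import redis

cache = redis.Redis(host="my-cache.abc123.0001.use1.cache.amazonaws.com", port=6379)

def fetch_comments_from_db(post_id):
    """Placeholder for a query against the read replica."""
    return []

def get_comments(post_id):
    key = "comments:%s" % post_id
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit: no database work
    comments = fetch_comments_from_db(post_id)  # cache miss: hit the replica
    cache.setex(key, 60, json.dumps(comments))  # keep the result for 60 seconds
    return comments
```
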

Step 5: Scale your server. If there are still issues, it’s time to scale your server. For the greatest cost-effectiveness and unlimited scalability, I suggest horizontal scaling wherever possible. However, for some use cases, such as a database that isn’t yet ready for sharding, vertical scaling may be the better choice; Amazon Aurora Serverless is another option for variable workloads. For horizontal scaling, it’s wise to use Auto Scaling to manage your workload effectively, and to achieve that you need to persist sessions outside the web servers. Amazon DynamoDB can handle session persistence across instances.
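
A minimal sketch of DynamoDB-backed sessions follows; the table name, key schema, and TTL are assumptions for illustration:

```python
# Keep session state in DynamoDB so any web server can serve any user.
import time

import boto3

sessions = boto3.resource("dynamodb").Table("sessions")

def save_session(session_id, data):
    """Any web server behind the load balancer can write the session."""
    sessions.put_item(Item={
        "session_id": session_id,
        "data": data,
        "expires_at": int(time.time()) + 3600,  # pair with a DynamoDB TTL attribute
    })

def load_session(session_id):
    """...and any other server can read it back on the next request."""
    return sessions.get_item(Key={"session_id": session_id}).get("Item")
```
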

If your server is on premises, consider creating a multisite architecture, which will help you achieve quick scalability as required and provide a good disaster recovery solution. You can pick and choose individual services like Amazon Route 53, AWS CloudFormation, Amazon SQS, Amazon SNS, and Amazon RDS, depending on your needs.

Your multisite architecture will look like the following diagram:

In this architecture, you can run your regular workload on premises, and use your AWS workload as required for scalability and disaster recovery. Using Route 53, you can direct a precise percentage of users to an AWS workload.
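
For illustration, weighted DNS records are one way to do this in Route 53. The zone ID and IP addresses in this sketch are placeholders; the weights send roughly 10% of traffic to AWS:

```python
# Weighted records: ~90% of traffic stays on premises, ~10% goes to AWS.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com.", "Type": "A",
            "SetIdentifier": "on-premises", "Weight": 90,
            "TTL": 60, "ResourceRecords": [{"Value": "203.0.113.10"}],
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com.", "Type": "A",
            "SetIdentifier": "aws", "Weight": 10,
            "TTL": 60, "ResourceRecords": [{"Value": "198.51.100.20"}],
        }},
    ]},
)
```
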

If you decide to move all of your workloads to AWS, the recommended multi-AZ architecture would look like the following:

In this architecture, you are using a multi-AZ distributed workload for high availability. You can have a multi-Region setup and use Route 53 to distribute your workload between AWS Regions. CloudFront helps you scale and distribute static content via an S3 bucket, and DynamoDB maintains your application state so that Auto Scaling can apply horizontal scaling without loss of session data. At the database layer, RDS with a multi-AZ standby provides high availability, and read replicas help achieve scalability.

This is a high-level strategy to help you think through the scalability of your workload by using AWS, even if your workload is on premises and not in the cloud…yet.

I highly recommend creating a hybrid, multisite model by placing a replica of your on-premises environment in a public cloud like AWS, and using the Amazon Route 53 DNS service and Elastic Load Balancing to route traffic between the on-premises and cloud environments. AWS now supports load balancing between AWS and on-premises environments, helping you scale your cloud environment quickly whenever required and scale it back down by applying Auto Scaling and placing a threshold on your on-premises traffic using Route 53.

The Man from Earth Sequel ‘Pirated’ on The Pirate Bay – By Its Creators

Post Syndicated from Andy original https://torrentfreak.com/the-man-from-earth-sequel-pirated-on-the-pirate-bay-by-its-creators-180116/

More than a decade ago, Hollywood was struggling to get to grips with the file-sharing phenomenon. Sharing via BitTorrent was painted as a disease that could kill the movie industry, if it was allowed to take hold. Tough action was the only way to defeat it, the suits concluded.

In 2007, however, a most unusual turn of events showed that piracy could have a magical effect on the success of a movie.

After being produced on a tiny budget, a then little-known independent sci-fi film called “The Man from Earth” turned up on pirate sites, to the surprise of its creators.

“Originally, somebody got hold of a promotional screener DVD of ‘Jerome Bixby’s The Man from Earth’, ripped the file and posted the movie online before we knew what was even happening,” Man from Earth director Richard Schenkman informs TorrentFreak.

“A week or two before the DVD’s ‘street date’, we jumped 11,000% on the IMDb ‘Moviemeter’ and we were shocked.”

With pirates fueling interest in the movie, a member of the team took an unusual step. Producer Eric Wilkinson wrote to RLSlog, a popular piracy links site – not to berate pirates – but to thank them for catapulting the movie to fame.

“Our independent movie had next to no advertising budget and very little going for it until somebody ripped one of the DVD screeners and put the movie online for all to download. Most of the feedback from everyone who has downloaded ‘The Man From Earth’ has been overwhelmingly positive. People like our movie and are talking about it, all thanks to piracy on the net!” he wrote.

Richard Schenkman told TF this morning that availability on file-sharing networks was important for the movie, since it wasn’t available through legitimate means in most countries. So the team reached out to fans who had pirated the movie, asking for their help if they’d liked what they’d seen.

“Once we realized what was going on, we asked people to make donations to our PayPal page if they saw the movie for free and liked it, because we had all worked for nothing for two years to bring it to the screen, and the only chance we had of surviving financially was to ask people to support us and the project,” Schenkman explains.

“And, happily, many people around the world did donate, although of course only a tiny fraction of the millions and millions of people who downloaded pirated copies.”

Following this early boost The Man from Earth went on to win multiple awards. And, a decade on, it boasts a hugely commendable 8/10 score on IMDb from more than 147,000 voters, with Netflix users leaving over 650,000 ratings, which reportedly translates to well over a million views.

It’s a performance director Richard Schenkman would like to repeat with his sequel: The Man from Earth: Holocene. This time, however, he won’t be leaving the piracy aspect to chance.

Yesterday the team behind the movie took matters into their own hands, uploading the movie to The Pirate Bay and other sites so that fans can help themselves.

“It was going to get uploaded regardless of what we did or didn’t do, and we figured that as long as this was inevitable, we would do the uploading ourselves and explain why we were doing it,” Schenkman informs TF.

“And, we would once again reach out to the filesharing community and remind them that while movies may be free to watch, they are not free to make, and we need their support.”

The release, listed here on The Pirate Bay, comes with detailed notes and a few friendly pointers on how the release can be further shared. It also informs people how they can show their appreciation if they like it.

The Man from Earth: Holocene on The Pirate Bay

“It’s a revolutionary global experiment in the honor system. We’re asking people: ‘If you watch our movie, and you like it, will you pay something directly to the people who made it?’,” Schenkman says.

“That’s why we’re so grateful to all of you who visit ManFromEarth.com and make a donation – of any size – if you’ve watched the movie without paying for it up front.”

In addition to using The Pirate Bay – which is often and incorrectly berated as a purely ‘pirate’ platform with no legitimate uses – the team has also teamed up with OpenSubtitles, so translations for the movie are available right from the beginning.

Other partners include MovieSaints.com, where fans can pay to see the movie from January 19 but get a full refund if they don’t enjoy it. It’s also available on Vimeo (see below) but the version seen by pirates is slightly different, and for good reason, Schenkman says.

“This version of the movie includes a greeting from me at the beginning, pointing out that we did indeed upload the movie ourselves, and asking people to visit manfromearth.com and make a donation if they can afford to, and if they enjoyed the film.

“The version we posted is very high-resolution, although we are also sharing some smaller files for those folks who have a slow Internet connection where they live,” he explains.

“We’re asking people to share ONLY this version of the movie — NOT to edit off the appeal message. And of course we’re asking people not to post the movie at YouTube or any other platform where someone (other than us) could profit financially from it. That would not be fair, nor in keeping with the spirit of what we’re trying to do.”

It’s not often we’re able to do this so it’s a pleasure to say that The Man from Earth: Holocene can be downloaded from The Pirate Bay, in various qualities and entirely legally, here. For those who want to show their appreciation, the tip jar is here.

"The Man from Earth: Holocene" Teaser Trailer from Richard Schenkman on Vimeo.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Hollywood Wins ISP Blockade Against Popular Pirate Sites in Ireland

Post Syndicated from Ernesto original https://torrentfreak.com/hollywood-wins-isp-blockade-against-popular-pirate-sites-in-ireland-180116/

Like many other countries throughout Europe, Ireland is no stranger to pirate site blocking efforts.

The Pirate Bay was blocked back in 2009, as part of a voluntary agreement between copyright holders and local ISP Eircom. A few years later the High Court ordered other major Internet providers to follow suit.

However, The Pirate Bay is not the only ‘infringing’ site out there. The Motion Picture Association (MPA) has therefore asked the Commercial Court to expand the blockades to other sites.

On behalf of several major Hollywood studios, the group most recently targeted a selection of the most-used torrent and streaming sites: 1337x.io, EZTV.ag, Bmovies.is, 123movieshub.to, Putlocker.io, RARBG.to, Gowatchfreemovies.to and YTS.am.

On Monday the Commercial Court sided with the movie studios ordering all major Irish ISPs to block the sites. The latest order applies to Eircom, Sky Ireland, Vodafone Ireland, Virgin Media Ireland, Three Ireland, Digiweb, Imagine Telecommunications and Magnet Networks.

According to Justice Brian McGovern, the movie studios had made it clear that the sites in question infringed their copyrights. As such, there are “significant public interest grounds” to have them blocked.

Irish Examiner reports that none of the ISPs opposed the blocking request. This means that new pirate site blockades are mostly a formality now.

MPA EMEA President and Managing Director Stan McCoy is happy with the outcome, which he says will help to secure jobs in the movie industry.

“As the Irish film industry is continuing to thrive, the MPA is dedicated to supporting that growth by combatting the operations of illegal sites that undermine the sustainability of the sector,” McCoy says.

“Preventing these pirate sites from freely distributing other people’s work will help us provide greater job security for the 18,000 people employed through the Irish film industry and ensure that consumers can continue to enjoy high quality content in the future.”

The MPA also obtained similar blocks against movie4k.to, primewire.ag, and onwatchseries.to last year, which remain in effect to date.

The torrent and streaming sites that were targeted most recently have millions of visitors worldwide. While the blockades will make it harder for the Irish to access them directly, history has shown that some people circumvent these measures or simply move to other sites.

Several of the targeted sites themselves are also keeping a close eye on these blocking efforts and are providing users with alternative domains to bypass the restrictions, at least temporarily.

As such, it would be no surprise if the Hollywood studios return to the Commercial Court again in a few months.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Raspberry Pi-newood Derby

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pinewood-derby/

Andre Miron’s Pinewood Derby Instant Replay System (sorry, not sorry for the pun in the title) uses a Raspberry Pi to monitor the finishing line and play back a slow-motion instant replay, putting an end to “No, I won!” squabbles once and for all.

Raspberry Pi Based Pinewood Derby Instant Replay Demo

This is the same system I demo in this video (https://youtu.be/-QyMxKfBaAE), but on our actual track with real pinewood derby cars. Glad to report that it works great!

Pinewood Derby

For those unfamiliar with the term, the Pinewood Derby is a racing event for Cub Scouts in the USA. Cub Scouts, often with the help of a guardian, build race cars out of wood according to rules regarding weight, size, materials, etc.

Pinewood derby race car

The Cubs then race their cars in heats, with the winners advancing to district and council races.

Who won?

Andre’s Instant Replay System registers the race cars as they cross the finishing line, and it plays back slow-motion video of the crossing on a monitor. As he explains on YouTube:

The Pi is recording a constant stream of video, and when the replay is triggered, it records another half-second of video, then takes the last second and a half and saves it in slow motion (recording is done at 90 fps), before replaying.

The build also uses an attached Arduino, connected to GPIO pin 5, to trigger the recording and playback as it registers the passing cars via a voltage splitter. Additionally, the system announces the finishing places on a rather attractive-looking display above the finishing line.
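
Andre’s actual code is linked in the video description below, but as a rough sketch of the core idea, the picamera library’s circular buffer plus an RPi.GPIO edge trigger would look something like this (the pin wiring matches the description above; the resolution, buffer length, and file name are assumptions):

```python
# Keep a rolling ~2-second buffer at 90 fps; dump it to disk when the
# Arduino pulls GPIO pin 5 high at the finish line.
import picamera
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(5, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

with picamera.PiCamera(resolution=(640, 480), framerate=90) as camera:
    stream = picamera.PiCameraCircularIO(camera, seconds=2)
    camera.start_recording(stream, format='h264')
    try:
        while True:
            GPIO.wait_for_edge(5, GPIO.RISING)  # a car crossed the line
            camera.wait_recording(0.5)          # capture another half second
            # Save the last ~2 seconds; played back at 30 fps, the
            # 90 fps footage becomes a 3x slow-motion replay.
            stream.copy_to('replay.h264', seconds=2)
    finally:
        camera.stop_recording()
        GPIO.cleanup()
```
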

Pinewood derby race car Raspberry Pi

The result? No more debate about whose car crossed the line first in neck-and-neck races.

Build your own

Andre takes us through the physical setup of the build in the video below, and you’ll find the complete code pasted in the description of the video here. Thanks, Andre!

Raspberry Pi based Pinewood Derby Instant Replay System

See the system on our actual track here: https://youtu.be/B3lcQHWGq88 Raspberry Pi based instant replay system, triggered by Arduino Pinewood Derby Timer. The Pi uses GPIO pin 5 attached to a voltage splitter on Arduino output 11 (and ground-ground) to detect when a car crosses the finish line, which triggers the replay.

Digital making in your club

If you’re a member of an after-school association such as the Scouts or Guides, then using the Raspberry Pi and our free project resources, or visiting a Code Club or CoderDojo, are excellent ways to work towards various badges and awards. So talk to your club leader to discover all the ways in which you can incorporate digital making into your club!

The post Raspberry Pi-newood Derby appeared first on Raspberry Pi.

US Govt Brands Torrent, Streaming & Cyberlocker Sites As Notorious Markets

Post Syndicated from Andy original https://torrentfreak.com/us-govt-brands-torrent-streaming-cyberlocker-sites-as-notorious-markets-180115/

In its annual “Out-of-Cycle Review of Notorious Markets” the office of the United States Trade Representative (USTR) has published a long list of websites said to be involved in online piracy.

The list is compiled with high-level input from various trade groups, including the MPAA and RIAA who both submitted their recommendations (1,2) during early October last year.

With the word “allegedly” used more than two dozen times in the report, the US government notes that its report does not constitute cast-iron proof of illegal activity. However, it urges the countries from where the so-called “notorious markets” operate to take action where they can, while putting owners and facilitators on notice that their activities are under the spotlight.

“A goal of the List is to motivate appropriate action by owners, operators, and service providers in the private sector of these and similar markets, as well as governments, to reduce piracy and counterfeiting,” the report reads.

“USTR highlights the following marketplaces because they exemplify global counterfeiting and piracy concerns and because the scale of infringing activity in these marketplaces can cause significant harm to U.S. intellectual property (IP) owners, consumers, legitimate online platforms, and the economy.”

The report begins with a page titled “Issue Focus: Illicit Streaming Devices”. Unsurprisingly, particularly given their place in dozens of headlines last year, the segment focuses on the set-top box phenomenon. The piece doesn’t list any apps or software tools as such but highlights the general position, claiming a cost to the US entertainment industry of $4-5 billion a year.

Torrent Sites

In common with previous years, the USTR goes on to list several of the world’s top torrent sites but due to changes in circumstances, others have been delisted. ExtraTorrent, which shut down in May 2017, is one such example.

As the world’s most famous torrent site, The Pirate Bay gets a prominent mention, with the USTR noting that the site is of “symbolic importance as one of the longest-running and most vocal torrent sites.” The USTR underlines the site’s resilience by noting its hydra-like form while revealing an apparent secret concerning its hosting arrangements.

“The Pirate Bay has allegedly had more than a dozen domains hosted in various countries around the world, applies a reverse proxy service, and uses a hosting provider in Vietnam to evade further enforcement action,” the USTR notes.

Other torrent sites singled out for criticism include RARBG, which was nominated for the listing by the movie industry. According to the USTR, the site is hosted in Bosnia and Herzegovina and has changed hosting services to prevent shutdowns in recent years.

1337x.to and the meta-search engine Torrentz2 are also given a prime mention, with the USTR noting that they are “two of the most popular torrent sites that allegedly infringe U.S. content industry’s copyrights.” Russia’s RuTracker is also targeted for criticism, with the government noting that it’s now one of the most popular torrent sites in the world.

Streaming & Cyberlockers

While torrent sites are still important, the USTR reserves considerable space in its report for streaming portals and cyberlocker-type services.

4Shared.com, a file-hosting site that has been targeted by tens of millions of copyright notices, is reportedly no longer able to use major US payment providers. Nevertheless, the British Virgin Islands company still collects significant sums from premium accounts, advertising, and offshore payment processors, the USTR notes.

Cyberlocker Rapidgator gets another prominent mention in 2017, with the USTR noting that the Russian-hosted platform generates millions of dollars every year through premium memberships while employing rewards and affiliate schemes.

Due to its increasing popularity as a hosting and streaming operation, Openload.co (Romania) is now a big target for the USTR. “The site is used frequently in combination with add-ons in illicit streaming devices. In November 2017, users visited Openload.co a staggering 270 million times,” the USTR writes.

Owned by a Swiss company and hosted in the Netherlands, the popular site Uploaded is also criticized by the US alongside France’s 1Fichier.com, which allegedly hosts pirate games while being largely unresponsive to takedown notices. Dopefile.pk, a Pakistan-based storage outfit, is also highlighted.

On the video streaming front, it’s perhaps no surprise that the USTR focuses on sites like FMovies (Sweden), GoStream (Vietnam), Movie4K.tv (Russia) and PrimeWire. An organization collectively known as the MovShare group, which encompasses Nowvideo.sx, WholeCloud.net, NowDownload.cd, MeWatchSeries.to and WatchSeries.ac, among others, is also listed.

Unauthorized music / research papers

While most of the above are either focused on video or feature it as part of their repertoire, other sites are listed for their attention to music. Convert2MP3.net is named as one of the most popular stream-ripping sites in the world and is highlighted due to the prevalence of YouTube-downloader sites and the 2017 demise of YouTube-MP3.

“Convert2MP3.net does not appear to have permission from YouTube or other sites and does not have permission from right holders for a wide variety of music represented by major U.S. labels,” the USTR notes.

Given the amount of attention the site has received in 2017 as ‘The Pirate Bay of Research’, Libgen.io and Sci-Hub.io (not to mention the endless proxy and mirror sites that facilitate access) are given a detailed mention in this year’s report.

“Together these sites make it possible to download — all without permission and without remunerating authors, publishers or researchers — millions of copyrighted books by commercial publishers and university presses; scientific, technical and medical journal articles; and publications of technological standards,” the USTR writes.

Service providers

But it’s not only sites that are being put under pressure. Following a growing list of nominations in previous years, Swiss service provider Private Layer is again singled out as a rogue player in the market for hosting 1337x.to and Torrentz2.eu, among others.

“While the exact configuration of websites changes from year to year, this is the fourth consecutive year that the List has stressed the significant international trade impact of Private Layer’s hosting services and the allegedly infringing sites it hosts,” the USTR notes.

“Other listed and nominated sites may also be hosted by Private Layer but are using reverse proxy services to obfuscate the true host from the public and from law enforcement.”

The USTR notes Switzerland’s efforts to close a legal loophole that restricts enforcement and looks forward to a positive outcome when the draft amendment is considered by parliament.

Perhaps a little surprisingly given its recent anti-piracy efforts and overtures to the US, Russia’s leading social network VK.com again gets a place on the new list. The USTR recognizes VK’s efforts but insists that more needs to be done.

Social networking and e-commerce

“In 2016, VK reached licensing agreements with major record companies, took steps to limit third-party applications dedicated to downloading infringing content from the site, and experimented with content recognition technologies,” the USTR writes.

“Despite these positive signals, VK reportedly continues to be a hub of infringing activity and the U.S. motion picture industry reports that they find thousands of infringing files on the site each month.”

Finally, in addition to traditional pirate sites, the US also lists online marketplaces that allegedly fail to meet appropriate standards. Re-added to the list in 2016 after a brief hiatus in 2015, China’s Alibaba is listed again in 2017. The development provoked an angry response from the company.

Describing his company as a “scapegoat”, Alibaba Group President Michael Evans said that his platform had achieved a 25% drop in takedown requests and has even been removing infringing listings before they make it online.

“In light of all this, it’s clear that no matter how much action we take and progress we make, the USTR is not actually interested in seeing tangible results,” Evans said in a statement.

The full list of sites in the Notorious Markets Report 2017 (pdf) can be found below.

– 1fichier.com – (cyberlocker)
– 4shared.com – (cyberlocker)
– convert2mp3.net – (stream-ripper)
– Dhgate.com (e-commerce)
– Dopefile.pk – (cyberlocker)
– Firestorm-servers.com (pirate gaming service)
– Fmovies.is, Fmovies.se, Fmovies.to – (streaming)
– Gostream.is, Gomovies.to, 123movieshd.to (streaming)
– Indiamart.com (e-commerce)
– Kinogo.club, kinogo.co (streaming host, platform)
– Libgen.io, sci-hub.io, libgen.pw, sci-hub.cc, sci-hub.bz, libgen.info, lib.rus.ec, bookfi.org, bookzz.org, booker.org, booksc.org, book4you.org, bookos-z1.org, booksee.org, b-ok.org (research downloads)
– Movshare Group – Nowvideo.sx, wholecloud.net, auroravid.to, bitvid.sx, nowdownload.ch, cloudtime.to, mewatchseries.to, watchseries.ac (streaming)
– Movie4k.tv (streaming)
– MP3VA.com (music)
– Openload.co (cyberlocker / streaming)
– 1337x.to (torrent site)
– Primewire.ag (streaming)
– Torrentz2, Torrentz2.me, Torrentz2.is (torrent site)
– Rarbg.to (torrent site)
– Rebel (domain company)
– Repelis.tv (movie and TV linking)
– RuTracker.org (torrent site)
– Rapidgator.net (cyberlocker)
– Taobao.com (e-commerce)
– The Pirate Bay (torrent site)
– TVPlus, TVBrowser, Kuaikan (streaming apps and addons, China)
– Uploaded.net (cyberlocker)
– VK.com (social networking)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Are Torrent Sites Using DMCA Notices to Quash Their Competition?

Post Syndicated from Ernesto original https://torrentfreak.com/are-torrent-sites-using-dmca-notices-to-quash-their-competition-180114/

Every day, copyright holders send out millions of takedown notices to various services, hoping to protect their works.

While most of these requests are legitimate, the process is also being abused. Google prominently features examples of such dubious DMCA requests in its transparency report.

This week we were contacted by the owner of YTS.me after he noticed some unusual activity. In recent weeks his domain name has been targeted with a series of takedown notices from rather unusual people.

Senders with names such as Niklas Glockner, Michelle Williams, Maria Baader, Stefan Kuefer, Anja Herzog, and Markus Ostermann asked Google to remove thousands of YTS.me URLs.

Every notice lists just one movie title, but hundreds of links, most of which have nothing to do with the movie in question.

A few URLs from a single notice

These submitters are all relatively new, and there is no sign that they are authorized by the applicable copyright holders. This, and the long list of irrelevant URLs, suggests that these DMCA notices are abusive.

The owner of YTS.me believes that the senders have a clear motive. The purpose of the notices is to remove well-ranked pages and push the targeted sites down in Google’s search results.

“These all are fake people names submitting fake DMCA complaints and are not authorized to submit complaints,” the YTS.me operator notes.

“Even if they are real people they would have submitted, or are authorized to submit, complaints for only a few titles. Instead, they submit fake complaints and submit all the URLs possible on our website to degrade its ranking.”

The question that remains is, who is responsible for these notices? Looking at the list of sites that are targeted by these abusive senders we see a pattern emerge. They all target copycats of defunct sites such as YTS and ExtraTorrent.

Markus Ostermann’s activity

This leads the YTS.me operator to the conclusion that one of its main competitors is sending these notices. While there is no hard evidence, it seems plausible that another YTS copycat is attempting to take the competition out of Google’s search results to gain more exposure itself.

YTS.me has a good idea of who the perpetrator(s) are – a person or group that also operates several other copycat sites. Thus far there’s no bulletproof evidence, but it’s a likely explanation.

In any case, the DMCA takedown requests are definitely out of order and warrant further investigation by Google.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

ISP: We’re Cooperating With Police Following Pirate IPTV Raid

Post Syndicated from Andy original https://torrentfreak.com/isp-were-cooperating-with-police-following-pirate-iptv-raid-180113/

This week, police forces around Europe took action against what is believed to be one of the world’s largest pirate IPTV networks.

The investigation, launched a year ago and coordinated by Europol, came to a head on Tuesday when police carried out raids in Cyprus, Bulgaria, Greece, and the Netherlands. A fresh announcement from the crime-fighting group reveals the scale of the operation.

It was led by the Cypriot Police – Intellectual Property Crime Unit, with the support of the Cybercrime Division of the Greek Police, the Dutch Fiscal Investigative and Intelligence Service (FIOD), the Cybercrime Unit of the Bulgarian Police, Europol’s Intellectual Property Crime Coordinated Coalition (IPC³), and supported by members of the Audiovisual Anti-Piracy Alliance (AAPA).

In Cyprus, Bulgaria and Greece, 17 house searches were carried out. Three individuals aged 43, 44, and 53 were arrested in Cyprus and one was arrested in Bulgaria.

All stand accused of being involved in an international operation to illegally broadcast around 1,200 channels of pirated content to an estimated 500,000 subscribers. Some of the channels offered were illegally sourced from Sky UK, Bein Sports, Sky Italia, and Sky DE. On Thursday, the three individuals in Cyprus were remanded in custody for seven days.

“The servers used to distribute the channels were shut down, and IP addresses hosted by a Dutch company were also deactivated thanks to the cooperation of the authorities of The Netherlands,” Europol reports.

“In Bulgaria, 84 servers and 70 satellite receivers were seized, with decoders, computers and accounting documents.”

TorrentFreak was previously able to establish that Megabyte-Internet Ltd, an ISP located in the small Bulgarian town of Petrich, was targeted by police. The provider went down on Tuesday but returned towards the end of the week. Responding to our earlier inquiries, the company told us more about the situation.

“We are an ISP provider located in Petrich, Bulgaria. We are selling services to around 1,500 end-clients in the Petrich area and surrounding villages,” a spokesperson explained.

“Another part of our business is internet services like dedicated unmanaged servers, hosting, email servers, storage services, and VPNs etc.”

The spokesperson added that some of Megabyte’s equipment is located at Telepoint, Bulgaria’s biggest datacenter, with connectivity to Petrich. During the raid the police seized the company’s hardware to check for evidence of illegal activity.

“We were informed by the police that some of our clients in Petrich and Sofia were using our service for illegal streaming and actions,” the company said.

“Of course, we were not able to know this because our services are unmanaged and root access [to servers] is given to our clients. For this reason any client and anyone that uses our services are responsible for their own actions.”

TorrentFreak asked many more questions, including how many police attended, what type and volume of hardware was seized, and whether anyone was arrested or taken for questioning. But, apart from noting that the police were friendly, the company declined to give us any additional information, revealing that it was not permitted to do so at this stage.

What is clear, however, is that Megabyte-Internet is offering its full cooperation to the authorities. The company says that it cannot be held responsible for the actions of its clients so their details will be handed over as part of the investigation.

“So now we will give to the police any details about these clients because we hold their full details by law. [The police] will find [out about] all the illegal actions from them,” the company concludes, adding that it’s fully operational once more and working with clients.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Coalition Against Piracy Launches Landmark Case Against ‘Pirate’ Android Box Sellers

Post Syndicated from Andy original https://torrentfreak.com/coalition-against-piracy-launches-landmark-case-against-pirate-android-box-sellers-180112/

In 2017, anti-piracy enforcement went global when companies including Disney, HBO, Netflix, Amazon and NBCUniversal formed the Alliance for Creativity and Entertainment (ACE).

Soon after, the Coalition Against Piracy (CAP) was announced. With a focus on Asia and backed by CASBAA, CAP counts many of the same companies among its members in addition to local TV providers such as StarHub.

From the outset, CAP has shown a keen interest in tackling unlicensed streaming, particularly that taking place via illicit set-top boxes stuffed with copyright-infringing apps and add-ons. One country under CAP’s spotlight is Singapore, where relevant law is said to be fuzzy at best, insufficient at worst. Now, however, a line in the sand might not be far away.

According to a court listing discovered by Singapore’s TodayOnline, today will see the Coalition Against Piracy’s general manager Neil Kevin Gane attempt to launch a pioneering private prosecution against set-top box distributor Synnex Trading and its client and wholesale goods retailer, An-Nahl.

Gane and CAP are said to be acting on behalf of four parties, one of which is TV giant StarHub, a company with a huge interest in bringing media piracy under control in the region. It’s reported that they have also named Synnex Trading director Jia Xiaofen and An-Nahl director Abdul Nagib as defendants in their private criminal case, after the parties failed to reach a settlement in an earlier process.

Contacted by TodayOnline, an employee of An-Nahl said the company no longer sells the boxes. However, Synnex is reportedly still selling them for S$219 each ($164) plus additional fees for maintenance and access to VOD. The company’s Facebook page is still active with the relevant offer presented prominently.

The importance of the case cannot be overstated. While StarHub and other broadcasters have successfully prosecuted cases where people unlawfully decrypted broadcast signals, the provision of unlicensed streams isn’t specifically tackled by Singapore’s legislation. It’s now a major source of piracy in the region, as it is elsewhere around the globe.

Only time will tell how the process will play out, but it’s clear that CAP and its members are prepared to invest significant sums into a prosecution in pursuit of a favorable outcome. CAP believes that the supply of the boxes falls under Section 136 (3A) of the Copyright Act.

Last December, CAP separately called on the Singapore government to not only block ‘pirate’ streaming software but also unlicensed streams from entering the country.

“Within the Asia-Pacific region, Singapore is the worst in terms of availability of illicit streaming devices,” said CAP General Manager Neil Gane. “They have access to hundreds of illicit broadcasts of channels and video-on-demand content.”

CAP’s 21 members want the authorities to block the software inside devices that enables piracy but it’s far from clear how that can be achieved.

Update: The four companies taking the action are confirmed as Singtel, Starhub, Fox Network, and the English Premier League

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Europol Hits Huge 500,000 Subscriber Pirate IPTV Operation

Post Syndicated from Andy original https://torrentfreak.com/europol-hits-huge-500000-subscriber-pirate-iptv-operation-180111/

Live TV is in massive demand, but accessing all content in a particular region can be a hugely expensive proposition, with traditional broadcasting monopolies demanding large subscription fees.

For millions around the world, this ‘problem’ can be easily circumvented. Pirate IPTV operations, which supply thousands of otherwise subscription channels via the Internet, are on the increase. They’re accessible for just a few dollars, euros, or pounds per month, slashing bills versus official providers on a grand scale.

This week, however, police forces around Europe coordinated to target what they claim is one of the world’s largest illicit IPTV operations. The investigation was launched last February by Europol and on Tuesday coordinated actions were carried out in Cyprus, Bulgaria, Greece, and the Netherlands.

Three suspects were arrested in Cyprus – two in Limassol (aged 43 and 44) and one in Larnaca (aged 53). All are alleged to be part of an international operation to illegally broadcast around 1,200 channels of pirated content worldwide. Some of the channels offered were illegally sourced from Sky UK, Bein Sports, Sky Italia, and Sky DE.

If initial reports are to be believed, the reach of the IPTV service was huge. Figures usually need to be taken with a pinch of salt but information suggests the service had more than 500,000 subscribers, each paying around 10 euros per month. (Note: how that relates to the alleged five million euros per year in revenue is yet to be made clear)

Police action was spread across the continent, with at least nine separate raids, including in the Netherlands where servers were uncovered. However, it was determined that these were in place to hide the true location of the operation’s main servers. Similar ‘front’ servers were also deployed in other regions.

The main servers behind the IPTV operation were located in Petrich, a small town in Blagoevgrad Province, southwestern Bulgaria. No details have been provided by the authorities but TF is informed that the website of a local ISP, Megabyte-Internet, from where pirate IPTV has been broadcast for at least the past several months, disappeared on Tuesday. It remains offline this morning.

The company did not respond to our request for comment and there’s no suggestion that it’s directly involved in any illegal activity. However, its Autonomous System (AS) number reveals linked IPTV services, none of which appear to be operational today. The ISP is also listed on sites where ‘pirate’ IPTV channel playlists are compiled by users.

According to sources in Cyprus, police requested permission from the Larnaca District Court to detain the arrested individuals for eight days. However, local news outlet Philenews said that any decision would be postponed until this morning, since one of the three suspects, an English Cypriot, required an interpreter which caused a delay.

In addition to prosecutors and defense lawyers, two Dutch investigators from Europol were present in court yesterday. The hearing lasted for six hours and was said to be so intensive that the court stenographer had to be replaced due to overwork.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

12 B2 Power Tips for Experts and Developers

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/advanced-cloud-storage-tips/

B2 Tips for Pros
If you’ve been using B2 Cloud Storage for a while, you probably think you know all that you can do with it. But do you?

We’ve put together a list of blazing power tips for experts and developers that will take you to the next level. Take a look below.

If you’re new to B2, we have a list of power tips for you, too.
Visit 12 Power Tips for New B2 Users.

1    Manage File Versions

Use Lifecycle Rules on a Bucket to set how many days to keep files that are no longer the current version. This is a great way to manage the amount of space your B2 account is using.
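
Lifecycle rules can also be set programmatically through the b2_update_bucket API call. A hedged sketch follows; the API URL, auth token, and IDs are placeholders obtained from a prior b2_authorize_account call, and the 30-day window is an assumption:

```python
# Keep only the current version: delete hidden versions after 30 days.
import requests

api_url = "https://api001.backblazeb2.com"  # returned by b2_authorize_account
account_auth_token = "..."                  # ditto
account_id = "..."
bucket_id = "..."

resp = requests.post(
    api_url + "/b2api/v1/b2_update_bucket",
    headers={"Authorization": account_auth_token},
    json={
        "accountId": account_id,
        "bucketId": bucket_id,
        "lifecycleRules": [{
            "fileNamePrefix": "",            # apply to every file in the bucket
            "daysFromUploadingToHiding": None,
            "daysFromHidingToDeleting": 30,  # purge hidden versions after 30 days
        }],
    },
)
resp.raise_for_status()
```
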

2    Easily Stay on Top of Your B2 Account Limits

Set usage caps and get text/email alerts for your B2 account when you approach limits that you define.

3    Bring on Your Big Files

You can upload files as large as 10TB to B2.

4    You Can Use FedEx to Get Your Data into B2

If you have over 20TB of data, you can use Backblaze’s Fireball hard disk array to load large volumes of data directly into your B2 account. We ship a Fireball to you and you ship it back.

5    You Have Command-Line Control of All B2 Functions

You have complete control over B2 using our command line tool that is available for Macintosh, Windows, and Linux.

6    You Can Use Your Own Domain Name To Front a Public B2 Bucket

You can create a vanity URL for your B2 account.

7    See What’s Happening in Your Account with Graphical Reports

You can view graphical reports summarizing your B2 usage — transactions, downloads, averages, data stored — in your B2 account dashboard.

8    Create a B2 SDK

You can build your own B2 SDK for JVM-based or JVM-compatible languages using our B2 Java SDK on GitHub.

9    B2’s API is Easy to Use

B2’s API is similar to, but simpler than, Amazon’s S3 API, making it super easy for developers to integrate with B2 Cloud Storage.

10    View Code Examples To Get Your B2 Project Started

The B2 API is well documented and has code examples for cURL, Java, Python, Swift, Ruby, C#, and PHP. For example, here’s how to create a B2 Bucket.
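
As a quick taste, here is a minimal Python sketch that authorizes against the v1 API and then creates a bucket; the account ID, application key, and bucket name are placeholders:

```python
# Authorize, then create a private bucket with the returned token and URL.
import requests

ACCOUNT_ID = "your_account_id"
APPLICATION_KEY = "your_application_key"

# Step 1: b2_authorize_account returns an auth token and the API base URL.
auth = requests.get(
    "https://api.backblazeb2.com/b2api/v1/b2_authorize_account",
    auth=(ACCOUNT_ID, APPLICATION_KEY),  # HTTP basic auth
).json()

# Step 2: create the bucket using the returned apiUrl and token.
bucket = requests.post(
    auth["apiUrl"] + "/b2api/v1/b2_create_bucket",
    headers={"Authorization": auth["authorizationToken"]},
    json={
        "accountId": ACCOUNT_ID,
        "bucketName": "my-unique-bucket-name",  # must be globally unique
        "bucketType": "allPrivate",
    },
).json()

print(bucket["bucketId"])
```
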

11    Developers can set the B2 part size as low as 5 MB

When working with large files, the part size can be set as low as 5MB or as high as 5GB. This gives developers the ability to maximize the throughput of B2 data uploads and downloads. See Large Files and Downloading for more developer tips.
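
To illustrate the trade-off, here is a small, assumed helper that picks a part size for a target level of upload parallelism while respecting those bounds; the byte limits and example numbers are illustrative:

```python
# Smaller parts allow more parallel uploads, but every part except the
# last must respect the minimum size.
import math

MIN_PART = 5 * 1000 * 1000           # 5 MB floor
MAX_PART = 5 * 1000 * 1000 * 1000    # 5 GB ceiling

def plan_parts(file_size, desired_parallelism):
    part_size = max(MIN_PART, min(MAX_PART, math.ceil(file_size / desired_parallelism)))
    return part_size, math.ceil(file_size / part_size)

# A 10 GB file split for 20 concurrent uploads -> 500 MB parts, 20 parts.
print(plan_parts(10 * 1000**3, 20))
```
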

12    Your App or Device Can Work with B2, as well

Your B2 integration can be listed on Backblaze’s website. Visit Submit an Integration to get started.

Want to Learn More About B2?

You can find more information on B2 on our website and in our help pages.

The post 12 B2 Power Tips for Experts and Developers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

NSA Morale

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/nsa_morale.html

The Washington Post is reporting that poor morale at the NSA is causing a significant talent shortage. A November New York Times article said much the same thing.

The articles point to many factors: the recent reorganization, low pay, and the various leaks. I have been saying for a while that the Shadow Brokers leaks have been much more damaging to the NSA — both to morale and operating capabilities — than Edward Snowden. I think it’ll take most of a decade for them to recover.

Sky Hits Man With £5k ‘Fine’ For Pirating Boxing on Facebook

Post Syndicated from Andy original https://torrentfreak.com/sky-hits-man-with-5k-fine-for-pirating-boxing-on-facebook-180108/

When people download content online using BitTorrent, they also distribute that content to others. This unlawful distribution attracts negative attention from rightsholders, who have sued hundreds of thousands of individuals worldwide.

Streaming is considered a much safer method to obtain content, since it’s difficult for content owners to track downloaders. However, the same can’t be said about those who stream content to the web for the benefit of others, as an interesting case in the UK has just revealed.

It involves 34-year-old Craig Foster, who received several scary letters from lawyers representing broadcaster Sky. The company alleged that during last April’s bout between Anthony Joshua and Wladimir Klitschko, Foster live-streamed the multiple world title fight on Facebook Live.

Financially, this was a major problem for Sky, law firm Foot Anstey LLP told Foster. According to their calculations, at least 4,250 people watched the stream without paying Sky Box Office the going rate of £19.95 each. Based on those figures, the broadcaster concluded that Foster owed the company £85,000.

But according to The Mirror, father-of-one Foster wasn’t actually to blame.

“I’d paid for the boxing, it wasn’t like I was making any money. My iPad was signed in to my Facebook account and my friend just started streaming the fight. I didn’t think anything of it, then a few days later they cut my subscription,” Foster said.

“They’re demanding the names and addresses of all my mates who were round that night but I’m not going to give them up. I said I’d take the rap.”

While Foster says he won’t turn in the culprit, there’s no doubt that the fight stream originated from his Sky account. The TV giant embeds watermarks in its broadcasts which enables it to see who paid for an event, should a copy of one turn up on the Internet.

As we reported last year following the Mayweather v McGregor super-fight, the codes are clearly visible with the naked eye.

Sky watermarks, as seen in the Mayweather v McGregor fight

While taking the rap for someone else’s infringing behavior isn’t something anyone should do lightly, it appears that Scarborough-based Foster did just that.

According to Neil Parkes, who specializes in media litigation, content protection and contentious IP at Foot Anstey, Foster accepted responsibility and agreed to pay a settlement.

“Mr Foster broke the law,” Parkes said. “He has acknowledged his wrongdoing, apologised and signed a legally binding agreement to pay a sum of £5,000 to Sky.”

The Mirror, however, has Foster backtracking. He says he wasn’t given enough time to consider his position and now wants to fight Sky in court.

“It’s heavy-handed. I’ve apologized and told them we were drunk,” Foster said.

“I know streaming the fight was wrong. I didn’t stop my friend but I was watching the boxing. I’m just a bloke who had a few drinks with his friends.”

Unless he can find a law firm willing to fight his corner at a hugely cut-down rate, Foster will find this kind of legal fisticuffs to be a massively expensive proposition, one in which he will start out as the clear underdog.

Not only was Foster’s Sky account the originating source, but both his iPad and his Facebook account were also used to stream the fight. On top of what appears to be a signed confession, he also promised not to do anything else like this in future. Furthermore, he even agreed to issue an apology that Sky can use in future anti-piracy messages.

Of course, Foster might indeed be a noble gentleman but he should be aware that as a civil matter, this fight would be decided on the balance of probabilities, not beyond reasonable doubt. If the judge decides 51% in Sky’s favor, he suffers a knockout along with a huge financial headache.

No one wants a £5,000 bill but that’s a drop in the ocean compared to the cost implications of losing this case.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Top 10 Most Popular Torrent Sites of 2018

Post Syndicated from Ernesto original https://torrentfreak.com/top-10-most-popular-torrent-sites-of-2018-180107/

Torrent sites have come and gone over the past year. Now, at the start of 2018, we take a look to see what the most-used sites are in the current landscape.

The Pirate Bay remains the undisputed number one. The site has weathered a few storms over the years, but it looks like it will be able to celebrate its 15th anniversary, which is coming up in a few months.

The list also includes various newcomers including Idope and Zooqle. While many people are happy to see new torrent sites emerge, this often means that others have called it quits.

Last year’s runner-up Extratorrent, for example, has shut down and left a gaping hole behind. And it wasn’t the only site that went away. TorrentProject also disappeared without a trace and the same was true for isohunt.to.

The unofficial Torrentz reincarnation Torrentz2.eu, the highest newcomer last year, is somewhat of an unusual entry. A few weeks ago all links to externally hosted torrents were removed, as was the list of indexed pages.

We decided to include the site nonetheless, given its history and because it’s still possible to find hashes through the site. As Torrentz2’s future is uncertain, we added an extra site (10.1) as compensation.

Finally, RuTracker also deserves a mention. The torrent site generates enough traffic to warrant a listing, but we traditionally limit the list to sites that are targeted primarily at an English or international audience.

Below is the full list of the ten most-visited torrent sites at the start of the new year. The list is based on various traffic reports and we display the Alexa rank for each. In addition, we include last year’s ranking.

Most Popular Torrent Sites

1. The Pirate Bay

The Pirate Bay is the “king of torrents” once again and also the oldest site in this list. The past year has been relatively quiet for the notorious torrent site, which is currently operating from its original .org domain name.

Alexa Rank: 104/ Last year #1

2. RARBG

RARBG, which started out as a Bulgarian tracker, has captured the hearts and minds of many video pirates. The site was founded in 2008 and specializes in high quality video releases.

Alexa Rank: 298 / Last year #3

3. 1337x

1337x continues where it left off last year. The site gained a lot of traffic and, unlike some other sites in the list, has a dedicated group of uploaders that provide fresh content.

Alexa Rank: 321 / Last year #6

4. Torrentz2

Torrentz2 launched as a stand-in for the original Torrentz.eu site, which voluntarily closed its doors in 2016. At the time of writing, the site only lists torrent hashes and no longer any links to external torrent sites. While browser add-ons and plugins still make the site functional, its future is uncertain.

Alexa Rank: 349 / Last year #5

5. YTS.ag

YTS.ag is the unofficial successor of the defunct YTS or YIFY group. Not all other torrent sites were happy that the site hijacked the popular brand, and several are actively banning its releases.

Alexa Rank: 563 / Last year #4

6. EZTV.ag

The original TV-torrent distribution group EZTV shut down after a hostile takeover in 2015, with new owners claiming ownership of the brand. The new group currently operates from EZTV.ag and releases its own torrents. These releases are banned on some other torrent sites due to this controversial history.

Alexa Rank: 981 / Last year #7

7. LimeTorrents

Limetorrents has been an established torrent site for more than half a decade. The site’s operator also runs the torrent cache iTorrents, which is used by several other torrent search engines.

Alexa Rank: 2,433 / Last year #10

8. NYAA.si

NYAA.si is a popular resurrection of the anime torrent site NYAA, which shut down last year. Previously we left anime-oriented sites out of the list, but since we also include dedicated TV and movie sites, we decided that a mention is more than warranted.

Alexa Rank: 1,575 / Last year #NA

9. Torrents.me

Torrents.me is one of the torrent sites that enjoyed a meteoric rise in traffic this year. It’s a meta-search engine that links to torrent files and magnet links from other torrent sites.

Alexa Rank: 2,045 / Last year #NA

10. Zooqle

Zooqle, which boasts nearly three million verified torrents, has stayed under the radar for years but has still kept growing. The site made it into the top 10 for the first time this year.

Alexa Rank: 2,347 / Last year #NA

10.1 iDope

The special 10.1 mention goes to iDope. Launched in 2016, the site is a relative newcomer to the torrent scene. The torrent indexer has steadily increased its audience over the past year. With similar traffic numbers to Zooqle, a listing is therefore warranted.

Alexa Rank: 2,358 / Last year #NA

Disclaimer: Yes, we know that Alexa isn’t perfect, but it helps to compare sites that operate in a similar niche. We also used other traffic metrics to compile the top ten. Please keep in mind that many sites have mirrors or alternative domains, which are not taken into account here.

Source: TF.

Combine Transactional and Analytical Data Using Amazon Aurora and Amazon Redshift

Post Syndicated from Re Alvarez-Parmar original https://aws.amazon.com/blogs/big-data/combine-transactional-and-analytical-data-using-amazon-aurora-and-amazon-redshift/

A few months ago, we published a blog post about capturing data changes in an Amazon Aurora database and sending them to Amazon Athena and Amazon QuickSight for fast analysis and visualization. In this post, I want to demonstrate how easy it can be to take the data in Aurora and combine it with data in Amazon Redshift using Amazon Redshift Spectrum.

With Amazon Redshift, you can build petabyte-scale data warehouses that unify data from a variety of internal and external sources. Because Amazon Redshift is optimized for complex queries (often involving multiple joins) across large tables, it can handle large volumes of retail, inventory, and financial data without breaking a sweat.

In this post, we describe how to combine data in Aurora with data in Amazon Redshift. Here’s an overview of the solution:

  • Use AWS Lambda functions with Amazon Aurora to capture data changes in a table.
  • Save the data in an Amazon S3 bucket.
  • Query data using Amazon Redshift Spectrum.

We use the following services:

  • Amazon Aurora
  • AWS Lambda
  • Amazon Kinesis Data Firehose
  • Amazon S3
  • Amazon Redshift (with Amazon Redshift Spectrum)
  • Amazon QuickSight

Serverless architecture for capturing and analyzing Aurora data changes

Consider a scenario in which an e-commerce web application uses Amazon Aurora for a transactional database layer. The company has a sales table that captures every single sale, along with a few corresponding data items. This information is stored as immutable data in a table. Business users want to monitor the sales data and then analyze and visualize it.

In this example, you capture the data changes in an Aurora database table and save them in Amazon S3. After the data is captured in Amazon S3, you combine it with data in your existing Amazon Redshift cluster for analysis.

By the end of this post, you will understand how to capture data events in an Aurora table and push them out to other AWS services using AWS Lambda.

The following diagram shows the flow of data as it occurs in this tutorial:

The starting point in this architecture is a database insert operation in Amazon Aurora. When the insert statement is executed, a custom trigger calls a Lambda function and forwards the inserted data. Lambda writes the data that it received from Amazon Aurora to a Kinesis data delivery stream. Kinesis Data Firehose writes the data to an Amazon S3 bucket. Once the data is in an Amazon S3 bucket, it is queried in place using Amazon Redshift Spectrum.

Creating an Aurora database

First, create a database by following these steps in the Amazon RDS console:

  1. Sign in to the AWS Management Console, and open the Amazon RDS console.
  2. Choose Launch a DB instance, and choose Next.
  3. For Engine, choose Amazon Aurora.
  4. Choose a DB instance class. This example uses a small instance class, since this is not a production database.
  5. In Multi-AZ deployment, choose No.
  6. Configure DB instance identifier, Master username, and Master password.
  7. Launch the DB instance.

After you create the database, use MySQL Workbench to connect to the database using the CNAME from the console. For information about connecting to an Aurora database, see Connecting to an Amazon Aurora DB Cluster.

The following screenshot shows the MySQL Workbench configuration:

Next, create a table in the database by running the following SQL statement:

CREATE TABLE Sales (
  InvoiceID int NOT NULL AUTO_INCREMENT,
  ItemID int NOT NULL,
  Category varchar(255),
  Price double(10,2),
  Quantity int NOT NULL,
  OrderDate timestamp,
  DestinationState varchar(2),
  ShippingType varchar(255),
  Referral varchar(255),
  PRIMARY KEY (InvoiceID)
);

You can now populate the table with some sample data. To generate sample data in your table, copy and run the following script, replacing the placeholder connection values (AURORA_CNAME, DBUSER, DBPASSWORD, and DB) with your own.

#!/usr/bin/python
import MySQLdb
import random
import datetime

# Replace the placeholder connection details with your own.
db = MySQLdb.connect(host="AURORA_CNAME",
                     user="DBUSER",
                     passwd="DBPASSWORD",
                     db="DB")

states = ("AL","AK","AZ","AR","CA","CO","CT","DE","FL","GA","HI","ID","IL","IN",
"IA","KS","KY","LA","ME","MD","MA","MI","MN","MS","MO","MT","NE","NV","NH","NJ",
"NM","NY","NC","ND","OH","OK","OR","PA","RI","SC","SD","TN","TX","UT","VT","VA",
"WA","WV","WI","WY")

shipping_types = ("Free", "3-Day", "2-Day")

product_categories = ("Garden", "Kitchen", "Office", "Household")
referrals = ("Other", "Friend/Colleague", "Repeat Customer", "Online Ad")

cursor = db.cursor()

# Insert ten rows of randomized sample data.
for _ in range(10):
    item_id = random.randint(1, 100)
    state = random.choice(states)
    shipping_type = random.choice(shipping_types)
    product_category = random.choice(product_categories)
    quantity = random.randint(1, 4)
    referral = random.choice(referrals)
    price = random.randint(1, 100)
    # The day is capped at 28 so every generated date is valid in every month.
    order_date = datetime.date(2016, random.randint(1, 12),
                               random.randint(1, 28)).isoformat()

    data_order = (item_id, product_category, price, quantity, order_date,
                  state, shipping_type, referral)

    add_order = ("INSERT INTO Sales "
                 "(ItemID, Category, Price, Quantity, OrderDate, DestinationState, "
                 "ShippingType, Referral) "
                 "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)")

    cursor.execute(add_order, data_order)
    db.commit()

cursor.close()
db.close()

The following screenshot shows how the table appears with the sample data:

Sending data from Amazon Aurora to Amazon S3

There are two methods available to send data from Amazon Aurora to Amazon S3:

  • Using a Lambda function
  • Using SELECT INTO OUTFILE S3

To demonstrate the ease of setting up integration between multiple AWS services, we use a Lambda function to send data to Amazon S3 using Amazon Kinesis Data Firehose.

Alternatively, you can use a SELECT INTO OUTFILE S3 statement to query data from an Amazon Aurora DB cluster and save it directly in text files that are stored in an Amazon S3 bucket. However, with this method, there is a delay between the time that the database transaction occurs and the time that the data is exported to Amazon S3 because the default file size threshold is 6 GB.
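
To make the alternative concrete, here is a minimal sketch of running SELECT INTO OUTFILE S3 from Python using the same MySQLdb client as the data generator above. It assumes the cluster already has an IAM role attached that allows writing to the target bucket; the bucket name and prefix are placeholders.

#!/usr/bin/python
import MySQLdb

# Reuse the placeholder connection details from the data generator above.
db = MySQLdb.connect(host="AURORA_CNAME",
                     user="DBUSER",
                     passwd="DBPASSWORD",
                     db="DB")
cursor = db.cursor()

# Export the Sales table straight to S3. The cluster must have an IAM role
# attached that allows writing to the target bucket; the bucket and prefix
# here are placeholders.
cursor.execute("""
    SELECT * FROM Sales
    INTO OUTFILE S3 's3-us-east-1://YOUR_BUCKET/sales-export/sales'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'
""")

cursor.close()
db.close()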

Creating a Kinesis data delivery stream

The next step is to create a Kinesis data delivery stream, since it’s a dependency of the Lambda function.

To create a delivery stream:

  1. Open the Kinesis Data Firehose console.
  2. Choose Create delivery stream.
  3. For Delivery stream name, type AuroraChangesToS3.
  4. For Source, choose Direct PUT.
  5. For Record transformation, choose Disabled.
  6. For Destination, choose Amazon S3.
  7. In the S3 bucket drop-down list, choose an existing bucket, or create a new one.
  8. Enter a prefix if needed, and choose Next.
  9. For Data compression, choose GZIP.
  10. In IAM role, choose either an existing role that has access to write to Amazon S3, or choose to generate one automatically. Choose Next.
  11. Review all the details on the screen, and choose Create delivery stream when you’re finished.
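
If you would rather script this setup than click through the console, a minimal boto3 sketch like the following should produce an equivalent delivery stream. The role ARN and bucket ARN are placeholders to replace with your own; the role must allow Firehose to write to the bucket.

import boto3

firehose = boto3.client('firehose')

# The role ARN and bucket ARN are placeholders -- substitute your own.
firehose.create_delivery_stream(
    DeliveryStreamName='AuroraChangesToS3',
    DeliveryStreamType='DirectPut',
    S3DestinationConfiguration={
        'RoleARN': 'arn:aws:iam::XXXXXXXXXXXX:role/firehose-delivery-role',
        'BucketARN': 'arn:aws:s3:::YOUR_BUCKET',
        'Prefix': 'CDC/',
        'CompressionFormat': 'GZIP',
    }
)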


Creating a Lambda function

Now you can create a Lambda function that is called every time there is a change that needs to be tracked in the database table. This Lambda function passes the data to the Kinesis data delivery stream that you created earlier.

To create the Lambda function:

  1. Open the AWS Lambda console.
  2. Ensure that you are in the AWS Region where your Amazon Aurora database is located.
  3. If you have no Lambda functions yet, choose Get started now. Otherwise, choose Create function.
  4. Choose Author from scratch.
  5. Give your function a name, and select Python 3.6 for Runtime.
  6. Choose an existing role or create a new one; the role needs permission to call firehose:PutRecord.
  7. Choose Next on the trigger selection screen.
  8. Paste the following code in the code window. Change the stream_name variable to the Kinesis data delivery stream that you created in the previous step.
  9. Choose File -> Save in the code editor and then choose Save.
import boto3

firehose = boto3.client('firehose')
stream_name = 'AuroraChangesToS3'


def Kinesis_publish_message(event, context):
    # Build one CSV line from the fields forwarded by the stored procedure.
    firehose_data = ("%s,%s,%s,%s,%s,%s,%s,%s\n" % (
        event['ItemID'], event['Category'], event['Price'],
        event['Quantity'], event['OrderDate'], event['DestinationState'],
        event['ShippingType'], event['Referral']))

    record = {'Data': firehose_data}
    print(record)

    # Forward the record to the Kinesis data delivery stream.
    firehose.put_record(DeliveryStreamName=stream_name, Record=record)

Note the Amazon Resource Name (ARN) of this Lambda function.
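
Before wiring up the database, you can sanity-check the function by invoking it directly with a hand-built event. The sketch below assumes the function is named CDCFromAuroraToKinesis, matching the ARN used later in the stored procedure; substitute the name you chose, and note the sample field values are arbitrary.

import boto3
import json

lambda_client = boto3.client('lambda')

# A sample event shaped like the payload the stored procedure will send.
test_event = {
    'ItemID': 42, 'Category': 'Garden', 'Price': 19.99, 'Quantity': 2,
    'OrderDate': '2016-05-04 00:00:00', 'DestinationState': 'WA',
    'ShippingType': 'Free', 'Referral': 'Online Ad'
}

# Function name is a placeholder -- use the name you gave your function.
response = lambda_client.invoke(
    FunctionName='CDCFromAuroraToKinesis',
    InvocationType='RequestResponse',
    Payload=json.dumps(test_event)
)
print(response['StatusCode'])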

Giving Aurora permissions to invoke a Lambda function

To give Amazon Aurora permissions to invoke a Lambda function, you must attach an IAM role with appropriate permissions to the cluster. For more information, see Invoking a Lambda Function from an Amazon Aurora DB Cluster.

Once you are finished, the Amazon Aurora database has access to invoke a Lambda function.
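
If you prefer to attach the role from a script rather than the console, a boto3 sketch like the following should work; the cluster identifier and role ARN are placeholders, and the role must allow lambda:InvokeFunction on your function. Per the documentation linked above, for Aurora MySQL you also point the aws_default_lambda_role cluster parameter at this role.

import boto3

rds = boto3.client('rds')

# Identifiers are placeholders. The role must allow lambda:InvokeFunction
# on your Lambda function's ARN.
rds.add_role_to_db_cluster(
    DBClusterIdentifier='my-aurora-cluster',
    RoleArn='arn:aws:iam::XXXXXXXXXXXX:role/AuroraInvokeLambdaRole'
)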

Creating a stored procedure and a trigger in Amazon Aurora

Now, go back to MySQL Workbench, and run the following command to create a new stored procedure. When this stored procedure is called, it invokes the Lambda function you created. Change the ARN in the following code to your Lambda function’s ARN.

DROP PROCEDURE IF EXISTS CDC_TO_FIREHOSE;
DELIMITER ;;
CREATE PROCEDURE CDC_TO_FIREHOSE (IN ItemID VARCHAR(255), 
									IN Category varchar(255), 
									IN Price double(10,2),
                                    IN Quantity int(11),
                                    IN OrderDate timestamp,
                                    IN DestinationState varchar(2),
                                    IN ShippingType varchar(255),
                                    IN Referral  varchar(255)) LANGUAGE SQL 
BEGIN
  CALL mysql.lambda_async('arn:aws:lambda:us-east-1:XXXXXXXXXXXXX:function:CDCFromAuroraToKinesis', 
     CONCAT('{ "ItemID" : "', ItemID, 
            '", "Category" : "', Category,
            '", "Price" : "', Price,
            '", "Quantity" : "', Quantity, 
            '", "OrderDate" : "', OrderDate, 
            '", "DestinationState" : "', DestinationState, 
            '", "ShippingType" : "', ShippingType, 
            '", "Referral" : "', Referral, '"}')
     );
END
;;
DELIMITER ;

Create a trigger TR_Sales_CDC on the Sales table. When a new record is inserted, this trigger calls the CDC_TO_FIREHOSE stored procedure.

DROP TRIGGER IF EXISTS TR_Sales_CDC;

DELIMITER ;;
CREATE TRIGGER TR_Sales_CDC
  AFTER INSERT ON Sales
  FOR EACH ROW
BEGIN
  SELECT NEW.ItemID, NEW.Category, NEW.Price, NEW.Quantity, NEW.OrderDate,
         NEW.DestinationState, NEW.ShippingType, NEW.Referral
  INTO @ItemID, @Category, @Price, @Quantity, @OrderDate,
       @DestinationState, @ShippingType, @Referral;
  CALL CDC_TO_FIREHOSE(@ItemID, @Category, @Price, @Quantity, @OrderDate,
                       @DestinationState, @ShippingType, @Referral);
END
;;
DELIMITER ;

If a new row is inserted in the Sales table, the Lambda function that is mentioned in the stored procedure is invoked.

Verify that data is being sent from the Lambda function to Kinesis Data Firehose to Amazon S3 successfully. You might have to insert a few records, depending on the size of your data, before new records appear in Amazon S3. This is due to Kinesis Data Firehose buffering. To learn more about Kinesis Data Firehose buffering, see the “Amazon S3” section in Amazon Kinesis Data Firehose Data Delivery.

Every time a new record is inserted in the Sales table, the stored procedure is called, and the new data is delivered to Amazon S3.
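
A quick way to confirm that records are landing is to list the delivery prefix with boto3. The bucket and prefix below are placeholders for the values you configured on the delivery stream; because of the Firehose buffering described above, objects can take a minute or more to appear.

import boto3

s3 = boto3.client('s3')

# Bucket and prefix are placeholders for the delivery stream's settings.
response = s3.list_objects_v2(Bucket='YOUR_BUCKET', Prefix='CDC/')
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])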

Querying data in Amazon Redshift

In this section, you use the data you produced from Amazon Aurora and consume it as-is in Amazon Redshift. In order to allow you to process your data as-is, where it is, while taking advantage of the power and flexibility of Amazon Redshift, you use Amazon Redshift Spectrum. You can use Redshift Spectrum to run complex queries on data stored in Amazon S3, with no need for loading or other data prep.

Just create a data source and issue your queries to your Amazon Redshift cluster as usual. Behind the scenes, Redshift Spectrum scales to thousands of instances on a per-query basis, ensuring that you get fast, consistent performance even as your dataset grows to beyond an exabyte! Being able to query data that is stored in Amazon S3 means that you can scale your compute and your storage independently. You have the full power of the Amazon Redshift query model and all the reporting and business intelligence tools at your disposal. Your queries can reference any combination of data stored in Amazon Redshift tables and in Amazon S3.

Redshift Spectrum supports open, common data formats, including CSV/TSV, Apache Parquet, SequenceFile, and RCFile. Files can be compressed using gzip or Snappy, with support for additional formats and compression methods in the works.

First, create an Amazon Redshift cluster. Follow the steps in Launch a Sample Amazon Redshift Cluster.

Next, create an IAM role that has access to Amazon S3 and Athena. By default, Amazon Redshift Spectrum uses the Amazon Athena data catalog. Your cluster needs authorization to access your external data catalog in AWS Glue or Athena and your data files in Amazon S3.

In the demo setup, I attached AmazonS3FullAccess and AmazonAthenaFullAccess. In a production environment, the IAM roles should follow the standard security of granting least privilege. For more information, see IAM Policies for Amazon Redshift Spectrum.

Attach the newly created role to the Amazon Redshift cluster. For more information, see Associate the IAM Role with Your Cluster.

Next, connect to the Amazon Redshift cluster, and create an external schema and database:

create external schema if not exists spectrum_schema
from data catalog 
database 'spectrum_db' 
region 'us-east-1'
IAM_ROLE 'arn:aws:iam::XXXXXXXXXXXX:role/RedshiftSpectrumRole'
create external database if not exists;

Don’t forget to replace the IAM role in the statement.

Then create an external table within the database:

 CREATE EXTERNAL TABLE IF NOT EXISTS spectrum_schema.ecommerce_sales(
  ItemID int,
  Category varchar,
  Price DOUBLE PRECISION,
  Quantity int,
  OrderDate TIMESTAMP,
  DestinationState varchar,
  ShippingType varchar,
  Referral varchar)
ROW FORMAT DELIMITED
      FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION 's3://{BUCKET_NAME}/CDC/'

Query the table, and it should contain data. This is a fact table.

select top 10 * from spectrum_schema.ecommerce_sales


Next, create a dimension table. For this example, we create a date/time dimension table. Create the table:

CREATE TABLE date_dimension (
  d_datekey           integer       not null sortkey,
  d_dayofmonth        integer       not null,
  d_monthnum          integer       not null,
  d_dayofweek                varchar(10)   not null,
  d_prettydate        date       not null,
  d_quarter           integer       not null,
  d_half              integer       not null,
  d_year              integer       not null,
  d_season            varchar(10)   not null,
  d_fiscalyear        integer       not null)
diststyle all;

Populate the table with data:

copy date_dimension from 's3://reparmar-lab/2016dates' 
iam_role 'arn:aws:iam::XXXXXXXXXXXX:role/redshiftspectrum'
DELIMITER ','
dateformat 'auto';
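
The copy statement above reads from the author’s bucket. If you can’t access it, a small script can generate an equivalent file to upload to your own bucket. In this sketch, the season mapping and the fiscal-year rule are illustrative assumptions; adjust them to your own calendar conventions.

#!/usr/bin/python
import csv
import datetime

# Illustrative season mapping -- adjust to your own conventions.
SEASONS = {12: 'Winter', 1: 'Winter', 2: 'Winter',
           3: 'Spring', 4: 'Spring', 5: 'Spring',
           6: 'Summer', 7: 'Summer', 8: 'Summer',
           9: 'Fall', 10: 'Fall', 11: 'Fall'}

# Generate one row per day of 2016, matching the date_dimension layout above.
with open('2016dates.csv', 'w') as f:
    writer = csv.writer(f)
    day = datetime.date(2016, 1, 1)
    while day.year == 2016:
        writer.writerow([
            int(day.strftime('%Y%m%d')),   # d_datekey
            day.day,                       # d_dayofmonth
            day.month,                     # d_monthnum
            day.strftime('%A'),            # d_dayofweek
            day.isoformat(),               # d_prettydate
            (day.month - 1) // 3 + 1,      # d_quarter
            1 if day.month <= 6 else 2,    # d_half
            day.year,                      # d_year
            SEASONS[day.month],            # d_season
            day.year,                      # d_fiscalyear (assumed = calendar year)
        ])
        day += datetime.timedelta(days=1)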

The date dimension table should look like the following:

Querying data in local and external tables using Amazon Redshift

Now that you have the fact and dimension tables populated with data, you can combine the two and run analysis. For example, if you want to query the total sales amount by season, you can run the following:

select sum(quantity*price) as total_sales, date_dimension.d_season
from spectrum_schema.ecommerce_sales 
join date_dimension on spectrum_schema.ecommerce_sales.orderdate = date_dimension.d_prettydate 
group by date_dimension.d_season

You get the following results:

Similarly, you can replace d_season with d_dayofweek to get sales figures by weekday:

With Amazon Redshift Spectrum, you pay only for the queries you run against the data that you actually scan. We encourage you to use file partitioning, columnar data formats, and data compression to significantly minimize the amount of data scanned in Amazon S3. This is important for data warehousing because it dramatically improves query performance and reduces cost.

Partitioning your data in Amazon S3 by date, time, or any other custom keys enables Amazon Redshift Spectrum to dynamically prune nonrelevant partitions to minimize the amount of data processed. If you store data in a columnar format, such as Parquet, Amazon Redshift Spectrum scans only the columns needed by your query, rather than processing entire rows. Similarly, if you compress your data using one of the supported compression algorithms in Amazon Redshift Spectrum, less data is scanned.
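
As a rough illustration of both ideas, the following sketch converts a CSV export of the sales data into Parquet files partitioned by order month. It uses the pandas and pyarrow libraries, which are not part of this post’s setup, and the file name and partitioning key are assumptions for the example.

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# 'sales.csv' is an assumed export of the sales data; column names follow
# the example table used throughout this post.
df = pd.read_csv('sales.csv', names=['ItemID', 'Category', 'Price', 'Quantity',
                                     'OrderDate', 'DestinationState',
                                     'ShippingType', 'Referral'])

# Derive a partitioning key so Redshift Spectrum can prune by month.
df['order_month'] = pd.to_datetime(df['OrderDate']).dt.strftime('%Y-%m')

# Write columnar, month-partitioned Parquet files under sales_parquet/.
table = pa.Table.from_pandas(df)
pq.write_to_dataset(table, root_path='sales_parquet',
                    partition_cols=['order_month'])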

Analyzing and visualizing Amazon Redshift data in Amazon QuickSight

Modify the Amazon Redshift security group to allow an Amazon QuickSight connection. For more information, see Authorizing Connections from Amazon QuickSight to Amazon Redshift Clusters.

After modifying the Amazon Redshift security group, go to Amazon QuickSight. Create a new analysis, and choose Amazon Redshift as the data source.

Enter the database connection details, validate the connection, and create the data source.

Choose the schema to be analyzed. In this case, choose spectrum_schema, and then choose the ecommerce_sales table.

Next, we add a custom field for Total Sales = Price*Quantity. In the drop-down list for the ecommerce_sales table, choose Edit analysis data sets.

On the next screen, choose Edit.

In the data prep screen, choose New Field. Add a new calculated field Total Sales $, which is the product of the Price and Quantity fields. Then choose Create. Save and visualize it.

Next, to visualize total sales figures by month, create a graph with Order Date (formatted as month) on the x-axis and Total Sales on the y-axis.

After you’ve finished, you can use Amazon QuickSight to add different columns from your Amazon Redshift tables and perform different types of visualizations. You can build operational dashboards that continuously monitor your transactional and analytical data. You can publish these dashboards and share them with others.

Final notes

Amazon QuickSight can also read data in Amazon S3 directly. However, with the method demonstrated in this post, you have the option to manipulate, filter, and combine data from multiple sources or Amazon Redshift tables before visualizing it in Amazon QuickSight.

In this example, we dealt with data being inserted, but triggers can also be activated in response to UPDATE and DELETE statements.

Keep the following in mind:

  • Be careful when invoking a Lambda function from triggers on tables that experience high write traffic, as this can result in a large number of calls to your Lambda function. Although calls to the lambda_async procedure are asynchronous, triggers are synchronous.
  • A statement that results in a large number of trigger activations does not wait for the call to the AWS Lambda function to complete. But it does wait for the triggers to complete before returning control to the client.
  • Similarly, you must account for Amazon Kinesis Data Firehose limits. By default, Kinesis Data Firehose is limited to a maximum of 5,000 records/second. For more information, see Monitoring Amazon Kinesis Data Firehose.
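
If your insert rate approaches those limits, the Lambda function can protect the put_record call with a retry. Here is a minimal sketch of exponential backoff around the call used earlier; the retry count and sleep intervals are arbitrary starting points, not tuned values.

import time

import boto3
import botocore.exceptions

firehose = boto3.client('firehose')

def put_with_retry(stream_name, data, attempts=5):
    # Retry with simple exponential backoff when Firehose throttles the call.
    for attempt in range(attempts):
        try:
            return firehose.put_record(DeliveryStreamName=stream_name,
                                       Record={'Data': data})
        except botocore.exceptions.ClientError as e:
            if e.response['Error']['Code'] != 'ServiceUnavailableException':
                raise
            time.sleep((2 ** attempt) * 0.1)
    raise RuntimeError('Firehose still throttling after %d attempts' % attempts)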

In certain cases, it may be optimal to use AWS Database Migration Service (AWS DMS) to capture data changes in Aurora and use Amazon S3 as a target. For example, AWS DMS might be a good option if you don’t need to transform data from Amazon Aurora. The method used in this post gives you the flexibility to transform data from Aurora using Lambda before sending it to Amazon S3. Additionally, the architecture has the benefits of being serverless, whereas AWS DMS requires an Amazon EC2 instance for replication.

For design considerations while using Redshift Spectrum, see Using Amazon Redshift Spectrum to Query External Data.

If you have questions or suggestions, please comment below.


Additional Reading

If you found this post useful, be sure to check out Capturing Data Changes in Amazon Aurora Using AWS Lambda and 10 Best Practices for Amazon Redshift Spectrum.


About the Authors

Re Alvarez-Parmar is a solutions architect for Amazon Web Services. He helps enterprises achieve success through technical guidance and thought leadership. In his spare time, he enjoys spending time with his two kids and exploring outdoors.


More details about mitigations for the CPU Speculative Execution issue (Google Security Blog)

Post Syndicated from jake original https://lwn.net/Articles/743269/rss

One of the main concerns about the mitigations for the Meltdown/Spectre speculative execution bugs has been performance. The Google Security Blog is reporting negligible performance impact on Google systems for two of the mitigations (kernel page-table isolation and Retpoline): “In response to the vulnerabilities that were discovered we developed a novel mitigation called “Retpoline” — a binary modification technique that protects against “branch target injection” attacks. We shared Retpoline with our industry partners and have deployed it on Google’s systems, where we have observed negligible impact on performance.
In addition, we have deployed Kernel Page Table Isolation (KPTI) — a general purpose technique for better protecting sensitive information in memory from other software running on a machine — to the entire fleet of Google Linux production servers that support all of our products, including Search, Gmail, YouTube, and Google Cloud Platform.
There has been speculation that the deployment of KPTI causes significant performance slowdowns. Performance can vary, as the impact of the KPTI mitigations depends on the rate of system calls made by an application. On most of our workloads, including our cloud infrastructure, we see negligible impact on performance.”

Three new stable kernels

Post Syndicated from jake original https://lwn.net/Articles/743246/rss

Greg Kroah-Hartman has announced the release of the 4.14.12, 4.9.75, and 4.4.110 stable kernels. The bulk of the changes are either to fix the mitigations for Meltdown/Spectre (in 4.14.12) or to backport those mitigations (in the two older kernels). There are apparently known (or suspected) problems with each of the releases, which Kroah-Hartman is hoping to get shaken out in the near term. For example, the 4.4.110 announcement warns: “But be careful, there have been some reports of problems with this release during the -rc review cycle. Hopefully all of those issues are now resolved.

So please test, as of right now, it should be ‘bug compatible’ with the ‘enterprise’ kernel releases with regards to the Meltdown bug and proper support on all virtual platforms (meaning there is still a vdso issue that might trip up some old binaries, again, please test!)”

The Top 10 Most Downloaded AWS Security and Compliance Documents in 2017

Post Syndicated from Sara Duffer original https://aws.amazon.com/blogs/security/the-top-10-most-downloaded-aws-security-and-compliance-documents-in-2017/

AWS download logo

The following list includes the ten most downloaded AWS security and compliance documents in 2017. Using this list, you can learn about what other AWS customers found most interesting about security and compliance last year.

  1. AWS Security Best Practices – This guide is intended for customers who are designing the security infrastructure and configuration for applications running on AWS. The guide provides security best practices that will help you define your Information Security Management System (ISMS) and build a set of security policies and processes for your organization so that you can protect your data and assets in the AWS Cloud.
  2. AWS: Overview of Security Processes – This whitepaper describes the physical and operational security processes for the AWS managed network and infrastructure, and helps answer questions such as, “How does AWS help me protect my data?”
  3. Architecting for HIPAA Security and Compliance on AWS – This whitepaper describes how to leverage AWS to develop applications that meet HIPAA and HITECH compliance requirements.
  4. Service Organization Controls (SOC) 3 Report – This publicly available report describes internal AWS security controls, availability, processing integrity, confidentiality, and privacy.
  5. Introduction to AWS Security – This document provides an introduction to AWS’s approach to security, including the controls in the AWS environment, and some of the products and features that AWS makes available to customers to meet your security objectives.
  6. AWS Best Practices for DDoS Resiliency – This whitepaper covers techniques to mitigate distributed denial of service (DDoS) attacks.
  7. AWS: Risk and Compliance – This whitepaper provides information to help customers integrate AWS into their existing control framework, including a basic approach for evaluating AWS controls and a description of AWS certifications, programs, reports, and third-party attestations.
  8. Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities – AWS WAF is a web application firewall that helps you protect your websites and web applications against various attack vectors at the HTTP protocol level. This whitepaper outlines how you can use AWS WAF to mitigate the application vulnerabilities that are defined in the Open Web Application Security Project (OWASP) Top 10 list of most common categories of application security flaws.
  9. Introduction to Auditing the Use of AWS – This whitepaper provides information, tools, and approaches for auditors to use when auditing the security of the AWS managed network and infrastructure.
  10. AWS Security and Compliance: Quick Reference Guide – By using AWS, you inherit the many security controls that we operate, thus reducing the number of security controls that you need to maintain. Your own compliance and certification programs are strengthened while at the same time lowering your cost to maintain and run your specific security assurance requirements. Learn more in this quick reference guide.

– Sara

Detecting Adblocker Blockers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/detecting_adblo.html

Interesting research on the prevalence of adblock blockers: “Measuring and Disrupting Anti-Adblockers Using Differential Execution Analysis”:

Abstract: Millions of people use adblockers to remove intrusive and malicious ads as well as protect themselves against tracking and pervasive surveillance. Online publishers consider adblockers a major threat to the ad-powered “free” Web. They have started to retaliate against adblockers by employing anti-adblockers which can detect and stop adblock users. To counter this retaliation, adblockers in turn try to detect and filter anti-adblocking scripts. This back and forth has prompted an escalating arms race between adblockers and anti-adblockers.

We want to develop a comprehensive understanding of anti-adblockers, with the ultimate aim of enabling adblockers to bypass state-of-the-art anti-adblockers. In this paper, we present a differential execution analysis to automatically detect and analyze anti-adblockers. At a high level, we collect execution traces by visiting a website with and without adblockers. Through differential execution analysis, we are able to pinpoint the conditions that lead to the differences caused by anti-adblocking code. Using our system, we detect anti-adblockers on 30.5% of the Alexa top-10K websites which is 5-52 times more than reported in prior literature. Unlike prior work which is limited to detecting visible reactions (e.g., warning messages) by anti-adblockers, our system can discover attempts to detect adblockers even when there is no visible reaction. From manually checking one third of the detected websites, we find that the websites that have no visible reactions constitute over 90% of the cases, completely dominating the ones that have visible warning messages. Finally, based on our findings, we further develop JavaScript rewriting and API hooking based solutions (the latter implemented as a Chrome extension) to help adblockers bypass state-of-the-art anti-adblockers.

News article.

Spectre and Meltdown Attacks

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/spectre_and_mel.html

After a week or so of rumors, everyone is now reporting about the Spectre and Meltdown attacks against pretty much every modern processor out there.

These are side-channel attacks where one process can spy on other processes. They affect computers where an untrusted browser window can execute code, phones that have multiple apps running at the same time, and cloud computing networks that run lots of different processes at once. Fixing them either requires a patch that results in a major performance hit, or is impossible and requires a re-architecture of conditional execution in future CPU chips.

I’ll be writing something for publication over the next few days. This post is basically just a link repository.

EDITED TO ADD: Good technical explanation. And a Slashdot thread.

EDITED TO ADD (1/5): Another good technical description. And how the exploits work through browsers. A rundown of what vendors are doing. Nicholas Weaver on its effects on individual computers.

EDITED TO ADD (1/7): xkcd.

EDITED TO ADD (1/10): Another good technical description.