
Judge Tells Movie Company That it Can’t Sue Alleged BitTorrent Pirate

Post Syndicated from Andy original https://torrentfreak.com/judge-tells-movie-company-that-it-cant-sue-alleged-bittorrent-pirate-180118/

Despite a considerable migration towards streaming piracy in recent years, copyright trolls are still finding plenty of potential targets around the world. Alleged BitTorrent pirates are target number one since their activities are most easily tracked. However, it isn’t all plain sailing for the pirate hunters.

Last December we reported on the case of Lingfu Zhang, an Oregon resident accused by the makers of the 2015 drama film Fathers & Daughters (F&D) of downloading and sharing their content without permission. While these kinds of cases often disappear, with targets making confidential settlements to make a legal battle go away, Zhang chose to fight back.

Represented by attorney David Madden, Zhang not only denied downloading the movie in question but argued that the filmmakers had signed away their online distribution rights. He noted that F&D, via an agent, had sold the online distribution rights to a third party not involved in the case.

So, if F&D no longer held the right to distribute the movie online, suing for an infringement of those rights would be impossible. With this in mind, Zhang’s attorney moved for a summary judgment in his client’s favor.

“ZHANG denies downloading the movie but Defendant’s current motion for summary judgment challenges a different portion of F&D’s case,” Madden wrote.

“Defendant argues that F&D has alienated all of the relevant rights necessary to sue for infringement under the Copyright Act.”

In response, F&D argued that it still held some rights, including the right to exploit the movie on “airlines and oceangoing vessels.” But since Zhang wasn’t accused of being aboard either form of transport when the alleged offense occurred, the defense countered that the point was moot.

Judge Michael H. Simon handed down his decision yesterday and it heralds bad news for F&D and celebration time for Zhang and his attorney. In a 17-page ruling first spotted by Fight Copyright Trolls, the Judge agrees that F&D has no standing to sue.

Citing the Righthaven LLC v. Hoehn case from 2013, the Judge notes that under the Copyright Act, only the “legal or beneficial owner of an exclusive right under a copyright” has standing to sue for infringement of that right.

Judge Simon notes that while F&D claims it is the ‘legal owner’ of the copyright to the Fathers & Daughters movie, the company “misstates the law”, adding that F&D also failed to present evidence that it is the ‘beneficial owner’ of the relevant exclusive right. On this basis, both claims are rejected.

The Judge noted that the exclusive rights to the movie were granted to a company called Vertical Entertainment which received the exclusive right to “manufacture, reproduce, sell, rent, exhibit, broadcast, transmit, stream, download, license, sub-license, distribute, sub-distribute, advertise, market, promote, publicize and exploit” the movie in the United States.

An exclusive license means that ownership of a copyright is transferred for the term of the license, meaning that Vertical – not F&D – is the legal owner under the Copyright Act. It matters not, the Judge says, that F&D retained the rights to display the movie “on airlines and ships” since only the transferee (Vertical) has standing to sue and those locations are irrelevant to the lawsuit.

“Under the Copyright Act, F&D is not the ‘legal owner’ with standing to sue for infringement relating to the rights that were transferred to Vertical through its exclusive license granted in the distribution agreement,” the Judge writes.

Also at issue was an undated document presented by F&D titled Anti-Piracy and Rights Enforcement Reservation of Rights Addendum. The document, relied upon by F&D, claimed that F&D is authorized to “enforce copyrights against Internet infringers” including those that use peer-to-peer technologies such as BitTorrent.

However, the Judge found that the peer-to-peer rights apparently reserved to F&D were bare rights to pursue infringers, not the exclusive display and distribution rights required to sue under the Copyright Act. Furthermore, the Judge determined that there was no evidence the document existed before the lawsuit was filed. Zhang and his attorney had previously asserted that the addendum was created afterwards, and the Judge agreed.

“F&D did not dispute that the undated anti-piracy addendum was created after this lawsuit was filed, or otherwise respond to Defendant’s standing argument relating to the untimeliness of this document,” the Judge notes.

“Accordingly, because the only reasonable inference supported by the evidence is that this document was created after the filing of this lawsuit, it is not appropriate to consider for purposes of standing.”

So, with Vertical Entertainment the only company with the right to sue, could they be added to the lawsuit, F&D asked? Citing an earlier case, the Judge said ‘no’, noting that “summary judgment is not a procedural second chance to flesh out inadequate pleadings.”

With that, Judge Simon granted Lingfu Zhang’s request for summary judgment and dismissed F&D’s claims for lack of standing.

As noted by Fight Copyright Trolls, the movie licensing scheme employed by F&D is complex and, given the fact that notorious copyright troll outfit Guardaley is involved (Guardaley filed 24 cases in eight districts on behalf of F&D), it would be interesting if legal professionals could dig deeper, to see how far the rabbit hole goes.

The summary judgment can be found here (pdf).


Scale Your Web Application — One Step at a Time

Post Syndicated from Saurabh Shrivastava original https://aws.amazon.com/blogs/architecture/scale-your-web-application-one-step-at-a-time/

I often encounter people experiencing frustration as they attempt to scale their e-commerce or WordPress site, particularly around the cost and complexity involved. When I talk to customers about their scaling plans, they often mention phrases such as horizontal scaling and microservices, but usually people aren’t sure how to dive in and effectively scale their sites.

Now let’s talk about different scaling options. For instance, if your current workload is in a traditional data center, you can leverage the cloud for your on-premises solution. This way you can scale to achieve greater efficiency at lower cost. It’s not necessary to set up a whole powerhouse to light a few bulbs. If your workload is already in the cloud, you can use one of the available out-of-the-box options.

Designing your API as microservices and adding horizontal scaling might seem like the best choice, unless your web application is already running on premises and you need to scale it quickly because of unexpected large spikes in web traffic.

So how do you handle this situation? Take scaling one step at a time, and you may find that horizontal scaling isn’t the right choice after all.

For example, assume you have a tech news website where you published an early-look review of an upcoming, and highly anticipated, smartphone, and the review went viral. The review, a blog post on your website, includes both video and pictures. Comments are enabled for the post and readers can also rate it. If your website is hosted on a traditional Linux server with a LAMP stack, you may find yourself with immediate scaling problems.

Let’s dig into the details of the current scenario:

  • Where are images and videos stored?
  • How many read/write requests are received per second? Per minute?
  • What is the level of security required?
  • Are these synchronous or asynchronous requests?

We’ll also want to consider the following if your website has a transactional load, like e-commerce or banking:

  • How is the website handling sessions?
  • Do you have any compliance requirements, like the Payment Card Industry Data Security Standard (PCI DSS), if your website uses its own payment gateway?
  • How are you recording customer behavior data and fulfilling your analytics needs?
  • What are your load balancing considerations (scaling, caching, session maintenance, etc.)?

So, if we take this one step at a time:

Step 1: Ease server load. We need to quickly handle spikes in traffic generated by activity on the blog post, so let’s reduce server load by moving images and video to a third-party content delivery network (CDN). AWS provides Amazon CloudFront as a CDN solution, which is highly scalable with built-in security to verify origin access identity and handle any DDoS attacks. CloudFront can direct traffic to your on-premises or cloud-hosted server with its 113 Points of Presence (102 Edge Locations and 11 Regional Edge Caches) in 56 cities across 24 countries, which provides efficient caching.
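
As a concrete illustration of Step 1, here is a minimal sketch that uploads the post’s heavy media to an S3 bucket assumed to sit behind a CloudFront distribution. The bucket name, file list, and cache lifetime are illustrative assumptions, not values from the post:

```python
# Hypothetical sketch: push static media to S3 so a CloudFront
# distribution in front of the bucket can cache and serve it.
import boto3

s3 = boto3.client("s3")

ASSETS = [
    ("media/review-video.mp4", "video/mp4"),
    ("media/phone-photo-01.jpg", "image/jpeg"),
]

for path, content_type in ASSETS:
    # A long Cache-Control max-age lets CloudFront edge locations serve
    # these objects repeatedly without returning to the origin.
    s3.upload_file(
        path,
        "example-blog-assets",  # assumed bucket behind CloudFront
        path,
        ExtraArgs={
            "ContentType": content_type,
            "CacheControl": "public, max-age=86400",
        },
    )
```

Once the objects are in the bucket, the blog’s HTML would reference them through the distribution’s domain name rather than the web server’s.
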
Step 2: Reduce read load by adding more read replicas. MySQL provides mirror replication for databases, and Oracle has its own replication plug-in. Amazon RDS provides up to five read replicas, which can span across the region, and Amazon Aurora can have 15 read replicas, with Amazon Aurora Auto Scaling support. If a workload is highly variable, you should consider Amazon Aurora Serverless to achieve high efficiency and reduced cost. While most mirror technologies do asynchronous replication, Amazon RDS can provide synchronous multi-AZ replication, which is good for disaster recovery but not for scalability. With asynchronous replication to a mirror instance, replicated data can sometimes be stale if network bandwidth is low, so you need to plan and design your application accordingly.

I recommend that you always use a read replica for any reporting needs, and try to move non-critical GET services to a read replica to reduce the load on the master database. In this case, the comments associated with a blog post can be fetched from a read replica, since that request can tolerate some delay if the asynchronous replication lags.
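
If the database runs on Amazon RDS, creating such a replica is a single API call. A minimal sketch, assuming an existing source instance; both instance identifiers are hypothetical:

```python
# Hypothetical sketch: add an RDS read replica for reporting and
# non-critical GET traffic. Identifiers are assumptions, not real instances.
import boto3

rds = boto3.client("rds", region_name="us-west-2")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="blogdb-replica-1",     # new replica to create
    SourceDBInstanceIdentifier="blogdb-master",  # existing master instance
)
# Once the replica reports "available", point read-only queries at its
# endpoint; writes continue to go to the master. Expect slight staleness,
# because RDS read replicas use asynchronous replication.
```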

Step 3: Reduce write requests. This can be achieved by introducing a queue to process messages asynchronously. Amazon Simple Queue Service (Amazon SQS) is a highly scalable queue that can handle any kind of work-message load. You can process data, like ratings and reviews, or calculate a Deal Quality Score (DQS), using batch processing via an SQS queue. If your workload is in AWS, I recommend using a job-observer pattern by setting up Auto Scaling to automatically increase or decrease the number of batch servers, using the number of SQS messages (surfaced through Amazon CloudWatch) as the trigger. For on-premises workloads, you can use the SQS SDK to create an Amazon SQS queue that holds messages until they’re processed by your stack. Or you can use Amazon SNS to fan out your message processing in parallel for different purposes, like adding a watermark to an image, generating a thumbnail, etc.
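
The sketch below shows what that decoupling might look like in practice; the queue name and message shape are made-up examples, not part of the original post:

```python
# Hypothetical sketch: buffer write-heavy work (ratings, reviews) in SQS so
# the web tier returns quickly and batch workers drain the queue at their pace.
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="review-events")["QueueUrl"]

# Web tier: enqueue the event instead of writing to the database synchronously.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"post_id": 42, "rating": 5}),
)

# Worker tier: long-poll, process, then delete. The queue depth (an SQS
# metric in CloudWatch) is what a job-observer Auto Scaling policy would
# watch to grow or shrink the worker fleet.
resp = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
)
for msg in resp.get("Messages", []):
    event = json.loads(msg["Body"])  # ...apply the rating in a batched write...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```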

Step 4: Introduce a more robust caching engine. You can use Amazon ElastiCache for Memcached or Redis to reduce the load on your database. Memcached and Redis have different use cases: if you can afford to lose the cache and recover it from your database, use Memcached; if you are looking for more robust data persistence and complex data structures, use Redis. In AWS, these are managed services, which means AWS takes care of the operational work for you, and you can also deploy them on your on-premises instances or use a hybrid approach.
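
A typical way to use such a cache is the cache-aside pattern, sketched below; the endpoint, key scheme, and TTL are illustrative assumptions:

```python
# Hypothetical sketch of the cache-aside pattern against an ElastiCache
# Redis endpoint (the hostname here is a placeholder).
import json
import redis

cache = redis.Redis(host="my-cluster.abc123.use1.cache.amazonaws.com", port=6379)

def get_post(post_id, fetch_from_db):
    key = f"post:{post_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: no database round trip
    post = fetch_from_db(post_id)            # cache miss: read from the database
    cache.setex(key, 300, json.dumps(post))  # repopulate with a 5-minute TTL
    return post
```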

Step 5: Scale your server. If there are still issues, it’s time to scale your server. For the greatest cost-effectiveness and unlimited scalability, I suggest always using horizontal scaling. However, in some cases, such as databases, vertical scaling may be a better choice until you are ready for sharding; or use Amazon Aurora Serverless for variable workloads. It is wise to use Auto Scaling to manage your workload effectively for horizontal scaling. To achieve that, you also need to persist sessions outside the individual instances; Amazon DynamoDB can handle session persistence across instances.
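
Here is a minimal sketch of externalized sessions in DynamoDB, so any instance behind the load balancer can serve any user. The table name and schema are assumptions; the table is presumed to already exist with session_id as its partition key:

```python
# Hypothetical sketch: store web sessions in DynamoDB so horizontal scaling
# (instances added or removed by Auto Scaling) never loses session state.
import time
import boto3

sessions = boto3.resource("dynamodb").Table("web-sessions")  # assumed table

def save_session(session_id, data, ttl_seconds=3600):
    # "data" should hold strings/ints (DynamoDB rejects Python floats).
    sessions.put_item(Item={
        "session_id": session_id,
        "data": data,
        "expires_at": int(time.time()) + ttl_seconds,  # works with DynamoDB TTL
    })

def load_session(session_id):
    resp = sessions.get_item(Key={"session_id": session_id})
    return resp.get("Item")  # None when the session does not exist
```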

If your server is on premises, consider creating a multisite architecture, which will help you achieve quick scalability as required and provide a good disaster recovery solution. You can pick and choose individual services like Amazon Route 53, AWS CloudFormation, Amazon SQS, Amazon SNS, Amazon RDS, etc., depending on your needs.

Your multisite architecture will look like the following diagram:

In this architecture, you can run your regular workload on premises, and use your AWS workload as required for scalability and disaster recovery. Using Route 53, you can direct a precise percentage of users to an AWS workload.
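
One way to implement that traffic split is with Route 53 weighted routing; a minimal sketch, in which the hosted zone ID, domain, and targets are all placeholders:

```python
# Hypothetical sketch: weighted DNS records sending ~90% of users to the
# on-premises site and ~10% to AWS. All identifiers below are placeholders.
import boto3

route53 = boto3.client("route53")

def weighted_record(identifier, target, weight):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com.",
            "Type": "CNAME",
            "SetIdentifier": identifier,  # distinguishes the weighted records
            "Weight": weight,             # relative share of traffic
            "TTL": 60,
            "ResourceRecords": [{"Value": target}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={"Changes": [
        weighted_record("on-premises", "origin.example.com", 90),
        weighted_record("aws", "lb-123.us-west-2.elb.amazonaws.com", 10),
    ]},
)
```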

If you decide to move all of your workloads to AWS, the recommended multi-AZ architecture would look like the following:

In this architecture, you are using a multi-AZ distributed workload for high availability. You can have a multi-region setup and use Route 53 to distribute your workload between AWS Regions. CloudFront helps you scale and distribute static content via an S3 bucket, and DynamoDB maintains your application state so that Auto Scaling can apply horizontal scaling without loss of session data. At the database layer, RDS with a multi-AZ standby provides high availability, and read replicas help achieve scalability.

This is a high-level strategy to help you think through the scalability of your workload by using AWS, even if your workload is on premises and not in the cloud…yet.

I highly recommend creating a hybrid, multisite model by placing a replica of your on-premises environment in a public cloud like AWS, and using the Amazon Route 53 DNS service and Elastic Load Balancing to route traffic between the on-premises and cloud environments. AWS now supports load balancing between AWS and on-premises environments, so you can scale your cloud environment up quickly whenever required, then scale it back down by applying Auto Scaling and placing a threshold on your on-premises traffic using Route 53.

[$] Deadline scheduling part 1 — overview and theory

Post Syndicated from corbet original https://lwn.net/Articles/743740/rss

The deadline scheduler enables the user to specify a realtime task’s requirements using well-defined realtime abstractions, allowing the system to make the best scheduling decisions and guaranteeing the scheduling of realtime tasks even in higher-load systems. This article, the first in a series of two, provides an introduction to realtime scheduling (deadline scheduling in particular) and some of the theory behind it.

Early Challenges: Managing Cash Flow

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/managing-cash-flow/

Cash flow projection charts

This post by Backblaze’s CEO and co-founder Gleb Budman is the eighth in a series about entrepreneurship. You can choose posts in the series from the list below:

  1. How Backblaze got Started: The Problem, The Solution, and the Stuff In-Between
  2. Building a Competitive Moat: Turning Challenges Into Advantages
  3. From Idea to Launch: Getting Your First Customers
  4. How to Get Your First 1,000 Customers
  5. Surviving Your First Year
  6. How to Compete with Giants
  7. The Decision on Transparency
  8. Early Challenges: Managing Cash Flow


Running out of cash is one of the quickest ways for a startup to go out of business. When you are starting a company the question of where to get cash is usually the top priority, but managing cash flow is critical for every stage in the lifecycle of a company. As a primarily bootstrapped but capital-intensive business, managing cash flow at Backblaze was and still is a key element of our success and requires continued focus. Let’s look at what we learned over the years.

Raising Your Initial Funding

When starting a tech business in Silicon Valley, the default assumption is that you will immediately try to raise venture funding. There are certainly many advantages to raising funding — not the least of which is that you don’t need to be cash-flow positive since you have cash in the bank and the expectation is that you will have a “burn rate,” i.e. you’ll be spending more than you make.

Note: While you’re not expected to be cash-flow positive, that doesn’t mean you don’t have to worry about cash. Cash-flow management will determine your burn rate. Whether you can get to cash-flow breakeven or need to raise another round of funding is a direct byproduct of your cash flow management.

Also, raising funding takes time (most successful fundraising cycles take 3-6 months start-to-finish), and time at a startup is in short supply. Constantly trying to raise funding can take away from product development and pursuing growth opportunities. If you’re not successful in raising funding, you then have to either shut down or find an alternate method of funding the business.

Sources of Funding

Depending on the stage of the company, type of company, and other factors, you may have access to different sources of funding. Let’s list a number of them:

Customers

Sales — the best kind of funding. It is non-dilutive, doesn’t have to be paid back, and is a direct metric of the success of your company.

Pre-Sales — some customers may be willing to pay you for a product in beta, a test, or pre-pay for a product they’ll receive when finished. Pre-Sales income also is great because it shares the characteristics of cash from sales, but you get the cash early. It also can be a good sign that the product you’re building fills a market need. We started charging for Backblaze computer backup while it was still in private beta, which allowed us to not only collect cash from customers, but also test the billing experience and users’ real desire for the service.

Services — if you’re a service company and customers are paying you for that, great. You can effectively scale for the number of hours available in a day. As demand grows, you can add more employees to increase the total number of billable hours.

Note: If you’re a product company and customers are paying you to consult, that can provide much needed cash, and could provide feedback toward the right product. However, it can also distract from your core business, send you down a path where you’re building a product for a single customer, and addict you to a path that prevents you from building a scalable business.

Investors

Yourself — you likely are putting your time into the business, and deferring salary in the process. You may also put your own cash into the business either as an investment or a loan.

Angels — angels are ideal as early investors since they are used to investing in businesses with little to no traction. AngelList is a good place to find them, though finding people you’re connected with through someone that knows you well is best.

Crowdfunding — a component of the JOBS Act permitted entrepreneurs to raise money from nearly anyone since May 2016. The SEC imposes limits on both investors and the companies. This article goes into some depth on the options and sites available.

VCs — VCs are ideal for companies that need to raise at least a few million dollars and intend to build a business that will be worth over $1 billion.

Debt

Friends & Family — F&F are often the first people to give you money because they are investing in you. It’s great to have some early supporters, but it also can be risky to take money from people who aren’t used to the risks. The key advice here is to only take money from people who won’t mind losing it. If someone is talking about using their children’s college funds or borrowing from their 401k, say ‘no thank you’ — even if they’re sure they want to loan you money.

Bank Loans — a variety of loan types exist, but most either require the company to have been operational for a couple years, be able to borrow against money the company has or is making, or be able to get a personal guarantee from the founders whereby their own credit is on the line. Fundera provides a good overview of loan options and can help secure some, but most will not be an option for a brand new startup.

Grants

Government — in some areas there is the potential for government grants to facilitate research. The SBIR program facilitates some such grants.

At Backblaze, we used a number of these options:

• Investors/Yourself
We loaned a cumulative total of a couple hundred thousand dollars to the company and invested our time by going without a salary for a year and a half.
• Customers/Pre-Sales
We started selling the Backblaze service while it was still in beta.
• Customers/Sales
We launched v1.0 and kept selling.
• Investors/Angels
After a year and a half, we raised $370k from 11 angels. All of them were either people whom we knew personally or were a strong recommendation from a mutual friend.
• Debt/Loans
After a couple years we were able to get equipment leases whereby the Storage Pods and hard drives were used as collateral to secure the lease on them.
• Investors/VCs
After five years we raised $5m from TMT Investments to add to the balance sheet and invest in growth.

The variety and quantity of sources we used is by no means uncommon.

GAAP vs. Cash

Most companies start tracking financials based on cash, and as they scale they switch to GAAP (Generally Accepted Accounting Principles). Cash is easier to track — we got paid $XXXX and spent $YYY — and as often mentioned, is required for the business to stay alive. GAAP has more subtlety and complexity, but provides a clearer picture of how the business is really doing. Backblaze was on a ‘cash’ system for the first few years, then switched to GAAP. For this post, I’m going to focus on things that help cash flow, not GAAP profitability.

Stages of Cash Flow Management

All-spend

In a pure service business (e.g. a sole-proprietor law firm), you may have no expenses other than your time, so this stage doesn’t exist. However, in a product business there is a period of time where you are building the product and have nothing to sell. You have zero cash coming in, but have cash going out. Your cash flow is completely negative and you need funds to cover that.

Sales-generating

Starting to see cash come in from customers is thrilling. I initially had our system set up to email me with every $5 payment we received. You’re making sales, but not covering expenses.

Ramen-profitable

But it takes a lot of $5 payments to pay for servers and salaries, so for a while expenses are likely to outstrip sales. Getting to ramen-profitable is a critical stage where sales cover the business expenses and are “paying enough for the founders to eat ramen.” This extends the runway for a business, but is not completely sustainable, since presumably the founders can’t (or won’t) live forever on a subsistence salary.

Business-profitable

This is the ultimate stage whereby the business is truly profitable, including paying everyone market-rate salaries. A business at this stage is self-sustaining. (Of course, market shifts and plenty of other challenges can kill the business, but cash-flow issues alone will not.)

Note, I’m using the word ‘profitable’ here to mean profitable on a cash basis.

Backblaze was in the all-spend stage for just over a year, during which time we built the service and hadn’t yet made the service available to customers. Backblaze was in the sales-generating stage for nearly another year before the company was barely ramen-profitable where sales were covering the company expenses and paying the founders minimum wage. (I say ‘barely’ since minimum wage in the SF Bay Area is arguably never subsistence.) It took almost three more years before the company was business-profitable, paying everyone including the founders market-rate.

Cash Flow Forecasting

When raising funding it’s helpful to think of milestones reached. You don’t necessarily need enough cash on day one to last for the next 100 years of the company. Some good milestones to consider are how much cash you need to prove there is a market need, prove you can build a product to meet that need, or get to ramen-profitable.

Two things to consider:

1) Unit Economics (COGS)

If your product is 100% software, this may not be relevant. Once software is built it costs effectively nothing to deliver the product to one customer or one million customers. However, in most businesses there is some incremental cost to provide the product. If you’re selling a hardware device, perhaps you sell it for $100 but it costs you $50 to make it. This is called “COGS” (Cost of Goods Sold).

Many products rely on cloud services where the costs scale with growth. That model works great, but it’s still important to understand what the costs are for the cloud service you use per unit of product you sell.

Support is often done by the founders early-on in a business, but that is another real cost to factor in and estimate on a per-user basis. Taking all of the per unit costs combined, you may charge $10/month/user for your service, but if it costs you $7/month/user in cloud services, you’re only netting $3/month/user.

2) Operating Expenses (OpEx)

These are expenses that don’t scale with the number of product units you sell. Typically this includes research & development, sales & marketing, and general & administrative expenses. Presumably there is a certain level of these functions required to build the product, market it, sell it, and run the organization. You can choose to invest or cut back on these, but you’ll still make the same amount per product unit.

Incremental Net Profit Per Unit

If you’ve calculated your COGS and your unit economics are “upside down,” where the amount you charge is less than what it costs you to provide your service, it’s worth thinking hard about how that’s going to change over time. If it will not change, there is no scale that will make the business work. Presuming you do make money on each unit of product you sell — what is sometimes referred to as “Contribution Margin” — consider how many of those product units you need to sell to cover your operating expenses as described above.

Calculating Your Profit

The math on getting to ramen-profitable is simple:

(Number of Product Units Sold x Contribution Margin) - Operating Expenses = Profit

If your operating expenses include subsistence salaries for the founders and profit > $0, you’re ramen-profitable.
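
To make the arithmetic tangible, here is a tiny worked example; the formula is from the post, but all the figures are invented for illustration:

```python
# Hypothetical figures plugged into the ramen-profitability formula above.
units_sold = 3000            # paying users
price = 10.0                 # $ charged per user per month
cogs = 7.0                   # $ per user per month in cloud + support costs
contribution_margin = price - cogs   # $3 earned per user per month

operating_expenses = 8000.0  # $ per month, incl. subsistence founder salaries

profit = units_sold * contribution_margin - operating_expenses
print(f"Monthly profit: ${profit:,.0f}")  # $1,000 > $0 -> ramen-profitable
```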

Improving Cash Flow

Having access to sources of cash, whether from selling to customers or other methods, is excellent. But needing less cash gives you more choices and allows you to either dilute less, owe less, or invest more.

There are two ways to improve cash flow:

1) Collect More Cash

The best way to collect more cash is to provide more value to your customers and as a result have them pay you more. Additional features/products/services can allow this. However, you can also collect more cash by changing how you charge for your product. If you have a subscription, changing from charging monthly to yearly dramatically improves your cash flow. If you have a product that customers use up, selling a year’s supply instead of selling them one-by-one can help.

2) Spend Less Cash

Reducing COGS is a fantastic way to spend less cash in a scalable way. If you can do this without harming the product or customer experience, you win. There are a myriad of ways to also reduce operating expenses, including taking sub-market salaries, using your home instead of renting office space, staying focused on your core product, etc.

Ultimately, collecting more and spending less cash dramatically simplifies the process of getting to ramen-profitable and later to business-profitable.

Be Careful (Why GAAP Matters)

A word of caution: while running out of cash will put you out of business immediately, overextending yourself will likely put you out of business not much later. GAAP shows how a business is really doing; cash doesn’t. If you only focus on cash, it is possible to commit yourself to both delivering products and repaying loans in the future in an unsustainable fashion. If you’re taking out loans, watch the total balance and monthly payments you’re committing to. If you’re asking customers for pre-payment, make sure you believe you can deliver on what they’ve paid for.

Summary

There are numerous challenges to building a business, and ensuring you have enough cash is amongst the most important. Having the cash to keep going lets you keep working on all of the other challenges. The frameworks above were critical for maintaining Backblaze’s cash flow and cash balance. Hopefully you can take some of the lessons we learned and apply them to your business. Let us know what works for you in the comments below.


Weekly roundup: Happy birthday

Post Syndicated from Eevee original https://eev.ee/dev/2018/01/15/weekly-roundup-happy-birthday/

It was my birthday! I need to write a birthday post argh.

  • anise!!: Surprise! Mostly Anise. I refactored dialogue to be a bit less of a hairball; started making item pickups actually work; decided to reverse a former decision and expand the world a little bit (which unfortunately means the world map doesn’t quite fit all on the screen at once, oh well); finally got around to making animated tiles work (!!!); experimented with making sound effects in SunVox, with mixed success; and just general working on level design, which takes far, far longer than I ever expected.

  • misc: I wrote a userscript to highlight the game being currently played at GDQ, though it’s not quite so useful now that GDQ is over.

    I realize I don’t really know where a tiny one-off thing like this should live, and I’ve left a trail of a good few of them. Hmm. I guess I could’ve written a release post for it, but it also seems like it should be in an index of stuff somewhere…?

  • ???: ???

US Govt Brands Torrent, Streaming & Cyberlocker Sites As Notorious Markets

Post Syndicated from Andy original https://torrentfreak.com/us-govt-brands-torrent-streaming-cyberlocker-sites-as-notorious-markets-180115/

In its annual “Out-of-Cycle Review of Notorious Markets,” the office of the United States Trade Representative (USTR) has named a long list of websites said to be involved in online piracy.

The list is compiled with high-level input from various trade groups, including the MPAA and RIAA who both submitted their recommendations (1,2) during early October last year.

With the word “allegedly” used more than two dozen times in the report, the US government notes that its report does not constitute cast-iron proof of illegal activity. However, it urges the countries from where the so-called “notorious markets” operate to take action where they can, while putting owners and facilitators on notice that their activities are under the spotlight.

“A goal of the List is to motivate appropriate action by owners, operators, and service providers in the private sector of these and similar markets, as well as governments, to reduce piracy and counterfeiting,” the report reads.

“USTR highlights the following marketplaces because they exemplify global counterfeiting and piracy concerns and because the scale of infringing activity in these marketplaces can cause significant harm to U.S. intellectual property (IP) owners, consumers, legitimate online platforms, and the economy.”

The report begins with a page titled “Issue Focus: Illicit Streaming Devices.” Unsurprisingly, particularly given their place in dozens of headlines last year, the segment focuses on the set-top box phenomenon. The piece doesn’t list any apps or software tools as such but highlights the general position, claiming a cost to the US entertainment industry of $4-5 billion a year.

Torrent Sites

In common with previous years, the USTR goes on to list several of the world’s top torrent sites, but due to changed circumstances, others have been delisted. ExtraTorrent, which shut down in May 2017, is one such example.

As the world’s most famous torrent site, The Pirate Bay gets a prominent mention, with the USTR noting that the site is of “symbolic importance as one of the longest-running and most vocal torrent sites.” The USTR underlines the site’s resilience by noting its hydra-like form while revealing an apparent secret concerning its hosting arrangements.

“The Pirate Bay has allegedly had more than a dozen domains hosted in various countries around the world, applies a reverse proxy service, and uses a hosting provider in Vietnam to evade further enforcement action,” the USTR notes.

Other torrent sites singled out for criticism include RARBG, which was nominated for the listing by the movie industry. According to the USTR, the site is hosted in Bosnia and Herzegovina and has changed hosting services to prevent shutdowns in recent years.

1337x.to and the meta-search engine Torrentz2 are also given a prime mention, with the USTR noting that they are “two of the most popular torrent sites that allegedly infringe U.S. content industry’s copyrights.” Russia’s RuTracker is also targeted for criticism, with the government noting that it’s now one of the most popular torrent sites in the world.

Streaming & Cyberlockers

While torrent sites are still important, the USTR reserves considerable space in its report for streaming portals and cyberlocker-type services.

4Shared.com, a file-hosting site that has been targeted by dozens of millions of copyright notices, is reportedly no longer able to use major US payment providers. Nevertheless, the British Virgin Islands company still collects significant sums from premium accounts, advertising, and offshore payment processors, USTR notes.

Cyberlocker Rapidgator gets another prominent mention in 2017, with the USTR noting that the Russian-hosted platform generates millions of dollars every year through premium memberships while employing rewards and affiliate schemes.

Due to its increasing popularity as a hosting and streaming operation, Openload.co (Romania) is now a big target for the USTR. “The site is used frequently in combination with add-ons in illicit streaming devices. In November 2017, users visited Openload.co a staggering 270 million times,” the USTR writes.

Owned by a Swiss company and hosted in the Netherlands, the popular site Uploaded is also criticized by the US alongside France’s 1Fichier.com, which allegedly hosts pirate games while being largely unresponsive to takedown notices. Dopefile.pk, a Pakistan-based storage outfit, is also highlighted.

On the video streaming front, it’s perhaps no surprise that the USTR focuses on sites like FMovies (Sweden), GoStream (Vietnam), Movie4K.tv (Russia) and PrimeWire. An organization collectively known as the MovShare group which encompasses Nowvideo.sx, WholeCloud.net, NowDownload.cd, MeWatchSeries.to and WatchSeries.ac, among others, is also listed.

Unauthorized music / research papers

While most of the above are either focused on video or feature it as part of their repertoire, other sites are listed for their attention to music. Convert2MP3.net is named as one of the most popular stream-ripping sites in the world and is highlighted due to the prevalence of YouTube-downloader sites and the 2017 demise of YouTube-MP3.

“Convert2MP3.net does not appear to have permission from YouTube or other sites and does not have permission from right holders for a wide variety of music represented by major U.S. labels,” the USTR notes.

Given the amount of attention the site has received in 2017 as ‘The Pirate Bay of Research’, Libgen.io and Sci-Hub.io (not to mention the endless proxy and mirror sites that facilitate access) are given a detailed mention in this year’s report.

“Together these sites make it possible to download — all without permission and without remunerating authors, publishers or researchers — millions of copyrighted books by commercial publishers and university presses; scientific, technical and medical journal articles; and publications of technological standards,” the USTR writes.

Service providers

But it’s not only sites that are being put under pressure. Following a growing list of nominations in previous years, Swiss service provider Private Layer is again singled out as a rogue player in the market for hosting 1337x.to and Torrentz2.eu, among others.

“While the exact configuration of websites changes from year to year, this is the fourth consecutive year that the List has stressed the significant international trade impact of Private Layer’s hosting services and the allegedly infringing sites it hosts,” the USTR notes.

“Other listed and nominated sites may also be hosted by Private Layer but are using reverse proxy services to obfuscate the true host from the public and from law enforcement.”

The USTR notes Switzerland’s efforts to close a legal loophole that restricts enforcement and looks forward to a positive outcome when the draft amendment is considered by parliament.

Perhaps a little surprisingly given its recent anti-piracy efforts and overtures to the US, Russia’s leading social network VK.com again gets a place on the new list. The USTR recognizes VK’s efforts but insists that more needs to be done.

Social networking and e-commerce

“In 2016, VK reached licensing agreements with major record companies, took steps to limit third-party applications dedicated to downloading infringing content from the site, and experimented with content recognition technologies,” the USTR writes.

“Despite these positive signals, VK reportedly continues to be a hub of infringing activity and the U.S. motion picture industry reports that they find thousands of infringing files on the site each month.”

Finally, in addition to traditional pirate sites, the US also lists online marketplaces that allegedly fail to meet appropriate standards. Re-added to the list in 2016 after a brief hiatus in 2015, China’s Alibaba is listed again in 2017. The development provoked an angry response from the company.

Describing his company as a “scapegoat”, Alibaba Group President Michael Evans said that his platform had achieved a 25% drop in takedown requests and has even been removing infringing listings before they make it online.

“In light of all this, it’s clear that no matter how much action we take and progress we make, the USTR is not actually interested in seeing tangible results,” Evans said in a statement.

The full list of sites in the Notorious Markets Report 2017 (pdf) can be found below.

– 1fichier.com – (cyberlocker)
– 4shared.com – (cyberlocker)
– convert2mp3.net – (stream-ripper)
– Dhgate.com (e-commerce)
– Dopefile.pl – (cyberlocker)
– Firestorm-servers.com (pirate gaming service)
– Fmovies.is, Fmovies.se, Fmovies.to – (streaming)
– Gostream.is, Gomovies.to, 123movieshd.to (streaming)
– Indiamart.com (e-commerce)
– Kinogo.club, kinogo.co (streaming host, platform)
– Libgen.io, sci-hub.io, libgen.pw, sci-hub.cc, sci-hub.bz, libgen.info, lib.rus.ec, bookfi.org, bookzz.org, booker.org, booksc.org, book4you.org, bookos-z1.org, booksee.org, b-ok.org (research downloads)
– Movshare Group – Nowvideo.sx, wholecloud.net, auroravid.to, bitvid.sx, nowdownload.ch, cloudtime.to, mewatchseries.to, watchseries.ac (streaming)
– Movie4k.tv (streaming)
– MP3VA.com (music)
– Openload.co (cyberlocker / streaming)
– 1337x.to (torrent site)
– Primewire.ag (streaming)
– Torrentz2, Torrentz2.me, Torrentz2.is (torrent site)
– Rarbg.to (torrent site)
– Rebel (domain company)
– Repelis.tv (movie and TV linking)
– RuTracker.org (torrent site)
– Rapidgator.net (cyberlocker)
– Taobao.com (e-commerce)
– The Pirate Bay (torrent site)
– TVPlus, TVBrowser, Kuaikan (streaming apps and addons, China)
– Uploaded.net (cyberlocker)
– VK.com (social networking)


Court Expands Dutch Pirate Bay Blockade to More ISPs, For Now

Post Syndicated from Ernesto original https://torrentfreak.com/court-expands-dutch-pirate-bay-blockade-to-more-isps-180113/

The Pirate Bay is arguably the most widely blocked website on the Internet.

ISPs from all over the world have been ordered by courts to prevent users from accessing the torrent site, and this week the list has grown a bit longer.

A Dutch court has ruled that local Internet providers KPN, Tele2, T-Mobile, Zeelandnet and CAIW must block the site within ten days. The verdict follows a similar decision from September last year, where Ziggo and XS4All were ordered to do the same.

The blockade applies to several IP addresses and more than 150 domain names that are used by the notorious torrent site. Several of the ISPs had warned the court about the dangers of overblocking, but these concerns were rejected.

While most Dutch customers will soon be unable to access The Pirate Bay directly, the decision is not final yet; that awaits the Supreme Court’s pending ruling, the climax of a legal battle that started eight years ago.

A Dutch court first issued an order to block The Pirate Bay in 2012, but this decision was overturned two years later. Anti-piracy group BREIN then took the matter to the Supreme Court, which subsequently referred the case to the EU Court of Justice, seeking further clarification.

After a careful review of the case, the EU Court of Justice decided last year that The Pirate Bay can indeed be blocked.

The top EU court ruled that although The Pirate Bay’s operators don’t share anything themselves, they knowingly provide users with a platform to share copyright-infringing links. This can be seen as “an act of communication” under the EU Copyright Directive.

This put the case back before the Dutch Supreme Court, which has yet to decide on the matter.

BREIN, however, wanted a blocking decision more quickly and requested preliminary injunctions, like the one issued this week. These injunctions will only be valid until the final verdict is handed down.

A copy of the most recent court order is available here (pdf).


Announcing our new beta for the AWS Certified Security – Specialty exam

Post Syndicated from Janna Pellegrino original https://aws.amazon.com/blogs/architecture/announcing-our-new-beta-for-the-aws-certified-security-specialty-exam/

Take the AWS Certified Security – Specialty beta exam for the chance to be among the first to hold this new AWS Certification. This beta exam allows experienced cloud security professionals to demonstrate and validate their expertise. Register today – this beta exam will only be available from January 15 to March 2!

About the exam

This beta exam validates that the successful candidate can effectively demonstrate knowledge of how to secure the AWS platform. The exam covers incident response, logging and monitoring, infrastructure security, identity and access management, and data protection.

The exam validates:

  • Familiarity with regional- and country-specific security and compliance regulations and meta issues that these regulations embody.
  • An understanding of specialized data classifications and AWS data protection mechanisms.
  • An understanding of data encryption methods and AWS mechanisms to implement them.
  • An understanding of secure Internet protocols and AWS mechanisms to implement them.
  • A working knowledge of AWS security services and features of services to provide a secure production environment.
  • Competency gained from two or more years of production deployment experience using AWS security services and features.
  • Ability to make tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements.
  • An understanding of security operations and risk.

Learn more and register >>

Who is eligible

The beta is open to anyone who currently holds an Associate or Cloud Practitioner certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

How to prepare

We have training and other resources to help you prepare for the beta exam:

AWS Security Fundamentals Digital | 3 Hours
This course introduces you to fundamental cloud computing and AWS security concepts, including AWS access control and management, governance, logging, and encryption methods. It also covers security-related compliance protocols and risk management strategies, as well as procedures related to auditing your AWS security infrastructure.

Security Operations on AWS Classroom | 3 Days
This course demonstrates how to efficiently use AWS security services to stay secure and compliant in the AWS Cloud. The course focuses on the AWS-recommended security best practices that you can implement to enhance the security of your data and systems in the cloud. The course highlights the security features of AWS key services including compute, storage, networking, and database services.

Online resources for Cloud Security and Compliance

Review documentation, whitepapers, and articles & tutorials related to cloud security and compliance.

Learn more and register >>

Please contact us if you have questions about exam registration.

Good luck!

Europol Hits Huge 500,000 Subscriber Pirate IPTV Operation

Post Syndicated from Andy original https://torrentfreak.com/europol-hits-huge-500000-subscriber-pirate-iptv-operation-180111/

Live TV is in massive demand but accessing all content in a particular region can be a hugely expensive proposition, with traditional broadcasting monopolies demanding large subscription fees.

For millions around the world, this ‘problem’ can be easily circumvented. Pirate IPTV operations, which supply thousands of otherwise subscription channels via the Internet, are on the increase. They’re accessible for just a few dollars, euros, or pounds per month, slashing bills versus official providers on a grand scale.

This week, however, police forces around Europe coordinated to target what they claim is one of the world’s largest illicit IPTV operations. The investigation was launched last February by Europol and on Tuesday coordinated actions were carried out in Cyprus, Bulgaria, Greece, and the Netherlands.

Three suspects were arrested in Cyprus – two in Limassol (aged 43 and 44) and one in Larnaca (aged 53). All are alleged to be part of an international operation to illegally broadcast around 1,200 channels of pirated content worldwide. Some of the channels offered were illegally sourced from Sky UK, Bein Sports, Sky Italia, and Sky DE.

If initial reports are to be believed, the reach of the IPTV service was huge. Figures usually need to be taken with a pinch of salt, but information suggests the service had more than 500,000 subscribers, each paying around 10 euros per month. (Note: how that relates to the alleged five million euros per year in revenue is yet to be made clear.)

Police action was spread across the continent, with at least nine separate raids, including in the Netherlands where servers were uncovered. However, it was determined that these were in place to hide the true location of the operation’s main servers. Similar ‘front’ servers were also deployed in other regions.

The main servers behind the IPTV operation were located in Petrich, a small town in Blagoevgrad Province, southwestern Bulgaria. No details have been provided by the authorities but TF is informed that the website of a local ISP, Megabyte-Internet, from where pirate IPTV has been broadcast for at least the past several months, disappeared on Tuesday. It remains offline this morning.

The company did not respond to our request for comment and there’s no suggestion that it’s directly involved in any illegal activity. However, its Autonomous System (AS) number reveals linked IPTV services, none of which appear to be operational today. The ISP is also listed on sites where ‘pirate’ IPTV channel playlists are compiled by users.

According to sources in Cyprus, police requested permission from the Larnaca District Court to detain the arrested individuals for eight days. However, local news outlet Philenews said that any decision would be postponed until this morning, since one of the three suspects, an English Cypriot, required an interpreter which caused a delay.

In addition to prosecutors and defense lawyers, two Dutch investigators from Europol were present in court yesterday. The hearing lasted for six hours and was said to be so intensive that the court stenographer had to be replaced due to overwork.


Validate Your IT Security Expertise with the New AWS Certified Security – Specialty Beta Exam

Post Syndicated from Sara Snedeker original https://aws.amazon.com/blogs/security/validate-your-it-security-expertise-with-the-new-aws-certified-security-specialty-beta-exam/

AWS Training and Certification image

If you are an experienced cloud security professional, you can demonstrate and validate your expertise with the new AWS Certified Security – Specialty beta exam. This exam allows you to demonstrate your knowledge of incident response, logging and monitoring, infrastructure security, identity and access management, and data protection. Register today – this beta exam will be available only from January 15 to March 2, 2018.

By taking this exam, you can validate your:

  • Familiarity with region-specific and country-specific security and compliance regulations and meta issues that these regulations include.
  • Understanding of data encryption methods and secure internet protocols, and the AWS mechanisms to implement them.
  • Working knowledge of AWS security services to provide a secure production environment.
  • Ability to make trade-off decisions with regard to cost, security, and deployment complexity when given a set of application requirements.

See the full list of security knowledge you can validate by taking this beta exam.

Who is eligible?

The beta exam is open to anyone who currently holds an AWS Associate or Cloud Practitioner certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

How to prepare

You can take the following courses and use AWS cloud security resources and compliance resources to prepare for this exam.

AWS Security Fundamentals (digital, 3 hours)
This digital course introduces you to fundamental cloud computing and AWS security concepts, including AWS access control and management, governance, logging, and encryption methods. It also covers security-related compliance protocols and risk management strategies, as well as procedures related to auditing your AWS security infrastructure.

Security Operations on AWS (classroom, 3 days)
This instructor-led course demonstrates how to efficiently use AWS security services to help stay secure and compliant in the AWS Cloud. The course focuses on the AWS-recommended security best practices that you can implement to enhance the security of your AWS resources. The course highlights the security features of AWS compute, storage, networking, and database services.

If you have questions about this new beta exam, contact us.

Good luck with the exam!

– Sara

RuTracker Reveals Innovative Plan For Users to Subvert ISP Blocking

Post Syndicated from Andy original https://torrentfreak.com/rutracker-reveals-innovative-plan-for-users-to-subvert-isp-blocking-180110/

As Russia’s largest torrent site and one that earned itself a mention in TF’s list of most popular torrent sites 2018, RuTracker is continuously under fire.

The site has an extremely dedicated following but Russia’s telecoms watchdog, spurred on by copyright holders brandishing court rulings, does everything in its power to ensure that people can’t access the site easily.

As a result, RuTracker’s main domains are blocked by all ISPs, meaning that people have to resort to VPNs or the many dozens of proxy and mirror sites that have been set up to facilitate access to the popular tracker.

While all of these methods used to work just fine, new legislation that came into force during October means that mirror and proxy sites can be added to block lists without copyright holders having to return to court. And, following legislation introduced in November, local VPN services are forbidden from providing access to blocked sites.

While RuTracker has always insisted that web blockades have little effect on the numbers of people sharing content, direct traffic to their main domains has definitely suffered. To solve this problem and go some way towards mitigating VPN and proxy bans, the site has just come up with a new plan to keep the torrents flowing.

The scheme was quietly announced, not on RuTracker’s main forum, but to a smaller set of users on local site Leprosorium. The idea was that a quieter launch there would allow for controlled testing before a release to the masses. The project is called My.RuTracker and here’s how it works.

Instead of blocked users fruitlessly trying to find public circumvention methods that are blocked as soon as they are spotted, they are invited to register their own domains. These can be single-use, for the person who registers them, but it’s envisioned that they’ll be shared among friends, family, and online groups, to make better use of the resource.

Once domains are registered, users are invited to contact a special user account on the RuTracker site (operated by the site’s operators) which will provide them with precise technical details on how to set up their domain (.ru domains are not allowed) to gain access to RuTracker.

“In response, after a while (usually every other day), a list of NS-addresses will be sent to the registrar’s domain settings. Under this scheme, the user domain will be redirected to the RuTracker site via a dynamic IP address: this will avoid blocking the torrent tracker for a particular IP address,” the scheme envisages.

According to local news resource Tjournal, 62 personal mirrors were launched following the initial appeal, with the operators of RuTracker now planning to publicly announce the project to their community. As more are added, the site will keep track of traffic from each of the personal “mirrors” for balancing the load on the site.

At least in theory, this seems like a pretty innovative scheme. Currently, the authorities rely on the scale and public awareness of a particular proxy or mirror in order to earmark it for blocking. This much more decentralized plan, in which only small numbers of people should know each domain, seems like a much more robust system – at least until the authorities and indeed the law catches up.

And so the cat-and-mouse game continues.


Tech Companies Meet EC to Discuss Removal of Pirate & Illegal Content

Post Syndicated from Andy original https://torrentfreak.com/tech-companies-meet-ec-to-discuss-removal-of-pirate-illegal-content-180109/

Thousands, perhaps millions, of pieces of illegal content flood onto the Internet every single day, a problem that’s only increasing with each passing year.

In the early days of the Internet very little was done to combat the problem but with the rise of social media and millions of citizens using it to publish whatever they like – not least terrorist propaganda and racist speech – governments around the world are beginning to take notice.

Of course, running parallel is the multi-billion dollar issue of intellectual property infringement. Eighteen years on from the first wave of mass online piracy, the majority of popular movies, TV shows, games, software and books are still available to download.

Over the past couple of years and increasingly in recent months, there have been clear signs that the EU in particular wishes to collectively mitigate the spread of all illegal content – from ISIS videos to pirated Hollywood movies – with assistance from major tech companies.

Google, YouTube, Facebook and Twitter are all expected to do their part, with the looming stick of legislation behind the collaborative carrots, should they fail to come up with a solution.

To that end, five EU Commissioners – Dimitris Avramopoulos, Elżbieta Bieńkowska, Věra Jourová, Julian King and Mariya Gabriel – will meet today in Brussels with representatives of several online platforms to discuss progress made in dealing with the spread of the aforementioned material.

In a joint statement together with EC Vice-President Andrus Ansip, the Commissioners describe all illegal content as a threat to security, safety, and fundamental rights, demanding a “collective response – from all actors, including the internet industry.”

They note that online platforms have committed significant resources towards removing violent and extremist content, including via automated removal, but more needs to be done to tackle the issue.

“This is starting to achieve results. However, even if tens of thousands of pieces of illegal content have been taken down, there are still hundreds of thousands more out there,” the Commissioners write.

“And removal needs to be speedy: the longer illegal material stays online, the greater its reach, the more it can spread and grow. Building on the current voluntary approach, more efforts and progress have to be made.”

The Commission says it is relying on online platforms such as Google and Facebook to “step up and speed up their efforts to tackle these threats quickly and comprehensively.” This should include closer cooperation with law enforcement, sharing of information with other online players, plus action to ensure that once taken down, illegal content does not simply reappear.

While it’s clear that the EC would prefer to work collaboratively with the platforms to find a solution to the illegal content problem, there is, as expected, the veiled threat of them being compelled by law to do so, should they fall short of their responsibilities.

“We will continue to promote cooperation with social media companies to detect and remove terrorist and other illegal content online, and if necessary, propose legislation to complement the existing regulatory framework,” the EC warns.

Today’s discussions run in parallel with others specifically targeting intellectual property abuses. In late November, the EC presented a set of new measures to ensure that copyright holders are well protected, both online and in the physical realm.

A key aim is to focus on large-scale facilitators, such as pirate site operators, while cutting their revenue streams.

“The Commission seeks to deprive commercial-scale IP infringers of the revenue flows that make their criminal activity lucrative – this is the so-called ‘follow the money’ approach which focuses on the ‘big fish’ rather than individuals,” the Commission explained.

This presentation followed a proposal last September in which the EC advocated the take-down-stay-down principle: pirate content is taken down, automated filters ensure infringement is tackled proactively, and measures are taken against repeat infringers.

Again, the EC warned that should cooperation with Internet platforms fail to come up with results, future legislation cannot be ruled out.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

TVAddons and ZemTV Ask Court to Dismiss U.S. Piracy Lawsuit

Post Syndicated from Ernesto original https://torrentfreak.com/tvaddons-and-zemtv-ask-court-to-dismiss-u-s-piracy-lawsuit-180108/

Last year, American satellite and broadcast provider Dish Network targeted two well-known players in the third-party Kodi add-on ecosystem.

In a complaint filed in a federal court in Texas, the ZemTV add-on and the TVAddons library were accused of copyright infringement. As a result, both face up to $150,000 in damages for each offense.

While the case was filed in Texas, neither of the defendants lives there, or even in the United States. The owner and operator of TVAddons is Adam Lackman, who resides in Montreal, Canada. ZemTV’s developer Shahjahan Durrani is even further away, in London, UK.

Their limited connection to Texas is reason for the case to be dismissed, according to the legal team of the two defendants. They are represented by attorneys Erin Russel and Jason Sweet, who asked the Court to drop the case late last week.

According to their motion, the Texas District Court does not have jurisdiction over the two defendants.

“Lackman and Durrani have never been residents or citizens of Texas; they have never owned property in Texas; they have never voted in Texas; they have never personally visited Texas; they have never directed any business activity of any kind to anyone in Texas […] and they have never earned income in Texas,” the motion reads.

Technically, defendants can be sued in a district where they have never set foot, as long as they “directed actions” at the state or its citizens.

According to Dish, this is the case here since both defendants made their services available to local residents, among other things. However, the defense team argues that’s not enough to establish jurisdiction in this case.

“Plaintiff’s conclusory allegation that Lackman and Durrani marketed, made available, and distributed ZemTV service and the ZemTV add-on to consumers in the State of Texas and the Southern District of Texas is misleading at best,” the attorneys write.

If the case were to proceed, the defense argues, it would violate the defendants’ due process rights under the US Constitution. Whether or not the infringement claims hold up, Dish has no right to sue them in Texas, according to the defense.

“Defendants are citizens of Canada and Great Britain and have not had sufficient contacts in the State of Texas for this Court to exercise personal jurisdiction over them. To do so would violate the Due Process Clause of the United States Constitution.”

The Court must now decide whether the case can proceed or not. TorrentFreak reached out to TVAddons but the service wishes to refrain from commenting on the proceeding at the moment.

Previously, TVAddons made it clear that it sees the Dish lawsuit as an attempt to destroy the Kodi addon community. One of the methods of attack it mentioned was suing people in foreign jurisdictions.

“Most people don’t have money lying around to hire lawyers in places they’ve never even visited. This means that if a company sues you in a foreign country and you can’t afford a lawyer, you’re screwed even if you did nothing wrong,” TVAddons wrote at the time.

A copy of the motion to dismiss is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

MusE 3.0.0 released

Post Syndicated from ris original https://lwn.net/Articles/743598/rss

Three years after the last stable release, version 3.0 of the MusE MIDI/audio sequencer is now available. As you might expect, there are many changes since the last release, including a switch to Qt5, a new Plugin Path editor in Global Settings, a mixer makeover with lots of fixes, a system-wide move to double precision for all audio paths, and much more.

No Level of Copyright Enforcement Will Ever Be Enough For Big Media

Post Syndicated from Andy original https://torrentfreak.com/no-level-of-copyright-enforcement-will-ever-be-enough-for-big-media-180107/

For more than ten years TorrentFreak has documented a continuous stream of piracy battles so it’s natural that, every now and then, we pause to consider when this war might stop. The answer is always “no time soon” and certainly not in 2018.

When swapping files over the Internet first began it wasn’t a particularly widespread activity. A reasonable amount of content was available, but it was relatively inaccessible. Then peer-to-peer came along and it sparked a revolution.

From the beginning, copyright holders felt that the law would answer their problems, whether that was by suing Napster, Kazaa, or even end users. Some industry players genuinely believed this strategy was just a few steps away from achieving its goals. Just a little bit more pressure and all would be under control.

Then, when the landmark MGM Studios v. Grokster decision was handed down in the studios’ favor during 2005, the excitement online was palpable. As copyright holders rejoiced in this body blow for the pirating masses, file-sharing communities literally shook under the weight of the ruling. For a day, maybe two.

For the majority of file-sharers, the ruling meant absolutely nothing. So what if some company could be held responsible for other people’s infringements? Another will come along, outside of the US if need be, people said. They were right not to be concerned – that’s exactly what happened.

Ever since, this cycle has continued. Eager to stem the tide of content being shared without their permission, rightsholders have advocated stronger anti-piracy enforcement and lobbied for more restrictive interpretations of copyright law. Thus far, however, literally nothing has provided a solution.

One would have thought that given the military-style raid on Kim Dotcom’s Megaupload, a huge void would’ve appeared in the sharing landscape. Instead, the file-locker business took itself apart and reinvented itself in jurisdictions outside the United States. Meanwhile, the BitTorrent scene continued in the background, somewhat obliviously.

With the SOPA debacle still fresh in relatively recent memory, copyright holders are still doggedly pursuing their aims. Site-blocking is rampant, advertisers are being pressured into compliance, and ISPs like Cox Communications now find themselves responsible for the infringements of their users. But has any of this caused any fatal damage to the sharing landscape? Not really.

Instead, we’re seeing a rise in the use of streaming sites, each far more accessible to the newcomer than their predecessors and vastly more difficult for copyright holders to police.

Systems built into Kodi are transforming these platforms into a plug-and-play piracy playground, one in which sites skirt US law and users can consume both at will and in complete privacy. Meanwhile, commercial and unauthorized IPTV offerings are gathering momentum, even as rightsholders try to pull them back.

Faced with problems like these we are now seeing calls for even tougher legislation. While groups like the RIAA dream of filtering the Internet, over in the UK a 2017 consultation had copyright holders excited that end users could be criminalized for simply consuming infringing content, let alone distributing it.

While the introduction of both or either of these measures would cause uproar (and rightly so), history tells us that each would fail in its stated aim of stopping piracy. With that eventuality all but guaranteed, calls for even tougher legislation are being readied for later down the line.

In short, there is no law that can stop piracy and therefore no law that will stop the entertainment industries coming back for harsher measures, pursuing the dream. This much we’ve established from close to two decades of litigation and little to no progress.

But really, is anyone genuinely surprised that they’re still taking this route? Draconian efforts to maintain control over the distribution of content predate the file-sharing wars by a couple of hundred years, at the very least. Why would rightsholders stop now, when the prize is even more valuable?

No one wants a minefield of copyright law. No one wants a restricted Internet. No one wants extended liability for innovators, service providers, or the public. But this is what we’ll get if this problem isn’t solved soon. Something drastic needs to happen, but who will be brave enough to admit it, let alone do something about it?

During a discussion about piracy last year on the BBC, the interviewer challenged a caller who freely admitted to pirating sports content online. The caller’s response was clear:

For far too long, broadcasters and rightsholders have abused their monopoly position, charging ever-increasing amounts for popular content, even while making billions. Piracy is a natural response to that, and effectively a chance for the little guy to get back some control, he argued.

Exactly the same happened in the music market during the late 1990s and 2000s. In response to artificial restriction of the market and the unrealistic hiking of prices, people turned to peer-to-peer networks for their fix. Thanks to this pressure but after years of turmoil, services like Spotify emerged, converting millions of former pirates in the process. Netflix, it appears, is attempting to do the same thing with video.

When people feel that they aren’t getting ripped off and that they have no further use for sub-standard piracy services in the face of stunning legal alternatives, things will change. But be under no illusion, people won’t be bullied there.

If we end up with an Internet stifled in favor of rightsholders, one in which service providers are too scared to innovate, the next generation of consumers will never forget. This will be a major problem for two key reasons. Not only will consumers become enemies but piracy will still exist. We will have come full circle, fueled only by division and hatred.

It’s a natural response to reject monopolistic behavior and it’s a natural response, for most, to be fair when treated with fairness. Destroying freedom is far from fair and will not create a better future – for anyone.

Laws have their place, no sane person will argue against that, but when the entertainment industries are making billions yet still want more, they’ll have to decide whether this will go on forever with building resentment, or if making a bit less profit now makes more sense longer term.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Physics cheats

Post Syndicated from Eevee original https://eev.ee/blog/2018/01/06/physics-cheats/

Anonymous asks:

something about how we tweak physics to “work” better in games?

Ho ho! Work. Get it? Like in physics…?

Hitboxes

“Hitbox” is perhaps not the most accurate term, since the shape used for colliding with the environment and the shape used for detecting damage might be totally different. They’re usually the same in simple platformers, though, and that’s what most of my games have been.

The hitbox is the biggest physics fudge by far, and it exists because of a single massive approximation that (most) games make: you’re controlling a single entity in the abstract, not a physical body in great detail.

That is: when you walk with your real-world meat shell, you perform a complex dance of putting one foot in front of the other, a motion you spent years perfecting. When you walk in a video game, you press a single “walk” button. Your avatar may play an animation that moves its legs back and forth, but since you’re not actually controlling the legs independently (and since simulating them is way harder), the game just treats you like a simple shape. Fairly often, this is a box, or something very box-like.

An Eevee sprite standing on faux ground; the size of the underlying image and the hitbox are outlined

Since the player has no direct control over the exact placement of their limbs, it would be slightly frustrating to have them collide with the world. This is especially true in cases like the above, where the tail and left ear protrude significantly out from the main body. If that Eevee wanted to stand against a real-world wall, she would simply tilt her ear or tail out of the way, so there’s no reason for the ear to block her from standing against a game wall. To compensate for this, the ear and tail are left out of the collision box entirely and will simply jut into a wall if necessary — a goofy affordance that’s so common it doesn’t even register as unusual. As a bonus (assuming this same box is used for combat), she won’t take damage from projectiles that merely graze past an ear.

(One extra consideration for sprite games in particular: the hitbox ought to be horizontally symmetric around the sprite’s pivot — i.e. the point where the entity is truly considered to be standing — so that the hitbox doesn’t abruptly move when the entity turns around!)
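For concreteness, here’s a minimal sketch of that idea (the class and field names are mine, not from any particular engine): define the hitbox relative to the pivot, extending equally to both sides, so flipping the sprite never moves the collision box.

class Hitbox:
    def __init__(self, half_width, height):
        self.half_width = half_width  # extends equally left and right of the pivot
        self.height = height          # extends upward from the pivot (the feet)

    def rect(self, pivot_x, pivot_y):
        # Returns (left, bottom, right, top) in world coordinates; the box
        # stays put whether the sprite faces left or right.
        return (pivot_x - self.half_width, pivot_y,
                pivot_x + self.half_width, pivot_y + self.height)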

Corners

Treating the player (and indeed most objects) as a box has one annoying side effect: boxes have corners. Corners can catch on other corners, even by a single pixel. Real-world bodies tend to be a bit rounder and squishier, so they can tolerate grazing a corner; even real-world boxes will simply rotate a bit.

Ah, but in our faux physics world, we generally don’t want conscious actors (such as the player) to rotate, even with a realistic physics simulator! Real-world bodies are made of parts that will generally try to keep you upright, after all; you don’t tilt back and forth much.

One way to handle corners is to simply remove them from conscious actors. A hitbox doesn’t have to be a literal box, after all. A popular alternative — especially in Unity, where it’s a standard asset — is the pill-shaped capsule, which has semicircles (hemispheres, in 3D) on the top and bottom and a straight-sided body in between. No corners, no problem.

Of course, that introduces a new problem: now the player can’t balance precariously on edges without their rounded bottom sliding them off. Alas.

If you’re stuck with corners, then, you may want to use a corner bump, a term I just made up. If the player would collide with a corner, but the collision is only by a few pixels, just nudge them to the side a bit and carry on.
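Here’s a deliberately minimal sketch of that nudge (everything here, including the 4-pixel limit, the one-ledge setup, and the names, is illustrative, not from a real engine):

BUMP_LIMIT = 4  # maximum ledge height, in pixels, that gets smoothed over

def move_horizontal(x, y, dx, ledge_left, ledge_top):
    # (x, y) is the mover's bottom corner; the ledge starts at ledge_left
    # with its top surface at ledge_top.
    new_x = x + dx
    if new_x >= ledge_left and y < ledge_top:   # we'd clip the corner...
        overlap = ledge_top - y
        if overlap <= BUMP_LIMIT:
            return new_x, ledge_top             # ...shallow: step up, carry on
        return x, y                             # too tall: genuinely blocked
    return new_x, y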

An Eevee sprite trying to move sideways into a shallow ledge; the game bumps her upwards slightly, so she steps onto it instead

When the corner is horizontal, this creates stairs! This is, more or less kinda, how steps work in Doom: when the player tries to cross from one sector into another, if the height difference is 24 units or less, the game simply bumps them upwards to the height of the new floor and lets them continue on.

Implementing this in a game without Doom’s notion of sectors is a little trickier. In fact, I still haven’t done it. Collision detection based on rejection gets it for free, kinda, but it’s not very deterministic and it breaks other things. But that’s a whole other post.

Gravity

Gravity is pretty easy. Everything accelerates downwards all the time. What’s interesting are the exceptions.

Jumping

Jumping is a giant hack.

Think about how actual jumping works: you tense your legs, which generally involves bending your knees first, and then spring upwards. In a platformer, you can just leap whenever you feel like it, which is nonsense. Also you go like twenty feet into the air?

Worse, most platformers allow variable-height jumping, where your jump is lower if you let go of the jump button while you’re in the air. Normally, one would expect to have to decide how much force to put into the jump beforehand.

But of course this is about convenience of controls: when jumping is your primary action, you want to be able to do it immediately, without any windup for how high you want to jump.

(And then there’s double jumping? Come on.)
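For what it’s worth, the usual recipe for variable-height jumping is tiny (a sketch; the constants and names are illustrative): launch at full speed the instant the button is pressed, then cut the remaining upward velocity if the button is released early.

JUMP_SPEED = 300.0  # px/s, illustrative
JUMP_CUT = 0.5      # fraction of upward speed kept on early release

def jump_velocity(vy, jump_pressed, jump_released, on_ground):
    if jump_pressed and on_ground:
        vy = -JUMP_SPEED    # negative y is up in this sketch
    if jump_released and vy < 0:
        vy *= JUMP_CUT      # still rising, so chop the jump short
    return vy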

Air control is a similar phenomenon: usually you’d jump in a particular direction by controlling how you push off the ground with your feet, but in a video game, you don’t have feet! You only have the box. The compromise is to let you control your horizontal movement to a limited degree in midair, even though that doesn’t make any sense. (It’s way more fun, though, and overall gives you more movement options, which are good to have in an interactive medium.)

Air control also exposes an obvious place that game physics collide with the realistic model of serious physics engines. I’ve mentioned this before, but: if you use Real Physics™ and air control yourself into a wall, you might find that you’ll simply stick to the wall until you let go of the movement buttons. Why? Remember, player movement acts as though an external force were pushing you around (and from the perspective of a Real™ physics engine, this is exactly how you’d implement it) — so air-controlling into a wall is equivalent to pushing a book against a wall with your hand, and the friction with the wall holds you in place. Oops.
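In code, air control usually amounts to nothing more than a weaker acceleration constant while airborne (a sketch with made-up numbers):

GROUND_ACCEL = 800.0  # px/s^2
AIR_ACCEL = 300.0     # weaker, so midair steering is limited but present
MAX_SPEED = 160.0     # px/s

def walk(vx, direction, on_ground, dt):
    # direction is -1, 0, or +1 from the player's input
    accel = GROUND_ACCEL if on_ground else AIR_ACCEL
    vx += direction * accel * dt
    return max(-MAX_SPEED, min(MAX_SPEED, vx))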

Ground sticking

Another place game physics conflict with physics engines is with running to the top of a slope. On a real hill, of course, you land on top of the slope and are probably glad of it; slopes are hard to climb!

An Eevee moves to the top of a slope, and rather than step onto the flat top, she goes flying off into the air

In a video game, you go flying. Because you’re a box. With momentum. So you hit the peak and keep going in the same direction. Which is diagonally upwards.

Projectiles

To make them more predictable, projectiles generally aren’t subject to gravity, at least as far as I’ve seen. The real world does not have such an exemption. The real world imposes gravity even on sniper rifles, which in a video game are often implemented as an instant trace, unaffected by anything in the world, because the bullet never actually exists as an object.

Resistance

Ah. Welcome to hell.

Water

Water is an interesting case, and offhand I don’t know the gritty details of how games implement it. In the real world, water applies a resistive drag force to movement — and that force is proportional to the square of velocity, which I’d completely forgotten until right now. I am almost positive that no game handles that correctly. But then, in real-world water, you can push against the water itself for movement, and games don’t simulate that either. What’s the rough equivalent?

The Sonic Physics Guide suggests that Sonic handles it by basically halving everything: acceleration, max speed, friction, etc. When Sonic enters water, his speed is cut; when Sonic exits water, his speed is increased.

That last bit feels validating — I could swear Metroid Prime did the same thing, and built my own solution around it, but couldn’t remember for sure. It makes no sense, of course, for a jump to become faster just because you happened to break the surface of the water, but it feels fantastic.

The thing I did was similar, except that I didn’t want to add a multiplier in a dozen places when you happen to be underwater (and remember which ones need it to be squared, etc.). So instead, I calculate everything completely as normal, so velocity is exactly the same as it would be on dry land — but the distance you would move gets halved. The effect seems to be pretty similar to most platformers with water, at least as far as I can tell. It hasn’t shown up in a published game and I only added this fairly recently, so I might be overlooking some reason this is a bad idea.

(One reason that comes to mind is that velocity is now a little white lie while underwater, so anything relying on velocity for interesting effects might be thrown off. Or maybe that’s correct, because velocity thresholds should be halved underwater too? Hm!)
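Concretely, the trick looks something like this (a sketch; the 0.5 factor and the gravity constant are illustrative, and whether thresholds should also be halved is exactly the open question above):

GRAVITY = 600.0  # px/s^2, illustrative

def integrate(x, y, vx, vy, dt, underwater):
    vy += GRAVITY * dt                   # velocity updates exactly as on land
    scale = 0.5 if underwater else 1.0   # only the distance moved is damped
    return x + vx * dt * scale, y + vy * dt * scale, vx, vy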

Notably, air is also a fluid, so it should behave the same way (just with different constants). I definitely don’t think any games apply air drag that’s proportional to the square of velocity.

Friction

Friction is, in my experience, a little handwaved. Probably because real-world friction is so darn complicated.

Consider that in the real world, we want very high friction on the surfaces we walk on — shoes and tires are explicitly designed to increase it, even. We move by bracing a back foot against the ground and using that to push ourselves forward, so we want the ground to resist our push as much as possible.

In a game world, we are a box. We move by being pushed by some invisible outside force, so if the friction between ourselves and the ground is too high, we won’t be able to move at all! That’s complete nonsense physically, but it turns out to be handy in some cases — for example, highish friction can simulate walking through deep mud, which should be difficult due to fluid drag and low friction.

But the best-known example of the fakeness of game friction is video game ice. Walking on real-world ice is difficult because the low friction means low grip; your feet are likely to slip out from under you, and you’ll simply fall down and have trouble moving at all. In a video game, you can’t fall down, so you have the opposite experience: you spend most of your time sliding around uncontrollably. Yet ice is so common in video games (and perhaps so uncommon in places I’ve lived) that I, at least, had never really thought about this disparity until an hour or so ago.

Game friction vs real-world friction

Real-world friction is a force. It’s the normal force (the force the surface exerts back on the object, equal in magnitude to the force the object presses into it) times some constant that depends on how the two materials interact.

Force is mass times acceleration, and platformers often ignore mass, so friction ought to be an acceleration — applied against the object’s movement, but never enough to push it backwards.

I haven’t made any games where variable friction plays a significant role, but my gut instinct is that low friction should mean the player accelerates more slowly but has a higher max speed, and high friction should mean the opposite. I see from my own source code that I didn’t even do what I just said, so let’s defer to some better-made and well-documented games: Sonic and Doom.

In Sonic, friction is a fixed value subtracted from the player’s velocity (regardless of direction) each tic. Sonic has a fixed framerate, so the units are really pixels per tic squared (i.e. acceleration), multiplied by an implicit 1 tic per tic. So far, so good.

But Sonic’s friction only applies if the player isn’t pressing left or right. Hang on, that isn’t friction at all; that’s just deceleration! That’s equivalent to jogging to a stop. If friction were lower, Sonic would take longer to stop, but otherwise this is only tangentially related to friction.

(In fairness, this approach would decently emulate friction for non-conscious sliding objects, which are never going to be pressing movement buttons. Also, we don’t have the Sonic source code, and the name “friction” is a fan invention; the Sonic Physics Guide already uses “deceleration” to describe the player’s acceleration when turning around.)
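Sketched out, Sonic-style “friction” is just this (the constant is the value the Sonic Physics Guide cites, in pixels per tic squared; the function shape is my own):

FRICTION = 0.046875  # px/tic^2

def decelerate(speed, input_direction):
    if input_direction != 0:
        return speed       # "friction" doesn't apply while steering
    if abs(speed) <= FRICTION:
        return 0.0         # don't overshoot past zero and reverse
    return speed - FRICTION if speed > 0 else speed + FRICTION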

Okay, let’s try Doom. In Doom, the default friction is 90.625%.

Hang on, what?

Yes, in Doom, friction is a multiplier applied every tic. Doom runs at 35 tics per second, so this is a multiplier of 0.032 per second. Yikes!

This isn’t anything remotely like real friction, but it’s much easier to implement. With friction as acceleration, the game has to know both the direction of movement (so it can apply friction in the opposite direction) and the magnitude (so it doesn’t overshoot and launch the object in the other direction). That means taking a semi-costly square root and also writing extra code to cap the amount of friction. With a multiplier, neither is necessary; just multiply the whole velocity vector and you’re done.

There are some downsides. One is that objects will never actually stop, since repeatedly multiplying the velocity by 90.625% will never produce exactly zero — though eventually the speed will become small enough to either slip below a “minimum speed” threshold or simply no longer fit in a float representation. Another is that the units are fairly meaningless: with Doom’s default friction of 90.625%, about how long does it take for the player to stop? I have no idea, partly because “stop” is ambiguous here! If friction were an acceleration, I could divide it into the player’s max speed to get a time.
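For comparison, the multiplier approach in its entirety, using Doom’s default value (the minimum-speed cutoff is illustrative; it’s one way to make objects actually stop):

FRICTION = 0.90625  # applied every tic
MIN_SPEED = 0.001   # below this, snap to zero

def apply_friction(vx, vy):
    vx *= FRICTION    # one multiply per axis: no square root,
    vy *= FRICTION    # no direction bookkeeping, no capping
    if vx * vx + vy * vy < MIN_SPEED * MIN_SPEED:
        vx = vy = 0.0
    return vx, vy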

All that aside, what are the actual effects of changing Doom’s friction? What an excellent question that’s surprisingly tricky to answer. (Note that friction can’t be changed in original Doom, only in the Boom port and its derivatives.) Here’s what I’ve pieced together.

Doom’s “friction” is really two values. “Friction” itself is a multiplier applied to moving objects on every tic, but there’s also a move factor which defaults to \(\frac{1}{32} = 0.03125\) and is derived from friction for custom values.

Every tic, the player’s velocity is multiplied by friction, and then increased by their speed times the move factor.

$$
v(n) = v(n-1) \times \text{friction} + \text{speed} \times \text{move factor}
$$

Eventually, the reduction from friction will balance out the speed boost. That happens when \(v(n) = v(n-1)\), so we can rearrange it to find the player’s effective max speed:

$$
v = v \times \text{friction} + \text{speed} \times \text{move factor} \\
v - v \times \text{friction} = \text{speed} \times \text{move factor} \\
v = \text{speed} \times \frac{\text{move factor}}{1 - \text{friction}}
$$

For vanilla Doom’s move factor of 0.03125 and friction of 0.90625, that becomes:

$$
v = \text{speed} \times \frac{\frac{1}{32}}{1 - \frac{29}{32}} = \text{speed} \times \frac{\frac{1}{32}}{\frac{3}{32}} = \frac{1}{3} \times \text{speed}
$$

Curiously, “speed” is three times the maximum speed an actor can actually move. Doomguy’s run speed is 50, so in practice he moves a third of that, or 16⅔ units per tic. (Of course, this isn’t counting SR40, a bug that lets Doomguy run ~40% faster than intended diagonally.)

So now, what if you change friction? Even more curiously, the move factor is calculated completely differently depending on whether friction is higher or lower than the default Doom amount:

$$
\text{move factor} = \begin{cases}
\frac{133 - 128 \times \text{friction}}{544} \approx 0.244 - 0.235 \times \text{friction} & \text{if } \text{friction} \ge \frac{29}{32} \\
\frac{81920 \times \text{friction} - 70145}{1048576} \approx 0.078 \times \text{friction} - 0.067 & \text{otherwise}
\end{cases}
$$

That’s pretty weird? Complicating things further is that low friction (which means muddy terrain, remember) has an extra multiplier on its move factor, depending on how fast you’re already going — the idea is apparently that you have a hard time getting going, but it gets easier as you find your footing. The extra multiplier maxes out at 8, which makes the two halves of that function meet at the vanilla Doom value.

A graph of the relationship between friction and move factor

That very top point corresponds to the move factor from the original game. So no matter what you do to friction, the move factor becomes lower. At 0.85 and change, you can no longer move at all; below that, you move backwards.

From the formula above, it’s easy to see what changes to friction and move factor will do to Doomguy’s stable velocity. Move factor is in the numerator, so increasing it will increase stable velocity — but it can’t increase, so stable velocity can only ever decrease. Friction is in the denominator, but it’s subtracted from 1, so increasing friction will make the denominator a smaller value less than 1, i.e. increase stable velocity. Combined, we get this relationship between friction and stable velocity.

A graph showing stable velocity shooting up dramatically as friction increases

As friction approaches 1, stable velocity grows without bound. This makes sense, given the definition of \(v(n)\) — if friction is 1, the velocity from the previous tic isn’t reduced at all, so we just keep accelerating freely.

All of this is why I’m wary of using multipliers.

Anyway, this leaves me with one last question about the effects of Doom’s friction: how long does it take to reach stable velocity? Barring precision errors, we’ll never truly reach stable velocity, but let’s say within 5%. First we need a closed formula for the velocity after some number of tics. This is a simple recurrence relation, and you can write a few terms out yourself if you want to be sure this is right.

$$
v(n) = v_0 \times \text{friction}^n + \text{speed} \times \text{move factor} \times \frac{\text{friction}^n - 1}{\text{friction} - 1}
$$

Our initial velocity is zero, so the first term disappears. Set this equal to the stable formula and solve for n:

$$
\text{speed} \times \text{move factor} \times \frac{\text{friction}^n - 1}{\text{friction} - 1} = (1 - 5\%) \times \text{speed} \times \frac{\text{move factor}}{1 - \text{friction}} \\
\text{friction}^n - 1 = -(1 - 5\%) \\
n = \frac{\ln 5\%}{\ln \text{friction}}
$$

“Speed” and “move factor” disappear entirely, which makes sense, and this is purely a function of friction (and how close we want to get). For vanilla Doom, that comes out to 30.4 tics, which is a little less than a second. For other values of friction:

A graph of time to stability which leaps upwards dramatically towards the right

As friction increases (which in Doom terms means the surface is more slippery), it takes longer and longer to reach stable speed, which is in turn greater and greater. For lesser friction (i.e. mud), stable speed is lower, but reached fairly quickly. (Of course, the extra “getting going” multiplier while in mud adds some extra time here, but including that in the graph is a bit more complicated.)

I think this matches with my instincts above. How fascinating!
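If you want to check the closed form yourself, it’s a one-liner (pure math, nothing game-specific):

import math

def tics_to_stability(friction, tolerance=0.05):
    return math.log(tolerance) / math.log(friction)

print(tics_to_stability(0.90625))  # ~30.4 tics, just under a second at 35 tics/s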

What’s that? This is way too much math and you hate it? Then don’t use multipliers in game physics.

Uh

That was a hell of a diversion!

I guess the goofiest stuff in basic game physics is really just about mapping player controls to in-game actions like jumping and deceleration; the rest consists of hacks to compensate for representing everything as a box.

Torrent Pioneers: isoHunt’s Gary Fung, Ten Years Later

Post Syndicated from Ernesto original https://torrentfreak.com/torrent-pioneers-isohunts-gary-fung-ten-years-later-180106/

Ten years ago, November 2007 to be precise, we published an article featuring the four leading torrent site admins at the time.

Niek van der Maas of Mininova, Justin Bunnell of TorrentSpy, Pirate Bay’s Peter Sunde and isoHunt’s Gary Fung were all kind enough to share their vision of BitTorrent’s future.

This future is the present today, and although the predictions were not all spot-on, there are a few interesting observations to make.

For one, these four men were all known by name, despite the uncertain legal situation they were in. How different things are today, when the operators of most of the world’s largest torrent sites are unknown to the broader public.

Another thing that stands out is that none of these pioneers are still active in the torrent space today. Niek and Justin have their own advertising businesses, Peter is a serial entrepreneur involved in various startups, while Gary works on his own projects.

While they have all moved on, they also remain a part of Internet history, which is why we decided to reach out to them ten years on.

Gary Fung was the first to reply. Those who’ve been following torrent news for a while know that isoHunt was shut down in 2013. The shutdown was the result of a lawsuit and came with a $110 million settlement with the MPAA, on paper.

Today the Canadian entrepreneur has other things on his hands, including “leveling up” his now one-year-old daughter. While that can be a day job by itself, he is also finalizing a mobile search app, which will be released in the near future.

“The key is speed, and I can measure its speedup of the whole mobile search experience to be 10-100x that of conventional mobile web browsers,” Gary tells us, noting that after years of development, it’s almost ready.

The new search app is not one dedicated to torrents, as isoHunt once was. However, looking back, Gary is proud of what he accomplished with isoHunt, despite the bitter end.

“It was a humbling experience, in more ways than one. I’m proud that I participated and championed the rise of P2P content distribution through isoHunt as a search gateway,” Gary tells us.

“But I was also humbled by the responsibility and power at play, as seen in the lawsuits from the media industry giants, as well as the even larger picture of what P2P technologies were bringing, and still bring today.”

Decentralization has always been a key feature of BitTorrent, and Gary sees it coming back in new trends, including the massive attention being paid to blockchain-related projects such as Bitcoin.

“2017 was the year Bitcoin became mainstream in a big way, and it’s feeling like the Internet before 2000. Decentralization is by nature disruptive, and I can’t wait to see what decentralizing money, governance, organizations and all kinds of applications will bring in the next few years.

“dApps [decentralized apps] made possible by platforms like Ethereum are like generalized BitTorrent for all kinds of applications, with ones we haven’t even thought of yet,” Gary adds.

Not everything is positive in hindsight, of course. Gary tells us that if he had to do it all over again he would take legal issues and lawyers more seriously. Not doing so led to more trouble than he imagined.

As a former torrent site admin, he has thought about the piracy issue quite a bit over the years. And unlike some sites today, he was happy to look for possible solutions to stop piracy.

One solution Gary suggested to Hollywood in the past was a hash recognition system for infringing torrents. A system to automatically filter known infringing files and remove these from cooperating torrent sites could still work today, he thinks.

“ContentID for all files shared on BitTorrent, similar to YouTube. I’ve proposed this to Hollywood studios before, as a better solution to suing their customers and potential P2P technology partners, but it obviously fell on deaf ears.”
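Reduced to its essence, the proposal is a lookup against a shared database (everything below is hypothetical; a real system would match BitTorrent infohashes against a rightsholder-maintained list, much as YouTube matches fingerprints):

BLOCKED_INFOHASHES = {
    "0123456789abcdef0123456789abcdef01234567",  # hypothetical entry
}

def should_delist(infohash):
    return infohash.lower() in BLOCKED_INFOHASHES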

In any case, torrent sites and similar services will continue to play an important role in how the media industry evolves. These platforms are showing Hollywood what the public wants, Gary believes.

“It has and will continue to play a role in showing the industry what consumers truly want: frictionless, convenient distribution, without borders of country or bundles. Bundles as in cable channels, but also in any way unwanted content is forced onto consumers without choice.”

While torrents were dominant in the past, the future will mostly be streaming, isoHunt’s founder says. He said the same ten years ago, and he believes that in another decade streaming will have completely replaced cable TV.

Whether piracy will still be relevant then depends on how content is offered. More fragmentation will lead to more piracy, while easier access will make it less relevant.

“The question then will be, will streaming platforms be fragmented and exclusive content bundled into a hundred pieces besides Netflix, or will consumer choice and convenience win out in a cross-platform way?

“A piracy increase or reduction will depend on how that plays out because nobody wants to worry about ten monthly subscriptions to ten different streaming services, much less a hundred,” Gary concludes.

Perhaps we should revisit this again next decade…


The second post in this series, with Peter Sunde, will be published this weekend. The other two pioneers did not respond or declined to take part.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Combine Transactional and Analytical Data Using Amazon Aurora and Amazon Redshift

Post Syndicated from Re Alvarez-Parmar original https://aws.amazon.com/blogs/big-data/combine-transactional-and-analytical-data-using-amazon-aurora-and-amazon-redshift/

A few months ago, we published a blog post about capturing data changes in an Amazon Aurora database and sending them to Amazon Athena and Amazon QuickSight for fast analysis and visualization. In this post, I want to demonstrate how easy it can be to take the data in Aurora and combine it with data in Amazon Redshift using Amazon Redshift Spectrum.

With Amazon Redshift, you can build petabyte-scale data warehouses that unify data from a variety of internal and external sources. Because Amazon Redshift is optimized for complex queries (often involving multiple joins) across large tables, it can handle large volumes of retail, inventory, and financial data without breaking a sweat.

In this post, we describe how to combine data in Aurora with data in Amazon Redshift. Here’s an overview of the solution:

  • Use AWS Lambda functions with Amazon Aurora to capture data changes in a table.
  • Save the captured data in an Amazon S3 bucket.
  • Query data using Amazon Redshift Spectrum.

We use the following services: Amazon Aurora, AWS Lambda, Amazon Kinesis Data Firehose, Amazon S3, Amazon Redshift (with Redshift Spectrum), and Amazon QuickSight.

Serverless architecture for capturing and analyzing Aurora data changes

Consider a scenario in which an e-commerce web application uses Amazon Aurora for a transactional database layer. The company has a sales table that captures every single sale, along with a few corresponding data items. This information is stored as immutable data in a table. Business users want to monitor the sales data and then analyze and visualize it.

In this example, you take the changes to data in an Aurora database table and save them in Amazon S3. After the data is captured in Amazon S3, you combine it with data in your existing Amazon Redshift cluster for analysis.

By the end of this post, you will understand how to capture data events in an Aurora table and push them out to other AWS services using AWS Lambda.

The following diagram shows the flow of data as it occurs in this tutorial:

The starting point in this architecture is a database insert operation in Amazon Aurora. When the insert statement is executed, a custom trigger calls a Lambda function and forwards the inserted data. Lambda writes the data that it received from Amazon Aurora to a Kinesis data delivery stream. Kinesis Data Firehose writes the data to an Amazon S3 bucket. Once the data is in an Amazon S3 bucket, it is queried in place using Amazon Redshift Spectrum.

Creating an Aurora database

First, create a database by following these steps in the Amazon RDS console:

  1. Sign in to the AWS Management Console, and open the Amazon RDS console.
  2. Choose Launch a DB instance, and choose Next.
  3. For Engine, choose Amazon Aurora.
  4. Choose a DB instance class. This example uses a small instance class, since this is not a production database.
  5. In Multi-AZ deployment, choose No.
  6. Configure DB instance identifier, Master username, and Master password.
  7. Launch the DB instance.

After you create the database, use MySQL Workbench to connect to the database using the CNAME from the console. For information about connecting to an Aurora database, see Connecting to an Amazon Aurora DB Cluster.

The following screenshot shows the MySQL Workbench configuration:

Next, create a table in the database by running the following SQL statement:

Create Table
CREATE TABLE Sales (
InvoiceID int NOT NULL AUTO_INCREMENT,
ItemID int NOT NULL,
Category varchar(255),
Price double(10,2), 
Quantity int not NULL,
OrderDate timestamp,
DestinationState varchar(2),
ShippingType varchar(255),
Referral varchar(255),
PRIMARY KEY (InvoiceID)
)

You can now populate the table with some sample data. To generate sample data in your table, copy and run the following script. Ensure that the placeholder values (AURORA_CNAME, DBUSER, DBPASSWORD, and DB) are replaced with values appropriate for your environment.

#!/usr/bin/python
import MySQLdb
import random
import datetime

db = MySQLdb.connect(host="AURORA_CNAME",
                     user="DBUSER",
                     passwd="DBPASSWORD",
                     db="DB")

states = ("AL","AK","AZ","AR","CA","CO","CT","DE","FL","GA","HI","ID","IL","IN",
"IA","KS","KY","LA","ME","MD","MA","MI","MN","MS","MO","MT","NE","NV","NH","NJ",
"NM","NY","NC","ND","OH","OK","OR","PA","RI","SC","SD","TN","TX","UT","VT","VA",
"WA","WV","WI","WY")

shipping_types = ("Free", "3-Day", "2-Day")

product_categories = ("Garden", "Kitchen", "Office", "Household")
referrals = ("Other", "Friend/Colleague", "Repeat Customer", "Online Ad")

add_order = ("INSERT INTO Sales "
             "(ItemID, Category, Price, Quantity, OrderDate, DestinationState, "
             "ShippingType, Referral) "
             "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)")

cursor = db.cursor()

for i in range(0, 10):
    item_id = random.randint(1, 100)
    state = random.choice(states)
    shipping_type = random.choice(shipping_types)
    product_category = random.choice(product_categories)
    quantity = random.randint(1, 4)
    referral = random.choice(referrals)
    price = random.randint(1, 100)
    # Cap the day at 28 so a random month never produces an invalid date
    # (e.g., February 30).
    order_date = datetime.date(2016, random.randint(1, 12),
                               random.randint(1, 28)).isoformat()

    data_order = (item_id, product_category, price, quantity, order_date,
                  state, shipping_type, referral)

    cursor.execute(add_order, data_order)
    db.commit()

cursor.close()
db.close()

The following screenshot shows how the table appears with the sample data:

Sending data from Amazon Aurora to Amazon S3

There are two methods available to send data from Amazon Aurora to Amazon S3:

  • Using a Lambda function
  • Using SELECT INTO OUTFILE S3

To demonstrate the ease of setting up integration between multiple AWS services, we use a Lambda function to send data to Amazon S3 using Amazon Kinesis Data Firehose.

Alternatively, you can use a SELECT INTO OUTFILE S3 statement to query data from an Amazon Aurora DB cluster and save it directly in text files that are stored in an Amazon S3 bucket. However, with this method, there is a delay between the time that the database transaction occurs and the time that the data is exported to Amazon S3 because the default file size threshold is 6 GB.
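For reference, here is a minimal sketch of that second route, driven through MySQLdb like the sample script above (the bucket, region, and connection details are placeholders, and the cluster needs an IAM role that allows writing to S3):

import MySQLdb

db = MySQLdb.connect(host="AURORA_CNAME", user="DBUSER",
                     passwd="DBPASSWORD", db="DB")
cursor = db.cursor()
# Export the whole Sales table as CSV files under the given S3 prefix.
cursor.execute(
    "SELECT * FROM Sales "
    "INTO OUTFILE S3 's3-us-east-1://YOUR_BUCKET/sales-export' "
    "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'")
cursor.close()
db.close()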

Creating a Kinesis data delivery stream

The next step is to create a Kinesis data delivery stream, since it’s a dependency of the Lambda function.

To create a delivery stream:

  1. Open the Kinesis Data Firehose console
  2. Choose Create delivery stream.
  3. For Delivery stream name, type AuroraChangesToS3.
  4. For Source, choose Direct PUT.
  5. For Record transformation, choose Disabled.
  6. For Destination, choose Amazon S3.
  7. In the S3 bucket drop-down list, choose an existing bucket, or create a new one.
  8. Enter a prefix if needed, and choose Next.
  9. For Data compression, choose GZIP.
  10. In IAM role, choose either an existing role that has access to write to Amazon S3, or choose to generate one automatically. Choose Next.
  11. Review all the details on the screen, and choose Create delivery stream when you’re finished.


Creating a Lambda function

Now you can create a Lambda function that is called every time there is a change that needs to be tracked in the database table. This Lambda function passes the data to the Kinesis data delivery stream that you created earlier.

To create the Lambda function:

  1. Open the AWS Lambda console.
  2. Ensure that you are in the AWS Region where your Amazon Aurora database is located.
  3. If you have no Lambda functions yet, choose Get started now. Otherwise, choose Create function.
  4. Choose Author from scratch.
  5. Give your function a name, and select Python 3.6 for Runtime.
  6. Choose an existing role or create a new one; the role needs permission to call firehose:PutRecord.
  7. Choose Next on the trigger selection screen.
  8. Paste the following code in the code window. Change the stream_name variable to the name of the Kinesis data delivery stream that you created in the previous step.
  9. Choose File -> Save in the code editor and then choose Save.
import boto3

firehose = boto3.client('firehose')
stream_name = 'AuroraChangesToS3'


def Kinesis_publish_message(event, context):
    # Flatten the fields forwarded by the Aurora trigger into one CSV line.
    firehose_data = ("%s,%s,%s,%s,%s,%s,%s,%s\n" % (
        event['ItemID'], event['Category'], event['Price'],
        event['Quantity'], event['OrderDate'], event['DestinationState'],
        event['ShippingType'], event['Referral']))

    print(firehose_data)

    # Hand the record to the Kinesis Data Firehose delivery stream, which
    # buffers it and writes it to Amazon S3.
    firehose.put_record(DeliveryStreamName=stream_name,
                        Record={'Data': firehose_data})

Note the Amazon Resource Name (ARN) of this Lambda function.

Giving Aurora permissions to invoke a Lambda function

To give Amazon Aurora permissions to invoke a Lambda function, you must attach an IAM role with appropriate permissions to the cluster. For more information, see Invoking a Lambda Function from an Amazon Aurora DB Cluster.

Once you are finished, the Amazon Aurora database has access to invoke a Lambda function.

Creating a stored procedure and a trigger in Amazon Aurora

Now, go back to MySQL Workbench, and run the following command to create a new stored procedure. When this stored procedure is called, it invokes the Lambda function you created. Change the ARN in the following code to your Lambda function’s ARN.

DROP PROCEDURE IF EXISTS CDC_TO_FIREHOSE;
DELIMITER ;;
CREATE PROCEDURE CDC_TO_FIREHOSE (IN ItemID VARCHAR(255), 
									IN Category varchar(255), 
									IN Price double(10,2),
                                    IN Quantity int(11),
                                    IN OrderDate timestamp,
                                    IN DestinationState varchar(2),
                                    IN ShippingType varchar(255),
                                    IN Referral  varchar(255)) LANGUAGE SQL 
BEGIN
  CALL mysql.lambda_async('arn:aws:lambda:us-east-1:XXXXXXXXXXXXX:function:CDCFromAuroraToKinesis', 
     CONCAT('{ "ItemID" : "', ItemID, 
            '", "Category" : "', Category,
            '", "Price" : "', Price,
            '", "Quantity" : "', Quantity, 
            '", "OrderDate" : "', OrderDate, 
            '", "DestinationState" : "', DestinationState, 
            '", "ShippingType" : "', ShippingType, 
            '", "Referral" : "', Referral, '"}')
     );
END
;;
DELIMITER ;

Create a trigger TR_Sales_CDC on the Sales table. When a new record is inserted, this trigger calls the CDC_TO_FIREHOSE stored procedure.

DROP TRIGGER IF EXISTS TR_Sales_CDC;
 
DELIMITER ;;
CREATE TRIGGER TR_Sales_CDC
  AFTER INSERT ON Sales
  FOR EACH ROW
BEGIN
  SELECT  NEW.ItemID , NEW.Category, New.Price, New.Quantity, New.OrderDate
  , New.DestinationState, New.ShippingType, New.Referral
  INTO @ItemID , @Category, @Price, @Quantity, @OrderDate
  , @DestinationState, @ShippingType, @Referral;
  CALL  CDC_TO_FIREHOSE(@ItemID , @Category, @Price, @Quantity, @OrderDate
  , @DestinationState, @ShippingType, @Referral);
END
;;
DELIMITER ;

If a new row is inserted in the Sales table, the Lambda function that is mentioned in the stored procedure is invoked.

Verify that data is being sent from the Lambda function to Kinesis Data Firehose to Amazon S3 successfully. You might have to insert a few records, depending on the size of your data, before new records appear in Amazon S3. This is due to Kinesis Data Firehose buffering. To learn more about Kinesis Data Firehose buffering, see the “Amazon S3” section in Amazon Kinesis Data Firehose Data Delivery.
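One quick way to spot-check delivery is to list what has landed in the bucket with boto3 (the bucket name and prefix are placeholders for whatever you chose when creating the delivery stream; Firehose appends a YYYY/MM/DD/HH path under the prefix by default):

import boto3

s3 = boto3.client('s3')
response = s3.list_objects_v2(Bucket='YOUR_BUCKET', Prefix='CDC/')
for obj in response.get('Contents', []):
    # Each object is one buffered batch of records written by Firehose.
    print(obj['Key'], obj['Size'], obj['LastModified'])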

Every time a new record is inserted in the sales table, a stored procedure is called, and it updates data in Amazon S3.

Querying data in Amazon Redshift

In this section, you use the data you produced from Amazon Aurora and consume it as-is in Amazon Redshift. In order to allow you to process your data as-is, where it is, while taking advantage of the power and flexibility of Amazon Redshift, you use Amazon Redshift Spectrum. You can use Redshift Spectrum to run complex queries on data stored in Amazon S3, with no need for loading or other data prep.

Just create a data source and issue your queries to your Amazon Redshift cluster as usual. Behind the scenes, Redshift Spectrum scales to thousands of instances on a per-query basis, ensuring that you get fast, consistent performance even as your dataset grows to beyond an exabyte! Being able to query data that is stored in Amazon S3 means that you can scale your compute and your storage independently. You have the full power of the Amazon Redshift query model and all the reporting and business intelligence tools at your disposal. Your queries can reference any combination of data stored in Amazon Redshift tables and in Amazon S3.

Redshift Spectrum supports open, common data types, including CSV/TSV, Apache Parquet, SequenceFile, and RCFile. Files can be compressed using gzip or Snappy, with other data types and compression methods in the works.

First, create an Amazon Redshift cluster. Follow the steps in Launch a Sample Amazon Redshift Cluster.

Next, create an IAM role that has access to Amazon S3 and Athena. By default, Amazon Redshift Spectrum uses the Amazon Athena data catalog. Your cluster needs authorization to access your external data catalog in AWS Glue or Athena and your data files in Amazon S3.

In the demo setup, I attached AmazonS3FullAccess and AmazonAthenaFullAccess. In a production environment, the IAM roles should follow the standard security of granting least privilege. For more information, see IAM Policies for Amazon Redshift Spectrum.

Attach the newly created role to the Amazon Redshift cluster. For more information, see Associate the IAM Role with Your Cluster.

Next, connect to the Amazon Redshift cluster, and create an external schema and database:

create external schema if not exists spectrum_schema
from data catalog 
database 'spectrum_db' 
region 'us-east-1'
IAM_ROLE 'arn:aws:iam::XXXXXXXXXXXX:role/RedshiftSpectrumRole'
create external database if not exists;

Don’t forget to replace the IAM role in the statement.

Then create an external table within the database:

 CREATE EXTERNAL TABLE IF NOT EXISTS spectrum_schema.ecommerce_sales(
  ItemID int,
  Category varchar,
  Price DOUBLE PRECISION,
  Quantity int,
  OrderDate TIMESTAMP,
  DestinationState varchar,
  ShippingType varchar,
  Referral varchar)
ROW FORMAT DELIMITED
      FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION 's3://{BUCKET_NAME}/CDC/'

Query the table, and it should contain data. This is a fact table.

select top 10 * from spectrum_schema.ecommerce_sales


Next, create a dimension table. For this example, we create a date/time dimension table. Create the table:

CREATE TABLE date_dimension (
  d_datekey           integer       not null sortkey,
  d_dayofmonth        integer       not null,
  d_monthnum          integer       not null,
  d_dayofweek                varchar(10)   not null,
  d_prettydate        date       not null,
  d_quarter           integer       not null,
  d_half              integer       not null,
  d_year              integer       not null,
  d_season            varchar(10)   not null,
  d_fiscalyear        integer       not null)
diststyle all;

Populate the table with data:

copy date_dimension from 's3://reparmar-lab/2016dates' 
iam_role 'arn:aws:iam::XXXXXXXXXXXX:role/redshiftspectrum'
DELIMITER ','
dateformat 'auto';

The date dimension table should look like the following:

Querying data in local and external tables using Amazon Redshift

Now that you have the fact and dimension table populated with data, you can combine the two and run analysis. For example, if you want to query the total sales amount by season, you can run the following:

select sum(quantity*price) as total_sales, date_dimension.d_season
from spectrum_schema.ecommerce_sales 
join date_dimension on spectrum_schema.ecommerce_sales.orderdate = date_dimension.d_prettydate 
group by date_dimension.d_season

You get the following results:

Similarly, you can replace d_season with d_dayofweek to get sales figures by weekday:

With Amazon Redshift Spectrum, you pay only for the queries you run against the data that you actually scan. We encourage you to use file partitioning, columnar data formats, and data compression to significantly minimize the amount of data scanned in Amazon S3. This is important for data warehousing because it dramatically improves query performance and reduces cost.

Partitioning your data in Amazon S3 by date, time, or any other custom keys enables Amazon Redshift Spectrum to dynamically prune nonrelevant partitions to minimize the amount of data processed. If you store data in a columnar format, such as Parquet, Amazon Redshift Spectrum scans only the columns needed by your query, rather than processing entire rows. Similarly, if you compress your data using one of the supported compression algorithms in Amazon Redshift Spectrum, less data is scanned.

Analyzing and visualizing Amazon Redshift data in Amazon QuickSight

Modify the Amazon Redshift security group to allow an Amazon QuickSight connection. For more information, see Authorizing Connections from Amazon QuickSight to Amazon Redshift Clusters.

After modifying the Amazon Redshift security group, go to Amazon QuickSight. Create a new analysis, and choose Amazon Redshift as the data source.

Enter the database connection details, validate the connection, and create the data source.

Choose the schema to be analyzed. In this case, choose spectrum_schema, and then choose the ecommerce_sales table.

Next, we add a custom field for Total Sales = Price*Quantity. In the drop-down list for the ecommerce_sales table, choose Edit analysis data sets.

On the next screen, choose Edit.

In the data prep screen, choose New Field. Add a new calculated field, Total Sales $, which is the product of the Price and Quantity fields. Then choose Create. Save and visualize it.
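
The expression for the calculated field is simply the product of the two columns. As a minimal sketch, this is roughly what you would enter in QuickSight's expression editor (curly braces are only required around field names that contain spaces, such as {Total Sales $}):

Price * Quantity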

Next, to visualize total sales figures by month, create a graph with Total Sales on the x-axis and Order Date, formatted as month, on the y-axis.

After you’ve finished, you can use Amazon QuickSight to add different columns from your Amazon Redshift tables and perform different types of visualizations. You can build operational dashboards that continuously monitor your transactional and analytical data. You can publish these dashboards and share them with others.

Final notes

Amazon QuickSight can also read data in Amazon S3 directly. However, with the method demonstrated in this post, you have the option to manipulate, filter, and combine data from multiple sources or Amazon Redshift tables before visualizing it in Amazon QuickSight.

In this example, we dealt with data being inserted, but triggers can be activated in response to INSERT, UPDATE, or DELETE statements as well.
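
For example, an AFTER UPDATE variant of such a trigger might look like the sketch below, using Aurora MySQL's mysql.lambda_async procedure. The table name, function name, account ID, and JSON payload are placeholders, not the exact objects used in this post.

DELIMITER ;;
CREATE TRIGGER ecommerce_sales_after_update
  AFTER UPDATE ON ecommerce_sales
  FOR EACH ROW
BEGIN
  -- Asynchronously invoke the Lambda function that forwards the change
  -- to Kinesis Data Firehose; the ARN is a placeholder.
  CALL mysql.lambda_async(
    'arn:aws:lambda:us-east-1:XXXXXXXXXXXX:function:CDCFunction',
    CONCAT('{ "ItemID": ', OLD.ItemID, ', "NewQuantity": ', NEW.Quantity, ' }')
  );
END;;
DELIMITER ;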

Keep the following in mind:

  • Be careful when invoking a Lambda function from triggers on tables that experience high write traffic, because doing so generates a large number of calls to your Lambda function. Although calls to the lambda_async procedure are asynchronous, the triggers themselves are synchronous.
  • A statement that activates a large number of triggers does not wait for the AWS Lambda invocations to complete, but it does wait for the triggers themselves to complete before returning control to the client.
  • Similarly, you must account for Amazon Kinesis Data Firehose limits. By default, Kinesis Data Firehose is limited to a maximum of 5,000 records/second. For more information, see Monitoring Amazon Kinesis Data Firehose.

In certain cases, it may be optimal to use AWS Database Migration Service (AWS DMS) to capture data changes in Aurora and use Amazon S3 as a target. For example, AWS DMS might be a good option if you don’t need to transform data from Amazon Aurora. The method used in this post gives you the flexibility to transform data from Aurora using Lambda before sending it to Amazon S3. Additionally, the architecture has the benefits of being serverless, whereas AWS DMS requires an Amazon EC2 instance for replication.

For design considerations while using Redshift Spectrum, see Using Amazon Redshift Spectrum to Query External Data.

If you have questions or suggestions, please comment below.


Additional Reading

If you found this post useful, be sure to check out Capturing Data Changes in Amazon Aurora Using AWS Lambda and 10 Best Practices for Amazon Redshift Spectrum.


About the Authors

Re Alvarez-Parmar is a solutions architect for Amazon Web Services. He helps enterprises achieve success through technical guidance and thought leadership. In his spare time, he enjoys spending time with his two kids and exploring the outdoors.

Modding Legends Team-Xecuter Announce “Future-Proof” Nintendo Switch Hack

Post Syndicated from Andy original https://torrentfreak.com/modding-legends-team-xecuter-announce-future-proof-nintendo-switch-hack-180104/

Since the advent of the first truly mass-market videogames consoles, people have dreamed about removing the protection mechanisms that prevent users from tinkering with their machines.

These modifications, which can be software, hardware, or a combination of the two, facilitate the running of third-party or “homebrew” code. On this front, a notable mention must go to XBMC (now known as Kodi) which ran on the original Xbox after its copy protection mechanisms had been removed.

However, these same modifications regularly open the door to mass-market piracy too, with mod-chips (hardware devices) or soft-mods (software solutions) opening up machines so that consumers can run games obtained from the Internet or elsewhere.

For the Nintendo Switch, that prospect edged closer at the end of December when Wololo reported that hackers Plutoo, Derrek, and Naehrwert had given a long presentation (video) at the 34C3 hacking conference in Germany, revealing their kernel hack for the Nintendo Switch.

While this in itself is an exciting development, fresh news from a veteran hacking group suggests that Nintendo could be in big trouble on the piracy front in the not-too-distant future.

“In the light of a recent presentation at the Chaos Communication Congress in Germany we’ve decided to come out of the woodwork and tease you all a bit with our latest upcoming product,” the legendary Team-Xecuter just announced.

While the hack announced in December requires Switch firmware 3.0 (and a copy of Pokken Tournament DX), Team-Xecuter say that their product will be universal, which suggests a fundamental flaw in the Switch system.

“This solution will work on ANY Nintendo Switch console regardless of the currently installed firmware, and will be completely future proof,” the team explain.

Xecuter say that their solution opens up the possibility of custom firmware (CFW) on Nintendo’s console. In layman’s terms, this means that those with the technical ability will be able to dictate, at least to a point, how the console functions.

“We want to move the community forward and provide a persistent, stable and fast method of running your own code and custom firmware patches on Nintendo’s latest flagship product. And we think we’ve succeeded!” the team add.

The console-modding community thrives on rumors, with various parties claiming to have made progress here and there, on this console and that, so it’s natural for people to greet this kind of announcement with a degree of skepticism. That being said, Team-Xecuter is no regular group.

With a long history of console-based meddling, Team-Xecuter’s efforts include hardware solutions for the original PlayStation and PlayStation 2, an array of hacks for the original Xbox (Enigmah and various Xecuter-branded solutions), plus close involvement in prominent Xbox 360 mods. Their pedigree is definitely not up for debate.

For now, the team isn’t releasing any more details on the nature of the hack but they have revealed when the public can expect to get their hands on it.

“Spring 2018 or there around,” they conclude.

Team-Xecuter demo

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons