Tag Archives: retail

Cybersecurity Insurance

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/cybersecurity_i_1.html

Good article about how difficult it is to insure an organization against Internet attacks, and how expensive the insurance is.

Companies like retailers, banks, and healthcare providers began seeking out cyberinsurance in the early 2000s, when states first passed data breach notification laws. But even with 20 years’ worth of experience and claims data in cyberinsurance, underwriters still struggle with how to model and quantify a unique type of risk.

“Typically in insurance we use the past as prediction for the future, and in cyber that’s very difficult to do because no two incidents are alike,” said Lori Bailey, global head of cyberrisk for the Zurich Insurance Group. Twenty years ago, policies dealt primarily with data breaches and third-party liability coverage, like the costs associated with breach class-action lawsuits or settlements. But more recent policies tend to accommodate first-party liability coverage, including costs like online extortion payments, renting temporary facilities during an attack, and lost business due to systems failures, cloud or web hosting provider outages, or even IT configuration errors.

In my new book — out in September — I write:

There are challenges to creating these new insurance products. There are two basic models for insurance. There’s the fire model, where individual houses catch on fire at a fairly steady rate, and the insurance industry can calculate premiums based on that rate. And there’s the flood model, where an infrequent large-scale event affects large numbers of people — but again at a fairly steady rate. Internet+ insurance is complicated because it follows neither of those models but instead has aspects of both: individuals are hacked at a steady (albeit increasing) rate, while class breaks and massive data breaches affect lots of people at once. Also, the constantly changing technology landscape makes it difficult to gather and analyze the historical data necessary to calculate premiums.

BoingBoing article.

Forty Percent of All Mexican Roku Users are Pirates

Post Syndicated from Ernesto original https://torrentfreak.com/forty-percent-of-all-mexican-roku-users-are-pirates-180332/

In recent years it has become much easier to stream movies and TV-shows over the Internet.

Legal services such as Netflix and HBO are flourishing, but there’s also a darker side to this streaming epidemic.

Millions of people are streaming from unauthorized sources, often paired with perfectly legal streaming platforms and devices. This issue has become particularly problematic for Roku, which sells easy-to-use media players.

Last week federal judges in Mexico City and Torreón decided that Roku sales should remain banned there, keeping last year’s suspension in place. While the ruling can still be appealed, it hurts Roku’s bottom line.

The company has more than a million users in Mexico according to statistics released by the Competitive Intelligence Unit (CIU), a local market research firm. That’s a significant number, but so is the percentage of pirating Roku users in Mexico.

“Roku has 1.1 million users in the country, of which 40 percent use it to watch content illegally,” Gonzalo Rojon, CIU’s director of ICT research, writes.

“There are 575 thousand users who access the illegal content and that is comparable to the number of subscribers a small pay-TV operator has,” he adds.

While this is indeed a significant number, that doesn’t make the Roku boxes illegal by default. There are millions who use Windows to pirate stuff, or web browsers like Chrome and Firefox, but these are generally not seen as problematic.

Still, several Mexican judges have ruled that sales should be banned, so for the time being it remains that way.

According to Rojon, these types of measures are imperative to ensure that copyright holders are protected from online piracy, now that more and more content is moving online.

“Although for some people this type of action seems radical, I think it is very important that the shift towards more digitalization is accompanied by copyright and intellectual property protection, so it continues to promote innovation and a healthy competitive environment in the digital world,” he notes.

Roku clearly disagrees and last week the company told us that it will do everything in its power to have the current sales ban overturned.

“While Roku’s devices have always been and remain legal to use in Mexico, the current ban harms consumers, the retail sector and the industry. We will vigorously pursue further legal actions with the aim of restoring sales of Roku devices in Mexico,” the company said.

Meanwhile, Roku is working hard to shake the piracy elements off its platform. Last year it began showing FBI warnings to users of ‘pirate channels’ and just this week removed the entire USTVnow service from its platform.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Controversial Roku ‘Piracy’ Ban Stays in Place in Mexico

Post Syndicated from Andy original https://torrentfreak.com/controversial-roku-piracy-ban-stays-in-place-in-mexico-180323/

‘Set-top’ devices such as Amazon’s Fire TV have sold in their millions in recent years as the stream-to-your-living room craze continues.

Many commercial devices are intended to receive official programming in a legal manner but most can be reprogrammed to do illegal things.

Of course, this behavior has nothing to do with the manufacturers of such devices but a case launched in Mexico last year really took things to the next level.

Following a complaint filed by cable TV provider Cablevision, the Superior Court of Justice of the City of Mexico handed down an order in June preventing the importation of Roku devices and prohibiting stores such as Amazon, Liverpool, El Palacio de Hierro, and Sears from putting them on sale.

The ban was handed down in an effort to tackle the amount of pirated content being viewed through the devices. News circulating at the time suggested that sellers on social media were providing more than 300 channels of unauthorized content for around US$8 per month.

Of course, the same illegal content consumption also takes place via regular PCs, tablet computers, and even mobile phones. No one would consider banning them but the court in Mexico clearly didn’t see the parallels when it dropped the hammer on Roku.

Later that month, however, a light appeared at the end of the tunnel. A federal judge decided to temporarily suspend the import and sales ban, which had also instructed banks to stop processing payments from accounts linked to third-party pirate services.

“Roku is pleased with today’s court decision, which paves the way for sales of Roku devices to resume in Mexico,” Roku’s General Counsel Steve Kay informed TorrentFreak at the time.

“Piracy is a problem the industry at large is facing. We prohibit copyright infringement of any kind on the Roku platform. We actively work to prevent third-parties from using our platform to distribute copyright infringing content. Moreover, we have been actively working with other industry stakeholders on a wide range of anti-piracy initiatives.”

But just as the sales began to flow once more, the celebrations were almost immediately cut short.

On June 28, 2017, a Mexico City tribunal upheld the previous decision which banned importation and distribution of Roku devices, much to the disappointment of Roku’s General Counsel.

“Today’s decision is not the final word in this complex legal matter,” Steve Kay said.

Indeed, since that date, Roku and retailers including Amazon, Walmart, Best Buy, Office Depot, Radio Shack and Sears have been fighting to have Roku devices put back on sale again, with several courts ruling against the appeals. Then last week there was another blow when federal judges in Mexico City and Torreón decided to keep the original suspension in place.

Forbidding the “importation, commercialization and distribution” of Roku devices, the judges maintained that Roku devices could be used as an instrument for “dishonest commerce” in violation of Mexico’s copyright law.

The main argument in support of the ban is that Roku devices can still be used by people to gain access to infringing content. As a result, Cablevision believes that Roku should modify its devices to ensure that piracy isn’t possible in the future.

“It is necessary for Roku to make adjustments to its software, as other online content distribution platforms do, so that violations of copyrighted content do not take place,” a Cablevision spokesperson said.

The decision to ban Roku devices can still be appealed. The company informs TorrentFreak that further legal action is on the cards.

“There have been several recent court rulings related to the ban on the sale of Roku devices in Mexico. In fact, a Federal court in Mexico City has already determined that the ban was improper; however, the ban remains in place,” says Roku spokesperson Tricia Misfud.

“While Roku’s devices have always been and remain legal to use in Mexico, the current ban harms consumers, the retail sector and the industry. We will vigorously pursue further legal actions with the aim of restoring sales of Roku devices in Mexico.”

Despite a nationwide sales ban, people who already have a Roku in their possession remain unaffected by recent developments. Since the use of Roku devices in Mexico and elsewhere is completely legal, current users will still receive regular software updates.

In associated news, Mexico’s Telecommunications Law Institute (IDET) reports that the Mexican Institute of Industrial Property (IMPI) has been blocking URLs used to distribute unauthorized content and apps.

While that will undoubtedly prove unpopular with pirates, one hopes that its execution is somewhat more precise than the wholesale banning of the entire Roku platform.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

HackSpace magazine 5: Inside Adafruit

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-5/

There’s a new issue of HackSpace magazine on the shelves today, and as usual it’s full of things to make and do!

Adafruit

We love making hardware, and we’d also love to turn this hobby into a way to make a living. So in the hope of picking up a few tips, we spoke to the woman behind Adafruit: Limor Fried, aka Ladyada.

Adafruit has played a massive part in bringing the maker movement into homes and schools, so we’re chuffed to have Limor’s words of wisdom in the magazine.

Raspberry Pi 3B+

As you may have heard, there’s a new Pi in town, and that can only mean one thing for HackSpace magazine: let’s test it to its limits!

The Raspberry Pi 3 Model B+ is faster, better, and stronger, but what does that mean in practical terms for your projects?

Toys

Kids are amazing! Their curious minds, untouched by mundane adulthood, come up with crazy stuff that no sensible grown-up would think to build. No sensible grown-up, that is, apart from the engineers behind Kids Invent Stuff, the brilliant YouTube channel that takes children’s inventions and makes them real.

So what is Kids Invent Stuff?!

Kids Invent Stuff is the YouTube channel where kids’ invention ideas get made into real working inventions. Learn more at www.kidsinventstuff.com, and check out Connor’s Crazy Car invention (https://youtu.be/4_sF6ZFNzrg) and the flamethrowing piano.

We spoke to Ruth Amos, entrepreneur, engineer, and one half of the Kids Invent Stuff team.

Buggy!

It shouldn’t just be kids who get to play with fun stuff! This month, in the name of research, we’ve bought a Stirling engine–powered buggy from Shenzhen.

This ingenious mechanical engine is the closest you’ll get to owning a home-brew steam engine without running the risk of having a boiler explode in your face.

Tutorials

In this issue, turn a Dremel multitool into a workbench saw with some wood, perspex, and a bit of laser cutting; make a Starfleet com-badge and pretend you’re Captain Jean-Luc Picard (shaving your hair off not compulsory); add intelligence to builds the easy way with Node-RED; and get stuck into Cheerlights, one of the world’s biggest IoT projects.


All this, plus your ultimate guide to blinkenlights, and the only knot you’ll ever need, in HackSpace magazine issue 5.

Subscribe, save, and get free stuff

Save up to 35% on the retail price by signing up to HackSpace magazine today. When you take out a 12-month subscription, you’ll also get a free Adafruit Circuit Playground Express!

Individual copies of HackSpace magazine are available in selected stockists across the UK, including Tesco, WHSmith, and Sainsbury’s. They’ll also be making their way across the globe to USA, Canada, Australia, Brazil, Hong Kong, Singapore, and Belgium in the coming weeks, so ask your local retailer whether they’re getting a delivery.

You can also purchase your copy on the Raspberry Pi Press website, and browse our complete collection of other Raspberry Pi publications, such as The MagPi, Hello World, and Raspberry Pi Projects Books.

The post HackSpace magazine 5: Inside Adafruit appeared first on Raspberry Pi.

Fstoppers Uploaded a Brilliant Hoax ‘Anti-Piracy’ Tutorial to The Pirate Bay

Post Syndicated from Andy original https://torrentfreak.com/fstoppers-uploaded-a-brilliant-hoax-anti-piracy-tutorial-to-the-pirate-bay-180307/

Fstoppers is an online community that produces extremely high-quality photographic tutorials. One of its most popular series is called Photographing the World which sees photographer Elia Locardi travel to exotic locations to demonstrate landscape and cityscape photography.

These tutorials sell for almost $300, with two or three versions in a pack selling for up to $700. Of course, like any other media, they get pirated, so when Fstoppers were ready to release Photographing the World 3, they released it themselves on torrent sites a few days before retail.

Well, that’s what they wanted the world to believe.

“I think it’s fair to say that we’ve all downloaded ‘something’ illegally in the past. Whether it’s an MP3 years ago or a movie or a TV show, and occasionally you download something and it turns out it was kinda like a Rick Roll,” says Locardi.

“So we kept talking and we thought it would be a good idea to create this dummy lesson or shadow tutorial that was actually a fake and then seed it on BitTorrent.”

Where Fstoppers normally go to beautiful and exotic international locations, for their fake they decided to go to an Olive Garden in Charleston, South Carolina. Yet despite the clear change of location, they wanted people to believe the tutorial was legitimate.

“We wanted to ride this constant line of ‘Is this for real? Could this possibly be real? Is Elia [Locardi] joking right now? I don’t think he’s joking, he’s being totally serious’,” says Lee Morris, one of the co-owners of Fstoppers.

People really have to watch the tutorial to see what a fantastic job Fstoppers did in achieving that goal. For anyone unfamiliar with their work, the tutorial is initially hard to spot as a fake and even for veterans the level of ambiguity is really impressive.

However, when the tutorial heads back to the studio, where the post-processing lesson gets underway, there can be no doubt that something is amiss.

Things start off normally with serious teaching, then over time, the tutorial gets more and more ridiculous. Then, when the camera cuts away to show Locardi forming a ‘mask’ on an Olive Garden image, there can be no confusion.

That’s a cool mask….wait..

In order to get the tutorial out to the world, the site created its own torrent. They had never done anything like it before, so they got some associates to upload the huge 25GB+ package to The Pirate Bay and had their friends seed it. Then, in order to get past more savvy users on the site, they had other people come in and give the torrent good (but fake) reviews.

The fake torrent on The Pirate Bay (as of yesterday)

Screenshots provided by Fstoppers taken months ago reveal hundreds of downloaders. And, according to Morris, the fake became the most-downloaded Photographing the World 3 torrent online, meaning that the “majority of downloaders” got the comedy version.

Also of interest is the feedback Fstoppers got following their special release. Emails flooded in from pirates, some of whom were confused while others were upset at the ‘quality’ of the tutorial.

“The whole time we were thinking: ‘This isn’t even on the market yet! You guys are totally stealing this and emailing us and complaining about it,” says Fstoppers co-owner Patrick Hall.

While the tutorial itself is brilliant, Fstoppers points to a certain hypocrisy within its target audience of photographers, who themselves have to put up with a lot of online piracy of their work. Yet, clearly, many are happy to pirate the work of other photographers in order to make their own art better.

All that being said, the exercise is certainly an interesting one and the creativity behind the hoax puts it head and shoulders above more aggressive anti-piracy campaigns. However, when TF tracked down the torrent on The Pirate Bay last evening, its popularity had nosedived.

While it was initially downloaded by a lot of eager photographers, probably encouraged by the fake comments placed on the site by Fstoppers, the torrent is now being shared by fewer than 10 people. As usual, Pirate Bay users appear to have caught on, flagging the torrent as a fake. The moderators, it seems, have also deleted the fake comments.

While most people won’t want to download a 25GB torrent to see what Fstoppers came up with, the site has uploaded the fake tutorial to YouTube. It’s best viewed alongside their other work, which is sensational, but people should get a good idea by watching the explanation below.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Virgin Media Store Caught Running Movie & TV Show Piracy Software (Updated)

Post Syndicated from Andy original https://torrentfreak.com/virgin-media-store-caught-running-movie-tv-show-piracy-software-180205/

While other providers in the UK and Ireland aim to compete, those requiring the absolute fastest fibre optic broadband coupled with a comprehensive TV package will probably find themselves considering Virgin Media.

Despite sporting Richard Branson’s Virgin brand, the company has been owned by US-based Liberty Global since 2013. It previously earned the title of first quad-play media company in the United Kingdom, offering broadband, TV, fixed-line and mobile telecoms packages.

Today, however, the company has a small piracy-related embarrassment to address.

Like several of the large telecoms companies in the region, Virgin Media operates a number of bricks-and-mortar stores which are used to drum up sales for Internet, TV and phone packages while offering support to new and existing customers. They typically look like the one in the image below.

Virgin Media store (credit: Virgin)

The outside windows of Virgin stores are usually covered with advertising for the company’s products and regularly carry digital displays which present the latest deals. However, one such display spotted by a passer-by carried a little extra.

In a now-deleted post on Reddit, a user explained that when out and about he’d passed a Virgin Media store which sported a digital display advertising the company’s impressive “Full House” package. However, intruding at the top of the screen was a notification from one of the most impressive piracy apps available, Terrarium TV.

Busted: Terrarium TV notification top and center (credit)

For those out of the loop, Terrarium TV is one of the most feature-rich Android-based applications available today. For reasons that aren’t exactly clear, it hasn’t received the attention of ‘rivals’ such as Popcorn Time and Showbox but its abilities are extremely impressive.

As the image shows, the notification is letting the user know that two new movies – The Star and The Stray – have been added to Terrarium’s repertoire. In other words, they’ve just been listed in the Terrarium app for streaming directly to the user’s installation (in this case one of Virgin’s own displays) for free, without permission from copyright holders.

Of course, Virgin Media definitely won’t have authorized the installation of Terrarium TV on any of its units, so it’s most likely down to someone in the store with access to the display, perhaps a staff member but possibly a mischievous customer. Whoever it was should probably uninstall it now though, if they’re able to. Virgin will not be happy about this.

The person who took the photo didn’t respond to TorrentFreak’s request for comment on where it was taken but from the information available in the image, it seems likely that it’s in Ireland. Virgin Media ads elsewhere in the region are priced in pounds – not in euros – so a retail outlet in the country is the most likely location. The same 99 euro “Full House” deal is also advertised on Virgin’s .ie website.

Terrarium TV

While a display running a piracy application over the top of an advert trying to sell premium access to movies and TV shows is embarrassing enough, Virgin and other ISPs including Eircom, Sky Ireland, and Vodafone Ireland are currently subject to a court order which compels them to block several pirate sites in Ireland.

The sources used by Terrarium to supply illicit copies of movies are not part of that order but since ISPs in the region don’t contest blocking orders when rightsholders apply for them, it’s reasonable to presume they’re broadly in favor of blocking pirate sites.

Of course, that makes perfect sense if you’re a company trying to make money from selling premium access to content.

Update: We have a lengthy statement from Virgin Media:

“Virgin Media takes copyright very seriously and does not condone illegal streaming.

Our new Tallaght Store is due to officially open later this month and does not currently have Virgin Media network connectivity.

Over the weekend, an advertising screen display in this Store was being set up by a contractor.

The contractor took it on themselves to use their own 4G device to set up the screen, ahead of the store being connected to our fibre services this week.

At some stage, it seems an unwanted pop-up appeared on the screen from an illegal streaming site. To be clear, this was not on the Virgin Media network.

Other than as outlined above, this occurrence has no connection whatsoever with Virgin Media. We have notified the contractor regarding this incident.”

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Coalition Against Piracy Launches Landmark Case Against ‘Pirate’ Android Box Sellers

Post Syndicated from Andy original https://torrentfreak.com/coalition-against-piracy-launches-landmark-case-against-pirate-android-box-sellers-180112/

In 2017, anti-piracy enforcement went global when companies including Disney, HBO, Netflix, Amazon and NBCUniversal formed the Alliance for Creativity and Entertainment (ACE).

Soon after, the Coalition Against Piracy (CAP) was announced. With a focus on Asia and backed by CASBAA, CAP counts many of the same companies among its members, in addition to local TV providers such as StarHub.

From the outset, CAP has shown a keen interest in tackling unlicensed streaming, particularly that taking place via illicit set-top boxes stuffed with copyright-infringing apps and add-ons. One country under CAP’s spotlight is Singapore, where relevant law is said to be fuzzy at best, insufficient at worst. Now, however, a line in the sand might not be far away.

According to a court listing discovered by Singapore’s TodayOnline, today will see the Coalition Against Piracy’s general manager Neil Kevin Gane attempt to launch a pioneering private prosecution against set-top box distributor Synnex Trading and its client and wholesale goods retailer, An-Nahl.

Gane and CAP are said to be acting on behalf of four parties, one of which is TV giant StarHub, a company with a huge interest in bringing media piracy under control in the region. It’s reported that they have also named Synnex Trading director Jia Xiaofen and An-Nahl director Abdul Nagib as defendants in their private criminal case after the parties failed to reach a settlement in an earlier process.

Contacted by TodayOnline, an employee of An-Nahl said the company no longer sells the boxes. However, Synnex is reportedly still selling them for S$219 each ($164) plus additional fees for maintenance and access to VOD. The company’s Facebook page is still active with the relevant offer presented prominently.

The importance of the case cannot be overstated. While StarHub and other broadcasters have successfully prosecuted cases where people unlawfully decrypted broadcast signals, the provision of unlicensed streams isn’t specifically tackled by Singapore’s legislation. It’s now a major source of piracy in the region, as it is elsewhere around the globe.

Only time will tell how the process will play out, but it’s clear that CAP and its members are prepared to invest significant sums into a prosecution in pursuit of a favorable outcome. CAP believes that the supply of the boxes falls under Section 136(3A) of the Copyright Act.

Last December, CAP separately called on the Singapore government to not only block ‘pirate’ streaming software but also unlicensed streams from entering the country.

“Within the Asia-Pacific region, Singapore is the worst in terms of availability of illicit streaming devices,” said CAP General Manager Neil Gane. “They have access to hundreds of illicit broadcasts of channels and video-on-demand content.”

CAP’s 21 members want the authorities to block the software inside devices that enables piracy but it’s far from clear how that can be achieved.

Update: The four companies taking the action are confirmed as Singtel, StarHub, Fox Network, and the English Premier League.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Combine Transactional and Analytical Data Using Amazon Aurora and Amazon Redshift

Post Syndicated from Re Alvarez-Parmar original https://aws.amazon.com/blogs/big-data/combine-transactional-and-analytical-data-using-amazon-aurora-and-amazon-redshift/

A few months ago, we published a blog post about capturing data changes in an Amazon Aurora database and sending them to Amazon Athena and Amazon QuickSight for fast analysis and visualization. In this post, I want to demonstrate how easy it can be to take the data in Aurora and combine it with data in Amazon Redshift using Amazon Redshift Spectrum.

With Amazon Redshift, you can build petabyte-scale data warehouses that unify data from a variety of internal and external sources. Because Amazon Redshift is optimized for complex queries (often involving multiple joins) across large tables, it can handle large volumes of retail, inventory, and financial data without breaking a sweat.

In this post, we describe how to combine data in Aurora with data in Amazon Redshift. Here’s an overview of the solution:

  • Use AWS Lambda functions with Amazon Aurora to capture data changes in a table.
  • Save the data in an Amazon S3 bucket using Amazon Kinesis Data Firehose.
  • Query data using Amazon Redshift Spectrum.

We use the following services:

  • Amazon Aurora
  • AWS Lambda
  • Amazon Kinesis Data Firehose
  • Amazon S3
  • Amazon Redshift (with Redshift Spectrum)
  • Amazon QuickSight

Serverless architecture for capturing and analyzing Aurora data changes

Consider a scenario in which an e-commerce web application uses Amazon Aurora for a transactional database layer. The company has a sales table that captures every single sale, along with a few corresponding data items. This information is stored as immutable data in a table. Business users want to monitor the sales data and then analyze and visualize it.

In this example, you take the data changes in an Aurora database table and save them in Amazon S3. After the data is captured in Amazon S3, you combine it with data in your existing Amazon Redshift cluster for analysis.

By the end of this post, you will understand how to capture data events in an Aurora table and push them out to other AWS services using AWS Lambda.

The following diagram shows the flow of data as it occurs in this tutorial:

The starting point in this architecture is a database insert operation in Amazon Aurora. When the insert statement is executed, a custom trigger calls a Lambda function and forwards the inserted data. Lambda writes the data that it received from Amazon Aurora to a Kinesis data delivery stream. Kinesis Data Firehose writes the data to an Amazon S3 bucket. Once the data is in an Amazon S3 bucket, it is queried in place using Amazon Redshift Spectrum.

Creating an Aurora database

First, create a database by following these steps in the Amazon RDS console:

  1. Sign in to the AWS Management Console, and open the Amazon RDS console.
  2. Choose Launch a DB instance, and choose Next.
  3. For Engine, choose Amazon Aurora.
  4. Choose a DB instance class. This example uses a small instance class, since this is not a production database.
  5. In Multi-AZ deployment, choose No.
  6. Configure DB instance identifier, Master username, and Master password.
  7. Launch the DB instance.

After you create the database, use MySQL Workbench to connect to the database using the CNAME from the console. For information about connecting to an Aurora database, see Connecting to an Amazon Aurora DB Cluster.

The following screenshot shows the MySQL Workbench configuration:

Next, create a table in the database by running the following SQL statement:

Create Table
CREATE TABLE Sales (
InvoiceID int NOT NULL AUTO_INCREMENT,
ItemID int NOT NULL,
Category varchar(255),
Price double(10,2), 
Quantity int not NULL,
OrderDate timestamp,
DestinationState varchar(2),
ShippingType varchar(255),
Referral varchar(255),
PRIMARY KEY (InvoiceID)
)

You can now populate the table with some sample data. To generate sample data in your table, copy and run the following script. Ensure that the highlighted (bold) variables are replaced with appropriate values.

#!/usr/bin/python
import MySQLdb
import random
import datetime

db = MySQLdb.connect(host="AURORA_CNAME",
                     user="DBUSER",
                     passwd="DBPASSWORD",
                     db="DB")

states = ("AL","AK","AZ","AR","CA","CO","CT","DE","FL","GA","HI","ID","IL","IN",
"IA","KS","KY","LA","ME","MD","MA","MI","MN","MS","MO","MT","NE","NV","NH","NJ",
"NM","NY","NC","ND","OH","OK","OR","PA","RI","SC","SD","TN","TX","UT","VT","VA",
"WA","WV","WI","WY")

shipping_types = ("Free", "3-Day", "2-Day")

product_categories = ("Garden", "Kitchen", "Office", "Household")
referrals = ("Other", "Friend/Colleague", "Repeat Customer", "Online Ad")

for i in range(0,10):
    item_id = random.randint(1,100)
    state = states[random.randint(0,len(states)-1)]
    shipping_type = shipping_types[random.randint(0,len(shipping_types)-1)]
    product_category = product_categories[random.randint(0,len(product_categories)-1)]
    quantity = random.randint(1,4)
    referral = referrals[random.randint(0,len(referrals)-1)]
    price = random.randint(1,100)
    # Cap the day at 28 to avoid generating invalid dates (e.g., February 30)
    order_date = datetime.date(2016,random.randint(1,12),random.randint(1,28)).isoformat()

    data_order = (item_id, product_category, price, quantity, order_date, state,
    shipping_type, referral)

    add_order = ("INSERT INTO Sales "
                   "(ItemID, Category, Price, Quantity, OrderDate, DestinationState, \
                   ShippingType, Referral) "
                   "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)")

    cursor = db.cursor()
    cursor.execute(add_order, data_order)

    db.commit()

cursor.close()
db.close() 

The following screenshot shows how the table appears with the sample data:

Sending data from Amazon Aurora to Amazon S3

There are two methods available to send data from Amazon Aurora to Amazon S3:

  • Using a Lambda function
  • Using SELECT INTO OUTFILE S3

To demonstrate the ease of setting up integration between multiple AWS services, we use a Lambda function to send data to Amazon S3 using Amazon Kinesis Data Firehose.

Alternatively, you can use a SELECT INTO OUTFILE S3 statement to query data from an Amazon Aurora DB cluster and save it directly in text files that are stored in an Amazon S3 bucket. However, with this method, there is a delay between the time that the database transaction occurs and the time that the data is exported to Amazon S3 because the default file size threshold is 6 GB.
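For completeness, here’s a minimal sketch of that second method, reusing the connection details from the earlier script. It is only an illustration: the S3 URI is a placeholder (its exact form depends on your Aurora version), and the cluster must already have an IAM role associated with it that allows writing to the bucket, as described in the Aurora documentation.

# Sketch only: export the Sales table straight to Amazon S3 using Aurora MySQL's
# SELECT INTO OUTFILE S3 statement. The S3 URI is a placeholder, and the cluster
# needs an associated IAM role with write access to the destination bucket.
import MySQLdb

db = MySQLdb.connect(host="AURORA_CNAME",
                     user="DBUSER",
                     passwd="DBPASSWORD",
                     db="DB")
cursor = db.cursor()

cursor.execute("""
    SELECT * FROM Sales
    INTO OUTFILE S3 's3-us-east-1://YOUR_BUCKET/aurora-exports/sales'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'
""")

cursor.close()
db.close()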

Creating a Kinesis data delivery stream

The next step is to create a Kinesis data delivery stream, since it’s a dependency of the Lambda function.

To create a delivery stream:

  1. Open the Kinesis Data Firehose console
  2. Choose Create delivery stream.
  3. For Delivery stream name, type AuroraChangesToS3.
  4. For Source, choose Direct PUT.
  5. For Record transformation, choose Disabled.
  6. For Destination, choose Amazon S3.
  7. In the S3 bucket drop-down list, choose an existing bucket, or create a new one.
  8. Enter a prefix if needed, and choose Next.
  9. For Data compression, choose GZIP.
  10. In IAM role, choose either an existing role that has access to write to Amazon S3, or choose to generate one automatically. Choose Next.
  11. Review all the details on the screen, and choose Create delivery stream when you’re finished.
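
If you would rather script this step, the boto3 sketch below mirrors the same settings (a Direct PUT source, GZIP compression, and an S3 destination). The bucket and role ARNs are placeholders, the role is assumed to already allow Firehose to write to the bucket, and the CDC/ prefix is chosen to match the S3 location used by the external table later in this post.

# Sketch: create the same delivery stream with boto3 instead of the console.
# The role and bucket ARNs below are placeholders.
import boto3

firehose = boto3.client('firehose')

firehose.create_delivery_stream(
    DeliveryStreamName='AuroraChangesToS3',
    DeliveryStreamType='DirectPut',            # equivalent to the "Direct PUT" source
    ExtendedS3DestinationConfiguration={
        'RoleARN': 'arn:aws:iam::XXXXXXXXXXXX:role/FirehoseDeliveryRole',
        'BucketARN': 'arn:aws:s3:::YOUR_BUCKET',
        'Prefix': 'CDC/',
        'CompressionFormat': 'GZIP',
    }
)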

 

Creating a Lambda function

Now you can create a Lambda function that is called every time there is a change that needs to be tracked in the database table. This Lambda function passes the data to the Kinesis data delivery stream that you created earlier.

To create the Lambda function:

  1. Open the AWS Lambda console.
  2. Ensure that you are in the AWS Region where your Amazon Aurora database is located.
  3. If you have no Lambda functions yet, choose Get started now. Otherwise, choose Create function.
  4. Choose Author from scratch.
  5. Give your function a name and select Python 3.6 for Runtime
  6. Choose an existing role or create a new one; the role needs permission to call firehose:PutRecord.
  7. Choose Next on the trigger selection screen.
  8. Paste the following code in the code window. Change the stream_name variable to the Kinesis data delivery stream that you created in the previous step.
  9. Choose File -> Save in the code editor and then choose Save.
import boto3
import json

firehose = boto3.client('firehose')
stream_name = 'AuroraChangesToS3'


def Kinesis_publish_message(event, context):
    
    firehose_data = (("%s,%s,%s,%s,%s,%s,%s,%s\n") %(event['ItemID'], 
    event['Category'], event['Price'], event['Quantity'],
    event['OrderDate'], event['DestinationState'], event['ShippingType'], 
    event['Referral']))
    
    firehose_data = {'Data': str(firehose_data)}
    print(firehose_data)
    
    firehose.put_record(DeliveryStreamName=stream_name,
    Record=firehose_data)

Note the Amazon Resource Name (ARN) of this Lambda function.
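
Before wiring the function to Aurora, you can sanity-check it by invoking it with a payload shaped like a row from the Sales table. The sketch below uses arbitrary test values, and the function name is the one referenced by the stored procedure ARN later in this post; substitute your own.

# Sketch: invoke the Lambda function with a test event that mimics a Sales row,
# then check the delivery stream and the S3 bucket for the record.
import boto3
import json

lambda_client = boto3.client('lambda')

test_event = {
    'ItemID': 25, 'Category': 'Garden', 'Price': 12.50, 'Quantity': 2,
    'OrderDate': '2016-05-17 10:00:00', 'DestinationState': 'WA',
    'ShippingType': '3-Day', 'Referral': 'Online Ad'
}

response = lambda_client.invoke(
    FunctionName='CDCFromAuroraToKinesis',     # placeholder: use your function's name
    Payload=json.dumps(test_event).encode('utf-8')
)
print(response['StatusCode'])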

Giving Aurora permissions to invoke a Lambda function

To give Amazon Aurora permissions to invoke a Lambda function, you must attach an IAM role with appropriate permissions to the cluster. For more information, see Invoking a Lambda Function from an Amazon Aurora DB Cluster.

Once you are finished, the Amazon Aurora database has access to invoke a Lambda function.
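
As a rough programmatic equivalent, the boto3 sketch below associates a role with the cluster. The cluster identifier and role ARN are placeholders, the role is assumed to allow lambda:InvokeFunction on your function, and depending on the Aurora MySQL version you may also need to reference the role in a cluster parameter such as aws_default_lambda_role.

# Sketch: associate an IAM role (allowing lambda:InvokeFunction) with the Aurora
# cluster so that mysql.lambda_async can invoke the Lambda function.
import boto3

rds = boto3.client('rds')

rds.add_role_to_db_cluster(
    DBClusterIdentifier='my-aurora-cluster',   # placeholder
    RoleArn='arn:aws:iam::XXXXXXXXXXXX:role/AuroraInvokeLambdaRole'   # placeholder
)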

Creating a stored procedure and a trigger in Amazon Aurora

Now, go back to MySQL Workbench, and run the following command to create a new stored procedure. When this stored procedure is called, it invokes the Lambda function you created. Change the ARN in the following code to your Lambda function’s ARN.

DROP PROCEDURE IF EXISTS CDC_TO_FIREHOSE;
DELIMITER ;;
CREATE PROCEDURE CDC_TO_FIREHOSE (IN ItemID VARCHAR(255), 
									IN Category varchar(255), 
									IN Price double(10,2),
                                    IN Quantity int(11),
                                    IN OrderDate timestamp,
                                    IN DestinationState varchar(2),
                                    IN ShippingType varchar(255),
                                    IN Referral  varchar(255)) LANGUAGE SQL 
BEGIN
  CALL mysql.lambda_async('arn:aws:lambda:us-east-1:XXXXXXXXXXXXX:function:CDCFromAuroraToKinesis', 
     CONCAT('{ "ItemID" : "', ItemID, 
            '", "Category" : "', Category,
            '", "Price" : "', Price,
            '", "Quantity" : "', Quantity, 
            '", "OrderDate" : "', OrderDate, 
            '", "DestinationState" : "', DestinationState, 
            '", "ShippingType" : "', ShippingType, 
            '", "Referral" : "', Referral, '"}')
     );
END
;;
DELIMITER ;

Create a trigger TR_Sales_CDC on the Sales table. When a new record is inserted, this trigger calls the CDC_TO_FIREHOSE stored procedure.

DROP TRIGGER IF EXISTS TR_Sales_CDC;
 
DELIMITER ;;
CREATE TRIGGER TR_Sales_CDC
  AFTER INSERT ON Sales
  FOR EACH ROW
BEGIN
  SELECT  NEW.ItemID , NEW.Category, New.Price, New.Quantity, New.OrderDate
  , New.DestinationState, New.ShippingType, New.Referral
  INTO @ItemID , @Category, @Price, @Quantity, @OrderDate
  , @DestinationState, @ShippingType, @Referral;
  CALL  CDC_TO_FIREHOSE(@ItemID , @Category, @Price, @Quantity, @OrderDate
  , @DestinationState, @ShippingType, @Referral);
END
;;
DELIMITER ;

If a new row is inserted in the Sales table, the Lambda function that is mentioned in the stored procedure is invoked.

Verify that data is being sent from the Lambda function to Kinesis Data Firehose to Amazon S3 successfully. You might have to insert a few records, depending on the size of your data, before new records appear in Amazon S3. This is due to Kinesis Data Firehose buffering. To learn more about Kinesis Data Firehose buffering, see the “Amazon S3” section in Amazon Kinesis Data Firehose Data Delivery.
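
One quick way to verify delivery is to list the destination bucket under the Firehose prefix; once the buffer flushes, Kinesis Data Firehose writes date-stamped objects there. The bucket name below is a placeholder, and the CDC/ prefix assumes the value chosen when the delivery stream was created.

# Sketch: list the objects Kinesis Data Firehose has written under the CDC/ prefix.
# Firehose adds a YYYY/MM/DD/HH path beneath the prefix by default.
import boto3

s3 = boto3.client('s3')

resp = s3.list_objects_v2(Bucket='YOUR_BUCKET', Prefix='CDC/')
for obj in resp.get('Contents', []):
    print(obj['Key'], obj['Size'])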

Every time a new record is inserted in the sales table, a stored procedure is called, and it updates data in Amazon S3.

Querying data in Amazon Redshift

In this section, you use the data you produced from Amazon Aurora and consume it as-is in Amazon Redshift. In order to allow you to process your data as-is, where it is, while taking advantage of the power and flexibility of Amazon Redshift, you use Amazon Redshift Spectrum. You can use Redshift Spectrum to run complex queries on data stored in Amazon S3, with no need for loading or other data prep.

Just create a data source and issue your queries to your Amazon Redshift cluster as usual. Behind the scenes, Redshift Spectrum scales to thousands of instances on a per-query basis, ensuring that you get fast, consistent performance even as your dataset grows to beyond an exabyte! Being able to query data that is stored in Amazon S3 means that you can scale your compute and your storage independently. You have the full power of the Amazon Redshift query model and all the reporting and business intelligence tools at your disposal. Your queries can reference any combination of data stored in Amazon Redshift tables and in Amazon S3.

Redshift Spectrum supports open, common data formats, including CSV/TSV, Apache Parquet, SequenceFile, and RCFile. Files can be compressed using gzip or Snappy, with other formats and compression methods in the works.

First, create an Amazon Redshift cluster. Follow the steps in Launch a Sample Amazon Redshift Cluster.

Next, create an IAM role that has access to Amazon S3 and Athena. By default, Amazon Redshift Spectrum uses the Amazon Athena data catalog. Your cluster needs authorization to access your external data catalog in AWS Glue or Athena and your data files in Amazon S3.

In the demo setup, I attached AmazonS3FullAccess and AmazonAthenaFullAccess. In a production environment, the IAM roles should follow the standard security of granting least privilege. For more information, see IAM Policies for Amazon Redshift Spectrum.

Attach the newly created role to the Amazon Redshift cluster. For more information, see Associate the IAM Role with Your Cluster.
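
If you prefer to attach the role programmatically, a boto3 sketch follows. The cluster identifier is a placeholder, and the role ARN should be the one you just created.

# Sketch: attach the Spectrum IAM role to the Amazon Redshift cluster.
import boto3

redshift = boto3.client('redshift')

redshift.modify_cluster_iam_roles(
    ClusterIdentifier='my-redshift-cluster',   # placeholder
    AddIamRoles=['arn:aws:iam::XXXXXXXXXXXX:role/RedshiftSpectrumRole']
)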

Next, connect to the Amazon Redshift cluster, and create an external schema and database:

create external schema if not exists spectrum_schema
from data catalog 
database 'spectrum_db' 
region 'us-east-1'
IAM_ROLE 'arn:aws:iam::XXXXXXXXXXXX:role/RedshiftSpectrumRole'
create external database if not exists;

Don’t forget to replace the IAM role in the statement.

Then create an external table within the database:

 CREATE EXTERNAL TABLE IF NOT EXISTS spectrum_schema.ecommerce_sales(
  ItemID int,
  Category varchar,
  Price DOUBLE PRECISION,
  Quantity int,
  OrderDate TIMESTAMP,
  DestinationState varchar,
  ShippingType varchar,
  Referral varchar)
ROW FORMAT DELIMITED
      FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION 's3://{BUCKET_NAME}/CDC/'

Query the table, and it should contain data. This is a fact table.

select top 10 * from spectrum_schema.ecommerce_sales

 

Next, create a dimension table. For this example, we create a date/time dimension table. Create the table:

CREATE TABLE date_dimension (
  d_datekey           integer       not null sortkey,
  d_dayofmonth        integer       not null,
  d_monthnum          integer       not null,
  d_dayofweek                varchar(10)   not null,
  d_prettydate        date       not null,
  d_quarter           integer       not null,
  d_half              integer       not null,
  d_year              integer       not null,
  d_season            varchar(10)   not null,
  d_fiscalyear        integer       not null)
diststyle all;

Populate the table with data:

copy date_dimension from 's3://reparmar-lab/2016dates' 
iam_role 'arn:aws:iam::XXXXXXXXXXXX:role/redshiftspectrum'
DELIMITER ','
dateformat 'auto';
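
The bucket shown above is the author’s. If you don’t have a ready-made dates file, a small script like the sketch below can generate one to upload to your own bucket; the column order is assumed to match the date_dimension table, and the season and fiscal-year values are simplistic placeholders.

# Sketch: write 2016dates.csv with one row per day of 2016, in the column order
# d_datekey, d_dayofmonth, d_monthnum, d_dayofweek, d_prettydate, d_quarter,
# d_half, d_year, d_season, d_fiscalyear. Season/fiscal-year rules are simplified.
import datetime

seasons = {12: 'Winter', 1: 'Winter', 2: 'Winter', 3: 'Spring', 4: 'Spring',
           5: 'Spring', 6: 'Summer', 7: 'Summer', 8: 'Summer', 9: 'Fall',
           10: 'Fall', 11: 'Fall'}

with open('2016dates.csv', 'w') as f:
    day = datetime.date(2016, 1, 1)
    while day.year == 2016:
        quarter = (day.month - 1) // 3 + 1
        row = [day.strftime('%Y%m%d'), day.day, day.month, day.strftime('%A'),
               day.isoformat(), quarter, 1 if quarter <= 2 else 2, day.year,
               seasons[day.month], day.year]
        f.write(','.join(str(v) for v in row) + '\n')
        day += datetime.timedelta(days=1)

Upload the generated file to a bucket you own and point the COPY command above at that location instead.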

The date dimension table should look like the following:

Querying data in local and external tables using Amazon Redshift

Now that you have the fact and dimension tables populated with data, you can combine the two and run analysis. For example, if you want to query the total sales amount by season, you can run the following:

select sum(quantity*price) as total_sales, date_dimension.d_season
from spectrum_schema.ecommerce_sales 
join date_dimension on spectrum_schema.ecommerce_sales.orderdate = date_dimension.d_prettydate 
group by date_dimension.d_season

You get the following results:

Similarly, you can replace d_season with d_dayofweek to get sales figures by weekday:

With Amazon Redshift Spectrum, you pay only for the queries you run against the data that you actually scan. We encourage you to use file partitioning, columnar data formats, and data compression to significantly minimize the amount of data scanned in Amazon S3. This is important for data warehousing because it dramatically improves query performance and reduces cost.

Partitioning your data in Amazon S3 by date, time, or any other custom keys enables Amazon Redshift Spectrum to dynamically prune nonrelevant partitions to minimize the amount of data processed. If you store data in a columnar format, such as Parquet, Amazon Redshift Spectrum scans only the columns needed by your query, rather than processing entire rows. Similarly, if you compress your data using one of the supported compression algorithms in Amazon Redshift Spectrum, less data is scanned.

Analyzing and visualizing Amazon Redshift data in Amazon QuickSight

Modify the Amazon Redshift security group to allow an Amazon QuickSight connection. For more information, see Authorizing Connections from Amazon QuickSight to Amazon Redshift Clusters.

After modifying the Amazon Redshift security group, go to Amazon QuickSight. Create a new analysis, and choose Amazon Redshift as the data source.

Enter the database connection details, validate the connection, and create the data source.

Choose the schema to be analyzed. In this case, choose spectrum_schema, and then choose the ecommerce_sales table.

Next, we add a custom field for Total Sales = Price*Quantity. In the drop-down list for the ecommerce_sales table, choose Edit analysis data sets.

On the next screen, choose Edit.

In the data prep screen, choose New Field. Add a new calculated field Total Sales $, which is the product of the Price and Quantity fields. Then choose Create. Save and visualize it.

Next, to visualize total sales figures by month, create a graph with Total Sales on the x-axis and Order Date formatted as month on the y-axis.

After you’ve finished, you can use Amazon QuickSight to add different columns from your Amazon Redshift tables and perform different types of visualizations. You can build operational dashboards that continuously monitor your transactional and analytical data. You can publish these dashboards and share them with others.

Final notes

Amazon QuickSight can also read data in Amazon S3 directly. However, with the method demonstrated in this post, you have the option to manipulate, filter, and combine data from multiple sources or Amazon Redshift tables before visualizing it in Amazon QuickSight.

In this example, we dealt with data being inserted, but triggers can be activated in response to INSERT, UPDATE, or DELETE statements.

Keep the following in mind:

  • Be careful when invoking a Lambda function from triggers on tables that experience high write traffic. This would result in a large number of calls to your Lambda function. Although calls to the lambda_async procedure are asynchronous, triggers are synchronous.
  • A statement that results in a large number of trigger activations does not wait for the call to the AWS Lambda function to complete. But it does wait for the triggers to complete before returning control to the client.
  • Similarly, you must account for Amazon Kinesis Data Firehose limits. By default, Kinesis Data Firehose is limited to a maximum of 5,000 records/second. For more information, see Monitoring Amazon Kinesis Data Firehose.

In certain cases, it may be optimal to use AWS Database Migration Service (AWS DMS) to capture data changes in Aurora and use Amazon S3 as a target. For example, AWS DMS might be a good option if you don’t need to transform data from Amazon Aurora. The method used in this post gives you the flexibility to transform data from Aurora using Lambda before sending it to Amazon S3. Additionally, the architecture has the benefits of being serverless, whereas AWS DMS requires an Amazon EC2 instance for replication.

For design considerations while using Redshift Spectrum, see Using Amazon Redshift Spectrum to Query External Data.

If you have questions or suggestions, please comment below.


Additional Reading

If you found this post useful, be sure to check out Capturing Data Changes in Amazon Aurora Using AWS Lambda and 10 Best Practices for Amazon Redshift Spectrum.


About the Authors

Re Alvarez-Parmar is a solutions architect for Amazon Web Services. He helps enterprises achieve success through technical guidance and thought leadership. In his spare time, he enjoys spending time with his two kids and exploring outdoors.

 

 

 

BitTorrent Inc. Emerges Victorious Following EU Trademark Dispute

Post Syndicated from Andy original https://torrentfreak.com/bittorrent-inc-emerges-victorious-following-eu-trademark-dispute-171213/

For anyone familiar with the BitTorrent brand, there can only be one company that springs to mind. BitTorrent Inc., the outfit behind uTorrent that still employs BitTorrent creator Bram Cohen, seems the logical choice, but not everything is straightforward.

Back in June 2003, a company called BitTorrent Marketing GmbH filed an application to register an EU trademark for the term ‘BitTorrent’ with the European Union Intellectual Property Office (EUIPO). The company hoped to exploit the trademark for a wide range of uses from marketing, advertising, retail, mail order and Internet sales, to film, television and video licensing plus “providing of memory space on the internet”.

The trademark application was published in July 2004 and registered in June 2006. However, in June 2011 BitTorrent Inc. filed an application for its revocation on the grounds that the trademark had not been “put to genuine use in the European Union in connection with the services concerned within a continuous period of five years.”

Shortly afterwards, the EUIPO notified BitTorrent Marketing GmbH that it had three months to submit evidence of the trademark’s use. After an application from the company, more time was given to present evidence and a deadline was set for November 21, 2011. Things did not go to plan, however.

On the very last day, BitTorrent Marketing GmbH responded to the request by fax, noting that a five-page letter had been sent along with 69 pages of additional evidence. But something went wrong, with the fax machine continually reporting errors. Several days later, the evidence arrived by mail, but that was technically too late.

In September 2013, BitTorrent Inc.’s application for the trademark to be revoked was upheld but in November 2013, BitTorrent Marketing GmbH (by now known as Hochmann Marketing GmbH) appealed against the decision to revoke.

Almost two years later in August 2015, an EUIPO appeal held that Hochmann “had submitted no relevant proof” before the specified deadline that the trademark had been in previous use. On this basis, the evidence could not be taken into account.

“[The appeal] therefore concluded that genuine use of the mark at issue had not been proven, and held that the mark must be revoked with effect from 24 June 2011,” EUIPO documentation reads.

However, Hochmann Marketing GmbH wasn’t about to give up, demanding that the decision be annulled and that EUIPO and BitTorrent Inc. should pay the costs. In response, EUIPO and BitTorrent Inc. demanded the opposite, that Hochmann’s action should be dismissed and they should pay the costs instead.

In its decision published yesterday, the EU General Court (Third Chamber) clearly sided with EUIPO and BitTorrent Inc.

“The [evidence] document clearly contains only statements that are not substantiated by any supporting evidence capable of adducing proof of the place, time, extent and nature of use of the mark at issue, especially because the evidence in question was submitted, in the present case, three days after the prescribed period expired,” the decision reads.

The decision also notes that the company was given an additional month to come up with evidence and then some – the evidence was actually due on a Saturday so the period was extended until Monday for the convenience of the company.

“Next, EUIPO had duly informed the applicant, by letter of 19 July 2011, that it was ‘required to submit the required evidence of use in reply to the request within three months of receipt of this communication’ and that ‘if no evidence of use [was] submitted within this period, the [EU] mark w[ould] be revoked’,” the decision reads, adding;

“That letter also included guidance on how to provide evidence in a timely manner. Consequently, the applicant knew not only what documents it must submit, but also what the consequences of late submission of evidence were.”

All things considered, the Court rejected Hochmann Marketing GmbH’s application, ultimately deciding that not enough evidence was produced and what did appear was too late. For that, the trademark remains revoked and Hochmann Marketing must cover EUIPO and BitTorrent Inc.’s legal costs.

This isn’t the first time that BitTorrent Inc. has taken on BitTorrent/Hochmann Marketing GmbH and won. In 2014, it took the company to court in the United States and walked away with a $2.2m damages award.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Screener Piracy Season Kicks Off With Louis C.K.’s ‘I Love You, Daddy’

Post Syndicated from Ernesto original https://torrentfreak.com/screener-piracy-season-kicks-off-with-louis-c-k-s-i-love-you-daddy-171211/

Towards the end of the year, movie screeners are sent out to industry insiders who cast their votes for the Oscars and other awards.

It’s a highly anticipated time for pirates who hope to get copies of the latest blockbusters early, which is traditionally what happens.

Last year the action started relatively late. It took until January before the first leak surfaced – Denzel Washington’s Fences – but more than a dozen made their way online soon after.

Today the first leak of the new screener season started to populate various pirate sites, Louis C.K.’s “I Love You, Daddy.” It was released by the infamous “Hive-CM8” group which also made headlines in previous years.

“I Love You, Daddy” was carefully chosen, according to a message posted in the release notes. Last month distributor The Orchard chose to cancel the film from its schedule after Louis C.K. was accused of sexual misconduct. With uncertainty surrounding the film’s release, “Hive-CM8” decided to get it out.

“We decided to let this one title go out this month, since it never made it to the cinema, and nobody knows if it ever will go to retail at all,” Hive-CM8 write in their NFO.

“Either way their is no perfect time to release it anyway, but we think it would be a waste to let a great Louis C.K. go unwatched and nobody can even see or buy it,” they add.

I Love You, Daddy

It is no surprise that the group put some thought into their decision. In 2015 they published several movies before their theatrical release, for which they later offered an apology, stating that this wasn’t acceptable.

Last year this stance was reiterated, noting that they would not leak any screeners before Christmas. Today’s release shows that this isn’t a golden rule, but it’s unlikely that they will push any big titles before they’re out in theaters.

“I Love You, Daddy” isn’t going to be seen in theaters anytime soon, but it might see an official release. This past weekend, news broke that Louis C.K. had bought back the rights from The Orchard and must pay back marketing costs, including a payment for the 12,000 screeners that were sent out.

Hive-CM8, meanwhile, suggest that they have more screeners in hand, although their collection isn’t yet complete.

“We are still missing some titles, anyone want to share for the collection? Yes we want to have them all if possible, we are collectors, we don’t want to release them all,” they write.

Finally, the group also has some disappointing news for Star Wars fans who are looking for an early copy of “The Last Jedi.” Hive-CM8 is not going to release it.

“Their will be no starwars from us, sorry wont happen,” they write.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Amazon QuickSight Update – Geospatial Visualization, Private VPC Access, and More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-quicksight-update-geospatial-visualization-private-vpc-access-and-more/

We don’t often recognize or celebrate anniversaries at AWS. With nearly 100 services on our list, we’d be eating cake and drinking champagne several times a week. While that might sound like fun, we’d rather spend our working hours listening to customers and innovating. With that said, Amazon QuickSight has now been generally available for a little over a year and I would like to give you a quick update!

QuickSight in Action
Today, tens of thousands of customers (from startups to enterprises, in industries as varied as transportation, legal, mining, and healthcare) are using QuickSight to analyze and report on their business data.

Here are a couple of examples:

Gemini provides legal evidence procurement for California attorneys who represent injured workers. They have gone from creating custom reports and running one-off queries to creating and sharing dynamic QuickSight dashboards with drill-downs and filtering. QuickSight is used to track sales pipeline, measure order throughput, and to locate bottlenecks in the order processing pipeline.

Jivochat provides a real-time messaging platform to connect visitors to website owners. QuickSight lets them create and share interactive dashboards while also providing access to the underlying datasets. This has allowed them to move beyond sharing static spreadsheets, ensuring that everyone is looking at the same data and is empowered to make timely decisions based on current information.

Transfix is a tech-powered freight marketplace that matches loads and increases visibility into logistics for Fortune 500 shippers in retail, food and beverage, manufacturing, and other industries. QuickSight has made analytics accessible to both BI engineers and non-technical business users. They scrutinize key business and operational metrics including shipping routes, carrier efficiency, and process automation.

Looking Back / Looking Ahead
The feedback on QuickSight has been incredibly helpful. Customers tell us that their employees are using QuickSight to connect to their data, perform analytics, and make high-velocity, data-driven decisions, all without setting up or running their own BI infrastructure. We love all of the feedback that we get, and use it to drive our roadmap, leading to the introduction of over 40 new features in just a year. Here’s a summary:

Looking forward, we are watching an interesting trend develop within our customer base. As these customers take a close look at how they analyze and report on data, they are realizing that a serverless approach offers some tangible benefits. They use Amazon Simple Storage Service (S3) as a data lake and query it using a combination of QuickSight and Amazon Athena, giving them agility and flexibility without static infrastructure. They also make great use of QuickSight’s dashboards feature, monitoring business results and operational metrics, then sharing their insights with hundreds of users. You can read Building a Serverless Analytics Solution for Cleaner Cities and review Serverless Big Data Analytics using Amazon Athena and Amazon QuickSight if you are interested in this approach.
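
For readers who want to try the Athena half of that serverless pattern, here is a rough sketch using the AWS SDK for Python (Boto3). The bucket, database, table, and query below are hypothetical stand-ins, not names from this post; once a table like this exists, it can be registered as an Athena data source for QuickSight through the console.

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical names: a Glue/Athena database over an S3 data lake, and an
# S3 prefix where Athena writes query results.
QUERY = """
    SELECT city, COUNT(*) AS rides
    FROM taxi_trips
    GROUP BY city
    ORDER BY rides DESC
    LIMIT 10
"""

def run_query():
    # Start the query; Athena writes the result set to the output location.
    execution = athena.start_query_execution(
        QueryString=QUERY,
        QueryExecutionContext={"Database": "datalake_db"},
        ResultConfiguration={"OutputLocation": "s3://my-datalake-query-results/"},
    )
    query_id = execution["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)
        status = state["QueryExecution"]["Status"]["State"]
        if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if status == "SUCCEEDED":
        return athena.get_query_results(QueryExecutionId=query_id)
    raise RuntimeError(f"Query ended in state {status}")

if __name__ == "__main__":
    for row in run_query()["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```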

New Features and Enhancements
We’re still doing our best to listen and to learn, and to make sure that QuickSight continues to meet your needs. I’m happy to announce that we are making seven big additions today:

Geospatial Visualization – You can now create geospatial visuals on geographical data sets.

Private VPC Access – You can now sign up to access a preview of a new feature that allows you to securely connect to data within VPCs or on-premises, without the need for public endpoints.

Flat Table Support – In addition to pivot tables, you can now use flat tables for tabular reporting. To learn more, read about Using Tabular Reports.

Calculated SPICE Fields – You can now perform run-time calculations on SPICE data as part of your analysis. Read Adding a Calculated Field to an Analysis for more information.

Wide Table Support – You can now use tables with up to 1000 columns.

Other Buckets – You can summarize the long tail of high-cardinality data into buckets, as described in Working with Visual Types in Amazon QuickSight.

HIPAA Compliance – You can now run HIPAA-compliant workloads on QuickSight.

Geospatial Visualization
Everyone seems to want this feature! You can now take data that contains a geographic identifier (country, city, state, or zip code) and create beautiful visualizations with just a few clicks. QuickSight will geocode the identifier that you supply, and can also accept lat/long map coordinates. You can use this feature to visualize sales by state, map stores to shipping destinations, and so forth. Here’s a sample visualization:

To learn more about this feature, read Using Geospatial Charts (Maps), and Adding Geospatial Data.

Private VPC Access Preview
If you have data in AWS (perhaps in Amazon Redshift, Amazon Relational Database Service (RDS), or on EC2) or on-premises in Teradata or SQL Server on servers without public connectivity, this feature is for you. Private VPC Access for QuickSight uses an Elastic Network Interface (ENI) for secure, private communication with data sources in a VPC. It also allows you to use AWS Direct Connect to create a secure, private link with your on-premises resources. Here’s what it looks like:

If you are ready to join the preview, you can sign up today.

Jeff;

 

The Truth Behind the “Kodi Boxes Can Kill Their Owners” Headlines

Post Syndicated from Andy original https://torrentfreak.com/the-truth-behind-the-kodi-boxes-can-kill-their-owners-headlines-171118/

Another week, another batch of ‘Kodi Box Armageddon’ stories. This time it hasn’t been directly about the content they can provide but the physical risks they pose to their owners.

After being primed in advance, the usual British tabloids jumped into action early Thursday, noting that following tests carried out on “illicit streaming devices” (aka Android set-top devices), 100% of them failed to meet UK national electrical safety regulations.

The tests were carried out by Electrical Safety First, a charity which was prompted into action by anti-piracy outfit Federation Against Copyright Theft.

“A series of product safety tests on popular illicit streaming devices entering the UK have found that 100% fail to meet national electrical safety regulations,” a FACT statement reads.

“The news is all the more significant as the Intellectual Property Office (IPO) estimates that more than one million of these illegal devices have been sold in the UK in the last two years, representing a significant risk to the general public.”

After reading many sensational headlines stating that “Kodi Boxes Might Kill Their Owners”, please excuse us for groaning. This story has absolutely nothing – NOTHING – to do with Kodi or any other piece of software. Quite obviously, software doesn’t catch fire.

So, suspecting that there might be more to this than meets the eye, we decided to look beyond the press releases into the actual Electrical Safety First (ESF) report. While we have no doubt that ESF is extremely competent in its field (it is, no question), the front page of its report is disappointing.

Despite the items sent for testing being straightforward Android-based media players, the ESF report clearly describes itself as examining “illicit streaming devices”. It’s terminology that doesn’t describe the subject matter from an electrical, safety or technical perspective but is pretty convenient for FACT clients Sky and the Premier League.

Nevertheless, the full picture reveals rather more than most of the headlines suggest.

First of all, it’s important to know that ESF tested just nine devices out of the million or so allegedly sold in the UK during the past two years. Even more importantly, every single one of those devices was supplied to ESF by FACT.

Now, we’re not suggesting they were hand-picked to fail but it’s clear that the samples weren’t provided from a neutral source. Also, as we’ll learn shortly, it’s possible to determine in advance if an item will fail to meet UK standards simply by looking at its packaging and casing.

But perhaps even more intriguing is that the electrical testing carried out by ESF related primarily not to the set-top boxes themselves, but to their power supplies. ESF say so themselves.

“The product review relates primarily to the switched mode power supply units for the connection to the mains supply, which were supplied with the devices, to identify any potential risks to consumers such as electric shocks, heating and resistance to fire,” ESF reports.

The set-top boxes themselves were only assessed “in terms of any faults in the marking, warnings and instructions,” the group adds.

So, what we’re really talking about here isn’t dangerous “illicit streaming device” set-top boxes, but the power supply units that come with them. It might seem like a small detail but we’ll come to the vast importance of this later on.

Firstly, however, we should note that none of the equipment supplied by FACT complied with Schedule 1 of the Electrical Equipment (Safety) Regulations 1994. This means that they failed to have the “Conformité Européene” or CE logo present. That’s unacceptable.

In addition, none of them lived up to the requirements of Schedule 3 of the Electrical Equipment (Safety) Regulations 1994 either, which in part requires the manufacturer’s brand name or trademark to be “clearly printed on the electrical equipment or, where that is not possible, on the packaging.” (That’s how you can tell they’ll definitely fail UK standards, before sending them for testing)

Also, none of the samples were supplied with “sufficient safety or warning information to ensure the safe and correct use, assembly, installation or maintenance of the equipment.” This represents ‘a technical breach’ of the regulations, ESF reports.

Finally, several of the samples were considered to be a potential risk to their users, either via electric shock and/or fire. That’s an important finding and people who suspect they have such devices at home should definitely take note.

However, the really important point isn’t mentioned in the tabloids, probably since it distracts from the “Kodi Armageddon” narrative which underlies the whole study and subsequent reports.

ESF says that one of the key issues is that the set-top boxes come unbranded, something which breaches safety regulations while making it difficult for consumers to assess whether they’re buying a quality product. Crucially, this is not exclusively a set-top box problem, it is much, MUCH bigger.

“Issues with power supply units or unbranded and counterfeit chargers go beyond illicit streaming devices. In the last year, issues have been reported with other consumer electrical devices, such as laptop chargers and counterfeit phone chargers,” the same ESF report reveals.

“The total annual online sales of mains plug-in chargers is estimated to be in the region of 1.8 million and according to Electrical Safety First, it is likely that most of these sales involve cheap, unbranded chargers.”

So, we looked into this issue of problem power supplies and chargers generally, to see where this report fits into the bigger picture. It transpires it’s a massive problem, all over the UK, across a wide range of products. In fact, Trading Standards reports that 99% of non-genuine Apple chargers bought online “fail a basic safety test”.

But buying from reputable High Street retailers doesn’t help either.

During the past year, Poundworld was fined for selling – wait for it – 72,000 dangerous chargers. Home Bargains was also fined for selling “thousands” of power adaptors that fail to meet UK standards.

“All samples provided failed to comply with Electrical Equipment Safety Regulations and were not marked with the manufacturer’s name,” Trading Standards reports.

That sounds familiar.

So, there you have it. Far from this being an isolated “Kodi Box Crisis” as some have proclaimed, this is a broad issue affecting imported electrical items in general. On this basis, one can’t help but think the tabloids missed a trick here. Think of the power of this headline:

ALL UNBRANDED ELECTRICAL EQUIPMENT CAN KILL, DISCONNECT EVERYTHING

or, alternatively:

PIRATES URGED TO SWITCH TO BRANDED AMAZON FIRESTICKS, SAFER FOR KODI

Perhaps not….

The ESF report can be found here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Event-Driven Computing with Amazon SNS and AWS Compute, Storage, Database, and Networking Services

Post Syndicated from Christie Gifrin original https://aws.amazon.com/blogs/compute/event-driven-computing-with-amazon-sns-compute-storage-database-and-networking-services/

Contributed by Otavio Ferreira, Manager, Software Development, AWS Messaging

Like other developers around the world, you may be tackling increasingly complex business problems. A key success factor, in that case, is the ability to break down a large project scope into smaller, more manageable components. A service-oriented architecture guides you toward designing systems as a collection of loosely coupled, independently scaled, and highly reusable services. Microservices take this even further. To improve performance and scalability, they promote fine-grained interfaces and lightweight protocols.

However, the communication among isolated microservices can be challenging. Services are often deployed onto independent servers and don’t share any compute or storage resources. Also, you should avoid hard dependencies among microservices, to preserve maintainability and reusability.

If you apply the pub/sub design pattern, you can effortlessly decouple and independently scale out your microservices and serverless architectures. A pub/sub messaging service, such as Amazon SNS, promotes event-driven computing that statically decouples event publishers from subscribers, while dynamically allowing for the exchange of messages between them. An event-driven architecture also introduces the responsiveness needed to deal with complex problems, which are often unpredictable and asynchronous.

What is event-driven computing?

Given the context of microservices, event-driven computing is a model in which subscriber services automatically perform work in response to events triggered by publisher services. This paradigm can be applied to automate workflows while decoupling the services that collectively and independently work to fulfil these workflows. Amazon SNS is an event-driven computing hub, in the AWS Cloud, that has native integration with several AWS publisher and subscriber services.
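
As a point of reference, here is a minimal sketch of that pub/sub primitive using the AWS SDK for Python (Boto3). The topic name, queue ARN, and message shape are hypothetical; a Lambda function or HTTPS endpoint could be subscribed the same way with a different protocol.

```python
import json
import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Create (or look up) the topic that publisher services will send events to.
# create_topic is idempotent and returns the existing ARN if the topic exists.
topic_arn = sns.create_topic(Name="order-events")["TopicArn"]

# Subscribe a consumer. Here an SQS queue ARN is assumed as the endpoint.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:order-worker-queue",
)

# A publisher service emits an event without knowing who consumes it.
sns.publish(
    TopicArn=topic_arn,
    Subject="OrderCreated",
    Message=json.dumps({"orderId": "o-123", "total": 42.50}),
)
```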

Which AWS services publish events to SNS natively?

Several AWS services have been integrated as SNS publishers and, therefore, can natively trigger event-driven computing for a variety of use cases. In this post, I specifically cover AWS compute, storage, database, and networking services, as depicted below.

Compute services

  • Auto Scaling: Helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You can configure Auto Scaling lifecycle hooks to trigger events, as Auto Scaling resizes your EC2 cluster. As an example, you may want to warm up the local cache store on newly launched EC2 instances, and also download log files from other EC2 instances that are about to be terminated. To make this happen, set an SNS topic as your Auto Scaling group’s notification target, then subscribe two Lambda functions to this SNS topic. The first function is responsible for handling scale-out events (to warm up cache upon provisioning), whereas the second is in charge of handling scale-in events (to download logs upon termination). A minimal configuration sketch follows this list.

  • AWS Elastic Beanstalk: An easy-to-use service for deploying and scaling web applications and web services developed in a number of programming languages. You can configure event notifications for your Elastic Beanstalk environment so that notable events can be automatically published to an SNS topic, then pushed to topic subscribers. As an example, you may use this event-driven architecture to coordinate your continuous integration pipeline (such as Jenkins CI). That way, whenever an environment is created, Elastic Beanstalk publishes this event to an SNS topic, which triggers a subscribing Lambda function, which then kicks off a CI job against your newly created Elastic Beanstalk environment.

  • Elastic Load Balancing: Automatically distributes incoming application traffic across Amazon EC2 instances, containers, or other resources identified by IP addresses. You can configure CloudWatch alarms on Elastic Load Balancing metrics, to automate the handling of events derived from Classic Load Balancers. As an example, you may leverage this event-driven design to automate latency profiling in an Amazon ECS cluster behind a Classic Load Balancer. In this example, whenever your ECS cluster breaches your load balancer latency threshold, an event is posted by CloudWatch to an SNS topic, which then triggers a subscribing Lambda function. This function runs a task on your ECS cluster to trigger a latency profiling tool, hosted on the cluster itself. This can enhance your latency troubleshooting exercise by making it timely.
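
The Auto Scaling integration described above needs only a notification configuration pointing at an SNS topic. Here is a minimal sketch using the AWS SDK for Python (Boto3); the group name and topic ARN are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Auto Scaling publishes launch and terminate events to the topic; Lambda
# functions subscribed to that topic can then warm caches or collect logs,
# as described in the Auto Scaling item above.
autoscaling.put_notification_configuration(
    AutoScalingGroupName="web-tier-asg",
    TopicARN="arn:aws:sns:us-east-1:123456789012:asg-lifecycle-events",
    NotificationTypes=[
        "autoscaling:EC2_INSTANCE_LAUNCH",
        "autoscaling:EC2_INSTANCE_TERMINATE",
    ],
)
```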

Storage services

  • Amazon S3: Object storage built to store and retrieve any amount of data. You can enable S3 event notifications, and automatically get them posted to SNS topics, to automate a variety of workflows. For instance, imagine that you have an S3 bucket to store incoming resumes from candidates, and a fleet of EC2 instances to encode these resumes from their original format (such as Word or text) into a portable format (such as PDF). In this example, whenever new files are uploaded to your input bucket, S3 publishes these events to an SNS topic, which in turn pushes these messages into subscribing SQS queues. Then, encoding workers running on EC2 instances poll these messages from the SQS queues; retrieve the original files from the input S3 bucket; encode them into PDF; and finally store them in an output S3 bucket. A minimal sketch of the bucket-to-topic wiring follows this list.

  • Amazon EFS: Provides simple and scalable file storage, for use with Amazon EC2 instances, in the AWS Cloud. You can configure CloudWatch alarms on EFS metrics, to automate the management of your EFS systems. For example, consider a highly parallelized genomics analysis application that runs against an EFS system. By default, this file system is instantiated on the “General Purpose” performance mode. Although this performance mode allows for lower latency, it might eventually impose a scaling bottleneck. Therefore, you may leverage an event-driven design to handle it automatically. Basically, as soon as the EFS metric “Percent I/O Limit” breaches 95%, CloudWatch could post this event to an SNS topic, which in turn would push this message into a subscribing Lambda function. This function automatically creates a new file system, this time on the “Max I/O” performance mode, then switches the genomics analysis application to this new file system. As a result, your application starts experiencing higher I/O throughput rates.

  • Amazon Glacier: A secure, durable, and low-cost cloud storage service for data archiving and long-term backup. You can set a notification configuration on an Amazon Glacier vault so that when a job completes, a message is published to an SNS topic. Retrieving an archive from Amazon Glacier is a two-step asynchronous operation, in which you first initiate a job, and then download the output after the job completes. Therefore, SNS helps you eliminate polling your Amazon Glacier vault to check whether your job has been completed, or not. As usual, you may subscribe SQS queues, Lambda functions, and HTTP endpoints to your SNS topic, to be notified when your Amazon Glacier job is done.

  • AWS Snowball: A petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data. You can leverage Snowball notifications to automate workflows related to importing data into and exporting data from AWS. More specifically, whenever your Snowball job status changes, Snowball can publish this event to an SNS topic, which in turn can broadcast the event to all its subscribers. As an example, imagine a Geographic Information System (GIS) that distributes high-resolution satellite images to users via Web browser. In this example, the GIS vendor could capture up to 80 TB of satellite images; create a Snowball job to import these files from an on-premises system to an S3 bucket; and provide an SNS topic ARN to be notified upon job status changes in Snowball. After Snowball changes the job status from “Importing” to “Completed”, Snowball publishes this event to the specified SNS topic, which delivers this message to a subscribing Lambda function, which finally creates a CloudFront web distribution for the target S3 bucket, to serve the images to end users.
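
As referenced in the Amazon S3 item above, wiring a bucket to an SNS topic is a single API call. A minimal Boto3 sketch follows; the bucket name, topic ARN, and suffix filter are hypothetical, and the topic’s access policy must already allow S3 to publish to it, otherwise the call is rejected.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Publish an event to the topic whenever a new object lands in the bucket,
# filtered here to Word documents (the incoming-resume scenario above).
s3.put_bucket_notification_configuration(
    Bucket="resume-inbox",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:resume-uploaded",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "suffix", "Value": ".docx"},
                        ]
                    }
                },
            }
        ]
    },
)
```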

Database services

  • Amazon RDS: Makes it easy to set up, operate, and scale a relational database in the cloud. RDS leverages SNS to broadcast notifications when RDS events occur. As usual, these notifications can be delivered via any protocol supported by SNS, including SQS queues, Lambda functions, and HTTP endpoints. As an example, imagine that you own a social network website that has experienced organic growth, and needs to scale its compute and database resources on demand. In this case, you could provide an SNS topic to listen to RDS DB instance events. When the “Low Storage” event is published to the topic, SNS pushes this event to a subscribing Lambda function, which in turn leverages the RDS API to increase the storage capacity allocated to your DB instance. The provisioning itself takes place within the specified DB maintenance window. A minimal event-subscription sketch follows this list.

  • Amazon ElastiCache: A web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. ElastiCache can publish messages using Amazon SNS when significant events happen on your cache cluster. This feature can be used to refresh the list of servers on client machines connected to individual cache node endpoints of a cache cluster. For instance, an ecommerce website fetches product details from a cache cluster, with the goal of offloading a relational database and speeding up page load times. Ideally, you want to make sure that each web server always has an updated list of cache servers to which to connect. To automate this node discovery process, you can get your ElastiCache cluster to publish events to an SNS topic. Thus, when ElastiCache event “AddCacheNodeComplete” is published, your topic then pushes this event to all subscribing HTTP endpoints that serve your ecommerce website, so that these HTTP servers can update their list of cache nodes.

  • Amazon Redshift: A fully managed data warehouse that makes it simple to analyze data using standard SQL and BI (Business Intelligence) tools. Amazon Redshift uses SNS to broadcast relevant events so that data warehouse workflows can be automated. As an example, imagine a news website that sends clickstream data to a Kinesis Firehose stream, which then loads the data into Amazon Redshift, so that popular news and reading preferences might be surfaced on a BI tool. At some point though, this Amazon Redshift cluster might need to be resized, and the cluster enters a read-only mode. Hence, this Amazon Redshift event is published to an SNS topic, which delivers this event to a subscribing Lambda function, which finally deletes the corresponding Kinesis Firehose delivery stream, so that clickstream data uploads can be put on hold. At a later point, after Amazon Redshift publishes the event that the maintenance window has been closed, SNS notifies a subscribing Lambda function accordingly, so that this function can re-create the Kinesis Firehose delivery stream, and resume clickstream data uploads to Amazon Redshift.

  • AWS DMS: Helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. DMS also uses SNS to provide notifications when DMS events occur, which can automate database migration workflows. As an example, you might create data replication tasks to migrate an on-premises MS SQL database, composed of multiple tables, to MySQL. Thus, if replication tasks fail due to incompatible data encoding in the source tables, these events can be published to an SNS topic, which can push these messages into a subscribing SQS queue. Then, encoders running on EC2 can poll these messages from the SQS queue, encode the source tables into a compatible character set, and restart the corresponding replication tasks in DMS. This is an event-driven approach to a self-healing database migration process.
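
As referenced in the Amazon RDS item above, the “Low Storage” workflow starts with an RDS event subscription that routes matching events to an SNS topic. A minimal Boto3 sketch follows; the subscription name, topic ARN, and instance identifier are hypothetical, and the event category string is an assumption worth verifying against the RDS event category list.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# RDS publishes matching events (e.g. low storage) to the topic; a Lambda
# function subscribed to the topic can then grow the allocated storage
# through the RDS API.
rds.create_event_subscription(
    SubscriptionName="social-db-storage-events",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:rds-events",
    SourceType="db-instance",
    SourceIds=["social-network-prod"],
    EventCategories=["low storage"],
    Enabled=True,
)
```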

Networking services

  • Amazon Route 53: A highly available and scalable cloud-based DNS (Domain Name System). Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. You can set CloudWatch alarms and get automated Amazon SNS notifications when the status of your Route 53 health check changes. As an example, imagine an online payment gateway that reports the health of its platform to merchants worldwide, via a status page. This page is hosted on EC2 and fetches platform health data from DynamoDB. In this case, you could configure a CloudWatch alarm for your Route 53 health check, so that when the alarm threshold is breached, and the payment gateway is no longer considered healthy, then CloudWatch publishes this event to an SNS topic, which pushes this message to a subscribing Lambda function, which finally updates the DynamoDB table that populates the status page. This event-driven approach avoids any kind of manual update to the status page visited by merchants. A minimal alarm-configuration sketch follows this list.

  • AWS Direct Connect (AWS DX): Makes it easy to establish a dedicated network connection from your premises to AWS, which can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. You can monitor physical DX connections using CloudWatch alarms, and send SNS messages when alarms change their status. As an example, when a DX connection state shifts to 0 (zero), indicating that the connection is down, this event can be published to an SNS topic, which can fan out this message to impacted servers through HTTP endpoints, so that they might reroute their traffic through a different connection instead. This is an event-driven approach to connectivity resilience.
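
As referenced in the Amazon Route 53 item above, the status-page workflow hinges on a CloudWatch alarm over the health check’s HealthCheckStatus metric, with an SNS topic as the alarm action. A minimal Boto3 sketch follows; the health check ID and topic ARN are hypothetical.

```python
import boto3

# Route 53 health-check metrics are reported in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# HealthCheckStatus is 1 while healthy and 0 while unhealthy, so a Minimum
# below 1 over the period means the payment gateway failed a check. When the
# alarm fires, CloudWatch publishes to the topic and a subscribed Lambda
# function can update the status-page table.
cloudwatch.put_metric_alarm(
    AlarmName="payment-gateway-unhealthy",
    Namespace="AWS/Route53",
    MetricName="HealthCheckStatus",
    Dimensions=[
        {"Name": "HealthCheckId", "Value": "abcdef01-2345-6789-abcd-ef0123456789"},
    ],
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:platform-health-events"],
)
```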

More event-driven computing on AWS

In addition to SNS, event-driven computing is also addressed by Amazon CloudWatch Events, which delivers a near real-time stream of system events that describe changes in AWS resources. With CloudWatch Events, you can route each event type to one or more targets.

Many AWS services publish events to CloudWatch. As an example, you can get CloudWatch Events to capture events on your ETL (Extract, Transform, Load) jobs running on AWS Glue and push failed ones to an SQS queue, so that you can retry them later.

Conclusion

Amazon SNS is a pub/sub messaging service that can be used as an event-driven computing hub for AWS customers worldwide. By capturing events natively triggered by AWS services, such as EC2, S3 and RDS, you can automate and optimize all kinds of workflows, including scaling, testing, encoding, profiling, broadcasting, discovery, failover, and much more. Business use cases presented in this post ranged from recruiting websites to scientific research, geographic systems, social networks, retail websites, and news portals.

Start now by visiting Amazon SNS in the AWS Management Console, or by trying the AWS 10-Minute Tutorial, Send Fan-out Event Notifications with Amazon SNS and Amazon SQS.

 

Sky: People Can’t Pirate Live Soccer in the UK Anymore

Post Syndicated from Andy original https://torrentfreak.com/sky-people-cant-pirate-live-soccer-in-the-uk-anymore-171108/

The commotion over the set-top box streaming phenomenon is showing no signs of dying down and if day one at the Cable and Satellite Broadcasting Association of Asia (CASBAA) Conference 2017 was anything to go by, things are only heating up.

Held at Studio City in Macau, the conference has a strong anti-piracy element and was opened by Joe Welch, CASBAA Board Chairman and SVP Public Affairs Asia, 21st Century Fox. He began Tuesday by noting the important recent launch of a brand new anti-piracy initiative.

“CASBAA recently launched the Coalition Against Piracy, funded by 18 of the region’s content players and distribution partners,” he said.

TF reported on the formation of the coalition mid-October. It includes heavyweights such as Disney, Fox, HBO, NBCUniversal and BBC Worldwide, and will have a strong focus on the illicit set-top box market.

Illegal streaming devices (or ISDs, as the industry calls them), were directly addressed in a segment yesterday afternoon titled Face To Face. Led by Dr. Ros Lynch, Director of Copyright & IP Enforcement at the UK Intellectual Property Office, the session detailed the “onslaught of online piracy” and the rise of ISDs that is apparently “shaking the market”.

Given the apparent gravity of those statements, the following will probably come as a surprise. According to Lynch, the UK IPO sought the opinion of UK-based rightsholders about the pirate box phenomenon a while back after being informed of their popularity in the East. The response was that pirate boxes weren’t an issue. It didn’t take long, however, for things to blow up.

“The UKIPO provides intelligence and evidence to industry and the Police Intellectual Property Crime Unit (PIPCU) in London who then take enforcement actions,” Lynch explained.

“We first heard about the issues with ISDs from [broadcaster] TVB in Hong Kong and we then consulted the UK rights holders who responded that it wasn’t a problem. Two years later the issue just exploded.”

The evidence of that in the UK isn’t difficult to find. In addition to the millions of devices deployed with both free Kodi addon-based and subscription-based systems, the app market has bloomed too, offering free or nearly free content to all.

This caught the eye of the Premier League who this year obtained two pioneering injunctions (1,2) to tackle live streams of football games. Streams are blocked by local ISPs in real-time, making illicit online viewing a more painful experience than it ever has been. No doubt progress has been made on this front, with thousands of streams blocked, but according to broadcaster Sky, the results are unprecedented.

“Site-blocking has moved the goalposts significantly,” said Matthew Hibbert, head of litigation at Sky UK.

“In the UK you cannot watch pirated live Premier League content anymore,” he said.

While progress has been good, the statement is overly enthusiastic. TF sources have been monitoring the availability of pirate streams on around a dozen illicit sites and services every Saturday (when it is actually illegal to broadcast matches in the UK) and service has been steady on around half of them and intermittent at worst on the rest.

There are hundreds of other platforms available so while many are definitely affected by Premier League blocking, it’s safe to assume that live football piracy hasn’t been wiped out. Nevertheless, it would be wrong to suggest that no progress has been made, in this and other related areas.

Kevin Plumb, Director of Legal Services at The Premier League, said that pubs showing football from illegal streams had also massively dwindled in numbers.

“In the past 18 months the illegal broadcasting of live Premier League matches in pubs in the UK has been decimated,” he said.

This result is almost certainly down to prosecutions taken in tandem with the Federation Against Copyright Theft (FACT), that have seen several landlords landed with large fines. Indeed, both sides of the market have been tackled, with both licensed premises and IPTV device sellers being targeted.

“The most successful thing we’ve done to combat piracy has been to undertake criminal prosecutions against ISD piracy,” said FACT chief Kieron Sharp yesterday. “Everyone is pleading guilty to these offenses.”

Most if not all FACT-led prosecutions target device and subscription sellers under fraud legislation, but that could change in the future, Lynch of the Intellectual Property Office said.

“While the UK works to update its legislation, we can’t wait for the new legislation to take enforcement actions and we rely heavily on ‘conspiracy to defraud’ charges, and have successfully prosecuted a number of ISD retailers,” she said.

Finally, information provided yesterday by networking company Cisco sheds light on what it costs to run a subscription-based pirate IPTV operation.

Director of Intelligence & Security Operations Avigail Gutman said a pirate IPTV server offering 1,000 channels to around 1,000 subscribers can cost as little as 2,000 euros per month to run but can generate 12,000 euros in revenue during the same period.

“In April of 2017, ten major paid TV and content providers had relinquished 3.09 million euros per month to 285 ISD-based streaming pirate syndicates,” she said.

There’s little doubt that IPTV piracy, both paid and free, is here to stay. The big question is how it will be tackled short and long-term and whether any changes in legislation will have any unintended knock-on effects.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Amazon Redshift Dense Compute (DC2) Nodes Deliver Twice the Performance as DC1 at the Same Price

Post Syndicated from Quaseer Mujawar original https://aws.amazon.com/blogs/big-data/amazon-redshift-dense-compute-dc2-nodes-deliver-twice-the-performance-as-dc1-at-the-same-price/

Amazon Redshift makes analyzing exabyte-scale data fast, simple, and cost-effective. It delivers advanced data warehousing capabilities, including parallel execution, compressed columnar storage, and end-to-end encryption as a fully managed service, for less than $1,000/TB/year. With Amazon Redshift Spectrum, you can run SQL queries directly against exabytes of unstructured data in Amazon S3 for $5/TB scanned.

Today, we are making our Dense Compute (DC) family faster and more cost-effective with new second-generation Dense Compute (DC2) nodes at the same price as our previous generation DC1. DC2 is designed for demanding data warehousing workloads that require low latency and high throughput. DC2 features powerful Intel E5-2686 v4 (Broadwell) CPUs, fast DDR4 memory, and NVMe-based solid state disks.

We’ve tuned Amazon Redshift to take advantage of the better CPU, network, and disk on DC2 nodes, providing up to twice the performance of DC1 at the same price. Our DC2.8xlarge instances now provide twice the memory per slice of data and an optimized storage layout with 30 percent better storage utilization.

Customer successes

Several flagship customers, ranging from fast-growing startups to large Fortune 100 companies, previewed the new DC2 node type. In their tests, DC2 provided up to twice the performance of DC1. Our preview customers saw faster ETL (extract, transform, and load) jobs, higher query throughput, better concurrency, faster reports, and shorter time to insights, all at the same cost as DC1. DC2.8xlarge customers also noted that their databases used up to 30 percent less disk space due to our optimized storage format, reducing their costs.

4Cite Marketing, one of America’s fastest growing private companies, uses Amazon Redshift to analyze customer data and determine personalized product recommendations for retailers. “Amazon Redshift’s new DC2 node is giving us a 100 percent performance increase, allowing us to provide faster insights for our retailers, more cost-effectively, to drive incremental revenue,” said Jim Finnerty, 4Cite’s senior vice president of product.

BrandVerity, a Seattle-based brand protection and compliance‎ company, provides solutions to monitor, detect, and mitigate online brand, trademark, and compliance abuse. “We saw a 70 percent performance boost with the DC2 nodes for running Redshift Spectrum queries. As a result, we can analyze far more data for our customers and deliver results much faster,” said Hyung-Joon Kim, principal software engineer at BrandVerity.

“Amazon Redshift is at the core of our operations and our marketing automation tools,” said Jarno Kartela, head of analytics and chief data scientist at DNA Plc, one of the leading Finnish telecommunications groups and Finland’s largest cable operator and pay TV provider. “We saw a 52 percent performance gain in moving to Amazon Redshift’s DC2 nodes. We can now run queries in half the time, allowing us to provide more analytics power and reduce time-to-insight for our analytics and marketing automation users.”

You can read about their experiences on our Customer Success page.

Get started

You can try the new node type using our getting started guide. Just choose dc2.large or dc2.8xlarge in the Amazon Redshift console:

If you have a DC1.large Amazon Redshift cluster, you can restore to a new DC2.large cluster using an existing snapshot. To migrate from DS2.xlarge, DS2.8xlarge, or DC1.8xlarge Amazon Redshift clusters, you can use the resize operation to move data to your new DC2 cluster. For more information, see Clusters and Nodes in Amazon Redshift.
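
As a rough illustration of the snapshot-restore path, here is a sketch using the AWS SDK for Python (Boto3). The cluster and snapshot identifiers are hypothetical, and the example assumes the snapshot was taken from a dc1.large cluster, which is the case the paragraph above describes.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Restore the final snapshot of a dc1.large cluster into a new dc2.large
# cluster. Clusters on other node types would use the resize operation
# instead, as noted above.
redshift.restore_from_cluster_snapshot(
    ClusterIdentifier="analytics-dc2",
    SnapshotIdentifier="analytics-dc1-final-snapshot",
    NodeType="dc2.large",
)
```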

To get the latest Amazon Redshift feature announcements, check out our What’s New page, and subscribe to the RSS feed.

Popcorn Time Creator Readies BitTorrent & Blockchain-Powered Video Platform

Post Syndicated from Andy original https://torrentfreak.com/popcorn-time-creator-readies-bittorrent-blockchain-powered-youtube-competitor-171012/

Without a doubt, YouTube is one of the most important websites available on the Internet today.

Its massive archive of videos brings pleasure to millions on a daily basis but its centralized nature means that owner Google always exercises control.

Over the years, people have looked to decentralize the YouTube concept and the latest project hoping to shake up the market has a particularly interesting player onboard.

Until 2015, only insiders knew that Argentinian designer Federico Abad was actually ‘Sebastian’, the shadowy figure behind notorious content sharing platform Popcorn Time.

Now he’s part of the team behind Flixxo, a BitTorrent and blockchain-powered startup hoping to wrestle a share of the video market from YouTube. Here’s how the team, which features blockchain startup RSK Labs, hope things will play out.

The Flixxo network will have no centralized storage of data, eliminating the need for expensive hosting along with associated costs. Instead, transfers will take place between peers using BitTorrent, meaning video content will be stored on the machines of Flixxo users. In practice, the content will be downloaded and uploaded in much the same way as users do on The Pirate Bay or indeed Abad’s baby, Popcorn Time.

However, there’s a twist to the system that envisions content creators, content consumers, and network participants (seeders) making revenue from their efforts.

At the heart of the Flixxo system are digital tokens (think virtual currency), called Flixx. These Flixx ‘coins’, which will go on sale in 12 days, can be used to buy access to content. Creators can also opt to pay consumers when those people help to distribute their content to others.

“Free from structural costs, producers can share the earnings from their content with the network that supports them,” the team explains.

“This way you get paid for helping us improve Flixxo, and you earn credits (in the form of digital tokens called Flixx) for watching higher quality content. Having no intermediaries means that the price you pay for watching the content that you actually want to watch is lower and fairer.”

The Flixxo team

In addition to earning tokens from helping to distribute content, people in the Flixxo ecosystem can also earn currency by watching sponsored content, i.e. advertisements. While in a traditional system adverts are often considered a nuisance, Flixx tokens have real value, with a promise that users will be able to trade their Flixx not only for videos, but also for tangible and semi-tangible goods.

“Use your Flixx to reward the producers you follow, encouraging them to create more awesome content. Or keep your Flixx in your wallet and use them to buy a movie ticket, a pair of shoes from an online retailer, a chest of coins in your favourite game or even convert them to old-fashioned cash or up-and-coming digital assets, like Bitcoin,” the team explains.

The Flixxo team have big plans. After foundation in early 2016, the second quarter of 2017 saw the completion of a functional alpha release. In a little under two weeks, the project will begin its token generation event, with new offices in Los Angeles planned for the first half of 2018 alongside a premiere of the Flixxo platform.

“A total of 1,000,000,000 (one billion) Flixx tokens will be issued. A maximum of 300,000,000 (three hundred million) tokens will be sold. Some of these tokens (not more than 33% or 100,000,000 Flixx) may be sold with anticipation of the token allocation event to strategic investors,” Flixxo states.

Like all content platforms, Flixxo will live or die by the quality of the content it provides and whether, at least in the first instance, it can persuade people to part with their hard-earned cash. Only time will tell whether its content will be worth a premium over readily accessible YouTube content but with much-reduced costs, it may tempt creators seeking a bigger piece of the pie.

“Flixxo will also educate its community, teaching its users that in this new internet era value can be held and transferred online without intermediaries, a value that can be earned back by participating in a community, by contributing, being rewarded for every single social interaction,” the team explains.

Of course, the elephant in the room is what will happen when people begin sharing copyrighted content via Flixxo. Certainly, the fact that Popcorn Time’s founder is a key player and rival streaming platform Stremio is listed as a partner means that things could get a bit spicy later on.

Nevertheless, the team suggests that piracy and spam content distribution will be limited by mechanisms already built into the system.

“[A]uthors have to time-block tokens in a smart contract (set as a warranty) in order to upload content. This contract will also handle and block their earnings for a certain period of time, so that in the case of a dispute the unfair-uploader may lose those tokens,” they explain.

That being said, Flixxo also says that “there is no way” for third parties to censor content “which means that anyone has the chance of making any piece of media available on the network.” However, Flixxo says it will develop tools for filtering what it describes as “inappropriate content.”

At this point, things start to become a little unclear. On the one hand Flixxo says it could become a “revolutionary tool for uncensorable and untraceable media” yet on the other it says that it’s necessary to ensure that adult content, for example, isn’t seen by kids.

“We know there is a thin line between filtering or curating content and censorship, and it is a fact that we have an open network for everyone to upload any content. However, Flixxo as a platform will apply certain filtering based on clear rules – there should be a behavior-code for uploaders in order to offer the right content to the right user,” Flixxo explains.

To this end, Flixxo says it will deploy a centralized curation function, carried out by 101 delegates elected by the community, which will become progressively decentralized over time.

“This curation will have a cost, paid in Flixx, and will be collected from the warranty blocked by the content uploaders,” they add.

There can be little doubt that if Flixxo begins ‘curating’ unsuitable content, copyright holders will call on it to do the same for their content too. And, if the platform really takes off, 101 curators probably won’t scratch the surface. There’s also the not inconsiderable issue of what might happen to curators’ judgment when they’re incentivized to block or curate content.

Finally, for those sick of “not available in your region” messages, there’s good and bad news. Flixxo insists there will be no geo-blocking of content on its part but individual creators will still have that feature available to them, should they choose.

The Flixx whitepaper can be downloaded here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.