Tag Archives: retail

Coalition Against Piracy Launches Landmark Case Against ‘Pirate’ Android Box Sellers

Post Syndicated from Andy original https://torrentfreak.com/coalition-against-piracy-launches-landmark-case-against-pirate-android-box-sellers-180112/

In 2017, anti-piracy enforcement went global when companies including Disney, HBO, Netflix, Amazon and NBCUniversal formed the Alliance for Creativity and Entertainment (ACE).

Soon after, the Coalition Against Piracy (CAP) was announced. With a focus on Asia and backed by CASBAA, CAP counts many of the same companies among its members, in addition to local TV providers such as StarHub.

From the outset, CAP has shown a keen interest in tackling unlicensed streaming, particularly that taking place via illicit set-top boxes stuffed with copyright-infringing apps and add-ons. One country under CAP’s spotlight is Singapore, where relevant law is said to be fuzzy at best, insufficient at worst. Now, however, a line in the sand might not be far away.

According to a court listing discovered by Singapore’s TodayOnline, today will see the Coalition Against Piracy’s general manager Neil Kevin Gane attempt to launch a pioneering private prosecution against set-top box distributor Synnex Trading and its client and wholesale goods retailer, An-Nahl.

Gane and CAP are said to be acting on behalf of four parties, one of which is TV giant StarHub, a company with a huge interest in bringing media piracy under control in the region. It’s reported that they have also named Synnex Trading director Jia Xiaofen and An-Nahl director Abdul Nagib as defendants in their private criminal case, after the parties failed to reach a settlement in an earlier process.

Contacted by TodayOnline, an employee of An-Nahl said the company no longer sells the boxes. However, Synnex is reportedly still selling them for S$219 each ($164) plus additional fees for maintenance and access to VOD. The company’s Facebook page is still active with the relevant offer presented prominently.

The importance of the case cannot be overstated. While StarHub and other broadcasters have successfully prosecuted cases where people unlawfully decrypted broadcast signals, the provision of unlicensed streams isn’t specifically tackled by Singapore’s legislation. It’s now a major source of piracy in the region, as it is elsewhere around the globe.

Only time will tell how the prosecution will play out, but it’s clear that CAP and its members are prepared to invest significant sums in pursuit of a favorable outcome. CAP believes that the supply of the boxes falls under Section 136(3A) of the Copyright Act.

Last December, CAP separately called on the Singapore government to not only block ‘pirate’ streaming software but also unlicensed streams from entering the country.

“Within the Asia-Pacific region, Singapore is the worst in terms of availability of illicit streaming devices,” said CAP General Manager Neil Gane. “They have access to hundreds of illicit broadcasts of channels and video-on-demand content.”

CAP’s 21 members want the authorities to block the software inside devices that enables piracy but it’s far from clear how that can be achieved.

Update: The four companies taking the action are confirmed as Singtel, StarHub, Fox Network, and the English Premier League.


Combine Transactional and Analytical Data Using Amazon Aurora and Amazon Redshift

Post Syndicated from Re Alvarez-Parmar original https://aws.amazon.com/blogs/big-data/combine-transactional-and-analytical-data-using-amazon-aurora-and-amazon-redshift/

A few months ago, we published a blog post about capturing data changes in an Amazon Aurora database and sending them to Amazon Athena and Amazon QuickSight for fast analysis and visualization. In this post, I want to demonstrate how easy it can be to take the data in Aurora and combine it with data in Amazon Redshift using Amazon Redshift Spectrum.

With Amazon Redshift, you can build petabyte-scale data warehouses that unify data from a variety of internal and external sources. Because Amazon Redshift is optimized for complex queries (often involving multiple joins) across large tables, it can handle large volumes of retail, inventory, and financial data without breaking a sweat.

In this post, we describe how to combine data in Aurora with data in Amazon Redshift. Here’s an overview of the solution:

  • Use AWS Lambda functions with Amazon Aurora to capture data changes in a table.
  • Save the captured data in an Amazon S3 bucket.
  • Query the data using Amazon Redshift Spectrum.

We use the following services: Amazon Aurora, AWS Lambda, Amazon Kinesis Data Firehose, Amazon S3, Amazon Redshift (with Redshift Spectrum), and Amazon QuickSight.

Serverless architecture for capturing and analyzing Aurora data changes

Consider a scenario in which an e-commerce web application uses Amazon Aurora for a transactional database layer. The company has a sales table that captures every single sale, along with a few corresponding data items. This information is stored as immutable data in a table. Business users want to monitor the sales data and then analyze and visualize it.

In this example, you take the changes made to data in an Aurora database table and save them in Amazon S3. After the data is captured in Amazon S3, you combine it with data in your existing Amazon Redshift cluster for analysis.

By the end of this post, you will understand how to capture data events in an Aurora table and push them out to other AWS services using AWS Lambda.

The following diagram shows the flow of data as it occurs in this tutorial:

The starting point in this architecture is a database insert operation in Amazon Aurora. When the insert statement is executed, a custom trigger calls a Lambda function and forwards the inserted data. Lambda writes the data that it received from Amazon Aurora to a Kinesis data delivery stream. Kinesis Data Firehose writes the data to an Amazon S3 bucket. Once the data is in an Amazon S3 bucket, it is queried in place using Amazon Redshift Spectrum.

Creating an Aurora database

First, create a database by following these steps in the Amazon RDS console:

  1. Sign in to the AWS Management Console, and open the Amazon RDS console.
  2. Choose Launch a DB instance, and choose Next.
  3. For Engine, choose Amazon Aurora.
  4. Choose a DB instance class. This example uses a small instance class, since this is not a production database.
  5. In Multi-AZ deployment, choose No.
  6. Configure DB instance identifier, Master username, and Master password.
  7. Launch the DB instance.

After you create the database, use MySQL Workbench to connect to the database using the CNAME from the console. For information about connecting to an Aurora database, see Connecting to an Amazon Aurora DB Cluster.

The following screenshot shows the MySQL Workbench configuration:

Next, create a table in the database by running the following SQL statement:

CREATE TABLE Sales (
InvoiceID int NOT NULL AUTO_INCREMENT,
ItemID int NOT NULL,
Category varchar(255),
Price double(10,2), 
Quantity int not NULL,
OrderDate timestamp,
DestinationState varchar(2),
ShippingType varchar(255),
Referral varchar(255),
PRIMARY KEY (InvoiceID)
)

You can now populate the table with some sample data. To generate sample data in your table, copy and run the following script. Ensure that the placeholder connection values (AURORA_CNAME, DBUSER, DBPASSWORD, and DB) are replaced with appropriate values.

#!/usr/bin/python
# Generate random sample sales records and insert them into the Aurora Sales table.
import MySQLdb
import random
import datetime

db = MySQLdb.connect(host="AURORA_CNAME",
                     user="DBUSER",
                     passwd="DBPASSWORD",
                     db="DB")

states = ("AL","AK","AZ","AR","CA","CO","CT","DE","FL","GA","HI","ID","IL","IN",
"IA","KS","KY","LA","ME","MD","MA","MI","MN","MS","MO","MT","NE","NV","NH","NJ",
"NM","NY","NC","ND","OH","OK","OR","PA","RI","SC","SD","TN","TX","UT","VT","VA",
"WA","WV","WI","WY")

shipping_types = ("Free", "3-Day", "2-Day")

product_categories = ("Garden", "Kitchen", "Office", "Household")
referrals = ("Other", "Friend/Colleague", "Repeat Customer", "Online Ad")

for i in range(0,10):
    item_id = random.randint(1,100)
    state = states[random.randint(0,len(states)-1)]
    shipping_type = shipping_types[random.randint(0,len(shipping_types)-1)]
    product_category = product_categories[random.randint(0,len(product_categories)-1)]
    quantity = random.randint(1,4)
    referral = referrals[random.randint(0,len(referrals)-1)]
    price = random.randint(1,100)
    # Cap the day at 28 so that every generated month/day combination is a valid date.
    order_date = datetime.date(2016,random.randint(1,12),random.randint(1,28)).isoformat()

    data_order = (item_id, product_category, price, quantity, order_date, state,
    shipping_type, referral)

    add_order = ("INSERT INTO Sales "
                   "(ItemID, Category, Price, Quantity, OrderDate, DestinationState, \
                   ShippingType, Referral) "
                   "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)")

    cursor = db.cursor()
    cursor.execute(add_order, data_order)

    db.commit()

cursor.close()
db.close() 

The following screenshot shows how the table appears with the sample data:

Sending data from Amazon Aurora to Amazon S3

There are two methods available to send data from Amazon Aurora to Amazon S3:

  • Using a Lambda function
  • Using SELECT INTO OUTFILE S3

To demonstrate the ease of setting up integration between multiple AWS services, we use a Lambda function to send data to Amazon S3 using Amazon Kinesis Data Firehose.

Alternatively, you can use a SELECT INTO OUTFILE S3 statement to query data from an Amazon Aurora DB cluster and save it directly in text files that are stored in an Amazon S3 bucket. However, with this method, there is a delay between the time that the database transaction occurs and the time that the data is exported to Amazon S3 because the default file size threshold is 6 GB.

Creating a Kinesis data delivery stream

The next step is to create a Kinesis data delivery stream, since it’s a dependency of the Lambda function.

To create a delivery stream using the console (a boto3 alternative is sketched after these steps):

  1. Open the Kinesis Data Firehose console.
  2. Choose Create delivery stream.
  3. For Delivery stream name, type AuroraChangesToS3.
  4. For Source, choose Direct PUT.
  5. For Record transformation, choose Disabled.
  6. For Destination, choose Amazon S3.
  7. In the S3 bucket drop-down list, choose an existing bucket, or create a new one.
  8. Enter a prefix if needed, and choose Next.
  9. For Data compression, choose GZIP.
  10. In IAM role, choose either an existing role that has access to write to Amazon S3, or choose to generate one automatically. Choose Next.
  11. Review all the details on the screen, and choose Create delivery stream when you’re finished.
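
If you prefer to script this step, the same delivery stream can be created with the AWS SDK instead of the console. The following boto3 sketch mirrors the settings above; the role ARN and bucket ARN are placeholders, and the role must be allowed to write to the bucket.

import boto3

firehose = boto3.client('firehose')

# Create a Direct PUT delivery stream that writes GZIP-compressed objects
# to an existing S3 bucket (role ARN and bucket ARN are placeholders).
response = firehose.create_delivery_stream(
    DeliveryStreamName='AuroraChangesToS3',
    DeliveryStreamType='DirectPut',
    S3DestinationConfiguration={
        'RoleARN': 'arn:aws:iam::XXXXXXXXXXXX:role/firehose-delivery-role',
        'BucketARN': 'arn:aws:s3:::YOUR_BUCKET_NAME',
        'Prefix': 'CDC/',
        'CompressionFormat': 'GZIP',
        'BufferingHints': {'SizeInMBs': 5, 'IntervalInSeconds': 300}
    }
)
print(response['DeliveryStreamARN'])

The CDC/ prefix used here matches the S3 location that the external table points to later in this post.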


Creating a Lambda function

Now you can create a Lambda function that is called every time there is a change that needs to be tracked in the database table. This Lambda function passes the data to the Kinesis data delivery stream that you created earlier.

To create the Lambda function:

  1. Open the AWS Lambda console.
  2. Ensure that you are in the AWS Region where your Amazon Aurora database is located.
  3. If you have no Lambda functions yet, choose Get started now. Otherwise, choose Create function.
  4. Choose Author from scratch.
  5. Give your function a name, and select Python 3.6 for Runtime.
  6. Choose an existing role or create a new one; the role needs permission to call firehose:PutRecord.
  7. Choose Next on the trigger selection screen.
  8. Paste the following code in the code window. Change the stream_name variable to the Kinesis data delivery stream that you created in the previous step.
  9. Choose File -> Save in the code editor and then choose Save.
import boto3
import json

firehose = boto3.client('firehose')
stream_name = 'AuroraChangesToS3'


def Kinesis_publish_message(event, context):

    # Flatten the fields forwarded by the Aurora trigger into a CSV line.
    firehose_data = (("%s,%s,%s,%s,%s,%s,%s,%s\n") % (event['ItemID'],
    event['Category'], event['Price'], event['Quantity'],
    event['OrderDate'], event['DestinationState'], event['ShippingType'],
    event['Referral']))

    # Wrap the line in the record format expected by Kinesis Data Firehose.
    firehose_data = {'Data': str(firehose_data)}
    print(firehose_data)

    # Push the record into the delivery stream created earlier.
    firehose.put_record(DeliveryStreamName=stream_name,
    Record=firehose_data)

Note the Amazon Resource Name (ARN) of this Lambda function.

Giving Aurora permissions to invoke a Lambda function

To give Amazon Aurora permissions to invoke a Lambda function, you must attach an IAM role with appropriate permissions to the cluster. For more information, see Invoking a Lambda Function from an Amazon Aurora DB Cluster.

Once you are finished, the Amazon Aurora database has access to invoke a Lambda function.
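
The linked documentation walks through the console steps; as a rough boto3 sketch of the same association (the cluster identifier and role ARN below are placeholders, and depending on your Aurora MySQL version the documentation may also have you set a cluster parameter such as aws_default_lambda_role):

import boto3

rds = boto3.client('rds')

# Associate an IAM role that allows lambda:InvokeFunction with the Aurora
# cluster (both identifiers below are placeholders).
rds.add_role_to_db_cluster(
    DBClusterIdentifier='my-aurora-cluster',
    RoleArn='arn:aws:iam::XXXXXXXXXXXX:role/AuroraInvokeLambdaRole'
)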

Creating a stored procedure and a trigger in Amazon Aurora

Now, go back to MySQL Workbench, and run the following command to create a new stored procedure. When this stored procedure is called, it invokes the Lambda function you created. Change the ARN in the following code to your Lambda function’s ARN.

DROP PROCEDURE IF EXISTS CDC_TO_FIREHOSE;
DELIMITER ;;
CREATE PROCEDURE CDC_TO_FIREHOSE (IN ItemID VARCHAR(255), 
									IN Category varchar(255), 
									IN Price double(10,2),
                                    IN Quantity int(11),
                                    IN OrderDate timestamp,
                                    IN DestinationState varchar(2),
                                    IN ShippingType varchar(255),
                                    IN Referral  varchar(255)) LANGUAGE SQL 
BEGIN
  CALL mysql.lambda_async('arn:aws:lambda:us-east-1:XXXXXXXXXXXXX:function:CDCFromAuroraToKinesis', 
     CONCAT('{ "ItemID" : "', ItemID, 
            '", "Category" : "', Category,
            '", "Price" : "', Price,
            '", "Quantity" : "', Quantity, 
            '", "OrderDate" : "', OrderDate, 
            '", "DestinationState" : "', DestinationState, 
            '", "ShippingType" : "', ShippingType, 
            '", "Referral" : "', Referral, '"}')
     );
END
;;
DELIMITER ;

Create a trigger TR_Sales_CDC on the Sales table. When a new record is inserted, this trigger calls the CDC_TO_FIREHOSE stored procedure.

DROP TRIGGER IF EXISTS TR_Sales_CDC;
 
DELIMITER ;;
CREATE TRIGGER TR_Sales_CDC
  AFTER INSERT ON Sales
  FOR EACH ROW
BEGIN
  SELECT  NEW.ItemID , NEW.Category, New.Price, New.Quantity, New.OrderDate
  , New.DestinationState, New.ShippingType, New.Referral
  INTO @ItemID , @Category, @Price, @Quantity, @OrderDate
  , @DestinationState, @ShippingType, @Referral;
  CALL  CDC_TO_FIREHOSE(@ItemID , @Category, @Price, @Quantity, @OrderDate
  , @DestinationState, @ShippingType, @Referral);
END
;;
DELIMITER ;

If a new row is inserted in the Sales table, the Lambda function that is mentioned in the stored procedure is invoked.

Verify that data is being sent from the Lambda function to Kinesis Data Firehose to Amazon S3 successfully. You might have to insert a few records, depending on the size of your data, before new records appear in Amazon S3. This is due to Kinesis Data Firehose buffering. To learn more about Kinesis Data Firehose buffering, see the “Amazon S3” section in Amazon Kinesis Data Firehose Data Delivery.

Every time a new record is inserted in the Sales table, the trigger calls the stored procedure, which forwards the record to Amazon S3 through Lambda and Kinesis Data Firehose.
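
A quick way to confirm the end-to-end flow is to list the objects that Kinesis Data Firehose has written under your prefix. The following is a minimal boto3 sketch, with the bucket name and prefix as placeholders for whatever you configured in the delivery stream:

import boto3

s3 = boto3.client('s3')

# List objects written by the delivery stream (bucket and prefix are placeholders).
response = s3.list_objects_v2(Bucket='YOUR_BUCKET_NAME', Prefix='CDC/')
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'], obj['LastModified'])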

Querying data in Amazon Redshift

In this section, you use the data you produced from Amazon Aurora and consume it as-is in Amazon Redshift. In order to allow you to process your data as-is, where it is, while taking advantage of the power and flexibility of Amazon Redshift, you use Amazon Redshift Spectrum. You can use Redshift Spectrum to run complex queries on data stored in Amazon S3, with no need for loading or other data prep.

Just create a data source and issue your queries to your Amazon Redshift cluster as usual. Behind the scenes, Redshift Spectrum scales to thousands of instances on a per-query basis, ensuring that you get fast, consistent performance even as your dataset grows to beyond an exabyte! Being able to query data that is stored in Amazon S3 means that you can scale your compute and your storage independently. You have the full power of the Amazon Redshift query model and all the reporting and business intelligence tools at your disposal. Your queries can reference any combination of data stored in Amazon Redshift tables and in Amazon S3.

Redshift Spectrum supports open, common data formats, including CSV/TSV, Apache Parquet, SequenceFile, and RCFile. Files can be compressed using gzip or Snappy, with other formats and compression methods in the works.

First, create an Amazon Redshift cluster. Follow the steps in Launch a Sample Amazon Redshift Cluster.

Next, create an IAM role that has access to Amazon S3 and Athena. By default, Amazon Redshift Spectrum uses the Amazon Athena data catalog. Your cluster needs authorization to access your external data catalog in AWS Glue or Athena and your data files in Amazon S3.

In the demo setup, I attached AmazonS3FullAccess and AmazonAthenaFullAccess. In a production environment, the IAM roles should follow the standard security practice of granting least privilege. For more information, see IAM Policies for Amazon Redshift Spectrum.

Attach the newly created role to the Amazon Redshift cluster. For more information, see Associate the IAM Role with Your Cluster.
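
If you script your setup rather than using the console, the same association can be made through the Amazon Redshift API. A minimal boto3 sketch, assuming you replace the cluster identifier and role ARN with your own:

import boto3

redshift = boto3.client('redshift')

# Attach the Spectrum role to the cluster (identifiers below are placeholders).
redshift.modify_cluster_iam_roles(
    ClusterIdentifier='my-redshift-cluster',
    AddIamRoles=['arn:aws:iam::XXXXXXXXXXXX:role/RedshiftSpectrumRole']
)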

Next, connect to the Amazon Redshift cluster, and create an external schema and database:

create external schema if not exists spectrum_schema
from data catalog 
database 'spectrum_db' 
region 'us-east-1'
IAM_ROLE 'arn:aws:iam::XXXXXXXXXXXX:role/RedshiftSpectrumRole'
create external database if not exists;

Don’t forget to replace the IAM role in the statement.

Then create an external table within the database:

 CREATE EXTERNAL TABLE IF NOT EXISTS spectrum_schema.ecommerce_sales(
  ItemID int,
  Category varchar,
  Price DOUBLE PRECISION,
  Quantity int,
  OrderDate TIMESTAMP,
  DestinationState varchar,
  ShippingType varchar,
  Referral varchar)
ROW FORMAT DELIMITED
      FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION 's3://{BUCKET_NAME}/CDC/'

Query the table, and it should contain data. This is a fact table.

select top 10 * from spectrum_schema.ecommerce_sales


Next, create a dimension table. For this example, we create a date/time dimension table. Create the table:

CREATE TABLE date_dimension (
  d_datekey           integer       not null sortkey,
  d_dayofmonth        integer       not null,
  d_monthnum          integer       not null,
  d_dayofweek                varchar(10)   not null,
  d_prettydate        date       not null,
  d_quarter           integer       not null,
  d_half              integer       not null,
  d_year              integer       not null,
  d_season            varchar(10)   not null,
  d_fiscalyear        integer       not null)
diststyle all;

Populate the table with data:

copy date_dimension from 's3://reparmar-lab/2016dates' 
iam_role 'arn:aws:iam::XXXXXXXXXXXX:role/redshiftspectrum'
DELIMITER ','
dateformat 'auto';

The date dimension table should look like the following:

Querying data in local and external tables using Amazon Redshift

Now that you have the fact and dimension tables populated with data, you can combine the two and run analysis. For example, if you want to query the total sales amount by season, you can run the following:

select sum(quantity*price) as total_sales, date_dimension.d_season
from spectrum_schema.ecommerce_sales 
join date_dimension on spectrum_schema.ecommerce_sales.orderdate = date_dimension.d_prettydate 
group by date_dimension.d_season

You get the following results:

Similarly, you can replace d_season with d_dayofweek to get sales figures by weekday:

With Amazon Redshift Spectrum, you pay only for the amount of data that your queries actually scan. We encourage you to use file partitioning, columnar data formats, and data compression to significantly minimize the amount of data scanned in Amazon S3. This is important for data warehousing because it dramatically improves query performance and reduces cost.

Partitioning your data in Amazon S3 by date, time, or any other custom keys enables Amazon Redshift Spectrum to dynamically prune nonrelevant partitions to minimize the amount of data processed. If you store data in a columnar format, such as Parquet, Amazon Redshift Spectrum scans only the columns needed by your query, rather than processing entire rows. Similarly, if you compress your data using one of the supported compression algorithms in Amazon Redshift Spectrum, less data is scanned.

Analyzing and visualizing Amazon Redshift data in Amazon QuickSight

Modify the Amazon Redshift security group to allow an Amazon QuickSight connection. For more information, see Authorizing Connections from Amazon QuickSight to Amazon Redshift Clusters.

After modifying the Amazon Redshift security group, go to Amazon QuickSight. Create a new analysis, and choose Amazon Redshift as the data source.

Enter the database connection details, validate the connection, and create the data source.

Choose the schema to be analyzed. In this case, choose spectrum_schema, and then choose the ecommerce_sales table.

Next, we add a custom field for Total Sales = Price*Quantity. In the drop-down list for the ecommerce_sales table, choose Edit analysis data sets.

On the next screen, choose Edit.

In the data prep screen, choose New Field. Add a new calculated field, Total Sales $, which is the product of the Price and Quantity fields. Then choose Create. Save and visualize it.

Next, to visualize total sales figures by month, create a graph with Total Sales on the x-axis and Order Date, formatted as month, on the y-axis.

After you’ve finished, you can use Amazon QuickSight to add different columns from your Amazon Redshift tables and perform different types of visualizations. You can build operational dashboards that continuously monitor your transactional and analytical data. You can publish these dashboards and share them with others.

Final notes

Amazon QuickSight can also read data in Amazon S3 directly. However, with the method demonstrated in this post, you have the option to manipulate, filter, and combine data from multiple sources or Amazon Redshift tables before visualizing it in Amazon QuickSight.

In this example, we dealt with data being inserted, but triggers can be activated in response to INSERT, UPDATE, or DELETE statements.

Keep the following in mind:

  • Be careful when invoking a Lambda function from triggers on tables that experience high write traffic, because each write results in a call to your Lambda function. Although calls to the lambda_async procedure are asynchronous, triggers are synchronous.
  • A statement that results in a large number of trigger activations does not wait for the call to the AWS Lambda function to complete. But it does wait for the triggers to complete before returning control to the client.
  • Similarly, you must account for Amazon Kinesis Data Firehose limits. By default, Kinesis Data Firehose is limited to a maximum of 5,000 records/second. For more information, see Monitoring Amazon Kinesis Data Firehose. A sketch of how to watch the stream’s throughput follows this list.
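
One way to keep an eye on the stream relative to those limits is to query its CloudWatch metrics. The following is a minimal boto3 sketch, assuming the AuroraChangesToS3 delivery stream from earlier and the AWS/Firehose IncomingRecords metric; adjust the period and statistic as needed.

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client('cloudwatch')

# Sum of records received by the delivery stream over the last hour,
# in 5-minute buckets (stream name matches the one created earlier).
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/Firehose',
    MetricName='IncomingRecords',
    Dimensions=[{'Name': 'DeliveryStreamName', 'Value': 'AuroraChangesToS3'}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=['Sum']
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Sum'])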

In certain cases, it may be optimal to use AWS Database Migration Service (AWS DMS) to capture data changes in Aurora and use Amazon S3 as a target. For example, AWS DMS might be a good option if you don’t need to transform data from Amazon Aurora. The method used in this post gives you the flexibility to transform data from Aurora using Lambda before sending it to Amazon S3. Additionally, the architecture has the benefits of being serverless, whereas AWS DMS requires an Amazon EC2 instance for replication.

For design considerations while using Redshift Spectrum, see Using Amazon Redshift Spectrum to Query External Data.

If you have questions or suggestions, please comment below.


Additional Reading

If you found this post useful, be sure to check out Capturing Data Changes in Amazon Aurora Using AWS Lambda and 10 Best Practices for Amazon Redshift Spectrum.


About the Authors

Re Alvarez-Parmar is a solutions architect for Amazon Web Services. He helps enterprises achieve success through technical guidance and thought leadership. In his spare time, he enjoys spending time with his two kids and exploring outdoors.


BitTorrent Inc. Emerges Victorious Following EU Trademark Dispute

Post Syndicated from Andy original https://torrentfreak.com/bittorrent-inc-emerges-victorious-following-eu-trademark-dispute-171213/

For anyone familiar with the BitTorrent brand, there can only be one company that springs to mind. BitTorrent Inc., the outfit behind uTorrent that still employs BitTorrent creator Bram Cohen, seems the logical choice, but not everything is straightforward.

Back in June 2003, a company called BitTorrent Marketing GmbH filed an application to register an EU trademark for the term ‘BitTorrent’ with the European Union Intellectual Property Office (EUIPO). The company hoped to exploit the trademark for a wide range of uses from marketing, advertising, retail, mail order and Internet sales, to film, television and video licensing plus “providing of memory space on the internet”.

The trademark application was published in July 2004 and registered in June 2006. However, in June 2011 BitTorrent Inc. filed an application for its revocation on the grounds that the trademark had not been “put to genuine use in the European Union in connection with the services concerned within a continuous period of five years.”

Soon afterwards, the EUIPO notified BitTorrent Marketing GmbH that it had three months to submit evidence of the trademark’s use. After an application from the company, more time was given to present evidence and a deadline was set for November 21, 2011. Things did not go to plan, however.

On the very last day, BitTorrent Marketing GmbH responded to the request by fax, noting that a five-page letter had been sent along with 69 pages of additional evidence. But something went wrong, with the fax machine continually reporting errors. Several days later, the evidence arrived by mail, but that was technically too late.

In September 2013, BitTorrent Inc.’s application for the trademark to be revoked was upheld, but in November 2013, BitTorrent Marketing GmbH (by now known as Hochmann Marketing GmbH) appealed against the decision to revoke.

Almost two years later in August 2015, an EUIPO appeal held that Hochmann “had submitted no relevant proof” before the specified deadline that the trademark had been in previous use. On this basis, the evidence could not be taken into account.

“[The appeal] therefore concluded that genuine use of the mark at issue had not been proven, and held that the mark must be revoked with effect from 24 June 2011,” EUIPO documentation reads.

However, Hochmann Marketing GmbH wasn’t about to give up, demanding that the decision be annulled and that EUIPO and BitTorrent Inc. should pay the costs. In response, EUIPO and BitTorrent Inc. demanded the opposite, that Hochmann’s action should be dismissed and they should pay the costs instead.

In its decision published yesterday, the EU General Court (Third Chamber) clearly sided with EUIPO and BitTorrent Inc.

“The [evidence] document clearly contains only statements that are not substantiated by any supporting evidence capable of adducing proof of the place, time, extent and nature of use of the mark at issue, especially because the evidence in question was submitted, in the present case, three days after the prescribed period expired,” the decision reads.

The decision also notes that the company was given an additional month to come up with evidence, and then some – the evidence was actually due on a Saturday, so the period was extended until the Monday for the convenience of the company.

“Next, EUIPO had duly informed the applicant, by letter of 19 July 2011, that it was ‘required to submit the required evidence of use in reply to the request within three months of receipt of this communication’ and that ‘if no evidence of use [was] submitted within this period, the [EU] mark w[ould] be revoked’,” the decision reads, adding;

“That letter also included guidance on how to provide evidence in a timely manner. Consequently, the applicant knew not only what documents it must submit, but also what the consequences of late submission of evidence were.”

All things considered, the Court rejected Hochmann Marketing GmbH’s application, ultimately deciding that not enough evidence was produced and what did appear was too late. For that, the trademark remains revoked and Hochmann Marketing must cover EUIPO and BitTorrent Inc.’s legal costs.

This isn’t the first time that BitTorrent Inc. has taken on BitTorrent/Hochmann Marketing GmbH and won. In 2014, it took the company to court in the United States and walked away with a $2.2m damages award.


Screener Piracy Season Kicks Off With Louis C.K.’s ‘I Love You, Daddy’

Post Syndicated from Ernesto original https://torrentfreak.com/screener-piracy-season-kicks-off-with-louis-c-k-s-i-love-you-daddy-171211/

Towards the end of the year, movie screeners are sent out to industry insiders who cast their votes for the Oscars and other awards.

It’s a highly anticipated time for pirates who hope to get copies of the latest blockbusters early, which is traditionally what happens.

Last year the action started relatively late. It took until January before the first leak surfaced – Denzel Washington’s Fences – but more than a dozen made their way online soon after.

Today, the first leak of the new screener season, Louis C.K.’s “I Love You, Daddy,” started to populate various pirate sites. It was released by the infamous “Hive-CM8” group, which also made headlines in previous years.

“I Love You, Daddy” was carefully chosen, according to a message posted in the release notes. Last month distributor The Orchard chose to cancel the film from its schedule after Louis C.K. was accused of sexual misconduct. With uncertainty surrounding the film’s release, “Hive-CM8” decided to get it out.

“We decided to let this one title go out this month, since it never made it to the cinema, and nobody knows if it ever will go to retail at all,” Hive-CM8 write in their NFO.

“Either way their is no perfect time to release it anyway, but we think it would be a waste to let a great Louis C.K. go unwatched and nobody can even see or buy it,” they add.

I Love You, Daddy

It is no surprise that the group put some thought into their decision. In 2015 they published several movies before their theatrical release, for which they later offered an apology, stating that this wasn’t acceptable.

Last year this stance was reiterated, noting that they would not leak any screeners before Christmas. Today’s release shows that this isn’t a golden rule, but it’s unlikely that they will push any big titles before they’re out in theaters.

“I Love You, Daddy” isn’t going to be seen in theaters anytime soon, but it might see an official release. This past weekend, news broke that Louis C.K. had bought back the rights from The Orchard and must pay back marketing costs, including a payment for the 12,000 screeners that were sent out.

Hive-CM8, meanwhile, suggest that they have more screeners in hand, although their collection isn’t yet complete.

“We are still missing some titles, anyone want to share for the collection? Yes we want to have them all if possible, we are collectors, we don’t want to release them all,” they write.

Finally, the group also has some disappointing news for Star Wars fans who are looking for an early copy of “The Last Jedi.” Hive-CM8 is not going to release it.

“Their will be no starwars from us, sorry wont happen,” they write.


Amazon QuickSight Update – Geospatial Visualization, Private VPC Access, and More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-quicksight-update-geospatial-visualization-private-vpc-access-and-more/

We don’t often recognize or celebrate anniversaries at AWS. With nearly 100 services on our list, we’d be eating cake and drinking champagne several times a week. While that might sound like fun, we’d rather spend our working hours listening to customers and innovating. With that said, Amazon QuickSight has now been generally available for a little over a year and I would like to give you a quick update!

QuickSight in Action
Today, tens of thousands of customers (from startups to enterprises, in industries as varied as transportation, legal, mining, and healthcare) are using QuickSight to analyze and report on their business data.

Here are a couple of examples:

Gemini provides legal evidence procurement for California attorneys who represent injured workers. They have gone from creating custom reports and running one-off queries to creating and sharing dynamic QuickSight dashboards with drill-downs and filtering. QuickSight is used to track sales pipeline, measure order throughput, and to locate bottlenecks in the order processing pipeline.

Jivochat provides a real-time messaging platform to connect visitors to website owners. QuickSight lets them create and share interactive dashboards while also providing access to the underlying datasets. This has allowed them to move beyond the sharing of static spreadsheets, ensuring that everyone is looking at the same data and is empowered to make timely decisions based on current information.

Transfix is a tech-powered freight marketplace that matches loads and increases visibility into logistics for Fortune 500 shippers in retail, food and beverage, manufacturing, and other industries. QuickSight has made analytics accessible to both BI engineers and non-technical business users. They scrutinize key business and operational metrics, including shipping routes, carrier efficiency, and process automation.

Looking Back / Looking Ahead
The feedback on QuickSight has been incredibly helpful. Customers tell us that their employees are using QuickSight to connect to their data, perform analytics, and make high-velocity, data-driven decisions, all without setting up or running their own BI infrastructure. We love all of the feedback that we get, and use it to drive our roadmap, leading to the introduction of over 40 new features in just a year. Here’s a summary:

Looking forward, we are watching an interesting trend develop within our customer base. As these customers take a close look at how they analyze and report on data, they are realizing that a serverless approach offers some tangible benefits. They use Amazon Simple Storage Service (S3) as a data lake and query it using a combination of QuickSight and Amazon Athena, giving them agility and flexibility without static infrastructure. They also make great use of QuickSight’s dashboards feature, monitoring business results and operational metrics, then sharing their insights with hundreds of users. You can read Building a Serverless Analytics Solution for Cleaner Cities and review Serverless Big Data Analytics using Amazon Athena and Amazon QuickSight if you are interested in this approach.

New Features and Enhancements
We’re still doing our best to listen and to learn, and to make sure that QuickSight continues to meet your needs. I’m happy to announce that we are making seven big additions today:

Geospatial Visualization – You can now create geospatial visuals on geographical data sets.

Private VPC Access – You can now sign up to access a preview of a new feature that allows you to securely connect to data within VPCs or on-premises, without the need for public endpoints.

Flat Table Support – In addition to pivot tables, you can now use flat tables for tabular reporting. To learn more, read about Using Tabular Reports.

Calculated SPICE Fields – You can now perform run-time calculations on SPICE data as part of your analysis. Read Adding a Calculated Field to an Analysis for more information.

Wide Table Support – You can now use tables with up to 1000 columns.

Other Buckets – You can summarize the long tail of high-cardinality data into buckets, as described in Working with Visual Types in Amazon QuickSight.

HIPAA Compliance – You can now run HIPAA-compliant workloads on QuickSight.

Geospatial Visualization
Everyone seems to want this feature! You can now take data that contains a geographic identifier (country, city, state, or zip code) and create beautiful visualizations with just a few clicks. QuickSight will geocode the identifier that you supply, and can also accept lat/long map coordinates. You can use this feature to visualize sales by state, map stores to shipping destinations, and so forth. Here’s a sample visualization:

To learn more about this feature, read Using Geospatial Charts (Maps), and Adding Geospatial Data.

Private VPC Access Preview
If you have data in AWS (perhaps in Amazon Redshift, Amazon Relational Database Service (RDS), or on EC2) or on-premises in Teradata or SQL Server on servers without public connectivity, this feature is for you. Private VPC Access for QuickSight uses an Elastic Network Interface (ENI) for secure, private communication with data sources in a VPC. It also allows you to use AWS Direct Connect to create a secure, private link with your on-premises resources. Here’s what it looks like:

If you are ready to join the preview, you can sign up today.

Jeff;


The Truth Behind the “Kodi Boxes Can Kill Their Owners” Headlines

Post Syndicated from Andy original https://torrentfreak.com/the-truth-behind-the-kodi-boxes-can-kill-their-owners-headlines-171118/

Another week, another batch of ‘Kodi Box Armageddon’ stories. This time it hasn’t been directly about the content they can provide but the physical risks they pose to their owners.

After being primed in advance, the usual British tabloids jumped into action early Thursday, noting that following tests carried out on “illicit streaming devices” (aka Android set-top devices), 100% of them failed to meet UK national electrical safety regulations.

The tests were carried out by Electrical Safety First, a charity which was prompted into action by anti-piracy outfit Federation Against Copyright Theft.

“A series of product safety tests on popular illicit streaming devices entering the UK have found that 100% fail to meet national electrical safety regulations,” a FACT statement reads.

“The news is all the more significant as the Intellectual Property Office (IPO) estimates that more than one million of these illegal devices have been sold in the UK in the last two years, representing a significant risk to the general public.”

After reading many sensational headlines stating that “Kodi Boxes Might Kill Their Owners”, please excuse us for groaning. This story has absolutely nothing – NOTHING – to do with Kodi or any other piece of software. Quite obviously, software doesn’t catch fire.

So, suspecting that there might be more to this than meets the eye, we decided to look beyond the press releases into the actual Electrical Safety First (ESF) report. While we have no doubt that ESF is extremely competent in its field (it is, no question), the front page of its report is disappointing.

Despite the items sent for testing being straightforward Android-based media players, the ESF report clearly describes itself as examining “illicit streaming devices”. It’s terminology that doesn’t describe the subject matter from an electrical, safety or technical perspective but is pretty convenient for FACT clients Sky and the Premier League.

Nevertheless, the full picture reveals rather more than most of the headlines suggest.

First of all, it’s important to know that ESF tested just nine devices out of the million or so allegedly sold in the UK during the past two years. Even more importantly, every single one of those devices was supplied to ESF by FACT.

Now, we’re not suggesting they were hand-picked to fail but it’s clear that the samples weren’t provided from a neutral source. Also, as we’ll learn shortly, it’s possible to determine in advance if an item will fail to meet UK standards simply by looking at its packaging and casing.

But perhaps even more intriguing is that the electrical testing carried out by ESF related primarily not to the set-top boxes themselves, but to their power supplies. ESF say so themselves.

“The product review relates primarily to the switched mode power supply units for the connection to the mains supply, which were supplied with the devices, to identify any potential risks to consumers such as electric shocks, heating and resistance to fire,” ESF reports.

The set-top boxes themselves were only assessed “in terms of any faults in the marking, warnings and instructions,” the group adds.

So, what we’re really talking about here isn’t dangerous set-top boxes, but the power supply units that come with them. It might seem like a small detail, but we’ll come to the vast importance of this later on.

Firstly, however, we should note that none of the equipment supplied by FACT complied with Schedule 1 of the Electrical Equipment (Safety) Regulations 1994. This means that they failed to have the “Conformité Européenne” or CE logo present. That’s unacceptable.

In addition, none of them lived up to the requirements of Schedule 3 of the Electrical Equipment (Safety) Regulations 1994 either, which in part requires the manufacturer’s brand name or trademark to be “clearly printed on the electrical equipment or, where that is not possible, on the packaging.” (That’s how you can tell they’ll definitely fail UK standards, before sending them for testing.)

Also, none of the samples were supplied with “sufficient safety or warning information to ensure the safe and correct use, assembly, installation or maintenance of the equipment.” This represents ‘a technical breach’ of the regulations, ESF reports.

Finally, several of the samples were considered to be a potential risk to their users, either via electric shock and/or fire. That’s an important finding and people who suspect they have such devices at home should definitely take note.

However, the really important point isn’t mentioned in the tabloids, probably since it distracts from the “Kodi Armageddon” narrative which underlies the whole study and subsequent reports.

ESF says that one of the key issues is that the set-top boxes come unbranded, something which breaches safety regulations while making it difficult for consumers to assess whether they’re buying a quality product. Crucially, this is not exclusively a set-top box problem, it is much, MUCH bigger.

“Issues with power supply units or unbranded and counterfeit chargers go beyond illicit streaming devices. In the last year, issues have been reported with other consumer electrical devices, such as laptop chargers and counterfeit phone chargers,” the same ESF report reveals.

“The total annual online sales of mains plug-in chargers is estimated to be in the region of 1.8 million and according to Electrical Safety First, it is likely that most of these sales involve cheap, unbranded chargers.”

So, we looked into this issue of problem power supplies and chargers generally, to see where this report fits into the bigger picture. It transpires it’s a massive problem, all over the UK, across a wide range of products. In fact, Trading Standards reports that 99% of non-genuine Apple chargers bought online “fail a basic safety test”.

But buying from reputable High Street retailers doesn’t help either.

During the past year, Poundworld was fined for selling – wait for it – 72,000 dangerous chargers. Home Bargains was also fined for selling “thousands” of power adaptors that fail to meet UK standards.

“All samples provided failed to comply with Electrical Equipment Safety Regulations and were not marked with the manufacturer’s name,” Trading Standards reports.

That sounds familiar.

So, there you have it. Far from this being an isolated “Kodi Box Crisis” as some have proclaimed, this is a broad issue affecting imported electrical items in general. On this basis, one can’t help but think the tabloids missed a trick here. Think of the power of this headline:

ALL UNBRANDED ELECTRICAL EQUIPMENT CAN KILL, DISCONNECT EVERYTHING

or, alternatively:

PIRATES URGED TO SWITCH TO BRANDED AMAZON FIRESTICKS, SAFER FOR KODI

Perhaps not….

The ESF report can be found here (pdf)


Event-Driven Computing with Amazon SNS and AWS Compute, Storage, Database, and Networking Services

Post Syndicated from Christie Gifrin original https://aws.amazon.com/blogs/compute/event-driven-computing-with-amazon-sns-compute-storage-database-and-networking-services/

Contributed by Otavio Ferreira, Manager, Software Development, AWS Messaging

Like other developers around the world, you may be tackling increasingly complex business problems. A key success factor, in that case, is the ability to break down a large project scope into smaller, more manageable components. A service-oriented architecture guides you toward designing systems as a collection of loosely coupled, independently scaled, and highly reusable services. Microservices take this even further. To improve performance and scalability, they promote fine-grained interfaces and lightweight protocols.

However, the communication among isolated microservices can be challenging. Services are often deployed onto independent servers and don’t share any compute or storage resources. Also, you should avoid hard dependencies among microservices, to preserve maintainability and reusability.

If you apply the pub/sub design pattern, you can effortlessly decouple and independently scale out your microservices and serverless architectures. A pub/sub messaging service, such as Amazon SNS, promotes event-driven computing that statically decouples event publishers from subscribers, while dynamically allowing for the exchange of messages between them. An event-driven architecture also introduces the responsiveness needed to deal with complex problems, which are often unpredictable and asynchronous.

What is event-driven computing?

Given the context of microservices, event-driven computing is a model in which subscriber services automatically perform work in response to events triggered by publisher services. This paradigm can be applied to automate workflows while decoupling the services that collectively and independently work to fulfil these workflows. Amazon SNS is an event-driven computing hub, in the AWS Cloud, that has native integration with several AWS publisher and subscriber services.

Which AWS services publish events to SNS natively?

Several AWS services have been integrated as SNS publishers and, therefore, can natively trigger event-driven computing for a variety of use cases. In this post, I specifically cover AWS compute, storage, database, and networking services, as depicted below.

Compute services

  • Auto Scaling: Helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You can configure Auto Scaling lifecycle hooks to trigger events as Auto Scaling resizes your EC2 cluster. As an example, you may want to warm up the local cache store on newly launched EC2 instances, and also download log files from other EC2 instances that are about to be terminated. To make this happen, set an SNS topic as your Auto Scaling group’s notification target, then subscribe two Lambda functions to this SNS topic. The first function is responsible for handling scale-out events (to warm up the cache upon provisioning), whereas the second is in charge of handling scale-in events (to download logs upon termination). A sketch of this notification wiring follows this list.

  • AWS Elastic Beanstalk: An easy-to-use service for deploying and scaling web applications and web services developed in a number of programming languages. You can configure event notifications for your Elastic Beanstalk environment so that notable events can be automatically published to an SNS topic, then pushed to topic subscribers. As an example, you may use this event-driven architecture to coordinate your continuous integration pipeline (such as Jenkins CI). That way, whenever an environment is created, Elastic Beanstalk publishes this event to an SNS topic, which triggers a subscribing Lambda function, which then kicks off a CI job against your newly created Elastic Beanstalk environment.

  • Elastic Load Balancing: Automatically distributes incoming application traffic across Amazon EC2 instances, containers, or other resources identified by IP addresses. You can configure CloudWatch alarms on Elastic Load Balancing metrics, to automate the handling of events derived from Classic Load Balancers. As an example, you may leverage this event-driven design to automate latency profiling in an Amazon ECS cluster behind a Classic Load Balancer. In this example, whenever your ECS cluster breaches your load balancer latency threshold, an event is posted by CloudWatch to an SNS topic, which then triggers a subscribing Lambda function. This function runs a task on your ECS cluster to trigger a latency profiling tool, hosted on the cluster itself. This can enhance your latency troubleshooting exercise by making it timely.
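
To make the Auto Scaling example above concrete, here is a minimal boto3 sketch that points an Auto Scaling group’s launch and terminate notifications at an SNS topic and subscribes a Lambda function to that topic. The group name, topic ARN, and function ARN are placeholders, and the Lambda function additionally needs a resource-based policy that allows SNS to invoke it.

import boto3

autoscaling = boto3.client('autoscaling')
sns = boto3.client('sns')

TOPIC_ARN = 'arn:aws:sns:us-east-1:XXXXXXXXXXXX:asg-events'                  # placeholder
FUNCTION_ARN = 'arn:aws:lambda:us-east-1:XXXXXXXXXXXX:function:warm-cache'   # placeholder

# Publish scale-out and scale-in events from the Auto Scaling group to SNS.
autoscaling.put_notification_configuration(
    AutoScalingGroupName='my-asg',
    TopicARN=TOPIC_ARN,
    NotificationTypes=[
        'autoscaling:EC2_INSTANCE_LAUNCH',
        'autoscaling:EC2_INSTANCE_TERMINATE',
    ]
)

# Fan those events out to a Lambda function that reacts to them.
sns.subscribe(TopicArn=TOPIC_ARN, Protocol='lambda', Endpoint=FUNCTION_ARN)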

Storage services

  • Amazon S3: Object storage built to store and retrieve any amount of data. You can enable S3 event notifications, and automatically get them posted to SNS topics, to automate a variety of workflows. For instance, imagine that you have an S3 bucket to store incoming resumes from candidates, and a fleet of EC2 instances to encode these resumes from their original format (such as Word or text) into a portable format (such as PDF). In this example, whenever new files are uploaded to your input bucket, S3 publishes these events to an SNS topic, which in turn pushes these messages into subscribing SQS queues. Then, encoding workers running on EC2 instances poll these messages from the SQS queues; retrieve the original files from the input S3 bucket; encode them into PDF; and finally store them in an output S3 bucket. A sketch of the bucket-to-topic notification setup follows this list.

  • Amazon EFS: Provides simple and scalable file storage, for use with Amazon EC2 instances, in the AWS Cloud. You can configure CloudWatch alarms on EFS metrics, to automate the management of your EFS systems. For example, consider a highly parallelized genomics analysis application that runs against an EFS system. By default, this file system is instantiated on the “General Purpose” performance mode. Although this performance mode allows for lower latency, it might eventually impose a scaling bottleneck. Therefore, you may leverage an event-driven design to handle it automatically. Basically, as soon as the EFS metric “Percent I/O Limit” breaches 95%, CloudWatch could post this event to an SNS topic, which in turn would push this message into a subscribing Lambda function. This function automatically creates a new file system, this time on the “Max I/O” performance mode, then switches the genomics analysis application to this new file system. As a result, your application starts experiencing higher I/O throughput rates.

  • Amazon Glacier: A secure, durable, and low-cost cloud storage service for data archiving and long-term backup. You can set a notification configuration on an Amazon Glacier vault so that when a job completes, a message is published to an SNS topic. Retrieving an archive from Amazon Glacier is a two-step asynchronous operation, in which you first initiate a job, and then download the output after the job completes. Therefore, SNS helps you eliminate polling your Amazon Glacier vault to check whether your job has been completed, or not. As usual, you may subscribe SQS queues, Lambda functions, and HTTP endpoints to your SNS topic, to be notified when your Amazon Glacier job is done.

  • AWS Snowball: A petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data. You can leverage Snowball notifications to automate workflows related to importing data into and exporting data from AWS. More specifically, whenever your Snowball job status changes, Snowball can publish this event to an SNS topic, which in turn can broadcast the event to all its subscribers. As an example, imagine a Geographic Information System (GIS) that distributes high-resolution satellite images to users via Web browser. In this example, the GIS vendor could capture up to 80 TB of satellite images; create a Snowball job to import these files from an on-premises system to an S3 bucket; and provide an SNS topic ARN to be notified upon job status changes in Snowball. After Snowball changes the job status from “Importing” to “Completed”, Snowball publishes this event to the specified SNS topic, which delivers this message to a subscribing Lambda function, which finally creates a CloudFront web distribution for the target S3 bucket, to serve the images to end users.
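
As a concrete version of the Amazon S3 example above, the following boto3 sketch configures object-created notifications on an input bucket to publish to an SNS topic. The bucket name and topic ARN are placeholders, and the topic’s access policy must already allow S3 to publish to it.

import boto3

s3 = boto3.client('s3')

# Publish a notification to SNS whenever a new object lands in the input
# bucket (bucket name and topic ARN are placeholders; the topic policy
# must allow s3.amazonaws.com to publish).
s3.put_bucket_notification_configuration(
    Bucket='resume-input-bucket',
    NotificationConfiguration={
        'TopicConfigurations': [
            {
                'TopicArn': 'arn:aws:sns:us-east-1:XXXXXXXXXXXX:new-resumes',
                'Events': ['s3:ObjectCreated:*']
            }
        ]
    }
)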

Database services

  • Amazon RDS: Makes it easy to set up, operate, and scale a relational database in the cloud. RDS leverages SNS to broadcast notifications when RDS events occur. As usual, these notifications can be delivered via any protocol supported by SNS, including SQS queues, Lambda functions, and HTTP endpoints. As an example, imagine that you own a social network website that has experienced organic growth, and needs to scale its compute and database resources on demand. In this case, you could provide an SNS topic to listen to RDS DB instance events. When the “Low Storage” event is published to the topic, SNS pushes this event to a subscribing Lambda function, which in turn leverages the RDS API to increase the storage capacity allocated to your DB instance. The provisioning itself takes place within the specified DB maintenance window. A sketch of the event subscription follows this list.

  • Amazon ElastiCache: A web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. ElastiCache can publish messages using Amazon SNS when significant events happen on your cache cluster. This feature can be used to refresh the list of servers on client machines connected to individual cache node endpoints of a cache cluster. For instance, an ecommerce website fetches product details from a cache cluster, with the goal of offloading a relational database and speeding up page load times. Ideally, you want to make sure that each web server always has an updated list of cache servers to which to connect. To automate this node discovery process, you can get your ElastiCache cluster to publish events to an SNS topic. Thus, when ElastiCache event “AddCacheNodeComplete” is published, your topic then pushes this event to all subscribing HTTP endpoints that serve your ecommerce website, so that these HTTP servers can update their list of cache nodes.

  • Amazon Redshift: A fully managed data warehouse that makes it simple to analyze data using standard SQL and BI (Business Intelligence) tools. Amazon Redshift uses SNS to broadcast relevant events so that data warehouse workflows can be automated. As an example, imagine a news website that sends clickstream data to a Kinesis Firehose stream, which then loads the data into Amazon Redshift, so that popular news and reading preferences might be surfaced on a BI tool. At some point though, this Amazon Redshift cluster might need to be resized, and the cluster enters a read-only mode. Hence, this Amazon Redshift event is published to an SNS topic, which delivers this event to a subscribing Lambda function, which finally deletes the corresponding Kinesis Firehose delivery stream, so that clickstream data uploads can be put on hold. At a later point, after Amazon Redshift publishes the event that the maintenance window has been closed, SNS notifies a subscribing Lambda function accordingly, so that this function can re-create the Kinesis Firehose delivery stream, and resume clickstream data uploads to Amazon Redshift.

  • AWS DMS: Helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. DMS also uses SNS to provide notifications when DMS events occur, which can automate database migration workflows. As an example, you might create data replication tasks to migrate an on-premises MS SQL database, composed of multiple tables, to MySQL. Thus, if replication tasks fail due to incompatible data encoding in the source tables, these events can be published to an SNS topic, which can push these messages into a subscribing SQS queue. Then, encoders running on EC2 can poll these messages from the SQS queue, encode the source tables into a compatible character set, and restart the corresponding replication tasks in DMS. This is an event-driven approach to a self-healing database migration process.
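
For the Amazon RDS example above, the notification plumbing is an RDS event subscription that targets an SNS topic. The following boto3 sketch uses placeholder names; the 'low storage' event category shown is an assumption, so check the event categories available for your engine.

import boto3

rds = boto3.client('rds')

# Send storage-related events for one DB instance to an SNS topic
# (all names and the ARN are placeholders; event categories vary by engine).
rds.create_event_subscription(
    SubscriptionName='low-storage-alerts',
    SnsTopicArn='arn:aws:sns:us-east-1:XXXXXXXXXXXX:rds-events',
    SourceType='db-instance',
    SourceIds=['my-social-network-db'],
    EventCategories=['low storage'],
    Enabled=True
)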

Networking services

  • Amazon Route 53: A highly available and scalable cloud-based DNS (Domain Name System). Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. You can set CloudWatch alarms and get automated Amazon SNS notifications when the status of your Route 53 health check changes. As an example, imagine an online payment gateway that reports the health of its platform to merchants worldwide, via a status page. This page is hosted on EC2 and fetches platform health data from DynamoDB. In this case, you could configure a CloudWatch alarm for your Route 53 health check, so that when the alarm threshold is breached and the payment gateway is no longer considered healthy, CloudWatch publishes this event to an SNS topic, which pushes this message to a subscribing Lambda function, which in turn updates the DynamoDB table that populates the status page (a minimal sketch of this function appears after this list). This event-driven approach avoids any kind of manual update to the status page visited by merchants.

  • AWS Direct Connect (AWS DX): Makes it easy to establish a dedicated network connection from your premises to AWS, which can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. You can monitor physical DX connections using CloudWatch alarms, and send SNS messages when alarms change their status. As an example, when a DX connection state shifts to 0 (zero), indicating that the connection is down, this event can be published to an SNS topic, which can fan out this message to impacted servers through HTTP endpoints, so that they might reroute their traffic through a different connection instead. This is an event-driven approach to connectivity resilience (see the alarm sketch after this list).
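
Picking up the Route 53 status-page example above, a subscribing Lambda function might look roughly like the sketch below. The “platform-status” table, its key names, and the “payment-gateway” component value are hypothetical; the alarm fields read here (NewStateValue, NewStateReason, StateChangeTime) follow the standard CloudWatch alarm notification payload.

```python
import json

import boto3

# Illustrative table name and key schema; not part of any AWS-defined resource.
table = boto3.resource("dynamodb").Table("platform-status")


def handler(event, context):
    # CloudWatch alarm notifications arrive as JSON in the SNS message body.
    alarm = json.loads(event["Records"][0]["Sns"]["Message"])
    healthy = alarm["NewStateValue"] == "OK"  # "ALARM" means the health check is failing

    table.put_item(
        Item={
            "component": "payment-gateway",  # illustrative partition key value
            "status": "operational" if healthy else "degraded",
            "updated_at": alarm["StateChangeTime"],
            "reason": alarm["NewStateReason"],
        }
    )
```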
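
For the AWS Direct Connect case, the alarm itself could be created along these lines, assuming the AWS/DX ConnectionState metric (1 = up, 0 = down); the connection ID and SNS topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="dx-connection-down",
    Namespace="AWS/DX",
    MetricName="ConnectionState",  # 1 = connection up, 0 = connection down
    Dimensions=[{"Name": "ConnectionId", "Value": "dxcon-EXAMPLE"}],  # placeholder
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="LessThanThreshold",  # fires when the state drops to 0
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:dx-alerts"],  # placeholder topic
)
```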

More event-driven computing on AWS

In addition to SNS, event-driven computing is also addressed by Amazon CloudWatch Events, which delivers a near real-time stream of system events that describe changes in AWS resources. With CloudWatch Events, you can route each event type to one or more targets, such as Lambda functions, SNS topics, SQS queues, and Kinesis streams.

Many AWS services publish events to CloudWatch. As an example, you can get CloudWatch Events to capture events on your ETL (Extract, Transform, Load) jobs running on AWS Glue and push failed ones to an SQS queue, so that you can retry them later.
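
As a rough sketch of that AWS Glue example, the rule and target could be created with boto3 as shown below; the rule name and queue ARN are placeholders, and the queue's access policy must separately allow CloudWatch Events to deliver messages to it.

```python
import json

import boto3

events = boto3.client("events")

# Match Glue job runs that end in the FAILED state.
events.put_rule(
    Name="glue-etl-failures",
    EventPattern=json.dumps(
        {
            "source": ["aws.glue"],
            "detail-type": ["Glue Job State Change"],
            "detail": {"state": ["FAILED"]},
        }
    ),
)

# Route matching events to an existing SQS queue (placeholder ARN).
events.put_targets(
    Rule="glue-etl-failures",
    Targets=[{"Id": "retry-queue", "Arn": "arn:aws:sqs:us-east-1:123456789012:glue-retries"}],
)
```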

Conclusion

Amazon SNS is a pub/sub messaging service that can be used as an event-driven computing hub for AWS customers worldwide. By capturing events natively triggered by AWS services, such as EC2, S3 and RDS, you can automate and optimize all kinds of workflows, including scaling, testing, encoding, profiling, broadcasting, discovery, failover, and much more. The business use cases presented in this post ranged from recruiting websites and scientific research to geographic systems, social networks, retail websites, and news portals.

Start now by visiting Amazon SNS in the AWS Management Console, or by trying the AWS 10-Minute Tutorial, Send Fan-out Event Notifications with Amazon SNS and Amazon SQS.

 

Sky: People Can’t Pirate Live Soccer in the UK Anymore

Post Syndicated from Andy original https://torrentfreak.com/sky-people-cant-pirate-live-soccer-in-the-uk-anymore-171108/

The commotion over the set-top box streaming phenomenon is showing no signs of dying down and if day one at the Cable and Satellite Broadcasting Association of Asia (CASBAA) Conference 2017 was anything to go by, things are only heating up.

Held at Studio City in Macau, the conference has a strong anti-piracy element and was opened by Joe Welch, CASBAA Board Chairman and SVP Public Affairs Asia, 21st Century Fox. He began Tuesday by noting the important recent launch of a brand new anti-piracy initiative.

“CASBAA recently launched the Coalition Against Piracy, funded by 18 of the region’s content players and distribution partners,” he said.

TF reported on the formation of the coalition mid-October. It includes heavyweights such as Disney, Fox, HBO, NBCUniversal and BBC Worldwide, and will have a strong focus on the illicit set-top box market.

Illegal streaming devices (or ISDs, as the industry calls them), were directly addressed in a segment yesterday afternoon titled Face To Face. Led by Dr. Ros Lynch, Director of Copyright & IP Enforcement at the UK Intellectual Property Office, the session detailed the “onslaught of online piracy” and the rise of ISDs that is apparently “shaking the market”.

Given the apparent gravity of those statements, the following will probably come as a surprise. According to Lynch, the UK IPO sought the opinion of UK-based rightsholders about the pirate box phenomenon a while back after being informed of their popularity in the East. The response was that pirate boxes weren’t an issue. It didn’t take long, however, for things to blow up.

“The UKIPO provides intelligence and evidence to industry and the Police Intellectual Property Crime Unit (PIPCU) in London who then take enforcement actions,” Lynch explained.

“We first heard about the issues with ISDs from [broadcaster] TVB in Hong Kong and we then consulted the UK rights holders who responded that it wasn’t a problem. Two years later the issue just exploded.”

The evidence of that in the UK isn’t difficult to find. In addition to millions of devices running both free Kodi add-ons and subscription-based systems, the app market has bloomed too, offering free or near-to-free content to all.

This caught the eye of the Premier League who this year obtained two pioneering injunctions (1,2) to tackle live streams of football games. Streams are blocked by local ISPs in real-time, making illicit online viewing a more painful experience than it ever has been. No doubt progress has been made on this front, with thousands of streams blocked, but according to broadcaster Sky, the results are unprecedented.

“Site-blocking has moved the goalposts significantly,” said Matthew Hibbert, head of litigation at Sky UK.

“In the UK you cannot watch pirated live Premier League content anymore,” he said.

While progress has been good, the statement is overly enthusiastic. TF sources have been monitoring the availability of pirate streams on around a dozen illicit sites and services every Saturday (when it is actually illegal to broadcast matches in the UK) and service has been steady on around half of them, and intermittent at worst on the rest.

There are hundreds of other platforms available so while many are definitely affected by Premier League blocking, it’s safe to assume that live football piracy hasn’t been wiped out. Nevertheless, it would be wrong to suggest that no progress has been made, in this and other related areas.

Kevin Plumb, Director of Legal Services at The Premier League, said that pubs showing football from illegal streams had also massively dwindled in numbers.

“In the past 18 months the illegal broadcasting of live Premier League matches in pubs in the UK has been decimated,” he said.

This result is almost certainly down to prosecutions taken in tandem with the Federation Against Copyright Theft (FACT), that have seen several landlords landed with large fines. Indeed, both sides of the market have been tackled, with both licensed premises and IPTV device sellers being targeted.

“The most successful thing we’ve done to combat piracy has been to undertake criminal prosecutions against ISD piracy,” said FACT chief Kieron Sharp yesterday. “Everyone is pleading guilty to these offenses.”

Most, if not all, FACT-led prosecutions target device and subscription sellers under fraud legislation, but that could change in the future, Lynch of the Intellectual Property Office said.

“While the UK works to update its legislation, we can’t wait for the new legislation to take enforcement actions and we rely heavily on ‘conspiracy to defraud’ charges, and have successfully prosecuted a number of ISD retailers,” she said.

Finally, information provided yesterday by networking company Cisco shed light on what it costs to run a subscription-based pirate IPTV operation.

Director of Intelligence & Security Operations Avigail Gutman said a pirate IPTV server offering 1,000 channels to around 1,000 subscribers can cost as little as 2,000 euros per month to run but can generate 12,000 euros in revenue during the same period.

“In April of 2017, ten major paid TV and content providers had relinquished 3.09 million euros per month to 285 ISD-based streaming pirate syndicates,” she said.

There’s little doubt that IPTV piracy, both paid and free, is here to stay. The big question is how it will be tackled short and long-term and whether any changes in legislation will have any unintended knock-on effects.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Amazon Redshift Dense Compute (DC2) Nodes Deliver Twice the Performance as DC1 at the Same Price

Post Syndicated from Quaseer Mujawar original https://aws.amazon.com/blogs/big-data/amazon-redshift-dense-compute-dc2-nodes-deliver-twice-the-performance-as-dc1-at-the-same-price/

Amazon Redshift makes analyzing exabyte-scale data fast, simple, and cost-effective. It delivers advanced data warehousing capabilities, including parallel execution, compressed columnar storage, and end-to-end encryption as a fully managed service, for less than $1,000/TB/year. With Amazon Redshift Spectrum, you can run SQL queries directly against exabytes of unstructured data in Amazon S3 for $5/TB scanned.

Today, we are making our Dense Compute (DC) family faster and more cost-effective with new second-generation Dense Compute (DC2) nodes at the same price as our previous generation DC1. DC2 is designed for demanding data warehousing workloads that require low latency and high throughput. DC2 features powerful Intel E5-2686 v4 (Broadwell) CPUs, fast DDR4 memory, and NVMe-based solid state disks.

We’ve tuned Amazon Redshift to take advantage of the better CPU, network, and disk on DC2 nodes, providing up to twice the performance of DC1 at the same price. Our DC2.8xlarge instances now provide twice the memory per slice of data and an optimized storage layout with 30 percent better storage utilization.

Customer successes

Several flagship customers, ranging from fast growing startups to large Fortune 100 companies, previewed the new DC2 node type. In their tests, DC2 provided up to twice the performance as DC1. Our preview customers saw faster ETL (extract, transform, and load) jobs, higher query throughput, better concurrency, faster reports, and shorter data-to-insights—all at the same cost as DC1. DC2.8xlarge customers also noted that their databases used up to 30 percent less disk space due to our optimized storage format, reducing their costs.

4Cite Marketing, one of America’s fastest growing private companies, uses Amazon Redshift to analyze customer data and determine personalized product recommendations for retailers. “Amazon Redshift’s new DC2 node is giving us a 100 percent performance increase, allowing us to provide faster insights for our retailers, more cost-effectively, to drive incremental revenue,” said Jim Finnerty, 4Cite’s senior vice president of product.

BrandVerity, a Seattle-based brand protection and compliance company, provides solutions to monitor, detect, and mitigate online brand, trademark, and compliance abuse. “We saw a 70 percent performance boost with the DC2 nodes for running Redshift Spectrum queries. As a result, we can analyze far more data for our customers and deliver results much faster,” said Hyung-Joon Kim, principal software engineer at BrandVerity.

“Amazon Redshift is at the core of our operations and our marketing automation tools,” said Jarno Kartela, head of analytics and chief data scientist at DNA Plc, one of the leading Finnish telecommunications groups and Finland’s largest cable operator and pay TV provider. “We saw a 52 percent performance gain in moving to Amazon Redshift’s DC2 nodes. We can now run queries in half the time, allowing us to provide more analytics power and reduce time-to-insight for our analytics and marketing automation users.”

You can read about their experiences on our Customer Success page.

Get started

You can try the new node type using our getting started guide. Just choose dc2.large or dc2.8xlarge in the Amazon Redshift console.

If you have a DC1.large Amazon Redshift cluster, you can restore to a new DC2.large cluster using an existing snapshot. To migrate from DS2.xlarge, DS2.8xlarge, or DC1.8xlarge Amazon Redshift clusters, you can use the resize operation to move data to your new DC2 cluster. For more information, see Clusters and Nodes in Amazon Redshift.
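
For readers who prefer the API to the console, here is a hedged boto3 sketch of the two migration paths described above; all identifiers and node counts are placeholders, and you should confirm the details against the Amazon Redshift documentation for your setup.

```python
import boto3

redshift = boto3.client("redshift")

# Path 1: dc1.large -> dc2.large, by restoring an existing snapshot onto DC2 nodes.
redshift.restore_from_cluster_snapshot(
    ClusterIdentifier="analytics-dc2",        # new cluster name (placeholder)
    SnapshotIdentifier="analytics-dc1-snap",  # existing DC1 snapshot (placeholder)
    NodeType="dc2.large",
)

# Path 2: ds2.xlarge, ds2.8xlarge, or dc1.8xlarge -> dc2.8xlarge, via a resize.
redshift.modify_cluster(
    ClusterIdentifier="analytics",            # existing cluster (placeholder)
    ClusterType="multi-node",
    NodeType="dc2.8xlarge",
    NumberOfNodes=4,                          # size to your workload
)
```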

To get the latest Amazon Redshift feature announcements, check out our What’s New page, and subscribe to the RSS feed.

Popcorn Time Creator Readies BitTorrent & Blockchain-Powered Video Platform

Post Syndicated from Andy original https://torrentfreak.com/popcorn-time-creator-readies-bittorrent-blockchain-powered-youtube-competitor-171012/

Without a doubt, YouTube is one of the most important websites available on the Internet today.

Its massive archive of videos brings pleasure to millions on a daily basis but its centralized nature means that owner Google always exercises control.

Over the years, people have looked to decentralize the YouTube concept and the latest project hoping to shake up the market has a particularly interesting player onboard.

Until 2015, only insiders knew that Argentinian designer Federico Abad was actually ‘Sebastian’, the shadowy figure behind notorious content sharing platform Popcorn Time.

Now he’s part of the team behind Flixxo, a BitTorrent and blockchain-powered startup hoping to wrestle a share of the video market from YouTube. Here’s how the team, which features blockchain startup RSK Labs, hopes things will play out.

The Flixxo network will have no centralized storage of data, eliminating the need for expensive hosting along with associated costs. Instead, transfers will take place between peers using BitTorrent, meaning video content will be stored on the machines of Flixxo users. In practice, the content will be downloaded and uploaded in much the same way as users do on The Pirate Bay or indeed Abad’s baby, Popcorn Time.

However, there’s a twist to the system that envisions content creators, content consumers, and network participants (seeders) making revenue from their efforts.

At the heart of the Flixxo system are digital tokens (think virtual currency), called Flixx. These Flixx ‘coins’, which will go on sale in 12 days, can be used to buy access to content. Creators can also opt to pay consumers when those people help to distribute their content to others.

“Free from structural costs, producers can share the earnings from their content with the network that supports them,” the team explains.

“This way you get paid for helping us improve Flixxo, and you earn credits (in the form of digital tokens called Flixx) for watching higher quality content. Having no intermediaries means that the price you pay for watching the content that you actually want to watch is lower and fairer.”

The Flixxo team

In addition to earning tokens from helping to distribute content, people in the Flixxo ecosystem can also earn currency by watching sponsored content, i.e. advertisements. While in a traditional system adverts are often considered a nuisance, Flixx tokens have real value, with a promise that users will be able to trade their Flixx not only for videos, but also for tangible and semi-tangible goods.

“Use your Flixx to reward the producers you follow, encouraging them to create more awesome content. Or keep your Flixx in your wallet and use them to buy a movie ticket, a pair of shoes from an online retailer, a chest of coins in your favourite game or even convert them to old-fashioned cash or up-and-coming digital assets, like Bitcoin,” the team explains.

The Flixxo team have big plans. After foundation in early 2016, the second quarter of 2017 saw the completion of a functional alpha release. In a little under two weeks, the project will begin its token generation event, with new offices in Los Angeles planned for the first half of 2018 alongside a premiere of the Flixxo platform.

“A total of 1,000,000,000 (one billion) Flixx tokens will be issued. A maximum of 300,000,000 (three hundred million) tokens will be sold. Some of these tokens (not more than 33% or 100,000,000 Flixx) may be sold with anticipation of the token allocation event to strategic investors,” Flixxo states.

Like all content platforms, Flixxo will live or die by the quality of the content it provides and whether, at least in the first instance, it can persuade people to part with their hard-earned cash. Only time will tell whether its content will be worth a premium over readily accessible YouTube content but with much-reduced costs, it may tempt creators seeking a bigger piece of the pie.

“Flixxo will also educate its community, teaching its users that in this new internet era value can be held and transferred online without intermediaries, a value that can be earned back by participating in a community, by contributing, being rewarded for every single social interaction,” the team explains.

Of course, the elephant in the room is what will happen when people begin sharing copyrighted content via Flixxo. Certainly, the fact that Popcorn Time’s founder is a key player and rival streaming platform Stremio is listed as a partner means that things could get a bit spicy later on.

Nevertheless, the team suggests that piracy and spam content distribution will be limited by mechanisms already built into the system.

“[A]uthors have to time-block tokens in a smart contract (set as a warranty) in order to upload content. This contract will also handle and block their earnings for a certain period of time, so that in the case of a dispute the unfair-uploader may lose those tokens,” they explain.

That being said, Flixxo also says that “there is no way” for third parties to censor content “which means that anyone has the chance of making any piece of media available on the network.” However, Flixxo says it will develop tools for filtering what it describes as “inappropriate content.”

At this point, things start to become a little unclear. On the one hand Flixxo says it could become a “revolutionary tool for uncensorable and untraceable media” yet on the other it says that it’s necessary to ensure that adult content, for example, isn’t seen by kids.

“We know there is a thin line between filtering or curating content and censorship, and it is a fact that we have an open network for everyone to upload any content. However, Flixxo as a platform will apply certain filtering based on clear rules – there should be a behavior-code for uploaders in order to offer the right content to the right user,” Flixxo explains.

To this end, Flixxo says it will deploy a centralized curation function, carried out by 101 delegates elected by the community, which will become progressively decentralized over time.

“This curation will have a cost, paid in Flixx, and will be collected from the warranty blocked by the content uploaders,” they add.

There can be little doubt that if Flixxo begins ‘curating’ unsuitable content, copyright holders will call on it to do the same for their content too. And, if the platform really takes off, 101 curators probably won’t scratch the surface. There’s also the not inconsiderable issue of what might happen to curators’ judgment when they’re incentivized to block content.

Finally, for those sick of “not available in your region” messages, there’s good and bad news. Flixxo insists there will be no geo-blocking of content on its part but individual creators will still have that feature available to them, should they choose.

The Flixx whitepaper can be downloaded here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Roku Shows FBI Warning to Pirate Channel Users

Post Syndicated from Ernesto original https://torrentfreak.com/roku-shows-fbi-warning-to-pirate-channel-users-171009/

In recent years it has become much easier to stream movies and TV-shows over the Internet.

Legal services such as Netflix and HBO are flourishing, but at the same time millions of people are streaming from unauthorized sources, often paired with perfectly legal streaming platforms and devices.

Hollywood insiders have dubbed this trend “Piracy 3.0” and are actively working with stakeholders to address the threat. One of the companies rightsholders are working with is Roku, known for its easy-to-use media players.

Earlier this year a Mexican court ordered retailers to take the Roku media player off the shelves. This legal battle is still ongoing, but it was a clear signal to the company, which now has its own anti-piracy team.

Several third-party “private” channels have been removed from the player in recent weeks as they violate Roku’s terms and conditions. These include the hugely popular streaming channel XTV, which offered access to infringing content.

After its removal, XTV briefly returned as XTV 2, but that didn’t last for long. The infringing channel was soon removed again, this time showing the FBI’s anti-piracy seal followed by a rather ominous message.

“FBI Anti-Piracy Warning: Unauthorized copying is punishable under federal law,” it reads. “Roku has removed this unauthorized service due to repeated claims of copyright infringement.”

FBI Warning (via Cordcuttersnews)

The unusual warning was picked up by Cordcuttersnews and states that Roku itself removed the channel.

To some it may seem that the FBI is cracking down on Roku channels, but this is not the case. The anti-piracy seal and associated warning are often used in cases where the organization is not actively involved, to add extra weight. The FBI supports this, as long as certain standards are met.

A Roku spokesperson confirmed to TorrentFreak that they’re using it of their own accord here.

“We want to send a clear message to Roku customers and to publishers that any publication of pirated content on our platform is a violation of law and our platform rules,” the company says.

“We have recently expanded the messaging that we display to customers that install non-certified channels to alert them to the associated risks, and we display the FBI’s publicly available warning when we remove channels for copyright violations.”

The strong language shows that Roku is taking its efforts to crack down on infringing channels very seriously. A few weeks ago the company started to warn users that pirate channels may be removed without prior notice.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

‘China Should Crack Down on Pirate Streaming Box Distributors’

Post Syndicated from Ernesto original https://torrentfreak.com/china-should-crack-down-on-pirate-streaming-box-distributors-171001/

The International Intellectual Property Alliance (IIPA) has informed the U.S. Government that China must step up its game to better protect the interests of copyright holders.

The US Trade Representative is reviewing whether China has done enough to comply with its WTO obligations, but IIPA members including RIAA and MPAA believe there is still work to be done.

One of the areas to which the Chinese Government should pay more attention is enforcement. Although a lot of progress has been made in recent years, especially in combating music piracy, new threats have emerged.

One of the areas highlighted by IIPA is the streaming box ecosystem, aptly dubbed as “piracy 3.0” by the Motion Picture Association. This appeals to a new breed of pirates who rely on set-top boxes which are filled with pirate add-ons.

Industry groups often refer to these boxes as Illicit Streaming Devices (ISDs) and they see China as a major hub through which these are shipped around the world.

“ISDs are media boxes, set-top boxes or other devices that allow users, through the use of piracy apps, to stream, download, or otherwise access unauthorized content from the Internet,” IIPA writes.

“These devices have emerged as a significant means through which pirated motion picture and television content is accessed on televisions in homes in China as well as elsewhere in Asia and increasingly around the world. China is a hub for the manufacture of these devices.”

Although the hardware and media players are perfectly legal, things get problematic when they’re loaded with pirate add-ons and promoted as tools to facilitate copyright infringement.

IIPA states that the Chinese Government should do more to stop these devices from being sold. Cracking down on the main distribution points would be a good start, they say.

“However it is done, the Chinese government must increase enforcement efforts, including cracking down on piracy apps and on device retailers and/or distributors who preload the devices with apps that facilitate infringement.

“Moreover, because China is the main source of this problem spreading across Asia, the Chinese government should take immediate actions against key distribution points for devices that are being used illegally,” IIPA adds.

In addition to pirate boxes, the industry groups also want China to beef up its enforcement against online journal piracy, pirate apps, unauthorized camcording, and unlicensed streaming platforms.

IIPA intends to explain the above and several other shortcomings in detail during a hearing in Washington, DC, next Wednesday. The group has submitted an overview of its testimony to the Trade Representative, which is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Roku Is Building Its Own Anti-Piracy Team

Post Syndicated from Ernesto original https://torrentfreak.com/roku-building-anti-piracy-team/

Online streaming piracy is on the rise and many people use dedicated media players to watch unauthorized content through their regular TV.

Although the media players themselves can be used for perfectly legal means, third-party add-ons turn them into pirate machines, providing access to movies, TV-shows and more.

The entertainment industry isn’t happy with this development and is trying to halt further growth wherever possible.

Just a few months ago, Roku was harshly confronted with this new reality when a Mexican court ordered local retailers to take its media player off the shelves. This legal battle is still ongoing, but it’s clear that Roku itself is now taking a more proactive role.

While Roku never permitted any infringing content, the company is taking steps to better deal with the problem. The company has already begun warning users of copyright-infringing third-party channels, but that was only the beginning.

Two new job applications posted by Roku a few days ago reveal that the company is putting together an in-house anti-piracy team to keep the problem under control.

One of the new positions is that of Director, Anti-Piracy and Content Security. Roku stresses that this is a brand-new position, which involves shaping the company’s anti-piracy strategy.

“The Director, Anti-Piracy and Content Security is responsible for defining the technology roadmap and overseeing implementation of anti-piracy and content security initiatives at Roku,” the application reads.

“This role requires ability to benchmark Roku against best practices (i.e. MPAA, Studio & Customer) but also requires an emphasis on maintaining deep insight into the evolving threat landscape and technical challenges of combating piracy.”

The job posting

The second job listed by Roku is that of an anti-piracy software engineer. One of the main tasks of this position is to write software for the Roku to monitor and prevent piracy.

“In this role, you will be responsible for implementing anti-piracy and content protection technology as it pertains to Roku OS,” the application explains.

“This entails developing software features, conducting forensic investigations and mining Roku’s big data platform and other threat intelligence sources for copyright infringement activities on and off platform.”

While a two-person team is relatively small, it’s possible that this will grow in the future, if there aren’t people in a similar role already. What’s clear, however, is that Roku takes piracy very seriously.

With Hollywood closely eyeing the streaming box landscape, the company is doing its best to keep copyright holders onside.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Prime Day 2017 – Powered by AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/prime-day-2017-powered-by-aws/

The third annual Prime Day set another round of records for global orders, topping Black Friday and Cyber Monday, making it the biggest day in Amazon retail history. Over the course of the 30-hour event, tens of millions of Prime members purchased things like Echo Dots, Fire tablets, programmable pressure cookers, espresso machines, rechargeable batteries, and much more! July 11th also set a record for the number of new Prime memberships, as people signed up in order to take advantage of hundreds of thousands of deals. Amazon customers shopped online and made heavy use of the Amazon App, with mobile orders more than doubling from last Prime Day.

Powered by AWS
Last year I told you about How AWS Powered Amazon’s Biggest Day Ever, and shared what the team had learned with regard to preparation, automation, monitoring, and thinking big. All of those lessons still apply and you can read that post to learn more. Preparation for this year’s Prime Day (which started just days after Prime Day 2016 wrapped up) started by collecting and sharing best practices and identifying areas for improvement, proceeding to implementation and stress testing as the big day approached. Two of the best practices involve auditing and GameDay:

Auditing – This is a formal way for us to track preparations, identify risks, and to track progress against our objectives. Each team must respond to a series of detailed technical and operational questions that are designed to help them determine their readiness. On the technical side, questions could revolve around time to recovery after a database failure, including the all-important check of the TTL (time to live) for the CNAME. Operational questions address schedules for on-call personnel, points of contact, and ownership of services & instances.

GameDay – This practice (which I believe originated with former Amazonian Jesse Robbins) is intended to validate all of the capacity planning & preparation and to verify that all of the necessary operational practices are in place and work as expected. It introduces simulated failures and helps to train the team to identify and quickly resolve issues, building muscle memory in the process. It also tests failover and recovery capabilities, and can expose latent defects that are lurking under the covers. GameDays help teams to understand scaling drivers (page views, orders, and so forth) and give them an opportunity to test their scaling practices. To learn more, read Resilience Engineering: Learning to Embrace Failure or watch the video: GameDay: Creating Resiliency Through Destruction.

Prime Day 2017 Metrics
So, how did we do this year?

The AWS teams checked their dashboards and log files, and were happy to share their metrics with me. Here are a few of the most interesting ones:

Block Storage – Use of Amazon Elastic Block Store (EBS) grew by 40% year-over-year, with aggregate data transfer jumping to 52 petabytes (a 50% increase) for the day and total I/O requests rising to 835 million (a 30% increase). The team told me that they loved the elasticity of EBS, and that they were able to ramp down on capacity after Prime Day concluded instead of being stuck with it.

NoSQL Database – Amazon DynamoDB requests from Alexa, the Amazon.com sites, and the Amazon fulfillment centers totaled 3.34 trillion, peaking at 12.9 million per second. According to the team, the extreme scale, consistent performance, and high availability of DynamoDB let them meet the needs of Prime Day without breaking a sweat.

Stack Creation – Nearly 31,000 AWS CloudFormation stacks were created for Prime Day in order to bring additional AWS resources on line.

API Usage – AWS CloudTrail processed over 50 billion events and tracked more than 419 billion calls to various AWS APIs, all in support of Prime Day.

Configuration Tracking – AWS Config generated over 14 million configuration items for AWS resources.

You Can Do It
Running an event that is as large, complex, and mission-critical as Prime Day takes a lot of planning. If you have an event of this type in mind, please take a look at our new Infrastructure Event Readiness white paper. Inside, you will learn how to design and provision your applications to smoothly handle planned scaling events such as product launches or seasonal traffic spikes, with sections on automation, resiliency, cost optimization, event management, and more.

Jeff;

 

A printing GIF camera? Is that even a thing?

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/printing-gif-camera/

Abhishek Singh’s printing GIF camera uses two Raspberry Pis, the Model 3 and the Zero W, to take animated images and display them on an ejectable secondary screen.

Instagif – A DIY Camera that prints GIFs instantly

I built a camera that snaps a GIF and ejects a little cartridge so you can hold a moving photo in your hand! I’m calling it the “Instagif NextStep”.

The humble GIF

Created in 1987, Graphics Interchange Format files, better known as GIFs, have somewhat taken over the internet. And whether you pronounce it G-IF or J-IF, you’ve probably used at least one to express an emotion, animate images on your screen, or create small, movie-like memories of events.

In 2004, all patents on the humble GIF expired, which added to the increased usage of the file format. And by the early 2010s, sites such as giphy.com and phone-based GIF keyboards were introduced into our day-to-day lives.

A GIF from a scene in The Great Gatsby - Raspberry Pi GIF Camera

Welcome to the age of the GIF

Polaroid cameras

Polaroid cameras have a somewhat older history. While the first documented instant camera came into existence in 1923, commercial iterations made their way to market in the 1940s, with Polaroid’s model 95 Land Camera.

In recent years, the instant camera has come back into fashion, with camera stores and high street fashion retailers alike stocking their shelves with pastel-coloured, affordable models. But nothing beats the iconic look of the Polaroid Spirit series, and the rainbow colour stripe that separates it from its competitors.

Polaroid Spirit Camera - Raspberry Pi GIF Camera

Shake it like a Polaroid picture…

And if you’re one of our younger readers and find yourself wondering where else you’ve seen those stripes, you’re probably more familiar with previous versions of the Instagram logo, because, well…

Instagram Logo - Raspberry Pi GIF Camera

I’m sorry for the comment on the previous image. It was just too easy.

Abhishek Singh’s printing GIF camera

Abhishek labels his creation the Instagif NextStep, and cites his inspiration for the project as simply wanting to give it a go, and to see if he could hold a ‘moving photo’.

“What I love about these kinds of projects is that they involve a bunch of different skill sets and disciplines”, he explains at the start of his lengthy, highly GIFed and wonderfully detailed imgur tutorial. “Hardware, software, 3D modeling, 3D printing, circuit design, mechanical/electrical engineering, design, fabrication etc. that need to be integrated for it to work seamlessly. Ironically, this is also what I hate about these kinds of projects.”

Care to see how the whole thing comes together? Well, in the true spirit of the project, Abhishek created this handy step-by-step GIF.

Piecing it together

I thought I’ll start off with the entire assembly and then break down the different elements. As you can see, everything is assembled from the base up in layers helping in easy assembly and quick disassembly for troubleshooting

The build comes in two parts – the main camera housing a Raspberry Pi 3 and Camera Module V2, and the ejectable cartridge fitted with Raspberry Pi Zero W and Adafruit PiTFT screen.

When the capture button is pressed, the camera takes 3 seconds’ worth of images and converts them into .gif format via a Python script. Once compressed and complete, the Pi 3 sends the file to the Zero W via a network connection. When it is satisfied that the Zero W has the image, the Pi 3 automatically ejects the ‘printed GIF’ cartridge, and the image is displayed.
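
The exact code lives in Abhishek's write-up, but as a rough idea of the flow on the Pi 3, a capture-to-GIF-to-transfer script might look something like the minimal sketch below. This is not his implementation: the frame count, resolution, and the Zero W's address are illustrative assumptions, and it leans on picamera, Pillow, and scp for the three stages he describes.

```python
import subprocess
import time

from picamera import PiCamera
from PIL import Image

FRAME_COUNT = 30  # roughly 3 seconds of frames at ~10 captures per second
ZERO_W_TARGET = "pi@zerow.local:/home/pi/incoming.gif"  # placeholder address


def capture_frames():
    paths = []
    with PiCamera(resolution=(320, 240)) as camera:
        time.sleep(2)  # give the sensor a moment to settle
        for i, path in enumerate(camera.capture_continuous("frame{counter:03d}.jpg")):
            paths.append(path)
            if i + 1 >= FRAME_COUNT:
                break
    return paths


def build_gif(paths, out_path="capture.gif"):
    frames = [Image.open(p) for p in paths]
    # Pillow assembles the animated GIF: 100 ms per frame, looping forever.
    frames[0].save(out_path, save_all=True, append_images=frames[1:],
                   duration=100, loop=0)
    return out_path


def send_to_cartridge(gif_path):
    # Push the finished GIF to the Zero W in the cartridge over the network.
    subprocess.run(["scp", gif_path, ZERO_W_TARGET], check=True)


if __name__ == "__main__":
    send_to_cartridge(build_gif(capture_frames()))
```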

A demonstration of how the GIF is displayed on the Raspberry Pi GIF Camera

For a full breakdown of code, 3D-printable files, and images, check out the full imgur post. You can see more of Abhishek’s work at his website here.

Create GIFs with a Raspberry Pi

Want to create GIFs with your Raspberry Pi? Of course you do. Who wouldn’t? So check out our free time-lapse animations resource. As with all our learning resources, the project is free for you to use at home and in your clubs or classrooms. And once you’ve mastered the art of Pi-based GIF creation, why not incorporate it into another project? Say, a motion-detecting security camera or an on-the-go tweeting GIF camera – the possibilities are endless.

And make sure you check out Abhishek’s other Raspberry Pi GIF project, Peeqo, who we covered previously in the blog. So cute. SO CUTE.

The post A printing GIF camera? Is that even a thing? appeared first on Raspberry Pi.

How Aussie ecommerce stores can compete with the retail giant Amazon

Post Syndicated from chris desantis original https://www.anchor.com.au/blog/2017/08/aussie-ecommerce-stores-vs-amazon/

The powerhouse Amazon retail store is set to launch in Australia toward the end of 2018 and Aussie ecommerce retailers need to ready themselves for the competition storm ahead.

2018 may seem a while away but getting your ecommerce site in tip top shape and ready to compete can take time. Check out these helpful hints from the Anchor crew.

Speed kills

If you’ve ever heard of the tale of the tortoise and the hare, the moral is that “slow and steady wins the race”. This is definitely not the place for that phrase, because if your site loads as slowly as a 1995 dial-up connection, your ecommerce store will not, I repeat, will not win the race.

Site speed can be impacted by a number of factors, and you need to strike the right balance between a site that loads at lightning speed and one that delivers engaging content to your audience. There are many ways to check the performance of your site, including Anchor’s free hosting check-up or Pingdom.

Taking action on these factors can boost the performance of your site.

Here’s an interesting blog from the WebCEO team about site speed’s impact on conversion rates on-page, or check out our previous blog on maximising site performance.

Show me the money

As an ecommerce store, getting credit card details as fast as possible is probably at the top of your list, but it’s important to remember that it’s an actual person that needs to hand over the details.

Consider the customer’s experience whilst checking out. Making people log in to their account before checkout can lead to abandoned carts as customers try to remember the vital details. Similarly, making a customer enter all their details before displaying shipping costs is more of an annoyance than a benefit.

Built for growth

Before you blast out a promo email to your entire database or spend up big on PPC, consider what happens when this five-fold increase in traffic all jumps onto your site at around the same time.

Will your site come screeching to a sudden halt with a 504 or 408 error message, or ride high on the wave of increased traffic? If you have fixed infrastructure such as a dedicated server, or are utilising a VPS, then consider the maximum concurrent users that your site can handle.

Consider this. Amazon.com.au will be built on the scalable cloud infrastructure of Amazon Web Services and will utilise all the microservices and data mining technology to offer customers a seamless, personalised shopping experience. How will your business compete?

Search ready

Being found online is important for any business, but for ecommerce sites, it’s essential. Gaining results from SEO practices can take time, so beware of ‘quick fix’ guarantees from outsourced agencies.

Search Engine Optimisation (SEO) practices can have lasting effects. Good practices can ensure your site is found via organic search without huge advertising budgets; on the other hand, ‘black hat’ practices can push your ecommerce store into search oblivion.

SEO takes discipline and focus to get right. Here are some of our favourite hints for SEO greatness from those who live and breathe SEO:

  • Optimise your site for mobile
  • Use Meta Tags wisely
  • Leverage descriptive alt tags and image file names
  • Create content for people, not bots (keyword stuffing is a no no!)

SEO best practices are continually evolving, but the fundamentals remain the same: create a site that is designed to give users a great experience and the content they expect to find.

Google My Business is a free service that EVERY business should take advantage of. It is a listing service where your business can provide details such as address, phone number, website, and trading hours. It’s easy to update and manage: you can add photos and a physical address (if applicable), and display shopper reviews.

Get your site ship shape

Overwhelmed by these starter tips? If you are ready to get your site into tip-top shape, get in touch. We work with awesome partners like eWave who can help create a seamless online shopping experience.

 

The post How Aussie ecommerce stores can compete with the retail giant Amazon appeared first on AWS Managed Services by Anchor.

Roku Gets Tough on Pirate Channels, Warns Users

Post Syndicated from Ernesto original https://torrentfreak.com/roku-gets-tough-on-pirate-channels-warns-users-170815/

In recent years it has become much easier to stream movies and TV-shows over the Internet.

Legal services such as Netflix and HBO are flourishing, but there’s also a darker side to this streaming epidemic. Millions of people are streaming from unauthorized sources, often paired with perfectly legal streaming platforms and devices.

Hollywood insiders have dubbed this trend “Piracy 3.0” and are actively working with stakeholders to address the threat. One of the companies rightsholders are working with is Roku, known for its easy-to-use media players.

Earlier this year Roku was harshly confronted with this new piracy crackdown when a Mexican court ordered local retailers to take its media player off the shelves. While this legal battle isn’t over yet, it was clear to Roku that misuse of its platform wasn’t without consequences.

While Roku never permitted any infringing content, it appears that the company has recently made some adjustments to better deal with the problem, or at least clarify its stance.

Pirate content generally doesn’t show up in the official Roku Channel Store but is directly loaded onto the device through third-party “private” channels. A few weeks ago, Roku renamed these “private” channels to “non-certified” channels, while making it very clear that copyright infringement is not allowed.

A “WARNING!” message that pops up during the installation of these third-party channels stresses that Roku has no control over the content. In addition, the company notes that these channels may be removed if they link to copyright-infringing content.

Roku Warning

“By continuing, you acknowledge you are accessing a non-certified channel that may include content that is offensive or inappropriate for some audiences,” Roku’s warning reads.

“Moreover, if Roku determines that this channel violates copyright, contains illegal content, or otherwise violates Roku’s terms and conditions, then ROKU MAY REMOVE THIS CHANNEL WITHOUT PRIOR NOTICE.”

TorrentFreak reached out to Roku to find out how they plan to enforce this policy, but we have yet to hear back. According to Cord Cutters News, several piracy channels have already been removed recently, with other developers opting to leave the platform.

Roku’s General Counsel Steve Kay previously informed us that the company is taking the piracy problem seriously. Together with various stakeholders, they are working hard to address the problem.

“We actively work to prevent third-parties from using our platform to distribute copyright infringing content. Moreover, we have been actively working with other industry stakeholders on a wide range of anti-piracy initiatives,” Kay said.

Roku is not the only platform dealing with the piracy epidemic, the popular media player software Kodi is in the same boat. Kodi has also taken an active anti-piracy stance but they’re not banning any add-ons. They believe it would be pointless due to the open source nature of their software.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Pimoroni is 5 now!

Post Syndicated from guru original https://www.raspberrypi.org/blog/pimoroni-is-5-now/

Long read written by Pimoroni’s Paul Beech, best enjoyed over a cup o’ grog.

Every couple of years, I’ve done a “State of the Fleet” update here on the Raspberry Pi blog to tell everyone how the Sheffield Pirates are doing. Half a decade has gone by in a blink, but reading back over the previous posts shows that a lot has happened in that time!

TL;DR We’re an increasingly medium-sized design/manufacturing/e-commerce business with workshops in Sheffield, UK, and Essen, Germany, and we employ almost 40 people. We’re totally lovely. Thanks for supporting us!

 

We’ve come a long way, baby

I’m sitting looking out the window at Sheffield-on-Sea and feeling pretty lucky about how things are going. In the morning, I’ll be flying east for Maker Faire Tokyo with Niko (more on him later), and to say hi to some amazing people in Shenzhen (and to visit Huaqiangbei, of course). This is after I’ve already visited this year’s Maker Faires in New York, San Francisco, and Berlin.

Pimoroni started out small, but we’ve grown like weeds, and we’re steadily sauntering towards becoming a medium-sized business. That’s thanks to fantastic support from the people who buy our stuff and spread the word. In return, we try to be nice, friendly, and human in everything we do, and to make exciting things, ideally with our own hands here in Sheffield.

Pimoroni soldering

Handmade with love

We’ve made it onto a few ‘fastest-growing’ lists, and we’re in the top 500 of the Inc. 5000 Europe list. Adafruit did it first a few years back, and we’ve never gone wrong when we’ve followed in their footsteps.

The slightly weird nature of Pimoroni means we get listed as either a manufacturing or e-commerce business. In reality, we’re about four or five companies in one shell, which is very much against the conventions of “how business is done”. However, having seen what Adafruit, SparkFun, and Seeed do, we’re more than happy to design, manufacture, and sell our stuff in-house, as well as stocking the best stuff from across the maker community.

Pimoroni stocks

Product and process

The whole process of expansion has not been without its growing pains. We’re just under 40 people strong now, and have an outpost in Germany (also hilariously far from the sea for piratical activities). This means we’ve had to change things quickly to improve and automate processes, so that the wheels won’t fall off as things get bigger. Process optimization is incredibly interesting to a geek, especially the making sure that things are done well, that mistakes are easy to spot and to fix, and that nothing is missed.

At the end of 2015, we had a step change in how busy we were, and our post room and support started to suffer. As a consequence, we implemented measures to become more efficient, including small but important things like checking in parcels with a barcode scanner attached to a Raspberry Pi. That Pi has been happily running on the same SD card for a couple of years now without problems 😀

Pimoroni post room

Going postal?

We also hired a full-time support ninja, Matt, to keep the experience of getting stuff from us light and breezy and to ensure that any problems are sorted. He’s had hugely positive impact already by making the emails and replies you see more friendly. Of course, he’s also started using the laser cutters for tinkering projects. It’d be a shame to work at Pimoroni and not get to use all the wonderful toys, right?

Employing all the people

You can see some of the motley crew we employ here and there on the Pimoroni website. And if you drop by at the Raspberry Pi Birthday Party, Pi Wars, Maker Faires, Deer Shed Festival, or New Scientist Live in September, you’ll be seeing new Pimoroni faces as we start to engage with people more about what we do. On top of that, we’re starting to make proper videos (like Sandy’s soldering guide), as opposed to the 101 episodes of Bilge Tank we recorded in a rather off-the-cuff and haphazard fashion. Although that’s the beauty of Bilge Tank, right?

Pimoroni soldering

Such soldering setup

As Emma, Sandy, Lydia, and Tanya gel as a super creative team, we’re starting to create more formal educational resources, and to make kits that are suitable for a wider audience. Things like our Pi Zero W kits are products of their talents.

Emma is our new Head of Marketing. She’s really ‘The Only Marketing Person Who Would Ever Fit In At Pimoroni’, having been a core part of the Sheffield maker scene since we hung around with one Ben Nuttall, in the dark days before Raspberry Pi was a thing.

Through a series of fortunate coincidences, Niko and his equally talented wife Mena were there when we cut the first Pibow in 2012. They immediately pitched in to help us buy our second laser cutter so we could keep up with demand. They have been supporting Pimoroni with sourcing in East Asia, and now Niko has become a member of the Pirates’ Council and the Head of Engineering as we’re increasing the sophistication and scale of the things we do. The Unicorn HAT HD is one of his masterpieces.

Pimoroni devices

ALL the HATs!

We see ourselves as a wonderful island of misfit toys, and it feels good to have the best toy shop ever, and to support so many lovely people. Business is about more than just profits.

Where do we go to, me hearties?

So what are our plans? At the moment we’re still working absolutely flat-out as demand from wholesalers, retailers, and customers increases. We thought Raspberry Pi was big, but it turns out it’s just getting started. Near the end of 2016, it seemed to reach a whole new level of popularity, and still we continue to meet people to whom we have to explain what a Pi is. It’s a good problem to have.

We need a bigger space, but it’s been hard to find somewhere suitable in Sheffield that won’t mean we’re stuck on an industrial estate miles from civilisation. That would be bad for the crew; we like having world-class burritos on our doorstep.

The good news is, it looks like our search is at an end! Just in time for the arrival of our ‘Super-Turbo-Death-Star’ new production line, which will enable us to make devices in a bigger, better, faster, more ‘Now now now!’ fashion \o/

Pimoroni warehouse

Spacious, but not spacious enough!

We’ve got lots of treasure in the pipeline, but we want to pick up the pace of development even more and create many new HATs, pHATs, and SHIMs, e.g. for environmental sensing and audio applications. Picade will also be getting some love to make it slicker and more hackable.

We’re also starting to flirt with adding more engineering and production capabilities in-house. The plan is to try our hand at anodising, powder-coating, and maybe even injection-moulding if we get the space and find the right machine. Learning how to do things is amazing, and we love having an idea and being able to bring it to life in almost no time at all.

Pimoroni production

This is where the magic happens

Fanks!

There are so many people involved in supporting our success, and some people we love for just existing and doing wonderful things that make us want to do better. The biggest shout-outs go to Liz, Eben, Gordon, James, all the Raspberry Pi crew, and Limor and pt from Adafruit, for being the most supportive guiding lights a young maker company could ever need.

A note from us

It is amazing for us to witness the growth of businesses within the Raspberry Pi ecosystem. Pimoroni is a wonderful example of an organisation that is creating opportunities for makers within its local community, and the company is helping to reinvigorate Sheffield as the heart of making in the UK.

If you’d like to take advantage of the great products built by the Pirates, Monkeys, Robots, and Ninjas of Sheffield, you should do it soon: Pimoroni are giving everyone 20% off their homemade tech until 6 August.

Pimoroni, from all of us here at Pi Towers (both in the UK and USA), have a wonderful birthday, and many a grog on us!

The post Pimoroni is 5 now! appeared first on Raspberry Pi.