Tag Archives: developer

Epic Responds to Cheating Fortnite Kid’s Mom in Court

Post Syndicated from Ernesto original https://torrentfreak.com/epic-responds-to-cheating-fortnite-kids-mom-in-court-180424/

Last fall, Epic Games released Fortnite’s free-to-play “Battle Royale” game mode, generating massive interest among gamers.

This also included thousands of cheaters, many of whom were subsequently banned. Epic Games then went a step further by taking several cheaters to court for copyright infringement.

One of the alleged cheaters turned out to be a minor, who’s referred to by his initials C.R. in the Carolina District Court. Epic Games wasn’t aware of this when it filed the lawsuit, but the kid’s mother let the company know, loud and clear.

“This company is in the process of attempting to sue a 14-year-old child,” the mother informed the Court last fall.

Among other defenses, the mother highlighted that the EULA, which the game publisher relies heavily upon in the complaint, isn’t legally binding. The EULA states that minors require permission from a parent or legal guardian, which was not the case here.

“Please note parental consent was not issued to [my son] to play this free game produced by Epic Games, INC,” the mother wrote in her letter.

After this letter, things went quiet. Epic managed to locate and serve the defendant with help from a private investigator, but no official response to the complaint was filed. This eventually prompted Epic to request an entry of default.

However, US District Court Judge Malcolm Howard wouldn’t allow Epic to cruise to a win that easily. Instead, he ruled that the mother’s letter should be treated as a motion to dismiss the case.

“While it is true that defendant has not responded since proper service was effectuated, the letter from defendant’s mother detailing why this matter should be dismissed cannot be ignored,” Judge Howard wrote earlier this month.

As a result, Epic Games had to reply to the letter, which it did yesterday. In a redacted motion, the game publisher argues that most of the mother’s arguments fail to state a claim and are therefore irrelevant.

Epic argues that the only issue that remains is the lack of parental consent when C.R. agreed to the EULA and the Terms. The mother argued that these are not valid agreements because her son is a minor, but Epic disagrees.

“This ‘infancy defense’ is not available to C.R.,” Epic writes, pointing to case law in which another court ruled that a minor can’t use the infancy defense to void contractual obligations while keeping the benefits of the same contract.

“C.R. affirmatively agreed to abide by Epic’s Terms and EULA, and ‘retained the benefits’ of the contracts he entered into with Epic. Accordingly, C.R. should not be able to ‘use the infancy defense to void [his] contractual obligations by retaining the benefits of the contract[s]’.”

Epic further argues that it’s clear that the cheater infringed Epic’s copyrights and helped others do the same. As such, the company asks the Court to deny the mother’s motion to dismiss.

If the Court agrees, Epic can request an entry of default. It made the same request in a related case against another underage defendant, which the Court granted late last week.

If that happens, the underage defendants risk a default judgment. This is likely to include a claim for monetary damages as well as an injunction prohibiting the minors from any copyright infringement or cheating in the future.

A copy of Epic Games’ redacted reply is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Serverless Architectures with AWS Lambda: Overview and Best Practices

Post Syndicated from Andrew Baird original https://aws.amazon.com/blogs/architecture/serverless-architectures-with-aws-lambda-overview-and-best-practices/

For some organizations, the idea of “going serverless” can be daunting. But with an understanding of best practices, and the right tools, many serverless applications can be fully functional with only a few lines of code and little else.

Examples of fully serverless application use cases include:

  • Web or mobile backends – Create fully serverless mobile applications or websites by creating user-facing content in a native mobile application or static web content in an S3 bucket. Then have your front-end content integrate with Amazon API Gateway as a backend service API. Lambda functions then execute the business logic you’ve written for each of the API Gateway methods in your backend API.
  • Chatbots and virtual assistants – Build new serverless ways to interact with your customers, like customer support assistants and bots ready to engage customers on your company-run social media pages. The Amazon Alexa Skills Kit (ASK) and Amazon Lex can apply natural-language understanding to voice and freeform text input so that a Lambda function you write can intelligently respond and engage with users.
  • Internet of Things (IoT) backends – AWS IoT has direct integration for device messages to be routed to and processed by Lambda functions. That means you can implement serverless backends for highly secure, scalable IoT applications for uses like connected consumer appliances and intelligent manufacturing facilities.
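
As a minimal illustration of the first pattern above, here is a sketch of a Lambda function serving one API Gateway method. The route, response fields, and trimmed-down event types are hypothetical, not taken from the whitepaper:

```typescript
// Trimmed-down shapes of the API Gateway proxy event and result
// (only the fields this sketch uses).
interface ApiGatewayProxyEvent {
  httpMethod: string;
  path: string;
  body: string | null;
}

interface ApiGatewayProxyResult {
  statusCode: number;
  headers: Record<string, string>;
  body: string;
}

// API Gateway invokes this handler for each configured method;
// the business logic for the route lives entirely in the function.
const handler = async (
  event: ApiGatewayProxyEvent
): Promise<ApiGatewayProxyResult> => {
  if (event.httpMethod === "GET" && event.path === "/greeting") {
    return {
      statusCode: 200,
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message: "Hello from Lambda" }),
    };
  }
  // Anything else falls through to a 404.
  return {
    statusCode: 404,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ error: "Not found" }),
  };
};
```

In a full deployment, API Gateway maps each method in your backend API to a handler like this, so the static front end in S3 never talks to a server you manage.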

Using AWS Lambda as the logic layer of a serverless application can enable faster development, more experimentation, and greater innovation than a traditional, server-based environment.

We recently published the “Serverless Architectures with AWS Lambda: Overview and Best Practices” whitepaper to provide the guidance and best practices you need to write better Lambda functions and build better serverless architectures.

Once you’ve finished reading the whitepaper, here are a couple of additional resources I recommend as your next steps:

  1. If you would like to better understand some of the architecture pattern possibilities for serverless applications: Thirty Serverless Architectures in 30 Minutes (re:Invent 2017 video)
  2. If you’re ready to get hands-on and build a sample serverless application: AWS Serverless Workshops (GitHub Repository)
  3. If you’ve already built a serverless application and you’d like to ensure your application has been Well Architected: The Serverless Application Lens: AWS Well Architected Framework (Whitepaper)

About the Author

 

Andrew Baird is a Sr. Solutions Architect for AWS. Prior to becoming a Solutions Architect, Andrew was a developer, including time as an SDE with Amazon.com. He has worked on large-scale distributed systems, public-facing APIs, and operations automation.

Court Denies TVAddons’ Request to Dismiss U.S. Piracy Lawsuit

Post Syndicated from Ernesto original https://torrentfreak.com/court-denies-tvaddons-request-to-dismiss-u-s-piracy-lawsuit-180423/

Last year, American satellite and broadcast provider Dish Network targeted two well-known players in the third-party Kodi add-on ecosystem.

In a complaint filed in a federal court in Texas, add-on ZemTV and the TVAddons library were accused of copyright infringement. As a result, both are facing up to $150,000 in damages for each offense.

While the case was filed in Texas, neither of the defendants live there, or even in the United States. The owner and operator of TVAddons is Adam Lackman, who resides in Montreal, Canada. ZemTV’s developer Shahjahan Durrani is even further away in London, UK.

According to the legal team of the two defendants, this limited connection to Texas is reason for the case to be dismissed. They filed a motion to dismiss in January, asking the court to drop the case.

“Lackman and Durrani have never been residents or citizens of Texas; they have never owned property in Texas; they have never voted in Texas; they have never personally visited Texas; they have never directed any business activity of any kind to anyone in Texas […] and they have never earned income in Texas,” the motion reads.

Dish saw things differently, however. The broadcast provider replied to the motion, submitting hundreds of pages of evidence documenting TVAddons and ZemTV’s ties to the United States.

Among other things, Dish pointed the court towards TVAddons’ own data, which showed that most of its users came from the United States. More than one-third of the total user base was American, it argued.

“The United States was Defendants’ largest market with approximately 34% of all TV Addons traffic coming from users located in the United States, which was three times the traffic from the second largest market.”

Late last week, District Court Judge Vanessa Gilmore ruled on both defendants’ motion to dismiss, denying it.

Denied

At the time of writing, there is no additional information available as to how Judge Gilmore reached her decision. However, it is clear that the case will now move forward.

This lawsuit is one of several related to Kodi-powered pirate streaming boxes. While TVAddons and ZemTV didn’t sell any fully loaded boxes directly, Dish argues that they both played a significant role in making copyright-infringing content available.

Earlier this year, the manufacturer of the streaming device DragonBox was sued in a separate case by Netflix, Amazon and several major Hollywood studios.

A few days ago Dragon Media denied all piracy allegations in the complaint, but the lawsuit remains ongoing. The same is true for a related case against Tickbox, another Kodi-powered box manufacturer.


StaCoAn – Mobile App Static Analysis Tool

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/04/stacoan-mobile-app-static-analysis-tool/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

StaCoAn – Mobile App Static Analysis Tool

StaCoAn is a cross-platform tool that helps developers, bug bounty hunters, and ethical hackers perform static analysis on mobile application code, for both native Android and iOS applications.

This tool will look for interesting lines in the code which can contain:

  • Hardcoded credentials
  • API keys
  • URLs of APIs
  • Decryption keys
  • Major coding mistakes

This tool was created with a big focus on usability and graphical guidance in the user interface.

Read the rest of StaCoAn – Mobile App Static Analysis Tool now! Only available at Darknet.

[$] Rewiring x86 system-call dispatch

Post Syndicated from corbet original https://lwn.net/Articles/752422/rss

Each kernel development cycle includes a vast number of changes that are
not intended to change visible behavior and which, as a result, go
unnoticed by most users and developers. One such change in 4.17 is a
rewiring of how system-call implementations are invoked within the kernel.
The change is interesting, though, and provides an opportunity to look at
the macro magic that handles system-call definitions.

Steam Censors MEGA.nz Links in Chats and Forum Posts

Post Syndicated from Ernesto original https://torrentfreak.com/steam-censors-mega-nz-links-in-chats-and-forum-posts-180421/

With more than 150 million registered accounts, Steam is much more than just a game distribution platform.

For many people, it’s also a social hangout and a communication channel.

Steam’s instant messaging tool, for example, is widely used for chats with friends. About games of course, but also to discuss lots of other stuff.

While Valve doesn’t mind people socializing on its platform, there are certain things the company doesn’t want Steam users to share. This includes links to the cloud hosting service Mega.

Users who’d like to show off some gaming footage, or even a collection of cat pictures they stored on Mega, are unable to do so. As it turns out, Steam actively censors these types of links in forum posts and chats.

In forum posts, these offending links are replaced by the text {LINK REMOVED} and private chats get the same treatment. Instead of the Mega link, people on the other end only get a mention that a link was removed.

Mega link removed from chat

While Mega operates as a regular company that offers cloud hosting services, Steam warns users that the site is “potentially malicious.”

“The site could contain malicious content or be known for stealing user credentials,” Steam’s link checker warns.

Potentially malicious…

It’s unclear what malicious means in this context. Mega has never been flagged by Google’s Safe Browsing program, which is regarded as one of the industry standards for malware and other unwanted software.

What’s more likely is that Mega’s piracy stigma has something to do with the censoring. As it turns out, Steam also censors 4shared.com, as well as Pirate Bay’s former .se domain name.

Other “malicious sites” that get the same treatment are more game-oriented, such as cheathappens.com and the CSGO skin screenshot site metjm.net. While it’s understandable that some game developers don’t like these, malicious is a rather broad term in this regard.

Mega denies doing anything wrong. Mega Chairman Stephen Hall tells TorrentFreak that the company swiftly removes any malicious content once it receives an abuse notice.

“It is crazy for sites to block Mega links as we respond very quickly to disable any links that are reported as malware, generally much quicker than our competitors,” Hall says.

Valve did not immediately reply to our request for clarification so the precise reason for the link censoring remains unknown.

That said, when something’s censored, the public tends to work around any restrictions. Mega links are still being shared on Steam, with a slightly altered URL. In addition, Mega’s backup domain Mega.co.nz still works fine too.


Implement continuous integration and delivery of serverless AWS Glue ETL applications using AWS Developer Tools

Post Syndicated from Prasad Alle original https://aws.amazon.com/blogs/big-data/implement-continuous-integration-and-delivery-of-serverless-aws-glue-etl-applications-using-aws-developer-tools/

AWS Glue is an increasingly popular way to develop serverless ETL (extract, transform, and load) applications for big data and data lake workloads. Organizations that transform their ETL applications to cloud-based, serverless ETL architectures need a seamless, end-to-end continuous integration and continuous delivery (CI/CD) pipeline: from source code, to build, to deployment, to product delivery. Having a good CI/CD pipeline can help your organization discover bugs before they reach production and deliver updates more frequently. It can also help developers write quality code and automate the ETL job release management process, mitigate risk, and more.

AWS Glue is a fully managed data catalog and ETL service. It simplifies and automates the difficult and time-consuming tasks of data discovery, conversion, and job scheduling. AWS Glue crawls your data sources and constructs a data catalog using pre-built classifiers for popular data formats and data types, including CSV, Apache Parquet, JSON, and more.

When you are developing ETL applications using AWS Glue, you might come across some of the following CI/CD challenges:

  • Iterative development with unit tests
  • Continuous integration and build
  • Pushing the ETL pipeline to a test environment
  • Pushing the ETL pipeline to a production environment
  • Testing ETL applications using real data (live test)
  • Exploring and validating data

In this post, I walk you through a solution that implements a CI/CD pipeline for serverless AWS Glue ETL applications supported by AWS Developer Tools (including AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild) and AWS CloudFormation.

Solution overview

The following diagram shows the pipeline workflow:

This solution uses AWS CodePipeline, which lets you orchestrate and automate the test and deploy stages for ETL application source code. The solution consists of a pipeline that contains the following stages:

1.) Source Control: In this stage, the AWS Glue ETL job source code and the AWS CloudFormation template file for deploying the ETL jobs are both committed to version control. I chose to use AWS CodeCommit for version control.

To get the ETL job source code and AWS CloudFormation template, download the gluedemoetl.zip file. This solution is developed based on a previous post, Build a Data Lake Foundation with AWS Glue and Amazon S3.

2.) LiveTest: In this stage, all resources—including AWS Glue crawlers, jobs, S3 buckets, roles, and other resources that are required for the solution—are provisioned, deployed, live tested, and cleaned up.

The LiveTest stage includes the following actions:

  • Deploy: In this action, all the resources that are required for this solution (crawlers, jobs, buckets, roles, and so on) are provisioned and deployed using an AWS CloudFormation template.
  • AutomatedLiveTest: In this action, all the AWS Glue crawlers and jobs are executed and data exploration and validation tests are performed. These validation tests include, but are not limited to, record counts in both raw tables and transformed tables in the data lake and any other business validations. I used AWS CodeBuild for this action.
  • LiveTestApproval: This action is included for the cases in which a pipeline administrator approval is required to deploy/promote the ETL applications to the next stage. The pipeline pauses in this action until an administrator manually approves the release.
  • LiveTestCleanup: In this action, all the LiveTest stage resources, including test crawlers, jobs, roles, and so on, are deleted using the AWS CloudFormation template. This action helps minimize cost by ensuring that the test resources exist only for the duration of the AutomatedLiveTest and LiveTestApproval actions.

3.) DeployToProduction: In this stage, all the resources are deployed using the AWS CloudFormation template to the production environment.

Try it out

This code pipeline takes approximately 20 minutes to complete the LiveTest test stage (up to the LiveTest approval stage, in which manual approval is required).

To get started with this solution, choose Launch Stack:

This creates the CI/CD pipeline with all of its stages, as described earlier. It performs an initial commit of the sample AWS Glue ETL job source code to trigger the first release change.

In the AWS CloudFormation console, choose Create. After the template finishes creating resources, you see the pipeline name on the stack Outputs tab.

After that, open the CodePipeline console and select the newly created pipeline. Initially, your pipeline’s CodeCommit stage shows that the source action failed.

Allow a few minutes for your new pipeline to detect the initial commit applied by the CloudFormation stack creation. As soon as the commit is detected, your pipeline starts. You will see the successful stage completion status as soon as the CodeCommit source stage runs.

In the CodeCommit console, choose Code in the navigation pane to view the solution files.

Next, you can watch how the pipeline goes through the LiveTest stage of the deploy and AutomatedLiveTest actions, until it finally reaches the LiveTestApproval action.

At this point, if you check the AWS CloudFormation console, you can see that a new template has been deployed as part of the LiveTest deploy action.

At this point, make sure that the AWS Glue crawlers and the AWS Glue job ran successfully. Also check whether the corresponding databases and external tables have been created in the AWS Glue Data Catalog. Then verify that the data is validated using Amazon Athena, as shown following.

Open the AWS Glue console, and choose Databases in the navigation pane. You will see the following databases in the Data Catalog:

Open the Amazon Athena console, and run the following queries. Verify that the record counts are matching.

SELECT count(*) FROM "nycitytaxi_gluedemocicdtest"."data";
SELECT count(*) FROM "nytaxiparquet_gluedemocicdtest"."datalake";

The following shows the raw data:

The following shows the transformed data:

The pipeline pauses the action until the release is approved. After validating the data, manually approve the revision on the LiveTestApproval action on the CodePipeline console.

Add comments as needed, and choose Approve.

The LiveTestApproval stage now appears as Approved on the console.

After the revision is approved, the pipeline proceeds to use the AWS CloudFormation template to destroy the resources that were deployed in the LiveTest deploy action. This helps reduce cost and ensures a clean test environment on every deployment.

Production deployment is the final stage. In this stage, all the resources—AWS Glue crawlers, AWS Glue jobs, Amazon S3 buckets, roles, and so on—are provisioned and deployed to the production environment using the AWS CloudFormation template.

After successfully running the whole pipeline, feel free to experiment with it by changing the source code stored on AWS CodeCommit. For example, if you modify the AWS Glue ETL job to generate an error, it should make the AutomatedLiveTest action fail. Or if you change the AWS CloudFormation template to make its creation fail, it should affect the LiveTest deploy action. The objective of the pipeline is to ensure that all changes deployed to production work as expected.

Conclusion

In this post, you learned how easy it is to implement CI/CD for serverless AWS Glue ETL solutions with AWS developer tools like AWS CodePipeline and AWS CodeBuild at scale. Implementing such solutions can help you accelerate ETL development and testing at your organization.

If you have questions or suggestions, please comment below.

 


Additional Reading

If you found this post useful, be sure to check out Implement Continuous Integration and Delivery of Apache Spark Applications using AWS and Build a Data Lake Foundation with AWS Glue and Amazon S3.

 


About the Authors

Prasad Alle is a Senior Big Data Consultant with AWS Professional Services. He spends his time leading and building scalable, reliable big data, machine learning, artificial intelligence, and IoT solutions for AWS enterprise and strategic customers. His interests extend to technologies such as advanced edge computing and machine learning at the edge. In his spare time, he enjoys spending time with his family.

 
Luis Caro is a Big Data Consultant for AWS Professional Services. He works with our customers to provide guidance and technical assistance on big data projects, helping them improve the value of their solutions when using AWS.

 

 

 

[$] Finding Spectre vulnerabilities with smatch

Post Syndicated from corbet original https://lwn.net/Articles/752408/rss

The furor over the Meltdown and Spectre vulnerabilities has calmed a bit —
for now, at least — but that does not mean that developers have stopped
worrying about them. Spectre variant 1 (the bounds-check bypass
vulnerability) has been of particular concern because, while the kernel is
thought to contain numerous vulnerable spots, nobody really knows how to
find them all. As a result, the defenses that have been developed for
variant 1 have only been deployed in a few places. Recently, though,
Dan Carpenter has enhanced the smatch tool to enable it to find possibly
vulnerable code in the kernel.

The End of Google Cloud Messaging, and What it Means for Your Apps

Post Syndicated from Zach Barbitta original https://aws.amazon.com/blogs/messaging-and-targeting/the-end-of-google-cloud-messaging-and-what-it-means-for-your-apps/

On April 10, 2018, Google announced the deprecation of its Google Cloud Messaging (GCM) platform. Specifically, the GCM server and client APIs are deprecated and will be removed as soon as April 11, 2019.  What does this mean for you and your applications that use Amazon Simple Notification Service (Amazon SNS) or Amazon Pinpoint?

First, nothing will break now or after April 11, 2019. GCM device tokens are completely interchangeable with the newer Firebase Cloud Messaging (FCM) device tokens. If you have existing GCM tokens, you’ll still be able to use them to send notifications. This statement is also true for GCM tokens that you generate in the future.

On the back end, we’ve already migrated Amazon SNS and Amazon Pinpoint to the server endpoint for FCM (https://fcm.googleapis.com/fcm/send). As a developer, you don’t need to make any changes as a result of this deprecation.
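
As a rough sketch of why nothing breaks, the request a push service sends looks the same before and after the migration; only the endpoint changed. The helper below is hypothetical, and the body shape follows the legacy GCM/FCM HTTP API rather than anything AWS-specific:

```typescript
// Hypothetical sketch: building a push request against the FCM endpoint.
// The JSON body uses the legacy GCM/FCM HTTP API shape, which is why
// existing GCM device tokens keep working unchanged.
const FCM_ENDPOINT = "https://fcm.googleapis.com/fcm/send";

interface PushRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildPushRequest(
  deviceToken: string, // a GCM or FCM token; the two are interchangeable
  title: string,
  text: string
): PushRequest {
  return {
    url: FCM_ENDPOINT, // previously a GCM URL; nothing else changes
    headers: {
      "Content-Type": "application/json",
      // In practice an "Authorization: key=<server key>" header is also required.
    },
    body: JSON.stringify({
      to: deviceToken,
      notification: { title, body: text },
    }),
  };
}
```

With Amazon SNS or Amazon Pinpoint, this request construction happens on the service side, which is why no application changes are needed.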

We created the following mini-FAQ to address some of the questions you may have as a developer who uses Amazon SNS or Amazon Pinpoint.

If I migrate to FCM from GCM, can I still use Amazon Pinpoint and Amazon SNS?

Yes. Your ability to connect to your applications and send messages through both Amazon SNS and Amazon Pinpoint doesn’t change. We’ll update the documentation for Amazon SNS and Amazon Pinpoint soon to reflect these changes.

If I don’t migrate to FCM from GCM, can I still use Amazon Pinpoint and Amazon SNS?

Yes. If you do nothing, your existing credentials and GCM tokens will still be valid. All applications that you previously set up to use Amazon Pinpoint or Amazon SNS will continue to work normally. When you call the API for Amazon Pinpoint or Amazon SNS, we initiate a request to the FCM server endpoint directly.

What are the differences between Amazon SNS and Amazon Pinpoint?

Amazon SNS makes it easy for developers to set up, operate, and send notifications at scale, affordably and with a high degree of flexibility. Amazon Pinpoint has many of the same messaging capabilities as Amazon SNS, with the same levels of scalability and flexibility.

The main difference between the two services is that Amazon Pinpoint provides both transactional and targeted messaging capabilities. By using Amazon Pinpoint, marketers and developers can not only send transactional messages to their customers, but can also segment their audiences, create campaigns, and analyze both application and message metrics.

How do I migrate from GCM to FCM?

For more information about migrating from GCM to FCM, see Migrate a GCM Client App for Android to Firebase Cloud Messaging on the Google Developers site.

If you have any questions, please post them in the comments section, or in the Amazon Pinpoint or Amazon SNS forums.

New – Registry of Open Data on AWS (RODA)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-registry-of-open-data-on-aws-roda/

Almost a decade ago, my colleague Deepak Singh introduced the AWS Public Datasets in his post Paging Researchers, Analysts, and Developers. I’m happy to report that Deepak is still an important part of the AWS team and that the Public Datasets program is still going strong!

Today we are announcing a new take on open and public data, the Registry of Open Data on AWS, or RODA. This registry includes existing Public Datasets and allows anyone to add their own datasets so that they can be accessed and analyzed on AWS.

Inside the Registry
The home page lists all of the datasets in the registry:

Entering a search term shrinks the list so that only the matching datasets are displayed:

Each dataset has an associated detail page, including usage examples, license info, and the information needed to locate and access the dataset on AWS:

In this case, I can access the data with a simple CLI command:

I could also access it programmatically, or download data to my EC2 instance.

Adding to the Repository
If you have a dataset that is publicly available and would like to add it to RODA, you can simply send us a pull request. Head over to the open-data-registry repo, read the CONTRIBUTING document, and create a YAML file that describes your dataset, using one of the existing files in the datasets directory as a model:

We’ll review pull requests regularly; you can “star” or watch the repo in order to track additions and changes.

Impress Me
I am looking forward to an inrush of new datasets, along with some blog posts and apps that show how to use the data in powerful and interesting ways. Let me know what you come up with.

Jeff;

 

[$] PostgreSQL’s fsync() surprise

Post Syndicated from corbet original https://lwn.net/Articles/752063/rss

Developers of database management systems are, by necessity, concerned
about getting data safely to persistent storage. So when the PostgreSQL
community found out that the way the kernel handles I/O errors could result
in data being lost without any errors being reported to user space, a fair
amount of unhappiness resulted. The problem, which is exacerbated by the
way PostgreSQL performs buffered I/O, turns out not to be unique to Linux,
and will not be easy to solve even there.

Achieving Major Stability and Performance Improvements in Yahoo Mail with a Novel Redux Architecture

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/173062946866

yahoodevelopers:

By Mohit Goenka, Gnanavel Shanmugam, and Lance Welsh

At Yahoo Mail, we’re constantly striving to upgrade our product experience. We do this not only by adding new features based on our members’ feedback, but also by providing the best technical solutions to power the most engaging experiences. As such, we’ve recently introduced a number of novel and unique revisions to the way in which we use Redux that have resulted in significant stability and performance improvements. Developers may find our methods useful in achieving similar results in their apps.

Improvements to product metrics

Last year Yahoo Mail implemented a brand new architecture using Redux. Since then, we have transformed the overall architecture to reduce latencies in various operations, reduce JavaScript exceptions, and better synchronize state. As a result, the product is much faster and more stable.

Stability improvements:

  • when checking for new emails – 20%
  • when reading emails – 30%
  • when sending emails – 20%

Performance improvements:

  • 10% improvement in page load performance
  • 40% improvement in frame rendering time

We have also reduced API calls by approximately 20%.

How we use Redux in Yahoo Mail

Redux architecture is reliant on one large store that represents the application state. In a Redux cycle, action creators dispatch actions to change the state of the store. React Components then respond to those state changes. We’ve made some modifications on top of this architecture that are atypical in the React-Redux community.

For instance, when fetching data over the network, the traditional methodology is to use Thunk middleware. Yahoo Mail fetches data over the network from our API. Thunks would create an unnecessary and undesirable dependency between the action creators and our API: if and when the API changes, the action creators must also change. To keep these concerns separate, we dispatch the action payload from the action creator and store it in the Redux state for later processing by “action syncers”. Action syncers use the payload information from the store to make requests to the API and process responses. In other words, the action syncers form an API layer by interacting with the store.

An additional benefit of keeping the concerns separate is that the API layer can change as the backend changes, thereby preventing such changes from bubbling back up into the action creators and components. This also allowed us to optimize the API calls by batching, deduping, and processing the requests only when the network is available. We applied similar strategies for handling other side effects like route handling and instrumentation. Overall, action syncers helped us reduce our API calls by ~20% and bring down API errors by 20-30%.
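
To make the idea concrete, here is a heavily simplified sketch of the action-syncer pattern described above. This is not Yahoo’s actual code: the tiny store, the action names, and the fake API are all invented for illustration.

```typescript
// Minimal Redux-style store, just enough to demonstrate the pattern.
type Action = { type: string; payload?: unknown };
type Listener = () => void;

function createStore<S>(reducer: (s: S, a: Action) => S, initial: S) {
  let state = initial;
  const listeners: Listener[] = [];
  return {
    getState: () => state,
    dispatch: (a: Action) => {
      state = reducer(state, a);
      listeners.forEach((l) => l());
    },
    subscribe: (l: Listener) => listeners.push(l),
  };
}

interface MailState {
  pendingFetches: string[]; // folders waiting to be synced with the API
  messages: Record<string, string[]>;
}

const reducer = (state: MailState, action: Action): MailState => {
  switch (action.type) {
    case "FETCH_FOLDER":
      // The action creator only records intent; it knows nothing about the API.
      return {
        ...state,
        pendingFetches: [...state.pendingFetches, action.payload as string],
      };
    case "FOLDER_LOADED": {
      const { folder, mail } = action.payload as { folder: string; mail: string[] };
      return {
        ...state,
        pendingFetches: state.pendingFetches.filter((f) => f !== folder),
        messages: { ...state.messages, [folder]: mail },
      };
    }
    default:
      return state;
  }
};

const store = createStore(reducer, { pendingFetches: [], messages: {} });

// The "action syncer": watches pending work in the store and talks to
// the API (stubbed here), then dispatches the result back into the store.
const fakeApi = (folder: string): string[] => [`${folder}-msg-1`];

store.subscribe(() => {
  for (const folder of store.getState().pendingFetches) {
    const mail = fakeApi(folder);
    store.dispatch({ type: "FOLDER_LOADED", payload: { folder, mail } });
  }
});
```

Because only the syncer touches the API, a backend change stays confined to that layer, and the syncer is also the natural place to batch, dedupe, or defer requests while offline.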

Another change to the normal Redux architecture was made to avoid unnecessary props. The React-Redux community has learned to avoid passing unnecessary props from high-level components through multiple layers down to lower-level components (prop drilling) for rendering. We have introduced action enhancer middleware to avoid passing additional unnecessary props that are used purely when dispatching actions. Action enhancers add data to the action payload so that the data does not have to come from the component when dispatching the action. This spares the component from having to receive that data through props and has improved frame rendering by ~40%. The use of action enhancers also avoids writing utility functions to add commonly-used data to each action from action creators.
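A minimal sketch of the idea, with invented names: the middleware merges commonly-needed data (here, a fake session id) into every action's payload, so a component can dispatch without ever receiving that data through props.

```javascript
// Hypothetical "action enhancer" middleware. getCommonData supplies
// values that would otherwise have to be prop-drilled into components
// just so they could include them when dispatching.
const enhancer = (getCommonData) => () => (next) => (action) =>
  next({ ...action, payload: { ...getCommonData(), ...action.payload } });

// Usage with a stub "next" dispatch that records what it sees:
const seen = [];
const dispatch = enhancer(() => ({ sessionId: 'abc123' }))({})(
  (action) => { seen.push(action); return action; }
);

// The component only supplies what it actually knows about.
dispatch({ type: 'OPEN_MESSAGE', payload: { messageId: 42 } });
```

The action that reaches the reducers carries both the component-supplied `messageId` and the enhancer-supplied `sessionId`.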


In our new architecture, the store reducers accept the dispatched action via action enhancers to update the state. The store then updates the UI, completing the action cycle. Action syncers then initiate the call to the backend APIs to synchronize local changes.

Conclusion

Our novel use of Redux in Yahoo Mail has led to significant user-facing benefits through a more performant application. It has also reduced development cycles for new features due to its simplified architecture. We’re excited to share our work with the community and would love to hear from anyone interested in learning more.

Pirate Site-Blocking? Music Biz Wants App Blocking Too

Post Syndicated from Andy original https://torrentfreak.com/pirate-site-blocking-music-biz-wants-app-blocking-too-180415/

In some way, shape or form, Internet piracy has always been carried out through some kind of application. Whether that’s a peer-to-peer client utilizing BitTorrent or eD2K, or a Usenet or FTP tool taking things back to their roots, software has always played a crucial role.

Of course, the nature of the Internet beast means that software usage is unavoidable but in recent years piracy has swung more towards the regular web browser, meaning that sites and services offering pirated content are largely easy to locate, identify and block, if authorities so choose.

As revealed this week by the MPA, thousands of platforms around the world are now targeted for blocking, with 1,800 sites and 5,300 domains blocked in Europe alone.

However, as the Kodi phenomenon has shown, web-based content doesn’t always have to be accessed via a standard web browser. Clever but potentially illegal addons and third-party apps are able to scrape web-based resources and present links to content on a wide range of devices, from mobile phones and tablets to set-top boxes.

While it’s still possible to block the resources upon which these addons rely, the scattered nature of the content makes the process much more difficult. One can’t simply block a whole platform because a few movies are illegally hosted there and even Google has found itself hosting thousands of infringing titles, a situation that’s ruthlessly exploited by addon and app developers alike.

Needless to say, the situation hasn’t gone unnoticed. The Alliance for Creativity and Entertainment has spent the last year targeting many people involved in the addon and app scene, hoping they’ll take their tools and run, rather than further develop a rapidly evolving piracy ecosystem.

Over in Russia, a country that will happily block hundreds or millions of IP addresses if it suits them, the topic of infringing apps was raised this week. It happened during the International Strategic Forum on Intellectual Property, a gathering of 500 experts from more than 30 countries. There were strong calls for yet more tools and measures to deal with films and music being made available via ‘pirate’ apps.

The forum heard that in response to widespread website blocking, people behind pirate sites have begun creating applications for mobile devices to achieve the same ends – the provision of illegal content. This, key players in the music industry say, means that the law needs to be further tightened to tackle the rising threat.

“Consumption of content is now going into the mobile sector and due to this we plan to prevent mass migration of ‘pirates’ to the mobile sector,” said Leonid Agronov, general director of the National Federation of the Music Industry.

The same concerns were echoed by Alexander Blinov, CEO of Warner Music Russia. According to TASS, the powerful industry player said that while recent revenues had been positively affected by site-blocking, it’s now time to start taking more action against apps.

“I agree with all speakers that we cannot stop at what has been achieved so far. The music industry has a fight against illegal content in mobile applications on the agenda,” Blinov said.

And if Blinov is to be believed, music in Russia is doing particularly well at the moment. Attributing successes to efforts by parliament, the Ministry of Communications, and copyright holders, Blinov said the local music market has doubled in the past two years.

“We are now in the top three fastest growing markets in the world, behind only China and South Korea,” Blinov said.

While some apps can work in the same manner as a basic web interface, others rely on more complex mechanisms, ‘scraping’ content from diverse sources that can be easily and readily changed if mitigation measures kick in. It will be very interesting to see how Russia deals with this threat and whether it will opt for highly technical solutions or the nuclear options demonstrated recently.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

AWS AppSync – Production-Ready with Six New Features

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-appsync-production-ready-with-six-new-features/

If you build (or want to build) data-driven web and mobile apps and need real-time updates and the ability to work offline, you should take a look at AWS AppSync. Announced in preview form at AWS re:Invent 2017 and described in depth here, AWS AppSync is designed for use in iOS, Android, JavaScript, and React Native apps. AWS AppSync is built around GraphQL, an open, standardized query language that makes it easy for your applications to request the precise data that they need from the cloud.

I’m happy to announce that the preview period is over and that AWS AppSync is now generally available and production-ready, with six new features that will simplify and streamline your application development process:

Console Log Access – You can now see the CloudWatch Logs entries that are created when you test your GraphQL queries, mutations, and subscriptions from within the AWS AppSync Console.

Console Testing with Mock Data – You can now create and use mock context objects in the console for testing purposes.

Subscription Resolvers – You can now create resolvers for AWS AppSync subscription requests, just as you can already do for query and mutation requests.

Batch GraphQL Operations for DynamoDB – You can now make use of DynamoDB’s batch operations (BatchGetItem and BatchWriteItem) across one or more tables in your resolver functions.

CloudWatch Support – You can now use Amazon CloudWatch Metrics and CloudWatch Logs to monitor calls to the AWS AppSync APIs.

CloudFormation Support – You can now define your schemas, data sources, and resolvers using AWS CloudFormation templates.

A Brief AppSync Review
Before diving into the new features, let’s review the process of creating an AWS AppSync API, starting from the console. I click Create API to begin:

I enter a name for my API and (for demo purposes) choose to use the Sample schema:

The schema defines a collection of GraphQL object types. Each object type has a set of fields, with optional arguments:
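For reference, an object type in the Events sample schema looks something like the following. The exact field names are reproduced from memory here and should be treated as illustrative:

```graphql
type Event {
  id: ID!
  name: String
  where: String
  when: String
  description: String
  # "limit" is an optional argument controlling page size.
  comments(limit: Int): [Comment]
}
```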

If I were creating an API of my own I would enter my schema at this point. Since I am using the sample, I don’t need to do this. Either way, I click on Create to proceed:

The GraphQL schema type defines the entry points for the operations on the data. All of the data stored on behalf of a particular schema must be accessible using a path that begins at one of these entry points. The console provides me with an endpoint and key for my API:

It also provides me with guidance and a set of fully functional sample apps that I can clone:

When I clicked Create, AWS AppSync created a pair of Amazon DynamoDB tables for me. I can click Data Sources to see them:

I can also see and modify my schema, issue queries, and modify an assortment of settings for my API.

Let’s take a quick look at each new feature…

Console Log Access
The AWS AppSync Console already allows me to issue queries and to see the results, and now provides access to the relevant log entries. In order to see the entries, I must enable logs (as detailed below), open up the LOGS panel, and check the checkbox. Here’s a simple mutation query that adds a new event. I enter the query and click the arrow to test it:
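The mutation in question looks something like this, using the sample Events schema (field names and values are illustrative and may differ slightly from the actual sample):

```graphql
mutation AddEvent {
  createEvent(
    name: "AppSync GA Celebration"
    when: "tonight"
    where: "Seattle"
    description: "Celebrating the general availability of AWS AppSync"
  ) {
    id
    name
  }
}
```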

I can click VIEW IN CLOUDWATCH for a more detailed view:

To learn more, read Test and Debug Resolvers.

Console Testing with Mock Data
You can now create a context object in the console that will be passed to one of your resolvers for testing purposes. I’ll add a testResolver item to my schema:

Then I locate it on the right-hand side of the Schema page and click Attach:

I choose a data source (this is for testing and the actual source will not be accessed), and use the Put item mapping template:

Then I click Select test context, choose Create New Context, assign a name to my test content, and click Save (as you can see, the test context contains the arguments from the query along with values to be returned for each field of the result):

After I save the new Resolver, I click Test to see the request and the response:

Subscription Resolvers
Your AWS AppSync application can monitor changes to any data source by using the @aws_subscribe GraphQL schema directive and defining a Subscription type. The AWS AppSync client SDK connects to AWS AppSync using MQTT over WebSockets and the application is notified after each mutation. You can now attach resolvers (which convert GraphQL payloads into the protocol needed by the underlying storage system) to your subscription fields and perform authorization checks when clients attempt to connect. This allows you to perform the same fine-grained authorization routines across queries, mutations, and subscriptions.
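As a sketch, wiring a subscription field to a mutation via the directive looks like this in the schema (the field and mutation names are assumed from the Events sample and are illustrative):

```graphql
type Subscription {
  # Clients subscribed here are notified after each commentOnEvent mutation.
  subscribeToEventComments(eventId: String!): Comment
    @aws_subscribe(mutations: ["commentOnEvent"])
}
```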

To learn more about this feature, read Real-Time Data.

Batch GraphQL Operations
Your resolvers can now make use of DynamoDB batch operations that span one or more tables in a region. This allows you to use a list of keys in a single query, read records from multiple tables, write records in bulk to multiple tables, and conditionally write or delete related records across multiple tables.
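A request mapping template for a batched read might look roughly like the following. The table name and keys are invented for illustration; check the tutorial linked below for the exact template syntax and version string:

```json
{
  "version": "2018-05-29",
  "operation": "BatchGetItem",
  "tables": {
    "Posts": {
      "keys": [
        { "id": { "S": "1" } },
        { "id": { "S": "2" } }
      ]
    }
  }
}
```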

In order to use this feature the IAM role that you use to access your tables must grant access to DynamoDB’s BatchGetItem and BatchWriteItem functions.
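A minimal policy statement along these lines would do the job (the region, account ID, and table names below are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:BatchGetItem",
        "dynamodb:BatchWriteItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:123456789012:table/Posts",
        "arn:aws:dynamodb:us-east-1:123456789012:table/Comments"
      ]
    }
  ]
}
```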

To learn more, read the DynamoDB Batch Resolvers tutorial.

CloudWatch Logs Support
You can now tell AWS AppSync to log API requests to CloudWatch Logs. Click on Settings and Enable logs, then choose the IAM role and the log level:

CloudFormation Support
You can use the following CloudFormation resource types in your templates to define AWS AppSync resources:

AWS::AppSync::GraphQLApi – Defines an AppSync API in terms of a data source (an Amazon Elasticsearch Service domain or a DynamoDB table).

AWS::AppSync::ApiKey – Defines the access key needed to access the data source.

AWS::AppSync::GraphQLSchema – Defines a GraphQL schema.

AWS::AppSync::DataSource – Defines a data source.

AWS::AppSync::Resolver – Defines a resolver by referencing a schema and a data source, and includes a mapping template for requests.

Here’s a simple schema definition in YAML form:

  AppSyncSchema:
    Type: "AWS::AppSync::GraphQLSchema"
    DependsOn:
      - AppSyncGraphQLApi
    Properties:
      ApiId: !GetAtt AppSyncGraphQLApi.ApiId
      Definition: |
        schema {
          query: Query
          mutation: Mutation
        }
        type Query {
          singlePost(id: ID!): Post
          allPosts: [Post]
        }
        type Mutation {
          putPost(id: ID!, title: String!): Post
        }
        type Post {
          id: ID!
          title: String!
        }

Available Now
These new features are available now and you can start using them today! Here are a couple of blog posts and other resources that you might find to be of interest:

Jeff;

 

 

The Amazon SES Blog is now the AWS Messaging and Targeting Blog

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/the-amazon-ses-blog-is-now-the-aws-messaging-and-targeting-blog/

Regular visitors to this blog may have noticed that its name has changed from the Amazon SES Blog to the AWS Messaging and Targeting Blog. The Amazon SES team has been working closely with the Amazon Pinpoint team in recent months, so we decided to create a single source of information for both products.

If you’re a dedicated Amazon SES user, don’t worry—Amazon SES isn’t going anywhere. However, as the goals of our two teams started to overlap more and more, we realized that we had lots of ideas for blog posts that would be relevant to users of both products.

If you’re not familiar with Amazon Pinpoint yet, allow us to make a brief introduction. Amazon Pinpoint was originally created to help mobile app developers analyze the ways that their customers used their apps, and to send mobile push messages to those users. Over time, the capabilities of Amazon Pinpoint grew to include the ability to send transactional messages (such as order confirmations, welcome messages, and one-time passwords), segment your audience, schedule campaign execution, and send messages using other channels (including SMS and email). In short, Amazon Pinpoint helps you deliver the right message to the right customers at the right time using the right channel.

In the past, this blog focused mainly on providing information about new features and common issues. Our new blog will include that same information, as well as practical tips, industry best practices, and the exciting things our customers have done using Amazon SES and Amazon Pinpoint. We hope you enjoy it!

If you have any questions, or if there’s anything you’d like to see us cover in the blog, please let us know in the comments section.

Piracy & Money Are Virtually Inseparable & People Probably Don’t Care Anymore

Post Syndicated from Andy original https://torrentfreak.com/piracy-money-are-virtually-inseparable-people-probably-dont-care-anymore-180408/

Long before peer-to-peer file-sharing networks were a twinkle in developers’ eyes, piracy of software and games flourished under the radar. Cassettes, floppy discs and CDs were the physical media of choice, while the BBS became the haunt of the need-it-now generation.

Sharing was the name of the game. When someone had game ‘X’ on tape, it was freely shared with friends and associates because when they got game ‘Y’, the favor had to be returned. The content itself became the currency and for most, the thought of asking for money didn’t figure into the equation.

Even when P2P networks first took off, money wasn’t really a major part of the equation. Sure, the people running Kazaa and the like were generating money from advertising but for millions of users, sharing content between friends and associates was still the name of the game.

Even when the torrent site scene began to gain traction, money wasn’t the driving force. Everything was so new that developers were much more concerned with getting half-written, half-broken tracker scripts to work than anything else. Having people care enough to simply visit the sites and share something with others was the real payoff. Ironically, it was a reward that money couldn’t buy.

But as the scene began to develop, so did the influx of minor and even major businessmen. The ratio economy of the private tracker scene meant that bandwidth could essentially be converted to cash, something which gave site operators revenue streams that had never previously existed. That was both good and bad for the scene.

The fact is that running a torrent site costs money and if time is factored in too, that becomes lots of money. If site admins have to fund everything themselves, a tipping point is eventually reached. If the site becomes unaffordable, it closes, meaning that everyone loses. So, by taking in some donations or offering users other perks in exchange for financial assistance, the whole thing remains viable.

Counter-intuitively, the success of such a venture then becomes the problem, at least as far as maintaining the old “sharing is caring” philosophy goes. A well-run private site, with enthusiastic donors, has the potential to bring in quite a bit of cash. Initially, the excess can be saved away for that rainy day when things aren’t so good. Having a few thousand in the bank when chaos rains down is rarely a bad thing.

But what happens when a site does really well and is making money hand over fist? What happens when advertisers on public sites begin to queue up, offering lots of cash to get involved? Is a site operator really expected to turn down the donations and tell the advertisers to go away? Amazingly, some do. Less amazingly, most don’t.

Although there are some notable exceptions, particularly in the niche private tracker scene, these days most ‘pirate’ sites are in it for the money.

In the current legal climate, some probably consider this their well-earned ‘danger money’ yet others are so far away from the sharing ethos it hurts. Quite often, these sites are incapable of taking in a new member due to alleged capacity issues yet a sizeable ‘donation’ miraculously solves the problem and gets the user in. It’s like magic.

As it happens, two threads on Reddit this week sparked this little rant. Both discuss whether someone should consider paying $20 and 37 euros respectively to get invitations to a pair of torrent sites.

Ask a purist and the answer is always ‘NO’, whether that’s buying an invitation from the operator of a torrent site or from someone selling invites for profit.

Aside from the fact that no one on these sites has paid content owners a dime, sites that demand cash for entry are doing so for one reason and one reason only – profit. Ridiculous when it’s the users of those sites that are paying to distribute the content.

On the other hand, others see no wrong in it.

They argue that paying a relatively small amount to access huge libraries of content is preferable to spending hundreds of dollars on a legitimate service that doesn’t carry all the content they need. Others don’t bother making any excuses at all, spending sizable sums with pirate IPTV/VOD services that dispose of sharing morals by engaging in a different business model altogether.

But the bottom line, whether we like it or not, is that money and Internet piracy have become so intertwined, so enmeshed in each other’s existence, that it’s become virtually impossible to separate them.

Even those running the handful of non-profit sites still around today would be forced to reconsider if they had to start all over again in today’s climate. The risk model is entirely different and quite often, only money tips those scales.

The same holds true for the people putting together the next big streaming portals. These days it’s about getting as many eyeballs on content as possible, making the money, and getting out the other end unscathed.

This is not what most early pirates envisioned. This is certainly not what the early sharing masses wanted. Yet arguably, through the influx of business people and the desire to generate profit among the general population, the pirating masses have never had it so good.

As revealed in a recent study, volumes of piracy are on the up and it is now possible – still possible – to access almost any item of content on pirate sites, despite the so-called “follow the money” approach championed by the authorities.

While ‘Sharing is Caring’ still lives today, it’s slowly being drowned out and at this point, there’s probably no way back. The big question is whether anyone cares anymore and the answer to that is “probably not”.

So, if the driving force isn’t sharing or love, it’ll probably have to be money. And that works everywhere else, doesn’t it?

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.