Serverless Architectures with AWS Lambda: Overview and Best Practices

Post Syndicated from Andrew Baird original https://aws.amazon.com/blogs/architecture/serverless-architectures-with-aws-lambda-overview-and-best-practices/

For some organizations, the idea of “going serverless” can be daunting. But with an understanding of best practices – and the right tools — many serverless applications can be fully functional with only a few lines of code and little else.

Examples of fully-serverless-application use cases include:

  • Web or mobile backends – Create fully serverless mobile applications or websites by creating user-facing content in a native mobile application or static web content in an S3 bucket. Then have your front-end content integrate with Amazon API Gateway as a backend service API. Lambda functions then execute the business logic you’ve written for each of the API Gateway methods in your backend API (see the sketch after this list).
  • Chatbots and virtual assistants – Build new serverless ways to interact with your customers, like customer support assistants and bots ready to engage customers on your company-run social media pages. The Amazon Alexa Skills Kit (ASK) and Amazon Lex have the ability to apply natural-language understanding to user-voice and freeform-text input so that a Lambda function you write can intelligently respond and engage with them.
  • Internet of Things (IoT) backends – AWS IoT has direct-integration for device messages to be routed to and processed by Lambda functions. That means you can implement serverless backends for highly secure, scalable IoT applications for uses like connected consumer appliances and intelligent manufacturing facilities.
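
To make the web backend pattern above a little more concrete, here is a minimal sketch of a Python Lambda handler sitting behind an API Gateway proxy integration. The function name and response body are illustrative only, not taken from any particular sample application.

import json

def lambda_handler(event, context):
    # API Gateway (proxy integration) passes the HTTP request details in `event`
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # Return the response shape API Gateway expects from a proxy integration
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello, " + name}),
    }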

Using AWS Lambda as the logic layer of a serverless application can enable faster development speed and greater experimentation – and innovation — than in a traditional, server-based environment.

We recently published the “Serverless Architectures with AWS Lambda: Overview and Best Practices” whitepaper to provide the guidance and best practices you need to write better Lambda functions and build better serverless architectures.

Once you’ve finished reading the whitepaper, here are a few additional resources I recommend as your next steps:

  1. If you would like to better understand some of the architecture pattern possibilities for serverless applications: Thirty Serverless Architectures in 30 Minutes (re:Invent 2017 video)
  2. If you’re ready to get hands-on and build a sample serverless application: AWS Serverless Workshops (GitHub Repository)
  3. If you’ve already built a serverless application and you’d like to ensure your application has been Well Architected: The Serverless Application Lens: AWS Well Architected Framework (Whitepaper)

About the Author

 

Andrew Baird is a Sr. Solutions Architect for AWS. Prior to becoming a Solutions Architect, Andrew was a developer, including time as an SDE with Amazon.com. He has worked on large-scale distributed systems, public-facing APIs, and operations automation.

French Minister of Culture Calls For Pirate Streaming Blacklist

Post Syndicated from Ernesto original https://torrentfreak.com/french-minister-of-culture-calls-for-pirate-streaming-blacklist-180423/

Nearly a decade ago, France was on the anti-piracy enforcement frontline.

The country was the first to introduce a graduated response system, Hadopi, where Internet subscribers risked losing their Internet connections if they were caught sharing torrents repeatedly.

Today this approach is no longer as effective as it once was. The bulk of all online piracy has moved from P2P downloading to streaming, which is much harder for anti-piracy watchdogs to trace.

This hasn’t gone unnoticed by the French Government, Minister of Culture Françoise Nyssen in particular, who highlighted the issue to reporters a few days ago.

“The Hadopi response is no longer suitable because piracy is now 80% by streaming,” she said, quoted by local media.

While Hadopi may have outlived its usefulness, France is not giving up the piracy fight. On the contrary, the country is now pondering new measures to target the current epidemic of pirate streaming sites.

Nyssen hopes that local authorities will implement a national pirate site blocklist to address the problem. Ideally, this should be constantly updated to ensure that pirate streaming sites remain inaccessible.

The Minister told reporters that France must “act on the sites,” by implementing “a blacklist which is constantly updated to keep them offline”.

This list would be maintained by the Hadopi agency which can then circulate it among several online intermediaries. This can include Internet providers, but also search engines and advertising networks.

The tough language will be music to the ears of the film industry and the timing doesn’t appear to be a total coincidence either.

The comments from the French Minister of Culture come shortly after several film industry groups boycotted a reception at the ministry. According to the groups, France dropped the ball on enforcement against piracy, which is blamed for more than a billion euros in losses.

The renewed promise may calm the waters for a while, but for now, it’s little more than that. It will likely take time before an effective pirate site blacklist is established, if it gets that far.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

How Many Piracy Warnings Would Get You to Stop?

Post Syndicated from Andy original https://torrentfreak.com/how-many-piracy-warnings-would-get-you-to-stop-180422/

For the past several years, copyright holders in the US and Europe have been trying to reach out to file-sharers in an effort to change their habits.

Whether via high-profile publicity lawsuits or a simple email, it’s hoped that by letting people know they aren’t anonymous, they’ll stop pirating and buy more content instead.

Traditionally, most ISPs haven’t been that keen on passing infringement notices on. However, the BMG v Cox lawsuit seems to have made a big difference, with a growing number of ISPs now visibly warning their users that they operate a repeat infringer policy.

But perhaps the big question is how seriously users take these warnings because – let’s face it – that’s the entire point of their existence.

There can be little doubt that a few recipients will be scurrying away at the slightest hint of trouble, intimidated by the mere suggestion that they’re being watched.

Indeed, a father in the UK – who received a warning last year as part of the Get it Right From a Genuine Site campaign – confidently and forcefully assured TF that there would be no more illegal file-sharing taking place on his ten-year-old son’s computer – ever.

In France, where the HADOPI anti-piracy scheme received much publicity, people receiving an initial notice are most unlikely to receive additional ones in the future. A December 2017 report indicated that of the nine million first warning notices sent to alleged pirates since 2012, ‘just’ 800,000 were followed by a second warning.

The suggestion is that people either stop their piracy after getting a notice or two, or choose to “go dark” instead, using streaming sites for example or perhaps torrenting behind a decent VPN.

But for some people, the message simply doesn’t sink in early on.

A post on Reddit this week by a TWC Spectrum customer revealed that despite a wealth of readily available information (including masses in the specialist subreddit where the post was made), even several warnings fail to have an effect.

“Was just hit with my 5th copyright violation. They halted my internet and all,” the self-confessed pirate wrote.

There are at least three important things to note from this opening sentence.

Firstly, the first four warnings did nothing to change the user’s piracy habits. Secondly, Spectrum apparently had enough at five warnings and applied a repeat-infringer suspension, presumably to avoid the same fate as Cox in the BMG case. Thirdly, the account suspension seems to have changed the game.

Notably, rather than some huge blockbuster movie, that fifth warning came due to something rather less prominent.

“Thought I could sneak in a random episode of Rosanne. The new one that aired LOL. That fast. Under 24 hours I got shut off. Which makes me feel like [ISPs] do monitor your traffic and its not just the people sending them notices,” the post read.

Again, some interesting points here.

Any content can be monitored by rightsholders, but content that’s popular in the US is more likely to trigger a warning delivered via an ISP. However, the misconception that the monitoring is done by ISPs persists, despite that not being the case.

ISPs do not monitor users’ file-sharing activity, anti-piracy companies do. They can grab an IP address the second someone enters a torrent swarm, or even connects to a tracker. It happens in an instant, at a time of their choosing. Quickly jumping in and out of a torrent is no guarantee and the fallacy of not getting caught due to a failure to seed is just that – a fallacy.

But perhaps the most important thing is that after five warnings and a disconnection, the Reddit user decided to take action. Sadly for the people behind Roseanne, it’s not exactly the reaction they’d have hoped for.

“I do not want to push it but I am curious to what happens 6th time, and if I would even be safe behind a VPN,” he wrote.

“Just want to learn how to use a VPN and Sonarr and have a guilt free stress free torrent watching.”

Of course, there was no shortage of advice.

“If you have gotten 5 notices, you really should of learnt [sic] how to use a VPN before now,” one poster noted, perhaps inevitably.

But curiously, or perhaps obviously given the number of previous warnings, the fifth warning didn’t come as a surprise to the user.

“I knew they were going to hit me for it. I just didn’t think a 195mb file would do it. They were getting me for Disney movies in the past,” he added.

So how do you grab the attention of a persistent infringer like this? Five warnings and a suspension apparently. But clearly, not even that is a guarantee of success. Perhaps this is why most ‘strike’ schemes tend to give up on people who can’t be rehabilitated.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Russia Blacklists 250 Pirate Sites For Displaying Gambling Ads

Post Syndicated from Andy original https://torrentfreak.com/russia-blacklists-250-pirate-sites-for-displaying-gambling-ads-180421/

Blocking alleged pirate sites is usually a question of proving that they’re involved in infringement and then applying to the courts for an injunction.

In Europe, the process is becoming easier, largely thanks to an EU ruling that permits blocking on copyright grounds.

As reported over the past several years, Russia is taking its blocking processes very seriously. Copyright holders can now have sites blocked in just a few days if they can show that their operators are unresponsive to takedown demands.

This week, however, Russian authorities have again shown that copyright infringement doesn’t have to be the only Achilles’ heel of pirate sites.

Back in 2006, online gambling was completely banned in Russia. Three years later in 2009, land-based gambling was also made illegal in all but four specified regions. Then, in 2012, the Russian Supreme Court ruled that ISPs must block access to gambling sites, something they had previously refused to do.

That same year, telecoms watchdog Rozcomnadzor began publishing a list of banned domains and within those appeared some of the biggest names in gambling. Many shut down access to customers located in Russia but others did not. In response, Rozcomnadzor also began targeting sites that simply offered information on gambling.

Fast forward more than six years and Russia is still taking a hard line against gambling operators. However, it now finds itself in a position where the existence of gambling material can also assist the state in its quest to take down pirate sites.

Following a complaint from the Federal Tax Service of Russia, Rozcomnadzor has again added a large number of ‘pirate’ sites to the country’s official blocklist after they advertised gambling-related products and services.

“Rozkomnadzor, at the request of the Federal Tax Service of Russia, added more than 250 pirate online cinemas and torrent trackers to the unified register of banned information, which hosted illegal advertising of online casinos and bookmakers,” the telecoms watchdog reported.

Almost immediately, 200 of the sites were blocked by local ISPs since they failed to remove the advertising when told to do so. For the remaining 50 sites, breathing space is still available. Their bans can be suspended if the offending ads are removed within a timeframe specified by the authorities, which has not yet run out.

“Information on a significant number of pirate resources with illegal advertising was received by Rozcomnadzor from citizens and organizations through a hotline that operates on the site of the Unified Register of Prohibited Information, all of which were sent to the Federal Tax Service for making decisions on restricting access,” the watchdog revealed.

Links between pirate sites and gambling companies have traditionally been close, with advertising for many top-tier brands appearing on portals large and small. However, in recent times the prevalence of gambling ads has diminished, in part due to campaigns conducted in the United States, Europe, and the UK.

For pirate site operators in Russia, the decision to carry gambling ads now comes with the added risk of being blocked. Only time will tell whether any reduction in traffic is considered serious enough to warrant a gambling boycott of their own.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Implement continuous integration and delivery of serverless AWS Glue ETL applications using AWS Developer Tools

Post Syndicated from Prasad Alle original https://aws.amazon.com/blogs/big-data/implement-continuous-integration-and-delivery-of-serverless-aws-glue-etl-applications-using-aws-developer-tools/

AWS Glue is an increasingly popular way to develop serverless ETL (extract, transform, and load) applications for big data and data lake workloads. Organizations that transform their ETL applications to cloud-based, serverless ETL architectures need a seamless, end-to-end continuous integration and continuous delivery (CI/CD) pipeline: from source code, to build, to deployment, to product delivery. Having a good CI/CD pipeline can help your organization discover bugs before they reach production and deliver updates more frequently. It can also help developers write quality code and automate the ETL job release management process, mitigate risk, and more.

AWS Glue is a fully managed data catalog and ETL service. It simplifies and automates the difficult and time-consuming tasks of data discovery, conversion, and job scheduling. AWS Glue crawls your data sources and constructs a data catalog using pre-built classifiers for popular data formats and data types, including CSV, Apache Parquet, JSON, and more.
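
To give a feel for what a Glue ETL job looks like in practice, here is a minimal Python (PySpark) sketch of a job that reads a crawled Data Catalog table and writes it back out as Parquet. The database, table, and bucket names are placeholders, not the code from the sample project used later in this post.

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job boilerplate: resolve arguments and initialize the contexts
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table that a crawler registered in the Data Catalog (placeholder names)
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="raw_table"
)

# Write the data to the data lake in a columnar format
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://example-datalake-bucket/parquet/"},
    format="parquet",
)

job.commit()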

When you are developing ETL applications using AWS Glue, you might come across some of the following CI/CD challenges:

  • Iterative development with unit tests
  • Continuous integration and build
  • Pushing the ETL pipeline to a test environment
  • Pushing the ETL pipeline to a production environment
  • Testing ETL applications using real data (live test)
  • Exploring and validating data

In this post, I walk you through a solution that implements a CI/CD pipeline for serverless AWS Glue ETL applications supported by AWS Developer Tools (including AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild) and AWS CloudFormation.

Solution overview

The following diagram shows the pipeline workflow:

This solution uses AWS CodePipeline, which lets you orchestrate and automate the test and deploy stages for ETL application source code. The solution consists of a pipeline that contains the following stages:

1.) Source Control: In this stage, the AWS Glue ETL job source code and the AWS CloudFormation template file for deploying the ETL jobs are both committed to version control. I chose to use AWS CodeCommit for version control.

To get the ETL job source code and AWS CloudFormation template, download the gluedemoetl.zip file. This solution is developed based on a previous post, Build a Data Lake Foundation with AWS Glue and Amazon S3.

2.) LiveTest: In this stage, all resources—including AWS Glue crawlers, jobs, S3 buckets, roles, and other resources that are required for the solution—are provisioned, deployed, live tested, and cleaned up.

The LiveTest stage includes the following actions:

  • Deploy: In this action, all the resources that are required for this solution (crawlers, jobs, buckets, roles, and so on) are provisioned and deployed using an AWS CloudFormation template.
  • AutomatedLiveTest: In this action, all the AWS Glue crawlers and jobs are executed and data exploration and validation tests are performed. These validation tests include, but are not limited to, record counts in both raw tables and transformed tables in the data lake and any other business validations. I used AWS CodeBuild for this action.
  • LiveTestApproval: This action is included for the cases in which a pipeline administrator approval is required to deploy/promote the ETL applications to the next stage. The pipeline pauses in this action until an administrator manually approves the release.
  • LiveTestCleanup: In this action, all the LiveTest stage resources, including test crawlers, jobs, roles, and so on, are deleted using the AWS CloudFormation template. This action helps minimize cost by ensuring that the test resources exist only for the duration of the AutomatedLiveTest and LiveTestApproval actions.

3.) DeployToProduction: In this stage, all the resources are deployed using the AWS CloudFormation template to the production environment.

Try it out

This pipeline takes approximately 20 minutes to complete the LiveTest stage (up to the LiveTestApproval action, where manual approval is required).

To get started with this solution, choose Launch Stack:

This creates the CI/CD pipeline with all of its stages, as described earlier. It performs an initial commit of the sample AWS Glue ETL job source code to trigger the first release change.

In the AWS CloudFormation console, choose Create. After the template finishes creating resources, you see the pipeline name on the stack Outputs tab.

After that, open the CodePipeline console and select the newly created pipeline. Initially, your pipeline’s CodeCommit stage shows that the source action failed.

Allow a few minutes for your new pipeline to detect the initial commit applied by the CloudFormation stack creation. As soon as the commit is detected, your pipeline starts. You will see the successful stage completion status as soon as the CodeCommit source stage runs.

In the CodeCommit console, choose Code in the navigation pane to view the solution files.

Next, you can watch the pipeline work through the Deploy and AutomatedLiveTest actions of the LiveTest stage, until it finally reaches the LiveTestApproval action.

At this point, if you check the AWS CloudFormation console, you can see that a new template has been deployed as part of the LiveTest deploy action.

At this point, make sure that the AWS Glue crawlers and the AWS Glue job ran successfully. Also check whether the corresponding databases and external tables have been created in the AWS Glue Data Catalog. Then validate the data using Amazon Athena, as shown in the following queries.

Open the AWS Glue console, and choose Databases in the navigation pane. You will see the following databases in the Data Catalog:

Open the Amazon Athena console, and run the following queries. Verify that the record counts match.

SELECT count(*) FROM "nycitytaxi_gluedemocicdtest"."data";
SELECT count(*) FROM "nytaxiparquet_gluedemocicdtest"."datalake";

The following shows the raw data:

The following shows the transformed data:
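
This record-count check can also be scripted, which is roughly what the AutomatedLiveTest action automates in CodeBuild. Below is a hedged boto3 sketch; the Athena results bucket is a placeholder, and the actual validation logic in the sample project may differ.

import time
import boto3

athena = boto3.client("athena")

def run_count(database, table):
    # Start the count query; Athena needs an S3 location for its results
    execution = athena.start_query_execution(
        QueryString='SELECT count(*) FROM "{}"'.format(table),
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    query_id = execution["QueryExecutionId"]

    # Poll until the query finishes
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    return int(rows[1]["Data"][0]["VarCharValue"])  # row 0 is the header row

raw = run_count("nycitytaxi_gluedemocicdtest", "data")
transformed = run_count("nytaxiparquet_gluedemocicdtest", "datalake")
assert raw == transformed, "Record counts do not match"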

The pipeline pauses at this action until the release is approved. After validating the data, manually approve the revision on the LiveTestApproval action in the CodePipeline console.

Add comments as needed, and choose Approve.
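
The console is the simplest way to approve, but the same step can also be done with the AWS CLI or SDK if you prefer. Here is a rough boto3 sketch; the pipeline name is a placeholder (use the value from the stack Outputs tab), and the stage and action names assume they match the names used in this post.

import boto3

codepipeline = boto3.client("codepipeline")
pipeline_name = "glue-demo-cicd-pipeline"  # placeholder; use your stack's output value

# Find the pending approval token on the LiveTestApproval action and approve it
state = codepipeline.get_pipeline_state(name=pipeline_name)
for stage in state["stageStates"]:
    if stage["stageName"] != "LiveTest":
        continue
    for action in stage["actionStates"]:
        if action["actionName"] == "LiveTestApproval":
            codepipeline.put_approval_result(
                pipelineName=pipeline_name,
                stageName="LiveTest",
                actionName="LiveTestApproval",
                result={"summary": "Record counts validated in Athena", "status": "Approved"},
                token=action["latestExecution"]["token"],
            )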

The LiveTestApproval action now appears as Approved on the console.

After the revision is approved, the pipeline proceeds to use the AWS CloudFormation template to destroy the resources that were deployed in the LiveTest deploy action. This helps reduce cost and ensures a clean test environment on every deployment.

Production deployment is the final stage. In this stage, all the resources—AWS Glue crawlers, AWS Glue jobs, Amazon S3 buckets, roles, and so on—are provisioned and deployed to the production environment using the AWS CloudFormation template.

After successfully running the whole pipeline, feel free to experiment with it by changing the source code stored on AWS CodeCommit. For example, if you modify the AWS Glue ETL job to generate an error, it should make the AutomatedLiveTest action fail. Or if you change the AWS CloudFormation template to make its creation fail, it should affect the LiveTest deploy action. The objective of the pipeline is to ensure that all changes deployed to production work as expected.

Conclusion

In this post, you learned how easy it is to implement CI/CD for serverless AWS Glue ETL solutions with AWS developer tools like AWS CodePipeline and AWS CodeBuild at scale. Implementing such solutions can help you accelerate ETL development and testing at your organization.

If you have questions or suggestions, please comment below.

 


Additional Reading

If you found this post useful, be sure to check out Implement Continuous Integration and Delivery of Apache Spark Applications using AWS and Build a Data Lake Foundation with AWS Glue and Amazon S3.

 


About the Authors

Prasad Alle is a Senior Big Data Consultant with AWS Professional Services. He spends his time leading and building scalable, reliable big data, machine learning, artificial intelligence, and IoT solutions for AWS enterprise and strategic customers. His interests extend to technologies such as advanced edge computing and machine learning at the edge. In his spare time, he enjoys spending time with his family.

 
Luis Caro is a Big Data Consultant for AWS Professional Services. He works with our customers to provide guidance and technical assistance on big data projects, helping them improve the value of their solutions when using AWS.

 

 

 

Facebook Privacy Fiasco Sees Congress Urged on Anti-Piracy Action

Post Syndicated from Andy original https://torrentfreak.com/facebook-privacy-fiasco-sees-congress-urged-on-anti-piracy-action-180420/

It has been a tumultuous few weeks for Facebook, and some would say quite rightly so. The company is a notorious harvester of personal information but last month’s Cambridge Analytica scandal really brought things to a head.

With Facebook co-founder and Chief Executive Officer Mark Zuckerberg in the midst of a PR nightmare, last Tuesday the entrepreneur appeared before the Senate. A day later he faced a grilling from House lawmakers, answering questions concerning the social networking giant’s problems with user privacy and how it responds to breaches.

What practical measures Zuckerberg and his team will take to calm the storm are yet to unfold but the opportunity to broaden the attack on both Facebook and others in the user-generated content field is now being seized upon. Yes, privacy is the number one controversy at the moment but Facebook and others of its ilk need to step up and take responsibility for everything posted on their platforms.

That’s the argument presented by the American Federation of Musicians, the Content Creators Coalition, CreativeFuture, and the Independent Film & Television Alliance, who together represent more than 650 entertainment industry companies and 240,000 members. CreativeFuture alone represents more than 500 companies, including all the big Hollywood studios and major players in the music industry.

In letters sent to the Senate Committee on the Judiciary; the Senate Committee on Commerce, Science, and Transportation; and the House Energy and Commerce Committee, the coalitions urge Congress to not only ensure that Facebook gets its house in order, but that Google, Twitter, and similar platforms do so too.

The letters begin with calls to protect user data and tackle the menace of fake news but given the nature of the coalitions and their entertainment industry members, it’s no surprise to see where this is heading.

“In last week’s hearing, Mr. Zuckerberg stressed several times that Facebook must ‘take a broader view of our responsibility,’ acknowledging that it is ‘responsible for the content’ that appears on its service and must ‘take a more active view in policing the ecosystem’ it created,” the letter reads.

“While most content on Facebook is not produced by Facebook, they are the publisher and distributor of immense amounts of content to billions around the world. It is worth noting that a lot of that content is posted without the consent of the people who created it, including those in the creative industries we represent.”

The letter recalls Zuckerberg as characterizing Facebook’s failure to take a broader view of its responsibilities as a “big mistake” while noting he’s also promised change.

However, the entertainment groups contend that the way the company has conducted itself – and the manner in which many Silicon Valley companies conduct themselves – is supported and encouraged by safe harbors and legal immunities that absolve internet platforms of accountability.

“We agree that change needs to happen – but we must ask ourselves whether we can expect to see real change as long as these companies are allowed to continue to operate in a policy framework that prioritizes the growth of the internet over accountability and protects those that fail to act responsibly. We believe this question must be at the center of any action Congress takes in response to the recent failures,” the groups write.

But while the Facebook fiasco has provided the opportunity for criticism, CreativeFuture and its colleagues see the problem from a much broader perspective. They suck in companies like Google, which is also criticized for shirking its responsibilities, largely because the law doesn’t compel it to act any differently.

“Google, another major global platform that has long resisted meaningful accountability, also needs to step forward and endorse the broader view of responsibility expressed by Mr. Zuckerberg – as do many others,” they continue.

“The real problem is not Facebook, or Mark Zuckerberg, regardless of how sincerely he seeks to own the ‘mistakes’ that led to the hearing last week. The problem is endemic in a system that applies a different set of rules to the internet and fails to impose ordinary norms of accountability on businesses that are built around monetizing other people’s personal information and content.”

Noting that Congress has encouraged technology companies to prosper by using a “light hand” for the past several decades, the groups say their level of success now calls for a fresh approach and a heavier touch.

“Facebook and Google are grown-ups – and it is time they behaved that way. If they will not act, then it is up to you and your colleagues in the House to take action and not let these platforms’ abuses continue to pile up,” they conclude.

But with all that said, there is an interesting conflict that develops when presenting the solution to piracy in the context of a user privacy fiasco.

In the EU, many of the companies involved in the coalitions above are calling for pre-emptive filters to prevent allegedly infringing content being uploaded to Facebook and YouTube. That means that all user uploads to such platforms will have to be opened and scanned to see what they contain before they’re allowed online.

So, user privacy or pro-active anti-piracy filters? It might not be easy or even legal to achieve both.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Cloudflare Kicks Out Torrent Site For Abuse Reporting Interference

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-kicks-out-torrent-site-for-abuse-reporting-interference-180420/

As one of the leading CDN and DDoS protection services, Cloudflare is used by millions of websites across the globe.

The company’s clients include billion dollar companies and national governments, but also personal blogs, and even pirate sites.

Copyright holders are not happy with the latter category and are pressuring Cloudflare to cut their ties with sites like The Pirate Bay, both in and out of court.

Cloudflare, however, maintains that it’s a neutral service provider. They forward copyright infringement notices to their customers, for example, but deny any liability for these sites.

Generally speaking, the company only disconnects a customer in response to a court order, as it did with Sci-Hub earlier this year. That’s why it came as a surprise when the anime torrent site NYAA.si was disconnected this week.

The site, which is a replacement for the original NYAA, has millions of users and is particularly popular in Japan. Without prior warning, it became unavailable for several hours this week, after Cloudflare removed it from its services. So what happened?

TorrentFreak spoke to the operator who said that the exact reason for the termination remains a mystery to him. He reached out to Cloudflare looking for answers, but the company simply stated that it’s about “avoiding measures taken to avoid abuse complaints,” as can be seen below.

One of Cloudflare’s messages

The operator says he hasn’t done anything out of the ordinary and showed his willingness to resolve any possible issues. However, that hasn’t changed Cloudflare’s stance.

“We asked multiple times for clarification. We also expressed that we were willing to attempt to work with them on whatever the problem actually was, if they would explain what they even mean.

“Naturally, I have been stonewalled by them at every stage. I’ve contacted numerous persons at Cloudflare and nobody will talk about this,” NYAA’s operator adds.

TorrentFreak asked Cloudflare for more details and the company confirmed that the matter was related to interference with its abuse reporting systems, without providing further detail.

“We determined that the customer had taken steps specifically intended to interfere with and thwart the operation of our abuse reporting systems,” Cloudflare’s General Counsel Doug Kramer informed us.

Cloudflare’s statement suggests that the site took active steps to interfere with the abuse process. The company added that it can’t go into detail, but says that the reason for the termination was shared with the website owner.

The website owner, on the other hand, informs us that he has no clue what the exact problem is. NYAA.si occasionally swaps IP addresses and has recently set up some mirror domains, but these were all under the same account. So, he has no idea why that would interfere with any abuse reports.

“I’m honestly unsure of what we could have done that ‘circumvents’ their abuse system,” NYAA’s operator says, adding that the only abuse reports received were copyright related.

It’s unlikely, however, that copyright takedown notices alone would warrant account termination, as most of the largest torrent sites use Cloudflare.

NYAA’s operator says he can do little more than speculate at this point. Some have hinted at a secret court order, while Japan’s recent crackdown on manga and anime piracy also came to mind, all without a grain of evidence of course.

Whatever the reason, NYAA.si now has to move on without Cloudflare, while the mystery remains.

“Frankly, this whole thing is a joke. I don’t understand why they would willingly host much bigger sites like ThePirateBay without any issue, or even ISIS, or the various hacking groups that have used them over time,” the operator says.

If more information about the abuse process interference becomes available, we’ll definitely follow it up.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Hackspace magazine 6: Paper Engineering

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-magazine-6/

HackSpace magazine is back with our brand-new issue 6, available for you on shop shelves, in your inbox, and on our website right now.

Inside Hackspace magazine 6

Paper is probably the first thing you ever used for making, and for good reason: in no other medium can you iterate through 20 designs at the cost of only a few pennies. We’ve roped in Rob Ives to show us how to make a barking paper dog with moveable parts and a cam mechanism. Even better, the magazine includes this free paper automaton for you to make yourself. That’s right: free!

At the other end of the scale, there’s the forge, where heat, light, and noise combine to create immutable steel. We speak to Alec Steele, YouTuber, blacksmith, and philosopher, about his amazingly beautiful Damascus steel creations, and about why there’s no difference between grinding a knife and blowing holes in a mountain to build a road through it.

HackSpace magazine 6 Alec Steele

Do it yourself

You’ve heard of reading glasses — how about glasses that read for you? Using a camera, optical character recognition software, and a text-to-speech engine (and of course a Raspberry Pi to hold it all together), reader Andrew Lewis has hacked together his own system to help deal with age-related macular degeneration.

It’s the definition of hacking: here’s a problem, there’s no solution in the shops, so you go and build it yourself!

Radio

60 years ago, the cutting edge of home hacking was the transistor radio. Before the internet was dreamt of, the transistor radio made the world smaller and brought people together. Nowadays, the components you need to build a radio are cheap and easily available, so if you’re in any way electronically inclined, building a radio is an ideal excuse to dust off your soldering iron.

Tutorials

If you’re a 12-month subscriber (if you’re not, you really should be), you’ve no doubt been thinking of all sorts of things to do with the Adafruit Circuit Playground Express we gave you for free. How about a sewable circuit for a canvas bag? Use the accelerometer to detect patterns of movement — walking, for example — and flash a series of lights in response. It’s clever, fun, and an easy way to add some programmable fun to your shopping trips.
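
If you’re curious what that might look like in code, here’s a rough CircuitPython sketch assuming the adafruit_circuitplayground library that ships with the board; the movement threshold is a guess you’d tune for your own bag.

import time
from adafruit_circuitplayground.express import cpx

THRESHOLD = 12  # m/s^2 - a rough guess for a walking jolt; tune it yourself

while True:
    x, y, z = cpx.acceleration              # read the onboard accelerometer
    magnitude = (x * x + y * y + z * z) ** 0.5
    if magnitude > THRESHOLD:               # looks like a step (or a bump)
        cpx.pixels.fill((0, 40, 0))         # flash the NeoPixel ring green
        time.sleep(0.1)
        cpx.pixels.fill((0, 0, 0))
    time.sleep(0.05)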


We’re also making gin, hacking a children’s toy car to unlock more features, and getting started with robot sumo to fill the void left by the cancellation of Robot Wars.

HackSpace magazine 6

All this, plus an 11-metre tall mechanical miner, in HackSpace magazine issue 6 — subscribe here from just £4 an issue or get the PDF version for free. You can also find HackSpace magazine in WHSmith, Tesco, Sainsbury’s, and independent newsagents in the UK. If you live in the US, check out your local Barnes & Noble, Fry’s, or Micro Center next week. We’re also shipping to stores in Australia, Hong Kong, Canada, Singapore, Belgium, and Brazil, so be sure to ask your local newsagent whether they’ll be getting HackSpace magazine.

The post Hackspace magazine 6: Paper Engineering appeared first on Raspberry Pi.

Married Torrent Tracker Couple Settle With BREIN

Post Syndicated from Ernesto original https://torrentfreak.com/married-torrent-tracker-couple-settles-with-brein-180420/

Dutch anti-piracy group BREIN has targeted operators and uploaders of pirate sites for more than a decade.

The group’s main goal is to shut the sites down. Instead of getting embroiled in dozens of lengthy court battles, it prefers to settle the matter with those responsible.

This week, BREIN announced another victory against a small torrent site, Snuffelland. The private tracker was targeted at a Dutch audience and the anti-piracy group managed to track down its operators.

According to BREIN, the site was run by a married couple from the town of Montfort, a 65-year-old man and a 51-year-old woman. In addition, the group also identified one of the uploaders, a 60-year-old man from Heukelum.

All three are unemployed and their financial position was taken into account in determining the scale of the settlement. The couple agreed to pay 2,500 euros and the uploader settled for 650 euros, with a threat of further penalties if they are caught again.

The private tracker itself was shut down and replaced by a message that was provided by BREIN.

“Making copyright-protected works available infringes the copyrights of the entitled rightsholder. Downloading from unauthorized sources is also prohibited in the Netherlands,” the message reads.

“For providers of legal content, snuffelland.org refers you to thecontentmap.nl and film.nl,” it adds.

These types of shutdowns are nothing new. BREIN has taken down hundreds of smaller sites in the past. However, only in recent years has the group started to publish these settlement details.

That serves as a deterrent, but it also provides some more insight into how the group prefers to resolve these cases, which appears to be relatively gently. In this case, it also disproves the notion that torrent sites are only run by youngsters.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Confused About the Hybrid Cloud? You’re Not Alone

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/confused-about-the-hybrid-cloud-youre-not-alone/

Hybrid Cloud. What is it?

Do you have a clear understanding of the hybrid cloud? If you don’t, it’s not surprising.

Hybrid cloud has been applied to a greater and more varied number of IT solutions than almost any other recent data management term. About the only thing that’s clear about the hybrid cloud is that the term hybrid cloud wasn’t invented by customers, but by vendors who wanted to hawk whatever solution du jour they happened to be pushing.

Let’s be honest. We’re in an industry that loves hype. We can’t resist grafting hyper, multi, ultra, super, and other prefixes onto the beginnings of words to entice customers with something new and shiny. The alphabet soup of cloud-related terms can include various options for where the cloud is located (on-premises, off-premises), whether the resources are private or shared to some degree (private, community, public), what type of services are offered (storage, computing), and what type of orchestrating software is used to manage the workflow and the resources. With so many moving parts, it’s no wonder potential users are confused.

Let’s take a step back, try to clear up the misconceptions, and come up with a basic understanding of what the hybrid cloud is. To be clear, this is our viewpoint. Others are free to do what they like, so bear that in mind.

So, What is the Hybrid Cloud?

To get beyond the hype, let’s start with Forrester Research‘s idea of the hybrid cloud: “One or more public clouds connected to something in my data center. That thing could be a private cloud; that thing could just be traditional data center infrastructure.”

To put it simply, a hybrid cloud is a mash-up of on-premises and off-premises IT resources.

To expand on that a bit, we can say that the hybrid cloud refers to a cloud environment made up of a mixture of on-premises private cloud[1] resources combined with third-party public cloud resources that use some kind of orchestration[2] between them. The advantage of the hybrid cloud model is that it allows workloads and data to move between private and public clouds in a flexible way as demands, needs, and costs change, giving businesses greater flexibility and more options for data deployment and use.

In other words, if you have some IT resources in-house that you are replicating or augmenting with an external vendor, congrats, you have a hybrid cloud!

Private Cloud vs. Public Cloud

The cloud is really just a collection of purpose-built servers. In a private cloud, the servers are dedicated to a single tenant or a group of related tenants. In a public cloud, the servers are shared between multiple unrelated tenants (customers). A public cloud is off-site, while a private cloud can be on-site or off-site (on-prem or off-prem).

As an example, let’s look at a hybrid cloud meant for data storage, a hybrid data cloud. A company might set up a rule that says all accounting files that have not been touched in the last year are automatically moved off-prem to cloud storage to save cost and reduce the amount of storage needed on-site. The files are still available; they are just no longer stored on your local systems. The rules can be defined to fit an organization’s workflow and data retention policies.
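
As a rough illustration of such a rule, the sketch below sweeps a local directory and moves files untouched for a year to an S3-compatible object store using boto3. The endpoint, bucket, and directory names are placeholders, not any specific vendor’s configuration.

import os
import time
import boto3

ONE_YEAR_AGO = time.time() - 365 * 24 * 3600
s3 = boto3.client("s3", endpoint_url="https://object-store.example.com")  # placeholder endpoint

for root, _, files in os.walk("/srv/accounting"):
    for name in files:
        path = os.path.join(root, name)
        if os.path.getmtime(path) < ONE_YEAR_AGO:        # not touched in the last year
            key = os.path.relpath(path, "/srv/accounting")
            s3.upload_file(path, "archive-bucket", key)  # copy to the cloud tier
            os.remove(path)                              # reclaim local storage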

The hybrid cloud concept also contains cloud computing. For example, at the end of the quarter, order processing application instances can be spun up off-premises in a hybrid computing cloud as needed to add to on-premises capacity.

Hybrid Cloud Benefits

If we accept that the hybrid cloud combines the best elements of private and public clouds, then the benefits of hybrid cloud solutions are clear, and we can identify the primary two benefits that result from the blending of private and public clouds.

Diagram of the Components of the Hybrid Cloud

Benefit 1: Flexibility and Scalability

Undoubtedly, the primary advantage of the hybrid cloud is its flexibility. It takes time and money to manage in-house IT infrastructure and adding capacity requires advance planning.

The cloud is ready and able to provide IT resources whenever needed on short notice. The term cloud bursting refers to the on-demand and temporary use of the public cloud when demand exceeds resources available in the private cloud. For example, some businesses experience seasonal spikes that can put an extra burden on private clouds. These spikes can be taken up by a public cloud. Demand also can vary with geographic location, events, or other variables. The public cloud provides the elasticity to deal with these and other anticipated and unanticipated IT loads. The alternative would be fixed cost investments in on-premises IT resources that might not be efficiently utilized.

For a data storage user, on-premises private cloud storage provides, among other benefits, the highest speed of access. For data that is accessed less frequently, or that doesn’t need the absolute lowest latency, it makes sense for the organization to move it to a location that is secure but less expensive. The data is still readily available, and the public cloud provides a better platform for sharing the data with specific clients, users, or with the general public.

Benefit 2: Cost Savings

The public cloud component of the hybrid cloud provides cost-effective IT resources without incurring capital expenses and labor costs. IT professionals can determine the best configuration, service provider, and location for each service, thereby cutting costs by matching the resource with the task best suited to it. Services can be easily scaled, redeployed, or reduced when necessary, saving costs through increased efficiency and avoiding unnecessary expenses.

Comparing Private vs Hybrid Cloud Storage Costs

To get an idea of the difference in storage costs between a purely on-premises solution and one that uses a hybrid of private and public storage, we’ll present two scenarios. For each scenario we’ll use data storage amounts of 100 terabytes, 1 petabyte, and 2 petabytes. Each table has the same format; all we’ve changed is how the data is distributed between the private (on-premises) cloud and the public (off-premises) cloud. We are using the costs for our own B2 Cloud Storage in this example. The math can be adapted for any set of numbers you wish to use.

Scenario 1: 100% of data in on-premises storage

                                          100 TB      1 PB        2 PB
Data stored on-premises (100%)            100 TB      1,000 TB    2,000 TB
On-premises monthly cost, low ($12/TB)    $1,200      $12,000     $24,000
On-premises monthly cost, high ($20/TB)   $2,000      $20,000     $40,000

Scenario 2: 20% of data on-premises, 80% in public cloud storage (B2)

                                          100 TB      1 PB        2 PB
Data stored on-premises (20%)             20 TB       200 TB      400 TB
Data stored in cloud (80%)                80 TB       800 TB      1,600 TB
On-premises monthly cost, low ($12/TB)    $240        $2,400      $4,800
On-premises monthly cost, high ($20/TB)   $400        $4,000      $8,000
Public cloud monthly cost, low ($5/TB, B2) $400       $4,000      $8,000
Public cloud monthly cost, high ($20/TB)  $1,600      $16,000     $32,000
Combined monthly cost, low                $640        $6,400      $12,800
Combined monthly cost, high               $2,000      $20,000     $40,000

As can be seen in the numbers above, using a hybrid cloud solution and storing 80% of the data in the cloud with a provider such as Backblaze B2 can result in significant savings over storing only on-premises. For other cost scenarios, see the B2 Cost Calculator.
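
If you want to adapt the math to your own numbers, a few lines of Python reproduce the tables above; the per-terabyte rates are simply the assumptions used in the scenarios.

def hybrid_monthly_cost(total_tb, cloud_fraction, on_prem_rate, cloud_rate):
    # Monthly cost for a given split between on-premises and public cloud storage
    cloud_tb = total_tb * cloud_fraction
    on_prem_tb = total_tb - cloud_tb
    return on_prem_tb * on_prem_rate + cloud_tb * cloud_rate

# Scenario 2, low end: 80% in B2 at $5/TB/month, 20% on-premises at $12/TB/month
for total_tb in (100, 1000, 2000):
    print(total_tb, hybrid_monthly_cost(total_tb, 0.8, on_prem_rate=12, cloud_rate=5))
# Prints 640.0, 6400.0, and 12800.0 - the "low" row of Scenario 2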

When Hybrid Might Not Always Be the Right Fit

There are circumstances where the hybrid cloud might not be the best solution. Smaller organizations operating on a tight IT budget might best be served by a purely public cloud solution. The cost of setting up and running private servers is substantial.

An application that requires the highest possible speed might not be suitable for hybrid, depending on the specific cloud implementation. While latency does play a factor in data storage for some users, it is less of a factor for uploading and downloading data than it is for organizations using the hybrid cloud for computing. Because Backblaze recognized the importance of speed and low-latency for customers wishing to use computing on data stored in B2, we directly connected our data centers with those of our computing partners, ensuring that latency would not be an issue even for a hybrid cloud computing solution.

It is essential to have a good understanding of workloads and their essential characteristics in order to make the hybrid cloud work well for you. Each application needs to be examined for the right mix of private cloud, public cloud, and traditional IT resources that fit the particular workload in order to benefit most from a hybrid cloud architecture.

The Hybrid Cloud Can Be a Win-Win Solution

From the high altitude perspective, any solution that enables an organization to respond in a flexible manner to IT demands is a win. Avoiding big upfront capital expenses for in-house IT infrastructure will appeal to the CFO. Being able to quickly spin up IT resources as they’re needed will appeal to the CTO and VP of Operations.

Should You Go Hybrid?

We’ve arrived at the bottom line and the question is, should you or your organization embrace hybrid cloud infrastructures?

According to 451 Research, by 2019, 69% of companies will operate in hybrid cloud environments, and 60% of workloads will be running in some form of hosted cloud service (up from 45% in 2017). That indicates that the benefits of the hybrid cloud appeal to a broad range of companies.

In Two Years, More Than Half of Workloads Will Run in Cloud

Clearly, depending on an organization’s needs, there are advantages to a hybrid solution. While it might have been possible to dismiss the hybrid cloud in the early days of the cloud as nothing more than a buzzword, that’s no longer true. The hybrid cloud has evolved beyond the marketing hype to offer real solutions for an increasingly complex and challenging IT environment.

If an organization approaches the hybrid cloud with sufficient planning and a structured approach, a hybrid cloud can deliver on-demand flexibility, empower legacy systems and applications with new capabilities, and become a catalyst for digital transformation. The result can be an elastic and responsive infrastructure that has the ability to quickly respond to changing demands of the business.

As data management professionals increasingly recognize the advantages of the hybrid cloud, we can expect more and more of them to embrace it as an essential part of their IT strategy.

Tell Us What You’re Doing with the Hybrid Cloud

Are you currently embracing the hybrid cloud, or are you still uncertain or hanging back because you’re satisfied with how things are currently? Maybe you’ve gone totally hybrid. We’d love to hear your comments below on how you’re dealing with the hybrid cloud.


[1] Private cloud can be on-premises or a dedicated off-premises facility.

[2] Hybrid cloud orchestration solutions are often proprietary, vertical, and task dependent.

The post Confused About the Hybrid Cloud? You’re Not Alone appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Announcing Coolest Projects North America

Post Syndicated from Courtney Lentz original https://www.raspberrypi.org/blog/coolest-projects-north-america/

The Raspberry Pi Foundation loves to celebrate people who use technology to solve problems and express themselves creatively, so we’re proud to expand the incredibly successful event Coolest Projects to North America. This free event will be held on Sunday 23 September 2018 at the Discovery Cube Orange County in Santa Ana, California.

Coolest Projects North America logo Raspberry Pi CoderDojo

What is Coolest Projects?

Coolest Projects is a world-leading showcase that empowers and inspires the next generation of digital creators, innovators, changemakers, and entrepreneurs. The event is both a competition and an exhibition to give young digital makers aged 7 to 17 a platform to celebrate their successes, creativity, and ingenuity.

showcase crowd — Coolest Projects North America

In 2012, Coolest Projects was conceived as an opportunity for CoderDojo Ninjas to showcase their work and for supporters to acknowledge these achievements. Week after week, Ninjas would meet up to work diligently on their projects, hacks, and code; however, it can be difficult for them to see their long-term progress on a project when they’re concentrating on its details on a weekly basis. Coolest Projects became a dedicated time each year for Ninjas and supporters to reflect, celebrate, and share both the achievements and challenges of the maker’s journey.

three female coolest projects attendees — Coolest Projects North America

Coolest Projects North America

Not only is Coolest Projects expanding to North America, it’s also expanding its participant pool! Members of our team have met so many amazing young people creating in all areas of the world that it simply made sense to widen our outreach to include Code Clubs, students of Raspberry Pi Certified Educators, and members of the Raspberry Jam community at large, as well as CoderDojo attendees.

 a boy showing a technology project to an old man, with a girl playing on a laptop on the floor — Coolest Projects North America

Exhibit and attend Coolest Projects

Coolest Projects is a free, family- and educator-friendly event. Young people can apply to exhibit their projects, and the general public can register to attend this one-day event. Be sure to register today, because you make Coolest Projects what it is: the coolest.

The post Announcing Coolest Projects North America appeared first on Raspberry Pi.

Audit Trail Overview

Post Syndicated from Bozho original https://techblog.bozho.net/audit-trail-overview/

As part of my current project (secure audit trail) I decided to make a survey about the use of audit trail “in the wild”.

I haven’t written in detail about this project of mine (unlike with some other projects). Mostly because it’s commercial and I don’t want to use my blog as a direct promotion channel (though I am doing that at the moment, ironically). But the aim of this post is to shed some light on how audit trail is used.

The survey can be found here. The questions are basically: does your current project have audit trail functionality, and if yes, is it protected from tampering? If not, do you think you should have such functionality?

The results are interesting (albeit with only around 50 respondents).

So more than half of the systems (on which respondents are working) don’t have audit trail. While audit trail is recommended by information security and related standards, it may not find a place in the “busy schedule” of a software project, even though it’s fairly easy to provide a trivial implementation (e.g. I’ve written how to quickly set up one with Hibernate and Spring).

A trivial implementation might do in many cases but if the audit log is critical (e.g. access to sensitive data, performing financial operations etc.), then relying on a trivial implementation might not be enough. In other words – if the sysadmin can access the database and delete or modify the audit trail, then it doesn’t serve much purpose. Hence the next question – how is the audit trail protected from tampering:

And apparently, from the less than 50% of projects with audit trail, around 50% don’t have technical guarantees that the audit trail can’t be tampered with. My guess is it’s more, because people have different understanding of what technical measures are sufficient. E.g. someone may think that digitally signing your log files (or log records) is sufficient, but in fact it isn’t, as whole files (or records) can be deleted (or fully replaced) without a way to detect that. Timestamping can help (and a good audit trail solution should have that), but it doesn’t guarantee the order of events or prevent a malicious actor from deleting or inserting fake ones. And if timestamping is done on a log file level, then any not-yet-timestamped log file is vulnerable to manipulation.

I’ve written about event logs before and their two flavours – event sourcing and audit trail. An event log can effectively be considered audit trail, but you’d need additional security to avoid the problems mentioned above.

So, let’s see what would various levels of security and usefulness of audit logs look like. There are many papers on the topic (e.g. this and this), and they often go into the intricate details of how logging should be implemented. I’ll try to give an overview of the approaches:

  • Regular logs – rely on regular INFO log statements in the production logs to look for hints of what has happened. This may be okay, but it is harder to find evidence (as there is non-auditable data in those log files as well), and it’s not very secure – usually logs are collected (e.g. with Graylog), and whoever has access to the log collector’s database (or search engine, in the case of Graylog) can manipulate the data and not be caught.
  • Designated audit trail – whether it’s stored in the database or in log files. It has the proper business-event level granularity, but again doesn’t prevent or detect tampering. With lower-risk systems that may be perfectly okay.
  • Timestamped logs – whether it’s log files or (harder to implement) database records. Timestamping is good, but if it’s not an external service, a malicious actor can get access to the local timestamping service and issue fake timestamps to re-timestamp tampered files. Even if the timestamping is not compromised, whole entries can be deleted. The fact that they are missing can sometimes be deduced based on other factors (e.g. hour of rotation), but regularly verifying that is extra effort and may not always be feasible.
  • Hash chaining – each entry (or sequence of log files) could be chained (just as blockchain transactions are) – the next one having the hash of the previous one; a minimal sketch of this idea appears after this list. This is a good solution (whether it’s local, external, or 3rd party), but it has the risk of someone modifying or deleting a record, taking your entire chain, and re-hashing it. All the checks will pass, but the data will not be correct.
  • Hash chaining with anchoring – the head of the chain (the hash of the last entry/block) could be “anchored” to an external service that is outside the capabilities of a malicious actor. Ideally, a public blockchain, alternatively – paper, a public service (twitter), email, etc. That way a malicious actor can’t just rehash the whole chain, because any check against the external service would fail.
  • WORM storage (write once, read many) – you could send your audit logs almost directly to WORM storage, where data can’t be replaced once written. That alone is not ideal, though, as WORM storage can be slow and expensive. AWS Glacier, for example, is cheaper than S3 and supports expiration policies, but its long retrieval times make searching through recent data impractical, and running your own WORM storage is expensive. It is a good idea to eventually send the logs to WORM storage, but a “fresh” audit trail should probably not be “archived” yet, so that it stays searchable and actionable insight can be gained from it
  • All-in-one – applying all of the above “just in case” may be unnecessary for every project out there, but that’s what I decided to do at LogSentinel. Business-event granularity with timestamping, hash chaining, anchoring, and eventually moving the data to WORM storage – I think that provides both security guarantees and flexibility.
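To make the hash chaining and anchoring ideas concrete, here is a minimal sketch – not LogSentinel’s actual implementation, just an illustration of the technique using Node’s built-in crypto module. Each record stores the hash of its predecessor, verification walks the chain, and the head hash is what you would periodically publish to an external, tamper-resistant location:

```typescript
import { createHash } from "crypto";

// One audit record plus the hash that chains it to its predecessor.
interface AuditEntry {
  timestamp: string; // ideally from a trusted time source, not just the local clock
  actor: string;     // who performed the business action
  action: string;    // what happened, at business-event granularity
  prevHash: string;  // hash of the previous entry (the first entry uses a fixed genesis value)
  hash: string;      // hash over this entry's fields, including prevHash
}

const GENESIS_HASH = "0".repeat(64);

function hashEntry(e: Omit<AuditEntry, "hash">): string {
  return createHash("sha256")
    .update(`${e.timestamp}|${e.actor}|${e.action}|${e.prevHash}`)
    .digest("hex");
}

export class AuditChain {
  private entries: AuditEntry[] = [];

  append(actor: string, action: string): AuditEntry {
    const prevHash = this.entries.length > 0
      ? this.entries[this.entries.length - 1].hash
      : GENESIS_HASH;
    const partial = { timestamp: new Date().toISOString(), actor, action, prevHash };
    const entry: AuditEntry = { ...partial, hash: hashEntry(partial) };
    this.entries.push(entry);
    return entry;
  }

  // Recomputes every hash and checks the links; a modified or deleted
  // record breaks the chain from that point onward.
  verify(): boolean {
    let prevHash = GENESIS_HASH;
    for (const e of this.entries) {
      if (e.prevHash !== prevHash || hashEntry(e) !== e.hash) return false;
      prevHash = e.hash;
    }
    return true;
  }

  // The chain head. Publishing it to a system outside the attacker's control
  // (a public blockchain, a tweet, an email) prevents silent re-hashing of the chain.
  head(): string {
    return this.entries.length > 0 ? this.entries[this.entries.length - 1].hash : GENESIS_HASH;
  }
}
```

Appending records and periodically anchoring head() externally gives both per-record integrity and protection against wholesale re-hashing; even truncating the chain at the end is detectable once the published head no longer matches.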

I hope the overview is useful and that the results from the survey shed some light on how underestimated this aspect of information security is.

The post Audit Trail Overview appeared first on Bozho's tech blog.

IsoHunt Founder Returns With New Search Tool

Post Syndicated from Ernesto original https://torrentfreak.com/isohunt-founder-returns-with-new-search-tool-180419/

Of all the major torrent sites that dominated the Internet at the beginning of this decade, only a few remain.

One of the sites that fell prey to ever-increasing pressure from the entertainment industry was isoHunt.

Founded by the Canadian entrepreneur Gary Fung, the site was one of the early pioneers in the world of torrents, paving the way for many others. However, this spotlight also caught the attention of the major movie studios.

After a lengthy legal battle, isoHunt’s founder eventually shut down the site in late 2013. This happened after Fung signed a settlement agreement with Hollywood for no less than $110 million, on paper at least.

Launching a new torrent search engine was never really an option, but Fung decided not to let his expertise go to waste. He focused his time and efforts on a new search project instead, which was unveiled to the public this week.

The new app, called “WonderSwipe,” has just been added to Apple’s iOS App Store. It’s a mobile search app that ties into Google’s backend, but with a different user interface. While it has nothing to do with file-sharing, we decided to reach out to isoHunt’s founder to find out more.

Fung tells us that he got the idea for the app because he was frustrated with Google’s default search options on the mobile platform.

“I find myself barely do any search on the smartphone, most of the time waiting until I get to my desktop. I ask why?” Fung tells us.

One of the main issues he identified is the fact that swiping is not an option. Instead, people end up browsing through dozens of mobile browser tabs. So, Fung took Google’s infrastructure and search power, making it swipeable.

“From a UI design perspective, I find swiping through photos on the first iPhone one of the most extraordinary advances in computing. It’s so easy that babies would be doing it before they even learn how to flip open a book!

“Bringing that ease of use to the central way of conducting mobile search and research is the initial eureka I had in starting work on WonderSwipe,” Fung adds.

That was roughly three years ago, and a few hours ago WonderSwipe finally made its way into the App Store. Android users will have to wait for now, but the application will eventually be available on that platform as well.

In addition to swiping through search results, the app also promises faster article loading and browsing, a reader mode with condensed search results, and a hands-free mode with automated browsing where summaries are read out loud.

WonderSwipe


Of course, WonderSwipe is nothing like isoHunt ever was, apart from the fact that Google is a search engine that also links to torrents, indirectly.

This similarity was also brought up during the lawsuit with the MPAA, when Fung’s legal team likened isoHunt to Google in court. However, the Canadian entrepreneur doesn’t expect that Hollywood will have an issue with WonderSwipe in particular.

“isoHunt was similar to Google in how it worked as a search engine, but not in scope. Torrents are a small subset of all the webpages Google indexes,” Fung says.

“WonderSwipe’s aim is to find answers in all webpages, powered by Google search results. It presents results in extracted text and summaries with no connection to BitTorrent clients. As such, WonderSwipe can be bigger than isoHunt has ever been.”

Ironically, in recent years Hollywood has often criticized Google for linking to pirated content in its search results. These results will also be available through WonderSwipe.

However, Fung says that any copyright issues with WonderSwipe will have to be dealt with on the search engine level, by Google.

“If there are links to pirated content, tell search engines so they can take them down!” he says.

WonderSwipe is totally free and Fung tells us that he plans to monetize it with in-app purchases for pro features, and non-intrusive advertising that won’t slow down swiping or search results. More details on the future plans for the app are available here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Achieving Major Stability and Performance Improvements in Yahoo Mail with a Novel Redux Architecture

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/173062946866

yahoodevelopers:

By Mohit Goenka, Gnanavel Shanmugam, and Lance Welsh

At Yahoo Mail, we’re constantly striving to upgrade our product experience. We do this not only by adding new features based on our members’ feedback, but also by providing the best technical solutions to power the most engaging experiences. As such, we’ve recently introduced a number of novel and unique revisions to the way in which we use Redux that have resulted in significant stability and performance improvements. Developers may find our methods useful in achieving similar results in their apps.

Improvements to product metrics

Last year Yahoo Mail implemented a brand new architecture using Redux. Since then, we have transformed the overall architecture to reduce latencies in various operations, reduce JavaScript exceptions, and better synchronize state. As a result, the product is much faster and more stable.

Stability improvements:

  • when checking for new emails – 20%
  • when reading emails – 30%
  • when sending emails – 20%

Performance improvements:

  • 10% improvement in page load performance
  • 40% improvement in frame rendering time

We have also reduced API calls by approximately 20%.

How we use Redux in Yahoo Mail

Redux architecture is reliant on one large store that represents the application state. In a Redux cycle, action creators dispatch actions to change the state of the store. React Components then respond to those state changes. We’ve made some modifications on top of this architecture that are atypical in the React-Redux community.

For instance, when fetching data over the network, the traditional methodology is to use Thunk middleware. Yahoo Mail fetches data over the network from our API, and Thunks would create an unnecessary and undesirable dependency between the action creators and that API. If and when the API changes, the action creators must then also change. To keep these concerns separate, we dispatch the action payload from the action creator and store it in the Redux state for later processing by “action syncers”. Action syncers use the payload information from the store to make requests to the API and process responses. In other words, the action syncers form an API layer by interacting with the store. An additional benefit of keeping the concerns separate is that the API layer can change as the backend changes, preventing such changes from bubbling back up into the action creators and components. This also allowed us to optimize API calls by batching, deduping, and processing requests only when the network is available. We applied similar strategies for handling other side effects like route handling and instrumentation. Overall, action syncers helped us reduce our API calls by ~20% and bring down API errors by 20-30%.
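Action syncers are not part of stock Redux, so the exact shape is specific to Yahoo Mail’s codebase. As a rough sketch of the idea – action creators only record intent into the store, and a separate subscriber drains that queue and talks to the API – something like the following could work (all names and the store shape here are illustrative assumptions, not Yahoo’s actual code):

```typescript
// Illustrative sketch only – names and store shape are assumptions, not Yahoo's code.
interface PendingOp { id: number; kind: string; payload: unknown; }
interface AppState { pendingOps: PendingOp[]; }
interface MiniStore {
  getState(): AppState;
  dispatch(action: { type: string; payload?: unknown }): void;
  subscribe(listener: () => void): () => void;
}

let nextOpId = 0;

// Action creators only describe intent; they know nothing about the API.
export const sendMessage = (draftId: string) => ({
  type: "message/send",
  payload: { id: ++nextOpId, kind: "sendMessage", payload: { draftId } },
});

// The "action syncer" subscribes to the store, drains the pending queue,
// and performs the network calls, so API details never leak into creators.
export function attachMailSyncer(store: MiniStore) {
  const inFlight = new Set<number>();
  store.subscribe(() => {
    for (const op of store.getState().pendingOps) {
      if (inFlight.has(op.id)) continue;
      inFlight.add(op.id);
      // Batching, deduping and offline checks would live here.
      fetch(`/api/${op.kind}`, { method: "POST", body: JSON.stringify(op.payload) })
        .then((res) => res.json())
        .then((result) => store.dispatch({ type: "sync/succeeded", payload: { id: op.id, result } }))
        .catch((error) => store.dispatch({ type: "sync/failed", payload: { id: op.id, error: String(error) } }));
    }
  });
}
```

The point of the pattern is visible even in this toy version: the creator never references the endpoint, so API changes stay inside the syncer.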

Another change to the typical Redux architecture was made to avoid unnecessary props. The React-Redux community has learned to avoid passing unnecessary props from high-level components through multiple layers down to lower-level components (prop drilling) for rendering. We introduced action enhancer middleware to avoid passing additional props that are used purely when dispatching actions. Action enhancers add data to the action payload so that the data does not have to come from the component when dispatching the action. This keeps the component from having to receive that data through props and has improved frame rendering by ~40%. The use of action enhancers also avoids writing utility functions to add commonly-used data to each action from action creators.
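“Action enhancers” are likewise not a built-in Redux concept, so the shape below is an assumption. One plausible way to express the idea is as an ordinary Redux middleware that merges store-derived data into the action payload before it reaches the reducers (action types and state paths are illustrative):

```typescript
import type { Middleware } from "redux";

// Each enhancer contributes one piece of commonly needed data
// (e.g. the active account or selected folder) to matching actions.
type Enhancer = (action: { type: string; payload?: any }, state: any) => Record<string, unknown>;

// Illustrative registrations – action types and state paths are assumptions.
const enhancersByType: Record<string, Enhancer[]> = {
  "message/send": [
    (_action, state) => ({ accountId: state.session.activeAccountId }),
    (_action, state) => ({ folderId: state.folders.selectedId }),
  ],
};

// Enrich the payload here so components don't need to receive these
// values as props merely to be able to dispatch the action.
export const actionEnhancerMiddleware: Middleware =
  (store) => (next) => (action: any) => {
    const applicable = enhancersByType[action.type] ?? [];
    if (applicable.length === 0) return next(action);
    const extra = applicable.reduce(
      (acc, enhance) => ({ ...acc, ...enhance(action, store.getState()) }),
      {} as Record<string, unknown>
    );
    return next({ ...action, payload: { ...action.payload, ...extra } });
  };
```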


In our new architecture, the store reducers accept the dispatched action via action enhancers to update the state. The store then updates the UI, completing the action cycle. Action syncers then initiate the call to the backend APIs to synchronize local changes.

Conclusion

Our novel use of Redux in Yahoo Mail has led to significant user-facing benefits through a more performant application. It has also reduced development cycles for new features due to its simplified architecture. We’re excited to share our work with the community and would love to hear from anyone interested in learning more.

Backblaze at NAB 2018 in Las Vegas

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backblaze-at-nab-2018-in-las-vegas/

Backblaze B2 Cloud Storage NAB Booth

Backblaze just returned from exhibiting at NAB in Las Vegas, April 9-12, where the response to our recent announcements was tremendous. In case you missed the news, Backblaze B2 Cloud Storage continues to extend its lead as the most affordable, high-performance cloud on the planet.

Backblaze’s News at NAB

Backblaze at NAB 2018 in Las Vegas

The Backblaze booth just before opening

What We Were Asked at NAB

Our booth was busy from start to finish with attendees interested in learning more about Backblaze and B2 Cloud Storage. Here are the questions we were asked most often in the booth.

Q. How long has Backblaze been in business?
A. The company was founded in 2007. Today, we have over 500 petabytes of data from customers in over 150 countries.

B2 Partners at NAB 2018

Q. Where is your data stored?
A. We have data centers in California and Arizona and expect to expand to Europe by the end of the year.

Q. How can your services be so inexpensive?
A. Backblaze’s goal from the beginning was to offer cloud backup and storage that was easy to use and affordable. All the existing options were simply too expensive to be viable, so we created our own infrastructure. Our purpose-built storage system – the Backblaze Storage Pod – is recognized as one of the most cost-efficient storage platforms available.

Q. Tell me about your hardware.
A. Backblaze’s Storage Pods hold 60 HDDs each, containing as much as 720TB of data per pod, stored using Reed-Solomon error correction. Storage Pods are organized into Tomes, with twenty Storage Pods making up a Vault.
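As a rough back-of-the-envelope check of those figures – the drive size is inferred from the 720TB-per-pod number, and the 17-data/3-parity Reed-Solomon split is the commonly cited Backblaze Vault configuration, so treat both as assumptions here:

```typescript
// Back-of-the-envelope Vault capacity with assumed figures.
const drivesPerPod = 60;
const tbPerDrive = 12;      // assumption: 60 × 12 = 720TB matches the per-pod figure above
const podsPerVault = 20;
const dataShards = 17;      // assumption: commonly cited 17 data + 3 parity split
const totalShards = 20;

const rawPerPodTB = drivesPerPod * tbPerDrive;                        // 720 TB
const rawPerVaultTB = rawPerPodTB * podsPerVault;                     // 14,400 TB
const usablePerVaultTB = rawPerVaultTB * (dataShards / totalShards);  // 12,240 TB

console.log({ rawPerPodTB, rawPerVaultTB, usablePerVaultTB });
```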

Q. Where do you fit in the data workflow?
A. People typically use B2 for archiving completed projects. All data remains readily available for download from B2, making it more convenient than offline storage. In addition, DAM and MAM systems such as CatDV, axle ai, Cantemo, and others have integrated with B2 to store raw images behind the proxies.

Q. Who uses B2 in the M&E business?
A. KLRU-TV, the PBS station in Austin, Texas, uses B2 to archive its entire 43-year catalog of Austin City Limits episodes and related materials. WunderVu, the production house for Pixvana, uses B2 to back up and archive the local storage systems on which they build virtual reality experiences for their customers.

Q. You’re the company that publishes the hard drive stats, right?
A. Yes, we are!

Backblaze Case Studies and Swag at NAB 2018 in Las Vegas

Were You at NAB?

If you were, we hope you stopped by the Backblaze booth to say hello. We’d like to hear what you saw at the show that was interesting or exciting. Please tell us in the comments.

The post Backblaze at NAB 2018 in Las Vegas appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.