Tag Archives: paris

Amazon Relational Database Service – Looking Back at 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-relational-database-service-looking-back-at-2017/

The Amazon RDS team launched nearly 80 features in 2017. Some of them were covered in this blog, others on the AWS Database Blog, and the rest in What’s New or Forum posts. To wrap up my week, I thought it would be worthwhile to give you an organized recap. So here we go!

Certification & Security

Features

Engine Versions & Features

Regional Support

Instance Support

Price Reductions

And That’s a Wrap
I’m pretty sure that’s everything. As you can see, 2017 was quite the year! I can’t wait to see what the team delivers in 2018.

Jeff;

 

500 Petabytes And Counting

Post Syndicated from Yev original https://www.backblaze.com/blog/500-petabytes-and-counting/

500 Petabytes = 500,000,000 Gigabytes

It seems like only yesterday that we crossed the 350 petabyte mark. It was actually June 2017, but boy have we been growing since. In October 2017 we crossed 400 petabytes. Today, we’re proud to announce we’ve crossed the 500 petabyte mark. That’s a very healthy clip; see for yourself!

Whether you have 50 GB, 500 GB or are just an avid blog reader, thank you for being on this incredible journey with us through the years.

We’re extremely proud of our track record. Throughout these 11 years we’ve striven to be the simplest, fastest, and most affordable online backup (and now cloud storage) solution available. We’re not just focusing on data ingress, but also adhering to our original goal of making sure that “no one ever loses data again.” How quickly are we restoring data? On average, we’re literally moving at 1,000,000 files per hour.

Even after all these years, one of the most frequent questions asked is, “How has Backblaze maintained such affordable pricing, particularly when the industry continues to move away from unlimited data plans?”

The cloud storage industry is very competitive, with cloud sync, storage, and backup providers leaving the unlimited market every single day: OneDrive, Amazon Cloud Storage, and most recently CrashPlan. Other providers either have tiered pricing (iDrive), or charge almost double or even triple for all the features we provide for our unlimited backup service (Carbonite). So how do we do it?

The answer comes down to our relentless pursuit of lowering costs. Our Backblaze Vaults are built from our open-source Backblaze Storage Pods, and the less expensive and more performant those Storage Pods are, the better the service we can provide. This translates directly into the service and pricing we can offer you.

A key part of our service is to be as open as possible about our costs and structure. After all, you are entrusting us with some of your most valuable assets. Still, it is very difficult to find an apples-to-apples comparison with what our competitors are doing. For example, in a 2011 interview, Carbonite’s CEO said Carbonite’s cost of storing a petabyte was $250,000. At the time, our cost to store a petabyte was $76,481 (more on that calculation can be found here and here). If Backblaze’s fundamental cost to store data is roughly one-third of Carbonite’s, it makes sense that Carbonite’s price to its customers would be higher than Backblaze’s. Today, Backblaze backup is $50/year and Carbonite’s equivalent service is $149.99.

Our continued focus on reducing costs has allowed us to maintain a healthy business. And after accepting customer data for almost 10 years, we sincerely want to thank you all for giving us your trust, and allowing us to protect your important data and memories for you. Here’s to the next 500 petabytes; they’ll be here before we know it.


Update 2/5/18

Since publishing this post, we have posted the latest in our series of Hard Drive Stats, in which we summarize the performance of the hard drives we used in our data centers in 2017 and previously.

The post 500 Petabytes And Counting appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The EU is Working On Its Own Piracy Watch-List

Post Syndicated from Ernesto original https://torrentfreak.com/the-eu-is-working-on-its-own-piracy-watch-list-180124/

Earlier this month, the Office of the US Trade Representative (USTR) released an updated version of its “Out-of-Cycle Review of Notorious Markets,” ostensibly identifying some of the worst IP-offenders worldwide.

The annual list overview helps to guide the U.S. Government’s position towards foreign countries when it comes to copyright enforcement.

The most recent version featured traditional pirate sites such as The Pirate Bay, Rapidgator, and Gostream, but also the Russian social network VK and China-based marketplaces Alibaba and Taobao.com.

Since the list only identifies foreign sites, American services are never included. However, this restriction doesn’t apply in Europe, where the European Commission announced this week that it’s working on its own piracy watch list.

“The European Commission – on the basis of input from the stakeholders – after thorough verification of the received information – intends to publish a so called ‘Counterfeit and Piracy Watch-List’ in 2018, which will be updated regularly,” the EU’s call for submissions reads.

The EU watch list will operate in a similar fashion to the US equivalent and will be used to encourage site operators and foreign governments to take action.

“The list will identify and describe the most problematic marketplaces – with special focus on online marketplaces – in order to encourage their operators and owners as well as the responsible local authorities and governments to take the necessary actions and measures to reduce the availability of IPR infringing goods or services.”

In recent years various copyright holder groups have repeatedly complained about a lack of anti-piracy initiatives from companies such as Google and Cloudflare, so it will be interesting to see if these will be mentioned.

The same is true for online marketplaces. Responding to the US list last week, Alibaba also highlighted that several American companies suffer the same piracy and counterfeiting problems as they do, without being reprimanded.

“What about Amazon, eBay and others? USTR has no basis for comparison, because it does not ask for similar data from U.S. companies,” Alibaba noted in a rebuttal.

The EU watch list is clearly inspired by its US counterpart; some of the language even appears to be copied (or pirated) word for word.

The EU writes, for example, that their list “will not mean to reflect findings of legal violations, nor will it reflect the European Union’s analysis of the general intellectual property rights protection and enforcement climate in the country or countries concerned.”

Just a few days earlier the USTR noted that its list “does not make findings of legal violations. Nor does it reflect the U.S. Government’s analysis of the general IP protection and enforcement climate in the countries connected with the listed markets.”

The above means that, despite branding foreign services as notorious offenders, these are mere allegations. No hard proof is to be expected in the report, nor will the EU research the matter on its own.

If the US example is followed, the watch list will be mostly an overview of copyright holder complaints, signed by the authorities. The latter is not without controversy, as China says it doubts the objectivity of USTR’s report for this very reason.

Copyright holders and other interested parties are invited to submit their contributions and comments by 31 March 2018, and the final list is expected to be released later in the year.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

[$] Monitoring with Prometheus 2.0

Post Syndicated from corbet original https://lwn.net/Articles/744410/rss

Prometheus is a monitoring tool built from scratch by SoundCloud in 2012. It works by pulling metrics from monitored services and storing them in a time series database (TSDB). It has a powerful query language to inspect that database, create alerts, and plot basic graphs. Those graphs can then be used to detect anomalies or trends for (possibly automated) resource provisioning. Prometheus also has extensive service discovery features and supports high availability configurations.

That’s what the brochure says, anyway; let’s see how it works in the hands of an old grumpy system administrator. I’ll be drawing comparisons with Munin and Nagios frequently because those are the tools I have used for over a decade in monitoring Unix clusters.
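To make the pull model concrete before diving in, here is a minimal sketch (not from the article) of a service exposing metrics for Prometheus to scrape, using the official prometheus_client Python library; the port and metric name are arbitrary placeholders.

import random
import time

from prometheus_client import Counter, start_http_server

# A counter that the application increments as it does work.
REQUESTS = Counter("myapp_requests_total", "Total requests handled by the app")

if __name__ == "__main__":
    # Serves the /metrics endpoint that a Prometheus server pulls from.
    start_http_server(8000)
    while True:
        REQUESTS.inc()
        time.sleep(random.random())

A matching job under scrape_configs in prometheus.yml, pointed at port 8000, would then pull these samples on its scrape interval.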

Fix Your Crawler

Post Syndicated from Bozho original https://techblog.bozho.net/fix-your-crawler/

Every now and then I open the admin panel of my blog hosting and ban a few IPs (after I’ve tried messaging their abuse email, if I find one). It is always IPs that are generating tons of requests (and traffic) – most likely running some home-made crawler. In some cases the IPs belong to an actual service that captures and provides content, in other cases it’s just a scraper for unknown reasons.

I don’t want to ban IPs, especially because the same IP may be reassigned to a legitimate user (or network) in the future. But they are increasing my hosting usage, which in turn leads to the hosting provider suggesting an upgrade to the plan. And this is not about me; I’m just an example – tons of requests to millions of sites are … useless.

My advice (and plea) is this – please fix your crawlers. Or scrapers. Or whatever you prefer to call that thing that programmatically goes on websites and gets their content.

How? First, reuse an existing crawler. No need to make something new (unless there’s a very specific use-case). A good intro and comparison can be seen here.

Second, make your crawler “polite” (the “politeness” property in the article above). Here’s a good overview on how to be polite, including respect for robots.txt. Existing implementations most likely have politeness options, but you may have to configure them.
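As a rough sketch of what politeness can look like in code (the crawler name and default delay below are made up for illustration), Python’s standard library can already check robots.txt and honor a Crawl-delay directive:

import time
import urllib.parse
import urllib.request
import urllib.robotparser

CRAWLER_NAME = "ExampleCrawler"   # hypothetical bot name
DEFAULT_DELAY = 10                # seconds between requests to the same host

def polite_fetch(url):
    # Consult the site's robots.txt before fetching anything.
    robots_url = urllib.parse.urljoin(url, "/robots.txt")
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()
    if not rp.can_fetch(CRAWLER_NAME, url):
        return None                                   # the site asked us not to crawl this
    time.sleep(rp.crawl_delay(CRAWLER_NAME) or DEFAULT_DELAY)
    req = urllib.request.Request(url, headers={"User-Agent": CRAWLER_NAME})
    with urllib.request.urlopen(req) as response:
        return response.read()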

Here I’d suggest another option – set a dynamic crawl rate per website that depends on how often the content is updated. My blog updates 3 times a month – no need to crawl it more than once or twice a day. TechCrunch updates many times a day; it’s probably a good idea to crawl it way more often. I don’t have a formula, but you can come up with one that ends up crawling different sites with periods between 2 minutes and 1 day.
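There is no single right formula, but as one illustrative sketch (my own, not the author’s), you could derive a per-site interval from the gaps between recently observed updates and clamp it to the 2-minute-to-1-day range suggested above:

from datetime import timedelta

MIN_INTERVAL = timedelta(minutes=2)
MAX_INTERVAL = timedelta(days=1)

def crawl_interval(update_times):
    """Choose how often to re-crawl a site from its recent update timestamps."""
    if len(update_times) < 2:
        return MAX_INTERVAL                       # unknown update pattern: crawl sparingly
    ordered = sorted(update_times)
    gaps = [later - earlier for earlier, later in zip(ordered, ordered[1:])]
    average_gap = sum(gaps, timedelta()) / len(gaps)
    # Check a few times per expected update, but stay within sensible bounds.
    return min(max(average_gap / 4, MIN_INTERVAL), MAX_INTERVAL)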

Third, don’t “scrape” the content if a better protocol is supported. Many content websites have RSS – use that instead of the HTML of the page. If not, make use of sitemaps. If the WebSub protocol gains traction, you can avoid the crawling/scraping entirely and get notified on new content.
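For the RSS route, here is a short sketch using the third-party feedparser library (the feed URL is a placeholder; use the one the site actually advertises):

import feedparser

feed = feedparser.parse("https://example.com/feed")   # placeholder feed URL
for entry in feed.entries:
    # Titles and links of new items, with no HTML scraping required.
    print(entry.title, entry.link)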

Finally, make sure your crawler/scraper identifies itself with a meaningful User-Agent. You can supply your service name or web address in it to make it easier for website owners to find you and complain in case you’ve misconfigured something.
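For example, with the popular requests library (the bot name and URL below are placeholders), that is a one-line header:

import requests

headers = {
    # Identify the bot and point site owners at a page describing it.
    "User-Agent": "ExampleCrawler/1.0 (+https://example.com/crawler-info)"
}
response = requests.get("https://example.com/some-page", headers=headers, timeout=30)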

I guess it also makes sense to check whether a service like import.io, ScrapingHub, WrapAPI or GetData fits your use case, instead of reinventing the wheel.

No matter what your use case or approach is, please make sure you don’t put unnecessary pressure on others’ websites.

The post Fix Your Crawler appeared first on Bozho's tech blog.

Wanted: Fixed Assets Accountant

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-fixed-assets-accountant/

As Backblaze continues to grow, we’re expanding our accounting team! We’re looking for a seasoned Fixed Asset Accountant to help us with fixed assets and equipment leases.

Job Duties:

  • Maintain and review fixed assets.
  • Record fixed asset acquisitions and dispositions.
  • Review and update the detailed schedule of fixed assets and accumulated depreciation.
  • Calculate depreciation for all fixed assets.
  • Investigate the potential obsolescence of fixed assets.
  • Coordinate data center asset dispositions with the Operations team.
  • Conduct periodic physical inventory counts of fixed assets. Work with Operations team on cycle counts.
  • Reconcile the balance in the fixed asset subsidiary ledger to the summary-level account in the general ledger.
  • Track company expenditures for fixed assets in comparison to the capital budget and management authorizations.
  • Prepare audit schedules relating to fixed assets, and assist the auditors in their inquiries.
  • Recommend to management any updates to accounting policies related to fixed assets.
  • Manage equipment leases.
  • Engage and negotiate acquisition of new equipment lease lines.
  • Overall control of original lease documentation and maintenance of master lease files.
  • Facilitate and track the routing and execution of various lease-related agreements, documents, and forms.
  • Establish and maintain proper controls to track expirations, renewal options, and all other critical dates.
  • Perform other duties and special projects as assigned.

Qualifications:

  • 5-6 years of relevant accounting experience.
  • Knowledge of inventory and cycle counting preferred.
  • QuickBooks, Excel, and Word experience desired.
  • Organized and meticulous, with excellent attention to detail; a quick learner.
  • Good interpersonal skills and a team player.
  • Flexibility and ability to adapt and wear different hats.

Backblaze Employees Have:

  • Good attitude and willingness to do whatever it takes to get the job done.
  • Strong desire to work for a small, fast-paced company.
  • Desire to learn and adapt to rapidly changing technologies and work environment.
  • Comfortable with well-behaved pets in the office.

This position is located in San Mateo, California. Regular attendance in the office is expected. Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our “no policy” vacation policy.

If This Sounds Like You:
Send an email to [email protected] with:

  1. Fixed Asset Accountant in the subject line
  2. Your resume attached
  3. An overview of your relevant experience

The post Wanted: Fixed Assets Accountant appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Might Google Class “Torrent” a Dirty Word? France is About to Find Out

Post Syndicated from Andy original https://torrentfreak.com/might-google-class-torrent-a-dirty-word-france-is-about-to-find-out-171223/

Like most countries, France is struggling to find ways to stop online piracy running rampant. A number of options have been tested thus far, with varying results.

One of the more interesting cases has been running since 2015, when music industry group SNEP took Google and Microsoft to court demanding automated filtering of ‘pirate’ search results featuring three local artists.

Before the High Court of Paris, SNEP argued that searches for the artists’ names plus the word “torrent” returned mainly infringing results on Google and Bing. Filtering out results with both sets of terms would reduce the impact of people finding pirate content through search, they said.

While SNEP claimed that its request was in line with Article L336-2 of France’s intellectual property code, which allows for “all appropriate measures” to prevent infringement, both Google and Microsoft fought back, arguing that such filtering would be disproportionate and could restrict freedom of expression.

The Court eventually sided with the search engines, noting that torrent is a common noun that refers to a neutral communication protocol.

“The requested measures are thus tantamount to general monitoring and may block access to lawful websites,” the High Court said.

Despite being told that its demands were too broad, SNEP decided to appeal. The case was heard in November where concerns were expressed over potential false positives.

Since SNEP even wants sites with “torrent” in their URL filtered out via “fully automated procedures that do not require human intervention”, this very site – TorrentFreak.com – could be sucked in. To counter that eventuality, SNEP proposed some kind of whitelist, NextInpact reports.

With no real consensus on how to move forward, the parties were advised to enter discussions on how to get closer to the aim of reducing piracy but without causing collateral damage. Last week the parties agreed to enter negotiations so the details will now have to be hammered out between their respective law firms. Failing that, they will face a ruling from the court.

If this last scenario plays out, the situation appears to favor the search engines, who have a High Court ruling in their favor and already offer comprehensive takedown tools for copyright holders to combat the exploitation of their content online.

Meanwhile, other elements of the French recording industry have booked a notable success against several pirate sites.

SCPP, which represents Warner, Universal, Sony and thousands of others, went to court in February this year demanding that local ISPs Bouygues, Free, Orange, SFR and Numéricable prevent their subscribers from accessing ExtraTorrent, isoHunt, Torrent9 and Cpasbien.

Like SNEP in the filtering case, SCPP also cited Article L336-2 of France’s intellectual property code, demanding that the sites plus their variants, mirrors and proxies should be blocked by the ISPs so that their subscribers can no longer gain access.

This week the Paris Court of First Instance sided with the industry group, ordering the ISPs to block the sites. The service providers were also told to pick up the bill for costs.

These latest cases are yet more examples of France’s determination to crack down on piracy.

Early December it was revealed that since its inception, nine million piracy warnings have been sent to citizens via the Hadopi anti-piracy agency. Since the launch of its graduated response regime in 2010, more than 2,000 cases have been referred to prosecutors, resulting in 189 criminal convictions.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Now Open – AWS EU (Paris) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-eu-paris-region/

Today we are launching our 18th AWS Region, our fourth in Europe. Located in the Paris area, the new Region lets AWS customers better serve end users in and around France.

The Details
The new EU (Paris) Region provides a broad suite of AWS services including Amazon API Gateway, Amazon Aurora, Amazon CloudFront, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, Amazon DynamoDB, Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon ECS, Amazon Elastic Block Store (EBS), Amazon EMR, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon Glacier, Amazon Kinesis Streams, Polly, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon Virtual Private Cloud, Auto Scaling, AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, AWS Elastic Beanstalk, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), AWS Lambda, AWS Marketplace, AWS OpsWorks Stacks, AWS Personal Health Dashboard, AWS Server Migration Service, AWS Service Catalog, AWS Shield Standard, AWS Snowball, AWS Snowball Edge, AWS Snowmobile, AWS Storage Gateway, AWS Support (including AWS Trusted Advisor), Elastic Load Balancing, and VM Import.

The Paris Region supports all sizes of C5, M5, R4, T2, D2, I3, and X1 instances.

There are also four edge locations for Amazon Route 53 and Amazon CloudFront: three in Paris and one in Marseille, all with AWS WAF and AWS Shield. Check out the AWS Global Infrastructure page to learn more about current and future AWS Regions.

The Paris Region will benefit from three AWS Direct Connect locations. Telehouse Voltaire is available today. AWS Direct Connect will also become available at Equinix Paris in early 2018, followed by Interxion Paris.

All AWS infrastructure regions around the world are designed, built, and regularly audited to meet the most rigorous compliance standards and to provide high levels of security for all AWS customers. These include ISO 27001, ISO 27017, ISO 27018, SOC 1 (Formerly SAS 70), SOC 2 and SOC 3 Security & Availability, PCI DSS Level 1, and many more. This means customers benefit from all the best practices of AWS policies, architecture, and operational processes built to satisfy the needs of even the most security sensitive customers.

AWS is certified under the EU-US Privacy Shield, and the AWS Data Processing Addendum (DPA) is GDPR-ready and available now to all AWS customers to help them prepare for May 25, 2018 when the GDPR becomes enforceable. The current AWS DPA, as well as the AWS GDPR DPA, allows customers to transfer personal data to countries outside the European Economic Area (EEA) in compliance with European Union (EU) data protection laws. AWS also adheres to the Cloud Infrastructure Service Providers in Europe (CISPE) Code of Conduct. The CISPE Code of Conduct helps customers ensure that AWS is using appropriate data protection standards to protect their data, consistent with the GDPR. In addition, AWS offers a wide range of services and features to help customers meet the requirements of the GDPR, including services for access controls, monitoring, logging, and encryption.

From Our Customers
Many AWS customers are preparing to use this new Region. Here’s a small sample:

Societe Generale, one of the largest banks in France and the world, has accelerated their digital transformation while working with AWS. They developed SG Research, an application that makes reports from Societe Generale’s analysts available to corporate customers in order to improve the decision-making process for investments. The new AWS Region will reduce latency between applications running in the cloud and in their French data centers.

SNCF is the national railway company of France. Their mobile app, powered by AWS, delivers real-time traffic information to 14 million riders. Extreme weather, traffic events, holidays, and engineering works can cause usage to peak at hundreds of thousands of users per second. They are planning to use machine learning and big data to add predictive features to the app.

Radio France, the French public radio broadcaster, offers seven national networks, and uses AWS to accelerate its innovation and stay competitive.

Les Restos du Coeur is a French charity that provides assistance to the needy, delivering food packages and helping with their social and economic integration back into French society. It is using AWS for its CRM system to track the assistance given to each of its beneficiaries and the impact this is having on their lives.

AlloResto by JustEat (a leader in the French FoodTech industry) is using AWS to scale during traffic peaks and to accelerate its innovation process.

AWS Consulting and Technology Partners
We are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in France. Here’s a partial list:

AWS Premier Consulting Partners – Accenture, Capgemini, Claranet, CloudReach, DXC, and Edifixio.

AWS Consulting Partners – ABC Systemes, Atos International SAS, CoreExpert, Cycloid, Devoteam, LINKBYNET, Oxalide, Ozones, Scaleo Information Systems, and Sopra Steria.

AWS Technology Partners – Axway, Commerce Guys, MicroStrategy, Sage, Software AG, Splunk, Tibco, and Zerolight.

AWS in France
We have been investing in Europe, with a focus on France, for the last 11 years. We have also been developing documentation and training programs to help our customers to improve their skills and to accelerate their journey to the AWS Cloud.

As part of our commitment to AWS customers in France, we plan to train more than 25,000 people in the coming years, helping them develop highly sought after cloud skills. They will have access to AWS training resources in France via AWS Academy, AWSome days, AWS Educate, and webinars, all delivered in French by AWS Technical Trainers and AWS Certified Trainers.

Use it Today
The EU (Paris) Region is open for business now and you can start using it today!
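As a quick, illustrative sketch (not part of the announcement), pointing an SDK at the new Region is simply a matter of selecting eu-west-3, the Region code for EU (Paris); the bucket name below is a placeholder:

import boto3

# eu-west-3 is the Region code for EU (Paris).
ec2 = boto3.client("ec2", region_name="eu-west-3")
print(ec2.describe_availability_zones()["AvailabilityZones"])

s3 = boto3.client("s3", region_name="eu-west-3")
s3.create_bucket(
    Bucket="my-example-paris-bucket",  # placeholder; bucket names are globally unique
    CreateBucketConfiguration={"LocationConstraint": "eu-west-3"},
)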

Jeff;

 

Sci-Hub Battles Pirate Bay-esque Domain Name Whack-a-Mole

Post Syndicated from Ernesto original https://torrentfreak.com/sci-hub-battles-pirate-bay-esque-domain-name-whack-a-mole-171216/

Sci-Hub is often referred to as the “Pirate Bay of Science,” and this description has become more and more apt in recent weeks.

Initially, the comparison was made to illustrate that Sci-Hub is used by researchers to download articles for free, much like the rest of the world uses The Pirate Bay to get free stuff.

There are more parallels though. Increasingly, Sci-Hub has trouble keeping its domain names. Following two injunctions in the US, academic publishers now have court orders to compel domain registrars and registries to suspend Sci-Hub’s addresses.

Although there is no such court order for The Pirate Bay, the notorious torrent site also has a long history of domain suspensions.

Both sites appear to tackle the problem in a similar manner. They simply ignore all enforcement efforts and bypass them with new domains and other circumvention tools. They have several backup domains in place as well as unsuspendable .onion addresses, which are accessible on the Tor network.

Since late November, a lot of Sci-Hub users have switched to Sci-Hub.bz when other domains were suspended. And, when the .bz domain was targeted a few days ago, they moved to different alternatives. It’s a continuous game of Whack-a-Mole that is hard to stop.

Suspended…

There’s another striking similarity between TPB and Sci-Hub. Unlike other pirate sites, their founders are both vocal. In the case of Sci-Hub this is Alexandra Elbakyan, a researcher who was born and educated in Kazakhstan.

She recently responded to people who had trouble accessing the site. “The site is working properly, but the capitalists have started blocking Sci-Hub domains, so the site may not be accessible at the regular addresses,” she wrote on VK.

Instead of complaining, Elbakyan encouraged people to do some research of their own, as there are still plenty of alternative domains up and running. And indeed, at the time of writing Sci-hub.la, Sci-hub.tv, Sci-hub.tw, Sci-hub.hk, and others can be accessed without any hassle.

While Sci-Hub’s classification as the “Pirate Bay of Science” is certainly warranted, there are also differences. The Pirate Bay was raided several times and the founders were criminally prosecuted. That’s not the case for Sci-Hub.

But who knows what will happen next…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

How to Make Your Web App More Reliable and Performant Using webpack: a Yahoo Mail Case Study

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/168508200981


By Murali Krishna Bachhu, Anurag Damle, and Utkarsh Shrivastava

As engineers on the Yahoo Mail team at Oath, we pride ourselves on the things that matter most to developers: faster development cycles, more reliability, and better performance. Users don’t necessarily see these elements, but they certainly feel the difference they make when significant improvements are made. Recently, we were able to upgrade all three of these areas at scale by adopting webpack® as Yahoo Mail’s underlying module bundler, and you can do the same for your web application.

What is webpack?

webpack is an open source module bundler for modern JavaScript applications. When webpack processes your application, it recursively builds a dependency graph that includes every module your application needs. Then it packages all of those modules into a small number of bundles, often only one, to be loaded by the browser.

webpack became our module bundler of choice not only because it supports on-demand loading and multiple bundle generation with relatively low runtime overhead, but also because it is better suited for web platforms and NodeJS apps and has great community support.

Comparison of webpack to other open source bundlers


How did we integrate webpack?

Like any developer does when integrating a new module bundler, we started integrating webpack into Yahoo Mail by looking at its basic config file. We explored available default webpack plugins as well as third-party webpack plugins and then picked the plugins most suitable for our application. If we didn’t find a plugin that suited a specific need, we wrote the webpack plugin ourselves (e.g., we wrote a plugin to execute Atomic CSS scripts in the latest Yahoo Mail experience in order to decrease our overall CSS payload**).

During the development process for Yahoo Mail, we needed a way to make sure webpack would continuously run in the background. To make this happen, we decided to use the task runner Grunt. Not only does Grunt keep the connection to webpack alive, but it also gives us the ability to pass different parameters to the webpack config file based on the given environment. Some examples of these parameters are source map options, enabling HMR, and uglification.

Before deployment to production, we wanted to optimize the JavaScript bundles for size to make the Yahoo Mail experience faster. webpack provides good default support for this with the UglifyJS plugin. Although the default options are conservative, the plugin gives us the ability to configure them. Once we modified the options to our specifications, we saved approximately 10KB.

Code snippet showing the configuration options for the UglifyJS plugin


Faster development cycles for developers

While developing a new feature, engineers ideally want to see their code changes reflected on their web app instantaneously. This allows them to maintain their train of thought and eventually results in more productivity. Before we implemented webpack, it took us around 30 seconds to 1 minute for changes to reflect on our Yahoo Mail development environment. webpack helped us reduce the wait time to 5 seconds.

More reliability

Consumers love a reliable product, where all the features work seamlessly every time. Before we began using webpack, we were generating JavaScript bundles on demand or during run-time, which meant the product was more prone to exceptions or failures while fetching the JavaScript bundles. With webpack, we now generate all the bundles during build time, which means that all the bundles are available whenever consumers access Yahoo Mail. This results in significantly fewer exceptions and failures and a better experience overall.

Better Performance

We were able to attain a significant reduction of payload after adopting webpack.

  1. Reduction of about 75 KB in gzipped JavaScript payload
  2. 50% reduction in server-side render time
  3. 10% improvement in Yahoo Mail’s launch performance metrics, as measured by render time above the fold (e.g., time to load the contents of an email).

Below are some charts that demonstrate the payload size of Yahoo Mail before and after implementing webpack.

Payload before using webpack (JavaScript Size = 741.41KB)


Payload after switching to webpack (JavaScript size = 669.08KB)


Conclusion

Shifting to webpack has resulted in significant improvements. We saw a common build process go from 30 seconds to 5 seconds, large JavaScript bundle size reductions, and a halving in server-side rendering time. In addition to these benefits, our engineers have found the community support for webpack to have been impressive as well. webpack has made the development of Yahoo Mail more efficient and enhanced the product for users. We believe you can use it to achieve similar results for your web application as well.

**Optimized CSS generation with Atomizer

Before we implemented webpack into the development of Yahoo Mail, we looked into how we could decrease our CSS payload. To achieve this, we developed an in-house solution for writing modular and scoped CSS in React. Our solution is similar to the Atomizer library, and our CSS is written in JavaScript like the example below:

Sample snippet of CSS written with Atomizer


Every React component creates its own styles.js file with required style definitions. React-Atomic-CSS converts these files into unique class definitions. Our total CSS payload after implementing our solution equaled all the unique style definitions in our code, or only 83KB (21KB gzipped).

During our migration to webpack, we created a custom plugin and loader to parse these files and extract the unique style definitions from all of our CSS files. Since this process is tied to bundling, only CSS files that are part of the dependency chain are included in the final CSS.

YouTuber Convicted For Publishing Video Piracy ‘Tutorials’

Post Syndicated from Andy original https://torrentfreak.com/youtuber-convicted-for-publishing-video-piracy-tutorials-171212/

While piracy-focused tutorials have been around for many years, the advent of streaming piracy coupled with the rise of the YouTube star created a perfect storm online.

Even a cursory search on YouTube now turns up thousands of Kodi addon and IPTV-focused channels, each vying to become the ultimate location for the latest and hottest piracy tips. While these videos don’t appear to be a priority for copyright holders, a channel operator in Brazil has just discovered that they aren’t without consequences.

The case involves Marcelo Otto Nascimento, the operator of the YouTube channel Café Tecnológico. The channel began, strangely, with videos about baking bread but later experimented with videos on technological topics, including observations on streaming content without paying for it.

In time, this attracted the negative attention of local TV industry group Associação Brasileira de Televisões por Assinatura (Brazilian Association of Subscription Television / ABTA). The group eventually took legal action, complaining about the nature of Nascimento’s YouTube and Facebook pages.

ABTA told the court that Nascimento had been posting tutorials that “encourage the use of equipment and applications designed to allow access to services and content” of its members, despite that content being protected by copyright. The trade group called for the removal of the content, an injunction against Nascimento, an apology, plus compensation for “material and moral damages.”

In his defense, Nascimento said that he merely comments on IPTV systems, does not breach copyright, doesn’t represent unfair competition, and did not cause the TV companies to incur any losses. Overall, Judge Fernando Henrique de Oliveira Biolcati did not agree with his assertions.

“[T]he plain intention of the defendant was to guide users in order for them to obtain access to the restricted content of the applicant’s associates….while gaining advantages for this, especially via remuneration from the providers of the mentioned applications (YouTube and Facebook), proportional to the volumes of visitors,” the Judge wrote in his ruling.

“This is not a question of mere disinterested comments, in the exercise of freedom of expression,” he added.

As a result, Nascimento was ordered to remove all of his online content that could be deemed instructional for pirates, in order to protect the interests of ABTA’s members and their ability to earn revenue from their content. In addition, the channel operator was forbidden from publishing any more videos of a similar nature.

On top, Nascimento must now pay the copyright holders for material damages, yet to be determined, measured from the posting of the first ‘pirate’ tutorial until such a date when all of the tutorials have been removed.

The ruling (PDF via Mg, Portuguese) also requires Nascimento to pay the equivalent of US$7,600 for “moral damages,” plus legal costs, within the next 15 days.

In a statement, ABTA said that following this conviction, more people could fall under the spotlight.

“ABTA is also monitoring the activities of other channels on YouTube and on social networks that publish illegal content such as channel lists, movies and ‘free’ access TV series, as well as tutorials and comparisons of devices or applications intended for illicit use (such as Megabox, HtvBox, Kodi, Dejavu, IPTV, ITVGo, etc.),” the group said.

Meanwhile, Nascimento says that he would’ve taken the videos down if only ABTA had asked him to. He will be appealing the decision, claiming that the videos did not teach people about piracy, they only demonstrated functionality. YouTube declined to comment.

Update: Following publication, a spokesperson for TVAddons – which has previously published instructional videos for Kodi – commented to TorrentFreak on the apparent urgency to take this matter to court, rather than handle via YouTube’s established complaints procedure.

“Taking the matter to courts rather than going through YouTube’s takedown system is part of an increasing pattern of legal bullying in the realm of intellectual property enforcement. Fighting a lawsuit against a major corporation can cost more than buying a house, it’s not a fair playing field for your average individual,” he said.

One of the remaining IPTV-focused videos

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

MQTT 5: Introduction to MQTT 5

Post Syndicated from The HiveMQ Team original https://www.hivemq.com/blog/mqtt-5-introduction-to-mqtt-5/

MQTT 5 Introduction

Introduction to MQTT 5

Welcome to our brand new blog post series MQTT 5 – Features and Hidden Gems. Without doubt, the MQTT protocol is the most popular and best received Internet of Things protocol as of today (see the Google Trends Chart below), supporting large scale use cases ranging from Connected Cars, Manufacturing Systems, Logistics, Military Use Cases to Enterprise Chat Applications, Mobile Apps and connecting constrained IoT devices. Of course, with huge amounts of production deployments, the wish list for future versions of the MQTT protocol grew bigger and bigger.

MQTT 5 is by far the most extensive and most feature-rich update to the MQTT protocol specification ever. We are going to explore all hidden gems and protocol features with use case discussion and useful background information – one blog post at a time.

Be sure to read the MQTT Essentials Blog Post series first before diving into our new MQTT 5 series. To get the most out of the new blog posts, it’s important to have a basic understanding of the MQTT 3.1.1 protocol as we are going to highlight key changes as well as all improvements.
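If you want something hands-on while reading, here is a minimal publish sketch with the Eclipse Paho Python client (the broker address and topic are placeholders; newer paho-mqtt releases can also select the MQTT 5 protocol, which this series will explore):

import paho.mqtt.client as mqtt

client = mqtt.Client()                          # defaults to MQTT 3.1.1
# client = mqtt.Client(protocol=mqtt.MQTTv5)    # newer paho-mqtt releases speak MQTT 5

client.connect("broker.example.com", 1883)      # placeholder broker address
client.loop_start()
client.publish("sensors/temperature", "21.5", qos=1)
client.loop_stop()
client.disconnect()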

H1 Instances – Fast, Dense Storage for Big Data Applications

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-h1-instances-fast-dense-storage-for-big-data-applications/

The scale of AWS and the diversity of our customer base gives us the opportunity to create EC2 instance types that are purpose-built for many different types of workloads. For example, a number of popular big data use cases depend on high-speed, sequential access to multiple terabytes of data. Our customers want to build and run very large MapReduce clusters, host distributed file systems, use Apache Kafka to process voluminous log files, and so forth.

New H1 Instances
The new H1 instances are designed specifically for this use case. In comparison to the existing D2 (dense storage) instances, the H1 instances provide more vCPUs and more memory per terabyte of local magnetic storage, along with increased network bandwidth, giving you the power to address more complex challenges with a nicely balanced mix of resources.

The instances are based on Intel Xeon E5-2686 v4 processors running at a base clock frequency of 2.3 GHz and come in four instance sizes (all VPC-only and HVM-only):

Instance Name   vCPUs   RAM       Local Storage   Network Bandwidth
h1.2xlarge      8       32 GiB    2 TB            Up to 10 Gbps
h1.4xlarge      16      64 GiB    4 TB            Up to 10 Gbps
h1.8xlarge      32      128 GiB   8 TB            10 Gbps
h1.16xlarge     64      256 GiB   16 TB           25 Gbps

The two largest sizes support Intel Turbo and CPU power management, with all-core Turbo at 2.7 GHz and single-core Turbo at 3.0 GHz.

Local storage is optimized to deliver high throughput for sequential I/O; you can expect to transfer up to 1.15 gigabytes per second if you use a 2 megabyte block size. The storage is encrypted at rest using 256-bit XTS-AES and one-time keys.

Moving large amounts of data on and off of these instances is facilitated by the use of Enhanced Networking, giving you up to 25 Gbps of network bandwidth within Placement Groups.

Launch One Today
H1 instances are available today in the US East (Northern Virginia), US West (Oregon), US East (Ohio), and EU (Ireland) Regions. You can launch them in On-Demand or Spot form. Dedicated Hosts, Dedicated Instances, and Reserved Instances (both 1-year and 3-year) are also available.
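If you prefer to launch from code, here is a hedged boto3 sketch; the AMI ID, key pair, and subnet are placeholders you would replace with your own:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # one of the Regions offering H1
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",       # placeholder HVM AMI
    InstanceType="h1.4xlarge",    # 16 vCPUs, 64 GiB RAM, 4 TB of local magnetic storage
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",        # placeholder key pair
    SubnetId="subnet-xxxxxxxx",   # H1 instances are VPC-only
)
print(response["Instances"][0]["InstanceId"])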

Jeff;

Amazon EC2 Update – Streamlined Access to Spot Capacity, Smooth Price Changes, Instance Hibernation

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-ec2-update-streamlined-access-to-spot-capacity-smooth-price-changes-instance-hibernation/

EC2 Spot Instances give you access to spare compute capacity in the AWS Cloud. Our customers use fleets of Spot Instances to power their CI/CD environments & traffic generators, host web servers & microservices, render movies, and to run many types of analytics jobs, all at prices that offer significant savings in comparison to On-Demand Instances.

New Streamlined Access
Today we are introducing a new, streamlined access model for Spot Instances. You simply indicate your desire to use Spot capacity when you launch an instance via the RunInstances function, the run-instances command, or the AWS Management Console to submit a request that will be fulfilled as long as the capacity is available. With no extra effort on your part you’ll save up to 90% off the On-Demand price for the instance type, allowing you to boost your overall application throughput by up to 10x for the same budget. The instances that you launch in this way will continue to run until you terminate them or until EC2 needs to reclaim them for On-Demand usage. At that point the instance will be given the usual 2-minute warning and then reclaimed, making this a great fit for applications that are fault-tolerant.

Unlike the old model which required an understanding of Spot markets, bidding, and calls to a standalone asynchronous API, the new model is synchronous and as easy to use as On-Demand. Your code or your script receives an Instance ID immediately and need not check back to see if the request has been processed and accepted.

We’ve made this as clean and as simple as possible, with the expectation that it will be easy to modify many current scripts and applications to request and make use of Spot capacity. If you want to exercise additional control over your Spot instance budget, you have the option to specify a maximum price when you make a request for capacity. If you use Spot capacity to power your Amazon EMR, Amazon ECS, or AWS Batch clusters, or if you launch Spot instances by way of an AWS CloudFormation template or Auto Scaling Group, you will benefit from this new model without having to make any changes.

Applications that are built around RequestSpotInstances or RequestSpotFleet will continue to work just fine with no changes. However, you now have the option to make requests that do not include the SpotPrice parameter.

Smooth Price Changes
As part of today’s launch we are also changing the way that Spot prices change, moving to a model where prices adjust more gradually, based on longer-term trends in supply and demand. As I mentioned earlier, you will continue to save an average of 70-90% off the On-Demand price, and you will continue to pay the Spot price that’s in effect for the time period your instances are running. Applications built around our Spot Fleet feature will continue to automatically diversify placement of their Spot Instances across the most cost-effective pools based on the configuration you specified when you created the fleet.

Spot in Action
To launch a Spot Instance from the command line, simply specify the Spot market:

$ aws ec2 run-instances --instance-market-options MarketType=spot --image-id ami-1a2b3c4d --count 1 --instance-type c3.large
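The equivalent call through the SDK looks something like the sketch below (same placeholder AMI; InstanceMarketOptions is the RunInstances parameter that marks the request as Spot):

import boto3

ec2 = boto3.client("ec2")
response = ec2.run_instances(
    ImageId="ami-1a2b3c4d",                        # placeholder AMI
    InstanceType="c3.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={"MarketType": "spot"},  # no bidding required; a max price is optional
)
print(response["Instances"][0]["InstanceId"])      # returned immediately, no polling needed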

Instance Hibernation
If you run workloads that keep a lot of state in memory, you will love this new feature!

You can arrange for instances to save their in-memory state when they are reclaimed, allowing the instances and the applications on them to pick up where they left off when capacity is once again available, just like closing and then opening your laptop. This feature works on C3, C4, and certain sizes of R3, R4, and M4 instances running Amazon Linux, Ubuntu, or Windows Server, and is supported by the EC2 Hibernation Agent.

The in-memory state is written to the root EBS volume of the instance using space that is set-aside when the instance launches. The private IP address and any Elastic IP Addresses are also preserved across a stop/start cycle.
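To ask EC2 to hibernate rather than terminate a Spot instance when it is reclaimed, the interruption behavior can be set on the same call; a hedged sketch, assuming a supported instance type, OS, and enough room on the root EBS volume:

import boto3

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-1a2b3c4d",      # placeholder AMI running the EC2 Hibernation Agent
    InstanceType="c4.large",     # hibernation supports C3, C4, and some R3, R4, and M4 sizes
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"InstanceInterruptionBehavior": "hibernate"},
    },
)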

Jeff;

Ares Kodi Project Calls it Quits After Hollywood Cease & Desist

Post Syndicated from Andy original https://torrentfreak.com/ares-kodi-project-calls-it-quits-after-hollywood-cease-desist-171117/

This week has been particularly bad for those involved in the Kodi addon scene. Following cease-and-desist notices from the MPA-led anti-piracy coalition Alliance for Creativity and Entertainment, several addon developers and repositories shut down.

With Columbia, Disney, Paramount, Twentieth Century Fox, Universal, Warner, Netflix, Amazon and Sky TV all lined up for war, the third-party developers had little choice but to quit. One of those affected was the leader of the hugely popular Ares Project, which quietly disappeared mid-week.

The Ares Wizard was an extremely popular and important piece of software which allowed people to switch Kodi builds, install third-party addons, install popular repositories, change system settings, and carry out backups. It’s installed on huge numbers of machines worldwide but it will soon fall into disrepair.

The mighty Ares Wizard in action

“[This week] I was subject to a hand-delivered notice to cease-and-desist from MPA & ACE,” Ares Project leader Tekto informs TorrentFreak.

“Given the notice, we obviously shut down the repo and wizard as requested.”

The news that Ares Project is done and never coming back will be a huge blow to the community. The project just celebrated its second birthday and has grown exponentially since it first arrived on the scene.

“Ares Project started in Oct 2015. Originally it was to be a tool to setup up the video cache on Kodi correctly. However, many ideas were thrown into the pot and it became a wee bit more; such as a wizard to install community provided builds, common addons and few other tweaks and options,” Tekto says.

“For my own part I started blogging earlier that year as part of a longer-term goal to be self-funding. I always disliked seeing begging bowls out to support ‘server’ costs, many of which were cheap £5-10 per month servers that were used to gain £100s in donations.

“The blog, via affiliate links and ads, could and would provide the funds to cover our hosting costs without resorting to begging for money every weekend.”

Intrigued by this first wave of actions by ACE in Europe, TorrentFreak asked for a copy of the MPA/ACE cease-and-desist notice but unfortunately, Tekto flat-out refused. All he would tell us is that he’d agreed not to give out any copies or screenshots and that he was adhering to that 100%.

That only leaves speculation as to what grounds the MPA/ACE cited for closing the project but to be fair, it doesn’t take much thought to find a direct comparison. Earlier this year, in the BREIN v Filmspeler case, the European Court of Justice (ECJ) ruled that selling “fully-loaded” Kodi boxes amounted to illegally communicating copyrighted content to the public.

With that in mind, it doesn’t take much of a leap to see how this ruling could also apply to someone distributing “fully-loaded” Kodi software builds or addons via a website. It had previously been considered a legal gray area, of course, and it was in that space that the Ares team believed it operated. After all, it took ECJ clarification for local courts in the Netherlands to be satisfied with the legal position.

“There was never any question that what we were doing was illegal. We didn’t and never have hosted any content, we always prevented discussions about illegal paid services, and never sold any devices, pre-loaded or otherwise. That used to be enough to occupy the ‘gray’ area which meant we were safe to develop our applications. That changed in 2017 as we were to discover,” Tekto notes.

Up until this week and apparently oblivious to how the earlier ECJ ruling might affect their operation, things had been going extremely well for Ares. In mid-2016, the group moved to its own support forum that attracted 100,000 signed-up members and 300,000 visitors every month.

“This was quite an achievement in terms of viral marketing but ultimately this would become part of our downfall,” Tekto says.

“The recent innovation of the ‘basket driven’ Ares Portal system seems to have triggered the legal move to shut the project down completely. This simple system gave access to hundreds of add-ons. The system removed the need for builds, blogs and YouTubers – you just shopped on the site for addons and then installed them to your device with a simple 6 digit code.”

While Ares and Tekto still didn’t believe they were doing anything illegal (addons were linked, not hosted) it is now pretty clear to them that the previous gray area has been well and truly closed, at least as far as the MPA/ACE alliance is concerned. And with that in mind, the show is over. Done. Finished.

“We are not criminals or malicious hackers, we weren’t even careful about hiding our identities. You couldn’t meet a more ordinary bunch of folks in truth,” he says.

“There was never any question we would close our doors if what we were doing crossed any boundaries of legality. So with the notice served on us, we are closing our doors and removing all our websites and applications. It’s a sad day in many ways, but nobody wants to be facing court or a potential custodial sentence, for what is essentially a hobby.”

Finally, Tekto says that others like him might want to consider their positions carefully, before they too get a knock at the door. In the meantime, he gives thanks to the project’s supporters, who have remained loyal over the past two years.

“It just leaves me to thank our users for their support and step away from the Kodi scene,” he concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Backing Up the Modern Enterprise with Backblaze for Business

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/endpoint-backup-solutions/

Endpoint backup diagram

Organizations of all types and sizes need reliable and secure backup. Whether they have as few as 3 or as many as 300,000 computer users, an organization’s computer data is a valuable business asset that needs to be protected.

Modern organizations are changing how they work and where they work, which brings new challenges to making sure that the company’s data assets are not only available, but secure. Larger organizations have IT departments that are prepared to address these needs, but in smaller and newer organizations the challenge often falls upon office management, who might not be as prepared or knowledgeable when facing a work environment undergoing dramatic changes.

Whether small or large, local or world-wide, for-profit or non-profit, organizations need a backup strategy and solution that matches the new ways of working in the enterprise.

The Enterprise Has Changed, and So Has Data Use

More and more, organizations are working in the cloud. These days organizations can operate just fine without their own file servers, database servers, mail servers, or other IT infrastructure that used to be standard for all but the smallest organization.

The reality is that for most organizations, though, it’s a hybrid work environment, with a combination of cloud-based and PC and Macintosh-based applications. Legacy apps aren’t going away any time soon. They will be with us for a while, with their accompanying data scattered amongst all the desktops, laptops and other endpoints in corporate headquarters, home offices, hotel rooms, and airport waiting areas.

In addition, the modern workforce likely combines regular full-time employees, remote workers, contractors, and sometimes interns, volunteers, and other temporary workers who also use company IT assets.

The Modern Enterprise Brings New Challenges for IT

These changes in how enterprises work present a problem for anyone tasked with making sure that data — no matter who uses it or where it lives — is adequately backed-up. Cloud-based applications, when properly used and managed, can be adequately backed up, provided that users are connected to the internet and data transfers occur regularly — which is not always the case. But what about the data on the laptops, desktops, and devices used by remote employees, contractors, or just employees whose work keeps them on the road?

The organization’s backup solution must address all the needs of the modern organization or enterprise using both cloud and PC and Mac-based applications, and not be constrained by employee or computer location.

A Ten-Point Checklist for the Modern Enterprise for Backing Up

What should the modern enterprise look for when evaluating a backup solution?

1) Easy to deploy to workers’ computers

Whether installed by the computer user or an IT person locally or remotely, the backup solution must be easy to implement quickly with minimal demands on the user or administrator.

2) Fast and unobtrusive client software

Backups should happen in the background by efficient (native) PC and Macintosh software clients that don’t consume valuable processing power or take memory away from applications the user needs.

3) Easy to configure

The backup solutions must be easy to configure for both the user and the IT professional. Ease-of-use means less time to deploy, configure, and manage.

4) Defaults to backing up all valuable data

By default, the solution backs up commonly used files and folders or directories, including desktops. Some backup solutions are difficult and intimidating because they require that the user choose what needs to be backed up, often missing files and folders/directories that contain valuable data.

5) Works automatically in the background

Backups should happen automatically, no matter where the computer is located. The computer user, especially the remote or mobile one, shouldn’t be required to attach cables or drives, or remember to initiate backups. A working solution backs up automatically without requiring action by the user or IT administrator.

6) Data restores are fast and easy

Whether it’s a single file, directory, or an entire system that must be restored, a user or IT sysadmin needs to be able to restore backed up data as quickly as possible. In cases of large restores to remote locations, the ability to send a restore via physical media is a must.

7) No limitations on data

Throttling, caps, and data limits complicate backups and require guesses about how much storage space will be needed.

8) Safe & Secure

Organizations require that their data is secure during all phases of initial upload, storage, and restore.

9) Easy-to-manage

The backup solution needs to provide a clear and simple web management interface for all functions. Designing for ease-of-use leads to efficiency in management and operation.

10) Affordable and transparent pricing

Backup costs should be predictable, understandable, and without surprises.

Two Scenarios for the Modern Enterprise

Enterprises exist in many forms and types, but wanting to meet the above requirements is common across all of them. Below, we take a look at two common scenarios showing how enterprises face these challenges. Three case studies are available that provide more information about how Backblaze customers have succeeded in these environments.

Enterprise Profile 1

The needs of a smaller enterprise differ from those of larger, established organizations. This organization likely doesn’t have anyone devoted full-time to IT. The job of onboarding new employees and getting them set up with a computer often falls to an executive assistant or office manager, who might hand new employees a checklist with the required software and account information and let them set up the computer themselves.

Organizations in this profile need solutions that are easy to install and require little to no configuration. Backblaze, by default, backs up all user data, which lets the organization be secure in knowing all the data will be backed up to the cloud — including files left on the desktop. Combined with Backblaze’s unlimited data policy, organizations have a truly “set it and forget it” platform.

Customizing Groups To Meet Teams’ Needs

The Groups feature of Backblaze for Business allows an organization to decide whether an individual client’s computer will be Unmanaged (backups and restores under the control of the worker), or Managed, in which an administrator can monitor the status and frequency of backups and handle restores should they become necessary. One group for the entire organization might be adequate at this stage, but the organization has the option to add additional groups as it grows and needs more flexibility and control.

The organization also has the option of managing and monitoring users through Groups. With Groups, administrators can set user-based access rules, create restores of lost files or entire computers on an employee’s behalf, centralize billing for all client computers in the organization, and redeploy a repaired or replacement computer with the backed-up data.

Restores

In this scenario, the decision has been made to let each user manage her own backups, including restores of individual files or entire systems if necessary. If a restore of a file or system is needed, the process is easy enough for the user to handle by herself.

Case Study 1

Read about how PagerDuty uses Backblaze for Business in a mixed enterprise of cloud and desktop/laptop applications.

PagerDuty Case Study

In a common approach, the employee can retrieve an accidentally deleted file or an earlier version of a document on her own. The Backblaze for Business interface is easy to navigate and was designed with feedback from thousands of customers over the course of a decade.

In the event of a lost, damaged, or stolen laptop, administrators of Managed Groups can initiate the restore, which could be in the form of a download of a restore ZIP file from the web management console, or the overnight shipment of a USB drive directly to the organization or user.

Enterprise Profile 2

This profile is for an organization with a full-time IT staff. When a new worker joins the team, the IT staff is tasked with configuring the computer and delivering it to the new employee.

Backblaze for Business Groups

Case Study 2

Global charitable organization charity: water uses Backblaze for Business to back up workers’ and volunteers’ laptops as they travel to developing countries in their efforts to provide clean and safe drinking water.

charity: water Case Study

This organization can take advantage of additional capabilities in Groups. A Managed Group makes sense in an organization with a geographically dispersed work force as it lets IT ensure that workers’ data is being regularly backed up no matter where they are. Billing can be company-wide or assigned to individual departments or geographical locations. The organization has the choice of how to divide the organization into Groups (location, function, subsidiary, etc.) and whether the Group should be Managed or Unmanaged. Using Managed Groups might be suitable for most of the organization, but there are exceptions in which sensitive data might dictate using an Unmanaged Group, such as could be the case with HR, the executive team, or finance.

Deployment

By Invitation Email, Link, or Domain

Backblaze for Business allows a number of options for deploying the client software to workers’ computers. Client installation is fast and easy on both Windows and Macintosh, so sending email invitations to users, or automatically enrolling them by domain or invitation link, is a common approach.

By Remote Deployment

IT might choose to remotely and silently deploy Backblaze for Business across specific Groups or the entire organization. An administrator can silently deploy the Backblaze backup client via the command-line, or use common RMM (Remote Monitoring and Management) tools such as Jamf and Munki.

Restores

Case Study 3

Read about how Bright Bear Technology Solutions, an IT Managed Service Provider (MSP), uses the Groups feature of Backblaze for Business to manage customer backups and restores, deploy Backblaze licenses to their customers, and centralize billing for all their client-based backup services.

Bright Bear Case Study

Some organizations are better equipped to manage or assist workers when restores become necessary. Individual users will be pleased to discover they can roll back files to an earlier version if they wish, but IT will likely manage any complete system restore that involves reconfiguring a computer after a repair or requisitioning an entirely new system when needed.

This organization might choose to retain a client’s entire computer backup for archival purposes, using Backblaze B2 as the cloud storage solution. This is another advantage of having a cloud storage provider that combines both endpoint backup and cloud object storage among its services.

The Next Step: Server Backup & Data Archiving with B2 Cloud Storage

As organizations grow, they have increased needs for cloud storage beyond Macintosh and PC data backup. Backblaze’s object cloud storage, Backblaze B2, provides low-cost storage and archiving of records, media, and server data that can grow with the organization’s size and needs.

B2 Cloud Storage is available through the same Backblaze management console as Backblaze Computer Backup. This means that Admins have one console for billing, monitoring, deployment, and role provisioning. B2 is priced at 1/4 the cost of Amazon S3, or $0.005 per month per gigabyte (which equals $5/month per terabyte).
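To put that rate in concrete terms, here is a quick, hypothetical cost sketch in Python. The rate is the one quoted above; the 50 TB workload is an invented example, not a quote.

    # Hypothetical monthly B2 storage cost estimate at the $0.005/GB/month rate
    # quoted above. The 50 TB workload is invented for illustration.
    B2_RATE_PER_GB_MONTH = 0.005  # USD

    def monthly_storage_cost(stored_tb):
        """Cost of storing `stored_tb` terabytes for one month, in dollars."""
        return stored_tb * 1000 * B2_RATE_PER_GB_MONTH  # B2 bills per gigabyte

    print(f"${monthly_storage_cost(50):,.2f}/month for 50 TB")  # $250.00/month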

Why Modern Enterprises Chose Backblaze

Backblaze for Business

Businesses and organizations select Backblaze for Business for backup because Backblaze is designed to meet the needs of the modern enterprise. Backblaze customers are part of a platform that has a 10+ year track record of innovation and over 400 petabytes of customer data already under management.

Head-to-head comparisons have shown that Backblaze’s backup model captures data that other backup solutions overlook in their default configurations, including valuable files that are needed after an accidental deletion, theft, or computer failure.

Backblaze is the only enterprise-level backup company that provides two-factor verification via both SMS and TOTP (Time-based One-Time Password) authenticator apps to all accounts at no incremental charge. At just $50/year/computer, Backblaze is affordable for any size of enterprise.
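For the curious, here is a minimal sketch of what a TOTP authenticator app computes, per RFC 6238, using only the Python standard library. This is purely illustrative and is not Backblaze’s implementation; the shared secret below is a made-up example.

    import base64
    import hmac
    import struct
    import time

    def totp(secret_b32, period=30, digits=6):
        """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period
        msg = struct.pack(">Q", counter)
        digest = hmac.new(key, msg, "sha1").digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Example only -- never hard-code a real shared secret.
    print(totp("JBSWY3DPEHPK3PXP"))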

Modern Enterprises can Meet The Challenge of The Changing Data Environment

With the right backup solution and strategy, the modern enterprise will be prepared to ensure that its data is protected from accident, disaster, or theft, whether that data sits in one office or is dispersed among many locations and across remote and mobile employees.

Backblaze for Business is an affordable solution that enables organizations to meet the evolving data demands facing the modern enterprise.

The post Backing Up the Modern Enterprise with Backblaze for Business appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Hard Drive Stats for Q3 2017

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hard-drive-failure-rates-q3-2017/

Q3 2017 Hard Drive Stats

In Q3 2017, Backblaze introduced both 10 TB and 12 TB hard drives into our data centers, we continued to retire 3 TB and 4 TB hard drives to increase storage density, and we added over 59 petabytes of data storage to bring our total storage capacity to 400 petabytes.

In this update, we’ll review the Q3 2017 and lifetime hard drive failure rates for all our drive models in use at the end of Q3. We’ll also check in on our 8 TB enterprise versus consumer hard drive comparison, and look at the storage density changes in our data centers over the past couple of years. Along the way, we’ll share our observations and insights, and as always, you can download the hard drive statistics data we use to create these reports.

Q3 2017 Hard Drive Failure Rates

Since our Q2 2017 report, we added 9,599 new hard drives and retired 6,221 hard drives, for a net add of 3,378 drives and a total of 86,529. These numbers cover the drive models of which we have 45 or more drives, with one exception that we’ll get to in a minute.

Let’s look at the Q3 statistics that include our first look at the 10 TB and 12 TB hard drives we added in Q3. The chart below is for activity that occurred just in Q3 2017.

Hard Drive Failure Rates for Q3 2017

Observations

  1. The hard drive failure rate for the quarter was 1.84%, our lowest quarterly rate ever. There are several factors that contribute to this, but one that stands out is the average age of the hard drives in use. Only the 4 TB HGST drives (model: HDS5C4040ALE630) have an average age over 4 years — 51.3 months to be precise. The average age of all the other drive models is less than 4 years, with nearly 80% of all of the drives being less than 3 years old.
  2. The 10- and 12 TB drive models are new. With a combined 13,000 drive days in operation, they’ve had zero failures. While all of these drives passed through formatting and load testing without incident, it is a little too early to reach any conclusions.

Testing Drives

Normally, we list only those drive models where we have 45 drives or more, as it formerly took 45 drives (currently 60) to fill a Storage Pod. We consider a Storage Pod the base unit for drive testing. Yet we listed the 12 TB drives even though we only have 20 of them in operation. What gives? It’s the first step in testing drives.

A Backblaze Vault consists of 20 Storage Pods logically grouped together. Twenty 12 TB drives are deployed in the same drive position in each of the 20 Storage Pods and grouped together into a storage unit we call a “tome.” An incoming file is stored in one tome, and is spread out across the 20 storage pods in the tome for reliability and availability. The remaining 59 tomes, in this case, use 8 TB drives. This allows us to see the performance and reliability of a 12 TB hard drive model in an operational environment without having to buy 1,200 of them to start.
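To make the tome idea a bit more concrete, here is a toy sketch of striping a file across the 20 Storage Pods of a tome. The real Vault software uses full erasure coding with multiple parity shards; the 19-data-plus-1-XOR-parity split below is a simplified stand-in, not our actual parameters.

    # Toy illustration of spreading one file across the 20 Storage Pods of a tome.
    # Real Vaults use erasure coding; the 19-data + 1-XOR-parity layout here is a
    # simplified stand-in that still lets any single lost shard be rebuilt.

    PODS_PER_TOME = 20

    def shard_file(data: bytes):
        """Split data into 19 equal-size data shards plus one XOR parity shard."""
        n_data = PODS_PER_TOME - 1
        shard_size = -(-len(data) // n_data)                # ceiling division
        padded = data.ljust(n_data * shard_size, b"\0")
        shards = [padded[i * shard_size:(i + 1) * shard_size] for i in range(n_data)]
        parity = bytearray(shard_size)
        for shard in shards:
            for i, byte in enumerate(shard):
                parity[i] ^= byte
        return shards + [bytes(parity)]                     # one shard per pod

    shards = shard_file(b"an incoming file, spread across one tome")
    assert len(shards) == PODS_PER_TOME
    # XOR-ing the 19 surviving shards together reconstructs any single missing one.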

Breaking news: Our first Backblaze Vault filled with 1,200 Seagate 12 TB hard drives (model: ST12000NM0007) went into production on October 20th.

Storage Density Continues to Increase

As noted earlier, we retired 6,221 hard drives in Q3, all of them 3 TB or 4 TB models. The retired drives have been replaced by 8 TB, 10 TB, and 12 TB drive models. This dramatic increase in storage density added 59 petabytes of storage in Q3. The following chart shows that change since the beginning of 2016.

Hard Drive Count by Drive Size

You can clearly see the retirement of the 2 TB and 3 TB drives, each being replaced predominantly by 8 TB drives. You can also see the beginning of the retirement curve for the 4 TB drives, which will most likely be replaced by 12 TB drives over the coming months. A subset of the 4 TB drives, roughly 10,000 that were installed within the past year or so, will most likely stay in service for at least the next couple of years.

Lifetime Hard Drive Stats

The table below shows the failure rates for the hard drive models we had in service as of September 30, 2017. This is over the period beginning in April 2013 and ending September 30, 2017. If you are interested in the hard drive failure rates for all the hard drives we’ve used over the years, please refer to our 2016 hard drive review.

Cumulative Hard Drive Failure Rates

Note 1: The “+ / – Change” column reflects the change in the annualized failure rate from the previous quarter. Down is good, up is bad.
Note 2: You can download the data on this chart and the data from the “Hard Drive Failure Rates for Q3 2017” chart shown earlier in this review. The downloaded ZIP file contains one MS Excel spreadsheet.

The annualized failure rate for all of the drive models listed above is 2.07%; this is higher than the 1.97% for the previous quarter. The primary driver behind this was the retirement of all of the HGST 3 TB drives (model: HDS5C3030ALA630) in Q3. Those drives had over 6 million drive days and an annualized failure rate of 0.82%, well below the average for the entire set of drives. Those drives are now gone and no longer part of the results.
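If you want to reproduce the failure rates from the downloadable data, the calculation is straightforward: divide failures by drive years (drive days divided by 365) and express the result as a percentage. Here is a short sketch, with placeholder numbers rather than values from the tables:

    def annualized_failure_rate(drive_days, failures):
        """Backblaze-style AFR: failures per drive year, expressed as a percentage."""
        drive_years = drive_days / 365.0
        return 100.0 * failures / drive_years

    # Placeholder numbers for illustration (not taken from the tables above):
    # 1,000,000 drive days with 30 failures works out to roughly 1.1% annualized.
    print(f"{annualized_failure_rate(1_000_000, 30):.2f}%")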

Consumer Versus Enterprise Drives

The comparison of the consumer and enterprise Seagate 8 TB drives continues. Both of the drive models, Enterprise: ST8000NM0055 and Consumer: ST8000DM002, saw their annualized failure rates decrease from the previous quarter. In the case of the enterprise drives, this occurred even though we added 8,350 new drives in Q3. This brings the total number of Seagate 8 TB enterprise drives to 14,404, which have accumulated nearly 1.4 million drive days.

A comparison of the two drive models shows the annualized failure rates being very similar:

  • 8 TB Consumer Drives: 1.1% Annualized Failure Rate
  • 8 TB Enterprise Drives: 1.2% Annualized Failure Rate

Given that the failure rates for the two drive models appear to be similar, are the Seagate 8 TB enterprise drives worth any premium you might have to pay for them? As we have previously documented, the Seagate enterprise drives load data faster and have a number of features, such as PowerChoice™ technology, that can be very useful. In addition, enterprise drives typically carry a 5-year warranty versus a 2-year warranty for the consumer drives. While drive price and availability are our primary considerations, you may decide other factors are more important.

We will continue to follow these drives, especially as they age past two years, the warranty point for the consumer drives.

Join the Drive Stats Webinar on Friday, November 3

We will be doing a deeper dive on this review in a webinar: “Q3 2017 Hard Drive Failure Stats” being held on Friday, November 3rd at 10:00 am Pacific Time. We’ll dig into what’s behind the numbers, including the enterprise vs consumer drive comparison. To sign up for the webinar, you will need to subscribe to the Backblaze BrightTALK channel if you haven’t already done so.

Wrapping Up

Our next drive stats post will be in January, when we’ll review the data for Q4 and all of 2017, and we’ll update our lifetime stats for all of the drives we have ever used. In addition, we’ll get our first real look at the 12 TB drives.

As a reminder, the hard drive data we use is available on our Hard Drive Test Data page. You can download and use this data for free for your own purposes. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone; it is free.

Good luck and let us know if you find anything interesting.

The post Hard Drive Stats for Q3 2017 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

New – Amazon EC2 Instances with Up to 8 NVIDIA Tesla V100 GPUs (P3)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-amazon-ec2-instances-with-up-to-8-nvidia-tesla-v100-gpus-p3/

Driven by customer demand and made possible by ongoing advances in the state of the art, we’ve come a long way since the original m1.small instance that we launched in 2006, with instances that emphasize compute power, burstable performance, memory size, local storage, and accelerated computing.

The New P3
Today we are making the next generation of GPU-powered EC2 instances available in four AWS regions. Powered by up to eight NVIDIA Tesla V100 GPUs, the P3 instances are designed to handle compute-intensive machine learning, deep learning, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, and genomics workloads.

P3 instances use customized Intel Xeon E5-2686v4 processors running at up to 2.7 GHz. They are available in three sizes (all VPC-only and EBS-only):

Model       | Tesla V100 GPUs | GPU Memory | NVIDIA NVLink | vCPUs | Main Memory | Network Bandwidth | EBS Bandwidth
p3.2xlarge  | 1               | 16 GiB     | n/a           | 8     | 61 GiB      | Up to 10 Gbps     | 1.5 Gbps
p3.8xlarge  | 4               | 64 GiB     | 200 GBps      | 32    | 244 GiB     | 10 Gbps           | 7 Gbps
p3.16xlarge | 8               | 128 GiB    | 300 GBps      | 64    | 488 GiB     | 25 Gbps           | 14 Gbps

Each of the NVIDIA GPUs is packed with 5,120 CUDA cores and another 640 Tensor cores and can deliver up to 125 TFLOPS of mixed-precision floating point, 15.7 TFLOPS of single-precision floating point, and 7.8 TFLOPS of double-precision floating point. On the two larger sizes, the GPUs are connected together via NVIDIA NVLink 2.0 running at a total data rate of up to 300 GBps. This allows the GPUs to exchange intermediate results and other data at high speed, without having to move it through the CPU or the PCI-Express fabric.

What’s a Tensor Core?
I had not heard the term Tensor core before starting to write this post. According to this very helpful post on the NVIDIA Blog, Tensor cores are designed to speed up the training and inference of large, deep neural networks. Each core is able to quickly and efficiently multiply a pair of 4×4 half-precision (also known as FP16) matrices together, add the resulting 4×4 matrix to another half or single-precision (FP32) matrix, and store the resulting 4×4 matrix in either half or single-precision form. Here’s a diagram from NVIDIA’s blog post:

This operation is in the innermost loop of the training process for a deep neural network, and is an excellent example of how today’s NVIDIA GPU hardware is purpose-built to address a very specific market need. By the way, the mixed-precision qualifier on the Tensor core performance means that it is flexible enough to work with a combination of 16-bit and 32-bit floating point values.
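Numerically, each Tensor core operation is a small fused multiply-accumulate, D = A × B + C. A rough NumPy emulation of the mixed-precision data types involved (purely illustrative, and nothing like the actual hardware execution path) looks like this:

    import numpy as np

    # Emulate the per-core operation D = A @ B + C: FP16 inputs, FP32 accumulate.
    # This only illustrates the data types involved, not the hardware execution.
    a = np.random.rand(4, 4).astype(np.float16)
    b = np.random.rand(4, 4).astype(np.float16)
    c = np.random.rand(4, 4).astype(np.float32)

    d = a.astype(np.float32) @ b.astype(np.float32) + c   # accumulate in FP32
    print(d.dtype, d.shape)                                # float32 (4, 4)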

Performance in Perspective
I always like to put raw performance numbers into a real-world perspective so that they are easier to relate to and (hopefully) more meaningful. This turned out to be surprisingly difficult, given that the eight NVIDIA Tesla V100 GPUs on a single p3.16xlarge can do 125 trillion single-precision floating point multiplications per second.

Let’s go back to the dawn of the microprocessor era, and consider the Intel 8080A chip that powered the MITS Altair that I bought in the summer of 1977. With a 2 MHz clock, it was able to do about 832 multiplications per second (I used this data and corrected it for the faster clock speed). The p3.16xlarge is roughly 150 billion times faster. However, just 1.2 billion seconds have gone by since that summer. In other words, I can do 100x more calculations today in one second than my Altair could have done in the last 40 years!

What about the innovative 8087 math coprocessor that was an optional accessory for the IBM PC announced in the summer of 1981? With a 5 MHz clock and purpose-built hardware, it was able to do about 52,632 multiplications per second. About 1.14 billion seconds have elapsed since then, and the p3.16xlarge is 2.37 billion times faster, so the poor little PC would be barely halfway through a calculation that runs for one second today.

OK, how about a Cray-1? First delivered in 1976, this supercomputer was able to perform vector operations at 160 MFLOPS, making the p3.16xlarge 781,000 times faster. It could have iterated on some interesting problem 1,500 times over the years since it was introduced.
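If you’d like to check the arithmetic behind these comparisons, a few lines of Python reproduce the rough ratios; the historical rates are the ones quoted above, and everything here is deliberately back-of-envelope:

    # Back-of-envelope check of the speedup figures quoted above.
    p3_flops = 125e12           # single-precision multiplications/second, p3.16xlarge

    rates = {
        "Altair 8080A (1977)": 832,       # multiplications/second
        "IBM PC 8087 (1981)": 52_632,     # multiplications/second
        "Cray-1 (1976)": 160e6,           # FLOPS
    }

    for name, rate in rates.items():
        print(f"{name}: p3.16xlarge is roughly {p3_flops / rate:,.0f}x faster")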

Comparisons between the P3 and today’s scale-out supercomputers are harder to make, given that you can think of the P3 as a step-and-repeat component of a supercomputer that you can launch on an as-needed basis.

Run One Today
In order to take full advantage of the NVIDIA Tesla V100 GPUs and the Tensor cores, you will need to use CUDA 9 and cuDNN 7. These drivers and libraries have already been added to the newest versions of the Windows AMIs and will be included in an updated Amazon Linux AMI that is scheduled for release on November 7th. New packages are already available in our repos if you want to install them on your existing Amazon Linux AMI.
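Once the drivers are in place, a quick sanity check that the instance can actually see all of the GPUs is worthwhile. Here is one minimal sketch, assuming the NVIDIA driver and the nvidia-smi tool are installed and on the PATH:

    import subprocess

    # List every GPU the NVIDIA driver can see; a p3.16xlarge should show eight
    # Tesla V100s if everything is wired up correctly.
    print(subprocess.check_output(["nvidia-smi", "-L"]).decode())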

The newest AWS Deep Learning AMIs come preinstalled with the latest releases of Apache MXNet, Caffe2, and TensorFlow (each with support for the NVIDIA Tesla V100 GPUs), and will be updated to support P3 instances with other machine learning frameworks such as Microsoft Cognitive Toolkit and PyTorch as soon as those frameworks release support for the NVIDIA Tesla V100 GPUs. You can also use the NVIDIA Volta Deep Learning AMI for NGC.

P3 instances are available in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo) Regions in On-Demand, Spot, Reserved Instance, and Dedicated Host form.

Jeff;

 

Blockchain? It’s All Greek To Me…

Post Syndicated from Bozho original https://techblog.bozho.net/blockchain-its-all-greek-to-me/

The blockchain hype is huge, the ICO craze (“Coindike”) is generating millions if not billions of “funding” for businesses that claim to revolutionize basically anything.

I’ve been following all of that for a while. I got my first (and only) Bitcoin several years ago, I know how the technology works, I’ve implemented the data structure part, I’ve tried (with varying success) to install an Ethereum wallet almost as soon as Ethereum appeared, and I’ve read and subscribed to newsletters about dozens of projects and new cryptocurrencies, including storj.io, siacoin, namecoin, etc. I would say I’m at least above average in terms of knowledge of how cryptocurrencies, blockchains, smart contracts, the EVM, and proof-of-whatever operate. And I’ve voiced my concerns about the technology in general.

Now it’s rant time.

I’ve been reading whitepapers of various projects, I’ve been to various meetups and talks, I’ve been reading the professed future applications of the blockchain, and I have to admit – it’s all Greek to me. I have no clue what these people are talking about. And why would all of that make any sense. I still think I’m not clever enough to understand the upcoming revolution, but there’s also a cynical side of me that says “this is all a scam”.

Why “X on the blockchain” somehow makes it magical and superior to a good old centralized solution? No, spare me the cliches about “immutable ledger”, “lack of central authority” and the likes. These are the phrases that a person learns after reading literally one article about blockchain. Have you actually written anything apart from a complex-sounding whitepaper or a hello-world smart contract? Do you really know how the overlay network works, how the economic incentives behind that network work, how all the cryptography works? Maybe there are many, many people that indeed know that and they know it better than me and are thus able to imagine the business case behind “X on the blockchain”.

I can’t. I can’t see why it would be useful to abandon a centralized database that you can query in dozens of ways, test easily, and scale trivially in favour of a clunky, write-only, low-throughput, hard-to-debug privacy nightmare that is any public blockchain. And how do you expect to gain a substantial userbase with an ecosystem where the Windows client for the second most popular blockchain (Ethereum) has been so buggy that I, a software engineer, couldn’t get it to work and sync the whole chain? And why would building a website on top of that clunky, user-unfriendly database have any benefit over a centralized competitor?

Do we all believe that the huge datacenters with guaranteed power backups, regular hardware and network checks, regular backups, and overall guaranteed redundancy will somehow be beaten by a few thousand machines hosting software whose sole purpose is guaranteeing integrity? Bitcoin has 10 thousand nodes. Ethereum has 22 thousand nodes. And while these nodes are probably very well GPU-equipped, they aren’t supercomputers. Amazon’s AWS has a million servers. How’s that for a comparison? And why would anyone take seriously 22 thousand non-servers, or even 220 thousand, if we believe in some inevitable growth.

Don’t get me wrong, the technology is really cool. The way tamper-evident data structures (hash chains) were combined with a consensus algorithm, an overlay network and a financial incentive is really awesome. When you add a distributed execution environment, it gets even cooler. But is it suitable for literally everything? I fail to see how.
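And to be fair, the tamper-evident core really is elegant and simple. A toy hash chain, my own illustration rather than any particular blockchain’s format, fits in a few lines of Python:

    import hashlib
    import json
    import time

    def block_hash(block):
        """Hash every field in the block except its own stored hash."""
        body = {k: v for k, v in block.items() if k != "hash"}
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def make_block(data, prev_hash):
        """Each block commits to the hash of its predecessor."""
        block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
        block["hash"] = block_hash(block)
        return block

    def verify(chain):
        """Recompute every hash and check that each block points at its predecessor."""
        return all(
            block_hash(b) == b["hash"]
            and (i == 0 or b["prev_hash"] == chain[i - 1]["hash"])
            for i, b in enumerate(chain))

    chain = [make_block("genesis", prev_hash="0" * 64)]
    chain.append(make_block("some transaction", prev_hash=chain[-1]["hash"]))

    print(verify(chain))            # True
    chain[0]["data"] = "rewritten"  # tamper with history...
    print(verify(chain))            # ...and the whole chain stops verifying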

I’m sure I’m missing something. The fact that many of those whitepapers sound increasingly like Greek to me might hint that I’m just a dumb developer and those enlightened people are really onto something huge. I guess time will tell.

But I happen to be living in a country that saw a transition to capitalism in the years of my childhood. And there were a lot of scams and ponzi schemes that people believed in. Because they didn’t know how capitalism works, how the market works. I’m seeing some similarities – we have no idea how the digital realm really works, and so a lot of scams are bound to appear, until we as a society learn the basics.

Until then – enjoy your ICO, enjoy your tokens, enjoy your big-player competitor with practically the same business model, only on a worse database.

And I hope that after the smoke of hype and fraud clears, we’ll be able to enjoy the true benefits of the blockchain innovation.

The post Blockchain? It’s All Greek To Me… appeared first on Bozho's tech blog.