
Pirate Site Visits Lead to More Malware, Research Finds

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-site-visits-lead-to-more-malware-research-finds-180318/

In recent years copyright holders have been rather concerned with the health of pirates’ computers.

They regularly highlight reports which show that pirate sites are rife with malware and even alert potential pirates-to-be about the dangers of these sites.

The recent “Meet The Malwares” campaign, targeted at small children, went as far as claiming that pirate sites are the number one way through which this malicious software is spread. We debunked this claim, but it’s hard to deny that pirate sites have their downsides.

While the operators of pirate sites are usually unaware, advertisers and malicious uploaders sometimes use their sites to distribute adware or malware. But does that put people at significant risk? Research from Carnegie Mellon University Professor Rahul Telang provides some further insight.

For a year, Telang observed the browsing and other computer habits of 253 people who took part in the Security Behavior Observatory. The results, published in a paper titled “Does Online Piracy make Computers Insecure?” show that there is a link between pirate site visits and malware.

“We find that more visits to infringing sites does lead to more number of malware files being downloaded on user machines. In particular doubling the amount of time spent on infringing sites cause a 20 percent increase in malware count,” Telang writes.

This effect was only visible for pirate sites, and not for other categories such as banking, gambling, gaming, shopping, social networking, and even adult websites.

Through the Security Behavior Observatory, all files on the respondents’ computers were scanned and checked against reports from Virustotal.com. These scans also flag adware, but even with adware excluded, the results remain intact.

“Even after we classify malware files into adware and remove them from analysis, our results still suggest that there is a 20 percent increase in malware count due to visits to infringing sites. These results are robust to various controls and specifications.”

Interestingly, one would expect that people who frequently visit pirate sites are more likely to have anti-virus software installed. However, this was not the case.

“We also find that users who visit infringing sites do not take any more precautions than other users. In particular, we find no evidence that such users are more likely to install anti-virus software. If anything, we find that infringing users are more risk taking,” the paper reads.

A 20 percent increase in malware sounds dramatic, and while we don’t want to downplay these results or the risks involved, it’s worth highlighting the absolute numbers.

The research estimates that when someone doubles the time spent on pirate sites, this person adds an extra 0.05 malware files per month, with the overall average being 0.24 per month. So, most people encounter no malware in a typical month. This means that pirate sites do pose an increased risk, but it’s not as extreme as sometimes portrayed.

There is also no evidence that malware is predominantly spread through pirate sites. Looking at the total sample, the average number of malware files found on a pirate’s machine is 1.5, compared to 1.4 for those who never visit any pirate sites at all.

While there’s certainly some risk involved, it’s doubtful that the results will deter many people. Previous research revealed that the majority of all pirates are fully aware of the malware risks, but that they continue nonetheless.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Getting Ready for the AWS Quest Finale on Twitch

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/getting-ready-for-the-aws-quest-finale-on-twitch/

Whew! March has been one crazy month for me and it is only half over. After a week with my wife in the Caribbean, we hopped on a non-stop Seattle to Tokyo flight so that I could speak at JAWS Days, Startup Day, and some internal events. We arrived home last Wednesday and I am now sufficiently clear-headed and recovered from jet lag to do something more intellectually demanding than respond to emails. The AWS Blogging Team and the great folks at Lone Shark Games have been working on AWS Quest for quite some time and it has been great to see all of the progress made toward solving the puzzles in order to find the orangeprints that I will use to rebuild Ozz.

The community effort has been impressive! There’s a shared spreadsheet with tabs for puzzles and clues, a busy Slack channel, and a leaderboard, all organized and built by a team that spans the globe.

I’ve been checking out the orangeprints as they are uncovered and have been doing a bit of planning and preparation to make sure that I am ready for the live-streamed rebuild on Twitch later this month. Yesterday I labeled a bunch of containers, one per puzzle, and stocked each one with the parts that I will use to rebuild the corresponding component of Ozz. Fortunately, I have at least (my last count may have skipped a few) 119,807 bricks and other parts at hand so this was easy. Here’s what I have set up so far:

The Twitch session will take place on Tuesday, March 27 at Noon PT. In the meantime, you should check out the #awsquest tweets and see what you can do to help me to rebuild Ozz.


Our Newest AWS Community Heroes (Spring 2018 Edition)

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/aws/our-newest-aws-community-heroes-spring-2018-edition/

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these Heroes share their time and knowledge across social media and in-person events. Heroes also actively help drive content at Meetups, workshops, and conferences.

This March, we have five Heroes that we’re happy to welcome to our network of cloud innovators:

Peter Sbarski

Peter Sbarski is VP of Engineering at A Cloud Guru and the organizer of Serverlessconf, the world’s first conference dedicated entirely to serverless architectures and technologies. His work at A Cloud Guru allows him to work with, talk about, and write about serverless architectures, cloud computing, and AWS. He has written a book called Serverless Architectures on AWS and is currently collaborating on another book called Serverless Design Patterns with Tim Wagner and Yochay Kiriaty.

Peter is always happy to talk about cloud computing and AWS, and can be found at conferences and meetups throughout the year. He helps to organize Serverless Meetups in Melbourne and Sydney in Australia, and is always keen to share his experience working on interesting and innovative cloud projects.

Peter’s passions include serverless technologies, event-driven programming, back end architecture, microservices, and orchestration of systems. Peter holds a PhD in Computer Science from Monash University, Australia and can be followed on Twitter, LinkedIn, Medium, and GitHub.

Michael Wittig

Michael Wittig is co-founder of widdix, a consulting company focused on cloud architecture, DevOps, and software development on AWS. widdix maintains several AWS-related open source projects, most notably a collection of production-ready CloudFormation templates. In 2016, widdix released marbot: a Slack bot that helps DevOps teams detect and resolve incidents on AWS.

In close collaboration with his brother Andreas Wittig, Michael actively creates AWS-related content. Their book Amazon Web Services in Action (Manning) introduces AWS with a strong focus on automation. Andreas and Michael run the blog cloudonaut.io, where they share their knowledge about AWS with the community. The Wittig brothers have also published a number of video courses with O’Reilly, Manning, Pluralsight, and A Cloud Guru. You can also find them speaking at conferences and user groups in Europe. Both brothers co-organize the AWS user group in Stuttgart.

Fernando Hönig

Fernando is an experienced Infrastructure Solutions Leader, holding 5 AWS Certifications, with extensive IT Architecture and Management experience in a variety of market sectors. Working as a Cloud Architect Consultant in the United Kingdom since 2014, Fernando has built an online community for Spanish speakers worldwide.

Fernando founded a LinkedIn Group, a Slack Community, and a YouTube channel, all named “AWS en Español”, and runs a monthly webinar via YouTube streaming where different leaders discuss aspects and challenges of the AWS Cloud.

Over the last 18 months he has been helping to run and coach AWS User Group leaders across LATAM and Spain, with ten new User Groups founded during this time.

Feel free to follow Fernando on Twitter, connect with him on LinkedIn, or join the ever-growing Hispanic Community via Slack, LinkedIn or YouTube.

Anders Bjørnestad

Anders is a consultant and cloud evangelist at Webstep AS in Norway. He finished his degree in Computer Science at the Norwegian Institute of Technology at about the same time the Internet emerged as a public service. Since then he has been an IT consultant and a passionate advocate of knowledge-sharing.

He architected and implemented his first customer solution on AWS back in 2010, and was essential in building Webstep’s core cloud team. Anders applies his broad expert knowledge across all layers of the organizational stack. He engages with developers on technology and architectures, and with top management, where he advises on cloud strategies and new business models.

Anders enjoys helping people increase their understanding of AWS and cloud in general, and holds several AWS certifications. He co-founded and co-organizes the AWS User Groups in the largest cities in Norway (Oslo, Bergen, Trondheim and Stavanger), and also uses any opportunity to engage in events related to AWS and cloud wherever he is.

You can follow him on Twitter or connect with him on LinkedIn.

To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.

‘Dutch Pirate Bay Blocking Case Should Get a Do-Over’

Post Syndicated from Ernesto original https://torrentfreak.com/dutch-pirate-bay-blocking-case-180316/

The Pirate Bay is arguably the most widely blocked website on the Internet. ISPs from all over the world have been ordered by courts to prevent users from accessing the torrent site.

In most countries courts have decided relatively quickly, but not in the Netherlands, where there’s still no final decision after eight years.

A Dutch court first issued an order to block The Pirate Bay in 2012, but this decision was overturned two years later. Anti-piracy group BREIN then took the matter to the Supreme Court, which subsequently referred the case to the EU Court of Justice, seeking further clarification.

After a careful review of the case, the EU Court of Justice decided last year that The Pirate Bay can indeed be blocked.

The top EU court ruled that although The Pirate Bay’s operators don’t share anything themselves, they knowingly provide users with a platform to share copyright-infringing links. This can be seen as “an act of communication” under the EU Copyright Directive.

This put the case back with the Dutch Supreme Court, which now has to decide on the matter.

Today, Advocate General Van Peursem advised the court to throw out the previous court order, and do the case over in a new court.

In his recommendation, Van Peursem cites similar blocking orders from other European countries. He stresses that the rights of copyright holders should be carefully weighed against those of the ISPs and the public in general.

In blocking cases, this usually comes down to copyright protection versus Internet providers’ freedom to carry on business and the right to freedom of information. The Advocate General specifically highlights a recent Premier League case in the UK, where the court ruled that copyright prevails over the other rights.

The ultimate decision, however, depends on the context of the case, Van Peursem notes.

“At most, one can say that if a copyright is infringed, it normally won’t be possible to justify the infringement by invoking the freedom to conduct business or the freedom of information. After all, these freedoms find their limit in what is legally permissible.

“This does not mean that a blockade aimed at protecting the right to property always ‘wins’ over the freedoms of entrepreneurship and information,” he adds.

The Supreme Court previously ruled that the lower court was wrong to find the Pirate Bay blockade ineffective. Taken together, this means that it will be tough for the ISPs to win this case.

If the Supreme Court throws out the previous court order, the case will start over from scratch, but with this new context and the EU Court of Justice ruling as further clarification.

The Advocate General’s advice is not binding, so it’s not yet certain whether there will be a do-over. However, in most cases, the recommendations are followed by the Supreme Court.

The Supreme Court is expected to release its final verdict later this year. For now, the Pirate Bay remains blocked as the result of an interim injunction BREIN obtained last year.

Update: The article was updated to clarify that the existing blocking injunctions remain in place.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Raspbian update: supporting different screen sizes

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/raspbian-update-screen-sizes/

You may have noticed that we released an updated Raspbian software image yesterday. While the main reason for the new image was to provide support for the new Raspberry Pi 3 Model B+, the image also includes, alongside the usual set of bug fixes and minor tweaks, one significant chunk of new functionality that is worth pointing out.

Updating Raspbian on your Raspberry Pi

How to update to the latest version of Raspbian on your Raspberry Pi.


As a software developer, one of the most awkward things to deal with is what is known as platform fragmentation: having to write code that works on all the different devices and configurations people use. In my spare time, I write applications for iOS, and this has become increasingly painful over the last few years. When I wrote my first iPhone application, it only had to work on the original iPhone, but nowadays any iOS application has to work across several models of iPhone and iPad (which all have different processors and screens), and also across the various releases of iOS. And that’s before you start to consider making your code run on Android as well…

Screenshot of clean Raspbian desktop

The good thing about developing for Raspberry Pi is that there is only a relatively small number of different models of Pi hardware. We try our best to make sure that, wherever possible, the Raspberry Pi Desktop software works on every model of Pi ever sold, and we’ve managed to do this for most of the software in the image. The only exceptions are some of the more recent applications like Chromium, which won’t run on the older ARMv6 processors in the Pi 1 and the Pi Zero, and some applications that run very slowly because they need more memory than the older platforms have.

Raspbian with different screen resolutions

But there is one area where we have no control over the hardware, and that is screen resolution. The HDMI port on the Pi supports a wide range of resolutions, and when you include the composite port and display connector as well, people can be using the desktop on a huge number of different screen sizes.

Supporting a range of screen sizes is harder than you might think. One problem is that the Linux desktop environment is made up of a large selection of bits of software from various different developers, and not all of these support resizing. And the bits of software that do support resizing don’t all do it in the same way, so making everything resize at once can be awkward.

This is why one of the first things I did when I first started working on the desktop was to create the Appearance Settings application, in order to bring a lot of the settings for things like font and icon sizes into one place. This saves users from having to tweak several configuration files whenever they want to change something.

Screenshot of appearance settings application in Raspbian

The Appearance Settings application was a good place to start regarding support of different screen sizes. One of the features I originally included was a button to set everything to a default value. This was really a default setting for screens of an average size, and the resulting defaults would not have worked that well on much smaller or much larger screens. Now, there is no longer a single defaults button, but a new Defaults tab with multiple options:

Screenshot of appearance settings application in Raspbian

These three options adjust font size, icon size, and various other settings to values which ought to work well on screens with a high or low resolution. (The For medium screens option has the same effect as the previous defaults button.) The results will not be perfect in all circumstances and for all applications — as mentioned above, there are many different components used to create the desktop, and some of them don’t provide any way of resizing what they draw. But using these options should set the most important parts of the desktop and installed applications, such as icons, fonts, and toolbars, to a suitable size.

Pixel doubling

We’ve added one other option for supporting high resolution screens. At the bottom of the System tab in the Raspberry Pi Configuration application, there is now an option for pixel doubling:

Screenshot of configuration application in Raspbian

We included this option to facilitate the use of the x86 version of Raspbian with ultra-high-resolution screens that have very small pixels, such as Apple’s Retina displays. When running our desktop on one of these, the tininess of the pixels made everything too small for comfortable use.

Enabling pixel doubling simply draws every pixel in the desktop as a 2×2 block of pixels on the screen, making everything exactly twice the size and resulting in a usable desktop on, for example, a MacBook Pro’s Retina display. We’ve included the option on the version of the desktop for the Pi as well, because we know that some people use their Pi with large-screen HDMI TVs.
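
To picture what this does, here’s a minimal, illustrative Python sketch of 2×2 doubling on a tiny framebuffer. It is only a model of the idea; the real desktop does this at the display level, not in Python.

def pixel_double(framebuffer):
    # Duplicate each pixel horizontally, then duplicate each row vertically.
    doubled = []
    for row in framebuffer:
        wide_row = [px for px in row for _ in range(2)]
        doubled.append(wide_row)
        doubled.append(list(wide_row))
    return doubled

print(pixel_double([[1, 2], [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]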

As pixel doubling magnifies everything on the screen by a factor of two, it’s also a useful option for people with visual impairments.

How to update

As mentioned above, neither of these new functionalities is a perfect solution to dealing with different screen sizes, but we hope they will make life slightly easier for you if you’re trying to run the desktop on a small or large screen. The features are included in the new image we have just released to support the Pi 3B+. If you want to add them to your existing image, the standard upgrade from apt will do so. As shown in the video above, you can just open a terminal window and enter the following to update Raspbian:

sudo apt-get update
sudo apt-get dist-upgrade

As always, your feedback, either in comments here or on the forums, is very welcome.

The post Raspbian update: supporting different screen sizes appeared first on Raspberry Pi.

Dolby Labs Sues Adobe For Copyright Infringement

Post Syndicated from Andy original https://torrentfreak.com/dolby-labs-sues-adobe-for-copyright-infringement-180314/

Adobe has some of the most recognized software products on the market today, including Photoshop which has become a household name.

While the company has been subjected to more than its fair share of piracy over the years, a new lawsuit accuses the software giant itself of infringement.

Dolby Laboratories is best known as a company specializing in noise reduction and audio encoding and compression technologies. Its reversed double ‘D’ logo is widely recognized after appearing on millions of home hi-fi systems and film end credits.

In a complaint filed this week at a federal court in California, Dolby Labs alleges that after supplying its products to Adobe for 15 years, the latter has failed to live up to its licensing obligations and is guilty of copyright infringement and breach of contract.

“Between 2002 and 2017, Adobe designed and sold its audio-video content creation and editing software with Dolby’s industry-leading audio processing technologies,” Dolby’s complaint reads.

“The basic terms of Adobe’s licenses for products containing Dolby technologies are clear; when Adobe granted its customer a license to any Adobe product that contained Dolby technology, Adobe was contractually obligated to report the sale to Dolby and pay the agreed-upon royalty.”

Dolby says that Adobe promised it wouldn’t sell any of its products (such as Audition, After Effects, Encore, Lightroom, and Premiere Pro) outside the scope of its licenses with Dolby. Those licenses included clauses which grant Dolby the right to inspect Adobe’s records through a third-party audit, in order to verify the accuracy of Adobe’s sales reporting and associated payment of royalties.

Over the past several years, however, things didn’t go to plan. The lawsuit claims that when Dolby tried to audit Adobe’s books, Adobe refused to “engage in even basic auditing and information sharing practices,” a rather ironic situation given the demands that Adobe places on its own licensees.

Dolby’s assessment is that Adobe spent years withholding this information in an effort to hide the full scale of its non-compliance.

“The limited information that Dolby has reviewed to-date demonstrates that Adobe included Dolby technologies in numerous Adobe software products and collections of products, but refused to report each sale or pay the agreed-upon royalties owed to Dolby,” the lawsuit claims.

Due to the lack of information in Dolby’s possession, the company says it cannot determine the full scope of Adobe’s infringement. However, Dolby accuses Adobe of multiple breaches including bundling licensed products together but only reporting one sale, selling multiple products to one customer but only paying a single license, failing to pay licenses on product upgrades, and even selling products containing Dolby technology without paying a license at all.

Dolby entered into licensing agreements with Adobe in 2003, 2012 and 2013, with each agreement detailing payment of royalties by Adobe to Dolby for each product licensed to Adobe’s customers containing Dolby technology. When the relationship between the companies first began, Adobe sold either a physical product in “shrink-wrap” form or downloads from its website, a position which made sales reporting straightforward.

In late 2011, however, Adobe began its transition to offering its Creative Cloud (SaaS model) under which customers purchase a subscription to access Adobe software, some of which contains Dolby technology. Depending on how much the customer pays, users can select up to thirty Adobe products. At this point, things appear to have become much more complex.

On January 15, 2015, Dolby tried to inspect Adobe’s books for the period 2012-2014 via a third-party auditing firm. But, according to Dolby, over the next three years “Adobe employed various tactics to frustrate Dolby’s right to audit Adobe’s inclusion of Dolby Technologies in Adobe’s products.”

Dolby points out that under Adobe’s own licensing conditions, businesses must allow Adobe’s auditors to inspect their records on seven days’ notice to confirm they are not in breach of Adobe licensing terms. Any discovered shortfalls in licensing must then be paid for, at a rate higher than the original license. This, Dolby says, shows that Adobe is clearly aware of why and how auditing takes place.

“After more than three years of attempting to audit Adobe’s Sales of products containing Dolby Technologies, Dolby still has not received the information required to complete an audit for the full time period,” Dolby explains.

But during this period, Adobe didn’t stand still. According to Dolby, Adobe tried to obtain new licensing from Dolby at a lower price. Dolby stood its ground and insisted on an audit first but despite an official demand, Adobe didn’t provide the complete set of books and records requested.

Eventually, Dolby concluded that Adobe had “no intention to fully comply with its audit obligations” so called in its lawyers to deal with the matter.

“Adobe’s direct and induced infringements of Dolby Licensing’s copyrights in the Asserted Dolby Works are and have been knowing, deliberate, and willful. By its unauthorized copying, use, and distribution of the Asserted Dolby Works and the Adobe Infringing Products, Adobe has violated Dolby Licensing’s exclusive rights..,” the lawsuit reads.

Noting that Adobe has profited and gained a commercial advantage as a result of its alleged infringement, Dolby demands injunctive relief restraining the company from any further breaches in violation of US copyright law.

“Dolby now brings this action to protect its intellectual property, maintain fairness across its licensing partnerships, and to fund the next generations of technology that empower the creative community which Dolby serves,” the company concludes.

Dolby’s full complaint can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

One LED Matrix Table to rule them all

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/led-matrix-table/

Germany-based Andreas Rottach’s multi-purpose LED table is an impressive build within a gorgeous-looking body. Play games, view (heavily pixelated) images, and become hypnotised by flashing lights once you’ve built your own using his newly released tutorial.

LED-Matrix Table – 300 LEDs – Raspberry Pi – C++ Engine – Custom Controllers

This is a short presentation of my LED-Matrix Table. The table is controlled by a raspberry pi computer that executes a control engine, written in c++. It supports input from keyboards or custom made game controllers. A full list of all features as well as the source code is available on GitHub (https://github.com/rottaca/LEDTableEngine).

Much excitement

Andreas uploaded a video of his LED Matrix Table to YouTube back in February, with the promise of publishing a complete write-up within the coming weeks. And so the members of Pi Towers sat, eagerly waiting and watching. Now the write-up has arrived, to our cheers of acclaim for this beautiful, shiny, flashy, LED-based wonderment.

Build your own LED table

In his GitHub tutorial, Andreas goes through all the stages of building the table, from the necessary components to coding the Raspberry Pi 3 and 3D printing your own controllers.


Find files for the controllers on Thingiverse

Andreas created the table’s impressive light matrix using a strip of 300 LEDs, chained together and connected to the Raspberry Pi via an LED controller.


The LEDs are set out in zigzags
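
A strip like this is addressed as one linear chain, so the engine has to map (x, y) grid coordinates onto strip indices, flipping direction on alternate rows. Here’s a rough Python sketch of that serpentine mapping; the 20×15 dimensions are an assumption for illustration, not taken from Andreas’ build notes.

WIDTH, HEIGHT = 20, 15  # assumed layout for a 300-LED strip

def led_index(x, y):
    # Even rows run left-to-right, odd rows run right-to-left.
    if y % 2 == 0:
        return y * WIDTH + x
    return y * WIDTH + (WIDTH - 1 - x)

assert led_index(0, 0) == 0    # first LED
assert led_index(19, 0) == 19  # end of the first row
assert led_index(19, 1) == 20  # second row starts at the right edge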

For the code, he used several open-source tools, such as SDL for image and audio support, and CMake for building the project software.

Anyone planning to recreate Andreas’ table can compile its engine by downloading the project repository from GitHub. Again, find full instructions for this on his GitHub.


The table boasts multiple cool features, including games and visualisation tools. Using the controllers, you can play simplified versions of Flappy Bird and Minesweeper, or go on a nostalgia trip with Tetris, Pong, and Snake.


There’s also a version of Conway’s Game of Life. Andreas explains: “The lifespan of each cell is color-coded. If the game field gets static, the animation is automatically reset to a new random cell population.”


The table can also display downsampled Bitmap images, or show clear static images such as a chess board, atop which you can place physical game pieces.


Find all the 3D-printable aspects of the LED table on Thingiverse here and here, and the full GitHub tutorial and repository here. If you build your own, or have already dabbled in LED tables and displays, be sure to share your project with us, either in the comments below or via our social media accounts. What other functions would you integrate into this awesome build?

The post One LED Matrix Table to rule them all appeared first on Raspberry Pi.

Voksi ‘Pirates’ New Serious Sam Game With Permission From Developers

Post Syndicated from Andy original https://torrentfreak.com/voksi-pirates-new-serious-sam-game-with-permission-from-developers-180312/

Bulgarian cracker Voksi is unlike many others in his line of work. He makes himself relatively available online, interacting with fans and revealing surprising things about his past.

Only last month he told TF that he is entirely self-taught and has been cracking games since he was 15 years old, just six years ago.

Voksi is probably best known for his hatred of anti-piracy technology Denuvo and to this day is still one of just four groups/people who have managed to crack v4 of the anti-tamper technology. As such, he and his kind are often painted as enemies of the gaming industry but that doesn’t represent the full picture.

In discussion with TF over the weekend, Voksi told us that he’s a huge fan of the Serious Sam franchise so when he found out about the latest title – Serious Sam’s Bogus Detour (SSBD) – he wanted to play it – badly. That led to a remarkable series of events.

“One month before the game’s official release I got into the closed beta, thanks to a friend of mine, who invited me in. I introduced myself to the developers [Crackshell]. I told them what I do for a living, but also assured them that I didn’t have any malicious intents towards the game. They were very cool about it, even surprisingly cool,” Voksi informs TF.

The game eventually hit the market (without Voksi targeting it, of course) with some interesting additions. As shown in the screenshot taken from the game and embedded below, Voksi was listed as a tester for the game.

An unusual addition to the game credits….

Perhaps even more impressively, official Steam screenshots here show Voksi as a player in the game. It’s not exactly what one might expect for someone in his position but from there, the excitement began to fade. Despite a 9/10 rating on Steam, the books didn’t balance.

“The game was released officially on 20 of June, 2017. Months passed. We all hoped it’d be a success, but sadly that was not the case,” Voksi explains.

“Even with all the official marketing done by Devolver Digital, no one batted an eye and really gave it a chance. In December 2017, I found out how bad the sales really were, which even didn’t cover the expenses for the making game, let alone profit.”

Voksi was really disappointed that things hadn’t gone to plan so he contacted the developers with an idea – why didn’t he get involved to try and drum up some support from an entirely unconventional angle? How about giving a special edition of the game away for free while calling on ‘pirates’ to chip in with whatever they could afford?

“Last week I contacted the main dev of SSBD over Steam and proposed what I can do to help boost the game. He immediately agreed,” Voksi says.

“The plan was to release a build of the game that was playable from start to finish, playable in co-op with up to 4 players, not to miss anything important gameplay wise and add a little message in the bottom corner, which is visible at all times, telling you: “We are small indie studio. If you liked the game, please consider buying it. Thank you and enjoy the game!”

Message at the bottom of the screen

But Voksi’s marketing plan didn’t stop there. This special build of the game is also tied to a unique giveaway challenge with several prizes. It’s underway on Voksi’s REVOLT forum and is intended to encourage more people to play the game and share the word among family, friends and whoever else can support the developers.

Importantly, Voksi isn’t getting paid to do any of this, he just wants to help the developers and support a game he feels deserves a lot more attention. For those interested in taking it for a spin, the download links are available here in the official thread.

The ‘pirate’ build – Serious.Sam.Bogus.Detour.B126.RIP-Voksi – is slightly less polished than those available officially but it’s hoped that people will offer their support on Steam and GOG if they like the game.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

What John Oliver gets wrong about Bitcoin

Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/03/what-john-oliver-gets-wrong-about.html

John Oliver covered bitcoin/cryptocurrencies last night. I thought I’d describe a bunch of things he gets wrong.

How Bitcoin works

Nowhere in the show does it describe what Bitcoin is and how it works.
Discussions should always start with Satoshi Nakamoto’s original paper. The thing Satoshi points out is that there is an important cost to normal transactions, namely, the entire legal system designed to protect you against fraud, such as the way you can reverse the transactions on your credit card if it gets stolen. The point of Bitcoin is that there is no way to reverse a charge. A transaction is done via cryptography: to transfer money to me, you sign it over from your secret key to my public key, handing ownership to me with no third party involved that can reverse the transaction, and essentially no overhead.
All the rest of the stuff, like the decentralized blockchain and mining, is all about making that work.
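
To make that concrete, here is a toy sketch of the core idea using the third-party Python ecdsa package (an illustration only, not Bitcoin’s actual transaction format): only the holder of the secret key can authorize a transfer, and anyone can verify it without a middleman.

import ecdsa

# A "transfer" is just a message signed with the sender's secret key.
secret_key = ecdsa.SigningKey.generate()
public_key = secret_key.get_verifying_key()

transfer = b"pay 1 coin to key ABC"
signature = secret_key.sign(transfer)

print(public_key.verify(signature, transfer))  # True: anyone can check it

try:
    public_key.verify(signature, b"pay 1 coin to key EVE")
except ecdsa.BadSignatureError:
    print("altered transfer rejected")  # forgery fails without the secret key
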
Bitcoin crazies forget about the original genesis of Bitcoin. For example, they talk about adding features to stop fraud, reversing transactions, and having a central authority that manages that. This misses the point, because the existing electronic banking system already does that, and does a better job at it than cryptocurrencies ever can. If you want to mock cryptocurrencies, talk about the “DAO”, which did exactly that — and collapsed in a big fraudulent scheme where insiders made money and outsiders didn’t.
Sticking to Satoshi’s original ideas is a lot better than trying to repeat how the crazy fringe activists define Bitcoin.

How does any money have value?

Oliver’s answer is currencies have value because people agree that they have value, like how they agree a Beanie Baby is worth $15,000.
This is wrong. A better question to ask is why the value of money changes. The dollar has been losing roughly 2% of its value each year for decades. This is called “inflation”: as the dollar loses value, it takes more dollars to buy things, which means the price of things (in dollars) goes up, and employers have to pay us more dollars so that we can buy the same amount of things.
The reason the value of the dollar changes is largely because the Federal Reserve manages the supply of dollars, using the same law of Supply and Demand. As you know, if a supply decreases (like oil), then the price goes up, or if the supply of something increases, the price goes down. The Fed manages money the same way: when prices rise (the dollar is worth less), the Fed reduces the supply of dollars, causing it to be worth more. Conversely, if prices fall (or don’t rise fast enough), the Fed increases supply, so that the dollar is worth less.
The reason money follows the law of Supply and Demand is that people use money: they consume it like they do other goods and services, like gasoline, tax preparation, food, dance lessons, and so forth. It’s not like a fine art painting, a stamp collection or a Beanie Baby — money is a product. It’s just that people have a hard time thinking of it as a consumer product since, in their experience, money is what they use to buy consumer products. But it’s a symmetric operation: when you buy gasoline with dollars, you are actually selling dollars in exchange for gasoline. That you call one side of this transaction “money” and the other “goods” is purely arbitrary; you could just as well call gasoline the money, and dollars the good being bought and sold for gasoline.
The reason dollars serve as money is that trying to use gasoline as money is a pain in the neck. Storing it and exchanging it is difficult. Goods like this do become money, as famously happens when prisons use cigarettes as a medium of exchange, even for non-smokers, but it has to be a good that is fungible, storable, and easily exchanged. Dollars are the most fungible, the most storable, and the most easily exchanged, so they have the most value as “money”. Sure, the mechanic can fix the farmer’s car for three chickens instead, but most of the time, both parties in the transaction would rather exchange the same value using dollars than chickens.
So the value of dollars is not like the value of Beanie Babies, which people might buy for $15,000 and whose price changes purely on the whims of investors. Instead, a dollar is like gasoline, which obeys the law of Supply and Demand.
This brings us back to the question of where Bitcoin gets its value. While Bitcoin is indeed used like dollars to buy things, that’s only a tiny use of the currency, so its value isn’t determined by Supply and Demand. Instead, the value of Bitcoin is a lot like that of Beanie Babies, obeying the laws of investments. So in this respect, Oliver is right about where the value of Bitcoin comes from, but wrong about where the value of dollars comes from.

Why the Bitcoin conference didn’t take Bitcoin

John Oliver points out the irony of a Bitcoin conference that stopped accepting payments in Bitcoin for tickets.
The biggest reason for this is because Bitcoin has become so popular that transaction fees have gone up. Instead of being proof of failure, it’s proof of popularity. What John Oliver is saying is the old joke that nobody goes to that popular restaurant anymore because it’s too crowded and you can’t get a reservation.
Moreover, the point of Bitcoin is not to replace everyday currencies for everyday transactions. If you read Satoshi Nakamoto’s whitepaper, its only goal is to replace certain types of transactions, like purely electronic transactions where electronic goods and services are being exchanged. Where real-life goods/services are being exchanged, existing currencies work just fine. It’s only the crazy activists who claim Bitcoin will eventually replace real world currencies — the saner people see it co-existing with real-world currencies, each with a different value to consumers.

Turning a McNugget back into a chicken

John Oliver uses the metaphor that while you can process a chicken into McNuggets, you can’t reverse the process. It’s a funny metaphor.
But it’s not clear what the heck this metaphor is trying to explain. That’s not a metaphor for the blockchain, but a metaphor for a “cryptographic hash”, where each block is a chicken, and the McNugget is the signature for the block (well, the block plus the signature of the last block, forming a chain).
Even then, that metaphor has problems. The McNugget produced from each chicken must be unique to that chicken for the metaphor to accurately describe a cryptographic hash. You can therefore identify the original chicken simply by looking at the McNugget. A slight change in the original chicken, like losing a feather, results in a completely different McNugget. Thus, nuggets can be used to tell if the original chicken has changed.
This then leads to the key property of the blockchain: it is unalterable. You can’t go back and change any of the blocks of data, because the fingerprints, the nuggets, will also change and break the nugget chain.
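
The property is easy to demonstrate with a few lines of Python. This is a toy hash chain, not Bitcoin’s actual block format, which also includes proof-of-work and transaction trees.

import hashlib

def fingerprint(block, prev_hash):
    # Each fingerprint covers the block's data plus the previous fingerprint.
    return hashlib.sha256(prev_hash + block.encode()).hexdigest().encode()

blocks = ["alice pays bob", "bob pays carol", "carol pays dave"]
prev, hashes = b"genesis", []
for block in blocks:
    prev = fingerprint(block, prev)
    hashes.append(prev)

# Change the first block and every fingerprint after it stops matching.
tampered = fingerprint("alice pays mallory", b"genesis")
print(tampered == hashes[0])  # False: the chain exposes the alteration
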
The point is that John Oliver is laughing at a silly metaphor for the blockchain because he totally misses the point of the metaphor.
Oliver rightly says “don’t worry if you don’t understand it — most people don’t”, but that includes the big companies that John Oliver names. Some companies do get it, and are producing reasonable things (like JP Morgan, by all accounts), but some don’t. IBM and other big consultancies are charging companies millions of dollars to consult with them on blockchain products where nobody involved, the customer or the consultancy, actually understands any of it. That doesn’t stop them from happily charging customers on one side and happily spending money on the other.
Thus, rather than Oliver explaining the problem, he’s just being part of the problem. His explanation of blockchain left you dumber than before.

The Brave ICO

John Oliver mocks the Brave ICO ($35 million in 30 seconds), claiming it’s all driven by YouTube personalities and people who aren’t looking at the fundamentals.
And while it’s true that most ICOs are bunk, the Brave ICO actually had a business model behind it. Brave is a Chrome-like web-browser whose distinguishing feature is that it protects your privacy from advertisers. If you don’t use Brave or a browser with an ad block extension, you have no idea how bad things are for you. However, this presents a problem for websites that fund themselves via advertisements, which is most of them, because visitors no longer see ads. Brave has a fix for this. Most people wouldn’t mind supporting the websites they visit often, like the New York Times. That’s where the Brave ICO “token” comes in: it’s not simply stock in Brave, but a token for micropayments to websites. Users buy tokens, then use them for micropayments to websites like New York Times. The New York Times then sells the tokens back to the market for dollars. The buying and selling of tokens happens without a centralized middleman.
This is still all speculative, of course, and it remains to be seen how successful Brave will be, but it’s a serious effort. It has well-respected VCs behind the company, a well-respected founder (despite the fact he invented JavaScript), and well-respected employees. It’s not a scam, it’s a legitimate venture.

How do you make money from Bitcoin?

The last part of the show is dedicated to describing all the scams out there, advising people to be careful and to be “responsible”. This is garbage.
It’s like my simple two-step process to making lots of money via Bitcoin: (1) buy when the price is low, and (2) sell when the price is high. My advice is correct, of course, but useless. Same as “be careful” and “invest responsibly”.
The truth about investing in cryptocurrencies is “don’t”. The only responsible way to invest is to buy low-overhead market index funds and hold for retirement. No, you won’t get super rich doing this, but anything other than this is irresponsible gambling.
It’s a hard lesson to learn, because everyone is telling you the opposite. The entire CNBC channel is devoted to day traders, who buy and sell stocks at a high rate based on the same principle as a ponzi scheme, basing their judgment not on the fundamentals (like long-term dividends) but on the animal spirits of whatever stock is hot or cold at the moment. This is the same reason people buy or sell Bitcoin: not because they can describe its fundamental value, but because they believe in a bigger fool down the road who will buy it for even more.
For things like Bitcoin, the trick to making money is to have bought it over 7 years ago when it was essentially worthless, except to nerds who were into that sort of thing. It’s the same trick to making a lot of money in Magic: The Gathering trading cards, which nerds bought decades ago and which are worth a ton of money now. Or, to have bought Apple stock back in 2009 when the iPhone was new, when nerds could understand the potential of real Internet access and apps in a way that Wall Street could not.
That was my strategy: be a nerd, who gets into things. I’ve made a good amount of money on all these things because as a nerd, I was into Magic: The Gathering, Bitcoin, and the iPhone before anybody else was, and bought in at the point where these things were essentially valueless.
At this point with cryptocurrencies, with the non-nerds now flooding the market, there’s little chance of making it rich. The lottery is probably a better bet. Instead, if you want to make money, become a nerd, obsess about a thing, understand it when it’s new, and cash out once the rest of the market figures it out. That might be Brave, for example, but buy into it because you’ve spent the last year studying the browser advertisement ecosystem, the market’s willingness to pay for content, and how their Basic Attention Token delivers value to websites — not because you want in on the ICO craze.

Conclusion

John Oliver spends 25 minutes explaining Bitcoin, cryptocurrencies, and the blockchain to you. Sure, it’s funny, but it leaves you worse off than when it started. The show admits it “simplifies” the explanation, but they simplified it so much that they removed all useful information.

Camcording Piracy is Dropping, But Not In Russia

Post Syndicated from Ernesto original https://torrentfreak.com/camcording-piracy-is-dropping-but-not-in-russia-180311/

The movie industry sees movies that are illegally recorded in theaters as one of the biggest piracy threats worldwide.

To combat this, audio and video watermarking tools are used to detect pirates and their favorite locations. In addition, night-vision goggles and other spy tech are employed to monitor moviegoers during high profile film premieres.

Despite these efforts, so-called ‘cam’ releases of hundreds of films still end up on pirate sites.

In fact, the majority of all new pirated movies that appear online can be traced to a digital recording in a movie theater. This can be the movie itself, the audio, or both. The good news for the movie industry is that the total number seems to be dropping somewhat.

According to statistics gathered by the MPAA, 447 illegal recordings of its members’ movies were detected in 2017. This is down 11% compared to the year before, when 503 titles were recorded. This suggests that enforcement actions and preventive measures are paying off. However, this is not visible everywhere.

This week Kevin Rosenbaum of the International Intellectual Property Alliance (IIPA), which represents various industry groups including the MPAA, informed the US International Trade Commission that camcording piracy is on the rise in Russia.

In his oral testimony, Rosenbaum signaled three key copyright issues in Russia that deserve attention from the US Government.

“First is to dramatically improve enforcement against online piracy, particularly piracy sites and services directed to users outside of Russia,” Rosenbaum said.

In addition, the country has to address the problems with the Russian collecting societies so that music licensing is handled effectively. These currently lack transparency and good governance, IIPA noted.

The third issue that needs attention is camcording piracy. According to IIPA’s statement, there has been a dramatic increase in illegally recorded movies over the past several years.

“Russia must address the problem of camcording motion pictures, which has risen dramatically over the past three years (200% since 2015) and fuels online piracy,” Rosenbaum noted.

In 2015 the movie industry traced 26 camcorded copies to Russia and by last year this number had increased to 78. These releases are linked to movie theaters around the country, from Moscow, Kazan, Tatarstan, St. Petersburg, all the way up to Siberia.

The Russian camcording piracy problem was also highlighted in IIPA’s recent Special 301 submission to the US Trade Representative.

“Russia remains the home to some of the world’s most prolific criminal release groups of motion pictures,” IIPA wrote last month. “The illicit camcords that are sourced from Russia are only of fair quality, but they remain in high demand by international criminal syndicates.”

With help from the Russian Anti-Piracy Organization, over a dozen cammers were caught last year. In addition, four criminal cases were launched.

IIPA hopes that these will result in convictions, to create a deterrent effect. In addition, the group highlights that Russia could strengthen its laws, perhaps with a little push from the US.

A copy of Kevin Rosenbaum’s statement before the United States International Trade Commission is available here (pdf). In addition to Russia, it also highlights issues in other countries.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Spanish Netflix Competitor Filmin Partnered With Leading Pirate Site

Post Syndicated from Ernesto original https://torrentfreak.com/spanish-netflix-competitor-partnered-leading-pirate-site-180310/

In 2011 Hollywood’s MPAA highlighted SeriesYonkis as one of the most prolific pirate sites on the Internet.

“With a worldwide Alexa rank of 855, Seriesyonkis.com is one the most visited websites in the world for locating and streaming unauthorized copies of motion picture and television content,” Hollywood’s industry group informed the US Government.

While the MPAA was calling for tough enforcement actions, film industry partners in Spain came up with a different plan. They signed an unprecedented deal with the pirate site in 2011, hoping to convert its users into paying customers.

The main figures in this unusual episode are Juan Carlos Tous, the founder of the legal streaming platform Filmin, and SeriesYonkis owner Alexis Hoepfner, who operated the pirate site under his company Burn Media.

With help from lawyer Andy Ramos they negotiated a unique deal that would ‘merge’ both businesses. According to local newspaper El Confidencial, which has seen a copy of the agreement, SeriesYonkis’ company would get a 23% stake in Filmin, on the condition that pirate links were replaced with legal ones within a set period.

The entire agreement was kept secret by a confidentiality clause, which worked well until a few days ago.

SeriesYonkis also made two loans of 250,000 euros available, which were convertible into shares. In addition to the above, Filmin also offered compensation for every pirate it converted, up to 10 euros per user that signed up for an annual subscription.

The agreement further stipulated that SeriesYonkis had to apologize for its pirate ways. Point five stressed that SeriesYonkis and other Burn Media sites had to “carry out communication and awareness actions so that the users of the websites understand the need to legally access audiovisual content.”

Interestingly, SeriesYonkis wasn’t planning to go down and let other pirate sites take its traffic. The agreement included a clause that obligated Filmin to spend 25,000 euros to shut down or reduce traffic to other pirate sites.

The episode took place when Spain was about to implement its Sinde law, which would make life hard for local pirate sites in a country that was considered a “safe haven” at the time. However, not everything went according to plan.

The Sinde law didn’t destroy all Spanish pirate sites and six months after signing the agreement, SeriesYonkis stopped deleting pirate links. Even worse, its owner launched several new pirate sites, such as SeriesCoco and SeriesKiwi.

Filmin’s founder was outraged and sent an email demanding answers.

“I would like to hear your opinion on the progress and explanation of your plan with SeriesCoco! I do not understand anything! I thought you were going to decrease, and I see that you are opening portals!! WTF!” Tous wrote.

The deal eventually fell apart. Filmin kept its shares and stopped paying for new referrals. SeriesYonkis’ company Burn Media filed a lawsuit to get back its money, but thus far that hasn’t happened.

According to an insider close to the deal, the idea was brilliant. SeriesYonkis reportedly earned millions of euros at the time, more than Filmin, and used this money to go legal and destroy the competition ahead of a tough new anti-piracy law.

“The pirate not only abandons its weapons, but is integrated into the industry, and uses capital earned from piracy to fight against it,” a source told El Confidencial.

“It was a winning deal for everyone,” another source added, regretting that it didn’t work out. “It was a very bold agreement, something unusual in this sector, that would have changed the scenario if it had worked.”

Today, roughly seven years after the agreement was set into motion, Filmin is one of the larger streaming platforms in Spain. SeriesYonkis is also still around, but was sold by Hoepfner in 2016 and no longer links to pirated content.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Serverless Dynamic Web Pages in AWS: Provisioned with CloudFormation

Post Syndicated from AWS Admin original https://aws.amazon.com/blogs/architecture/serverless-dynamic-web-pages-in-aws-provisioned-with-cloudformation/

***This blog is authored by Mike Okner of Monsanto, an AWS customer. It originally appeared on the Monsanto company blog. Minor edits were made to the original post.***

Recently, I was looking to create a status page app to monitor a few important internal services. I wanted this app to be as lightweight, reliable, and hassle-free as possible, so using a “serverless” architecture that doesn’t require any patching or other maintenance was quite appealing.

I also don’t deploy anything in a production AWS environment outside of some sort of template (usually CloudFormation) as a rule. I don’t want to have to come back to something I created ad hoc in the console after 6 months and try to recall exactly how I architected all of the resources. I’ll inevitably forget something and create more problems before solving the original one. So building the status page in a template was a requirement.

The Design
I settled on a design using two Lambda functions, both written in Python 3.6.

The first Lambda function makes requests out to a list of important services and writes their current status to a DynamoDB table. This function is executed once per minute via a CloudWatch Event Rule.
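
A checker of this kind might look something like the following Python sketch. The service list, table attributes, and URLs here are hypothetical placeholders, not code from the original post.

import os
import time
import urllib.request

import boto3

SERVICES = {"wiki": "https://wiki.example.internal/health"}  # hypothetical

table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])

def handler(event, context):
    # Probe each service and record its HTTP status in DynamoDB.
    for name, url in SERVICES.items():
        try:
            status = urllib.request.urlopen(url, timeout=5).getcode()
        except Exception:
            status = 0  # unreachable
        table.put_item(Item={"service": name,
                             "status": status,
                             "checked_at": int(time.time())})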

The second Lambda function reads each service’s status & uptime information from DynamoDB and renders a Jinja template. This function is behind an API Gateway that has been configured to return text/html instead of its default application/json Content-Type.
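
The rendering side could be sketched roughly as below; again, the template and attribute names are hypothetical, and the HTML string it returns is what the API Gateway hands back as text/html.

import os

import boto3
from jinja2 import Template

PAGE = Template("""<html><body>
{% for svc in services %}<p>{{ svc.service }}: {{ svc.status }}</p>{% endfor %}
</body></html>""")  # hypothetical template

table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])

def handler(event, context):
    # Read every service's latest status and render it into the page.
    items = table.scan()["Items"]
    return PAGE.render(services=items)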

The CloudFormation Template
AWS provides a Serverless Application Model template transformer to streamline the templating of Lambda + API Gateway designs, but it assumes (like everything else about the API Gateway) that you’re actually serving an API that returns JSON content. So, unfortunately, it won’t work for this use-case because we want to return HTML content. Instead, we’ll have to enumerate every resource like usual.

The Skeleton
We’ll be using YAML for the template in this example. I find it easier to read than JSON, but you can easily convert between the two with a converter if you disagree.

AWSTemplateFormatVersion: '2010-09-09'
Description: Serverless status page app
Resources:
  # [...Resources]

The Status-Checker Lambda Resource
This one is triggered on a schedule by CloudWatch, and looks like:

# Status Checker Lambda
CheckerLambda:
  Type: AWS::Lambda::Function
  Properties:
    Code: ./lambda.zip
    Environment:
      Variables:
        TABLE_NAME: !Ref DynamoTable
    Handler: checker.handler
    Role:
      Fn::GetAtt:
      - CheckerLambdaRole
      - Arn
    Runtime: python3.6
    Timeout: 45
CheckerLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    ManagedPolicyArns:
    - arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
    - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
      - Action:
        - sts:AssumeRole
        Effect: Allow
        Principal:
          Service:
          - lambda.amazonaws.com
CheckerLambdaTimer:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: rate(1 minute)
    Targets:
    - Id: CheckerLambdaTimerLambdaTarget
      Arn:
        Fn::GetAtt:
        - CheckerLambda
        - Arn
CheckerLambdaTimerPermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:invokeFunction
    FunctionName: !Ref CheckerLambda
    SourceArn:
      Fn::GetAtt:
      - CheckerLambdaTimer
      - Arn
    Principal: events.amazonaws.com

Let’s break that down a bit.

The CheckerLambda is the actual Lambda function. The Code section is a local path to a ZIP file containing the code and its dependencies. I’m using CloudFormation’s packaging feature to automatically push the deployable to S3.

The CheckerLambdaRole is the IAM role the Lambda will assume which grants it access to DynamoDB in addition to the usual Lambda logging permissions.

The CheckerLambdaTimer is the CloudWatch Events Rule that triggers the checker to run once per minute.

The CheckerLambdaTimerPermission grants CloudWatch the ability to invoke the checker Lambda function on its interval.

The Web Page Gateway
The API Gateway handles incoming requests for the web page, invokes the Lambda, and then returns the Lambda’s results as HTML content. Its template looks like:

# API Gateway for Web Page Lambda
PageGateway:
  Type: AWS::ApiGateway::RestApi
  Properties:
    Name: Service Checker Gateway
PageResource:
  Type: AWS::ApiGateway::Resource
  Properties:
    RestApiId: !Ref PageGateway
    ParentId:
      Fn::GetAtt:
        - PageGateway
        - RootResourceId
    PathPart: page
PageGatewayMethod:
  Type: AWS::ApiGateway::Method
  Properties:
    AuthorizationType: NONE
    HttpMethod: GET
    Integration:
      Type: AWS
      IntegrationHttpMethod: POST
      Uri:
        Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${WebRenderLambda.Arn}/invocations
      RequestTemplates:
        application/json: |
          {
              "method": "$context.httpMethod",
              "body" : $input.json('$'),
              "headers": {
                  #foreach($param in $input.params().header.keySet())
                  "$param": "$util.escapeJavaScript($input.params().header.get($param))" #if($foreach.hasNext),#end
                  #end
              }
          }
      IntegrationResponses:
        - StatusCode: 200
          ResponseParameters:
            method.response.header.Content-Type: "'text/html'"
          ResponseTemplates:
            text/html: "$input.path('$')"
    ResourceId: !Ref PageResource
    RestApiId: !Ref PageGateway
    MethodResponses:
      - StatusCode: 200
        ResponseParameters:
          method.response.header.Content-Type: true
PageGatewayProdStage:
  Type: AWS::ApiGateway::Stage
  Properties:
    DeploymentId: !Ref PageGatewayDeployment
    RestApiId: !Ref PageGateway
    StageName: Prod
PageGatewayDeployment:
  Type: AWS::ApiGateway::Deployment
  DependsOn: PageGatewayMethod
  Properties:
    RestApiId: !Ref PageGateway
    Description: PageGateway deployment
    StageName: Stage

There’s a lot going on here, but the real meat is in the PageGatewayMethod section. There are a couple of properties that deviate from the defaults, which is why we couldn’t use the SAM transformer.

First, we’re passing request headers through to the Lambda in the RequestTemplates section. I’m doing this so I can validate incoming auth headers. The API Gateway can do some types of auth, but I found it easier to check auth myself in the Lambda function since the Gateway is designed to handle API calls and not browser requests.
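
As a hypothetical illustration (the header name and token source are my assumptions, not from the post), the check inside the Lambda could be as simple as:

import os

def is_authorized(event):
    # 'headers' is populated by the RequestTemplates mapping shown above
    headers = event.get('headers', {})
    return headers.get('X-Auth-Token') == os.environ.get('AUTH_TOKEN')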

Next, note that in the IntegrationResponses section we’re defining the Content-Type header to be 'text/html' (with single quotes) and defining the ResponseTemplate to be $input.path('$'). This is what makes the request render as an HTML page in your browser instead of just raw text.

Due to the StageName and PathPart values in the other sections, your actual page will be accessible at https://someId.execute-api.region.amazonaws.com/Prod/page. I have the page behind an existing reverse-proxy and give it a saner URL for end-users. The reverse proxy also attaches the auth header I mentioned above. If that header isn’t present, the Lambda will render an error page instead so the proxy can’t be bypassed.

The Web Page Rendering Lambda
This Lambda is invoked by calls to the API Gateway and looks like:

# Web Page Lambda
WebRenderLambda:
  Type: AWS::Lambda::Function
  Properties:
    Code: ./lambda.zip
    Environment:
      Variables:
        TABLE_NAME: !Ref DynamoTable
    Handler: web.handler
    Role:
      Fn::GetAtt:
        - WebRenderLambdaRole
        - Arn
    Runtime: python3.6
    Timeout: 30
WebRenderLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Action:
            - sts:AssumeRole
          Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
WebRenderLambdaGatewayPermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref WebRenderLambda
    Action: lambda:invokeFunction
    Principal: apigateway.amazonaws.com
    SourceArn:
      Fn::Sub:
        - arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${__ApiId__}/*/*/*
        - __ApiId__: !Ref PageGateway

The WebRenderLambda and WebRenderLambdaRole should look familiar.

The WebRenderLambdaGatewayPermission is similar to the Status Checker’s CloudWatch permission, only this time it allows the API Gateway to invoke this Lambda.

The DynamoDB Table
This one is straightforward.

# DynamoDB table
DynamoTable:
  Type: AWS::DynamoDB::Table
  Properties:
    AttributeDefinitions:
      - AttributeName: name
        AttributeType: S
    ProvisionedThroughput:
      WriteCapacityUnits: 1
      ReadCapacityUnits: 1
    TableName: status-page-checker-results
    KeySchema:
      - KeyType: HASH
        AttributeName: name

The Deployment
We’ve made it this far defining every resource in a template that we can check in to version control, so we might as well script the deployment as well rather than manually manage the CloudFormation Stack via the AWS web console.

Since I’m using the packaging feature, I first run:

$ aws cloudformation package \
    --template-file template.yaml \
    --s3-bucket <some-bucket-name> \
    --output-template-file template-packaged.yaml
Uploading to 34cd6e82c5e8205f9b35e71afd9e1548  1922559 / 1922559.0  (100.00%)
Successfully packaged artifacts and wrote output template to file template-packaged.yaml.

Then to deploy the template (whether new or modified), I run:

$ aws cloudformation deploy \
    --region '<aws-region>' \
    --template-file template-packaged.yaml \
    --stack-name '<some-name>' \
    --capabilities CAPABILITY_IAM
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - <some-name>

And that’s it! You’ve just created a dynamic web page that will never require you to SSH anywhere, patch a server, recover from a disaster after Amazon terminates your unhealthy EC2 instance, or deal with any number of other pitfalls that are now the problem of some ops person at AWS. And you can reproduce deployments and make changes with confidence because everything is defined in the template and can be tracked in version control.

U.S. Border Seizures of DMCA Circumvention Devices Surges

Post Syndicated from Ernesto original https://torrentfreak.com/u-s-border-seizures-of-dmca-circumvention-devices-surges-180309/

In the United States, citizens are generally prohibited from tampering with DRM and other technological protection measures.

This means that Blu-ray rippers are not allowed, nor are mod chips for gaming consoles, and some pirate streaming boxes could fall into this category as well.

Despite possible sanctions, there are plenty of manufacturers who ship these devices to the US, often to individual consumers. To arrive at their destination, however, they first have to pass through border control.

Not all make it to their final destination. A new report released by Homeland Security shows that the number of “intellectual property” related seizures increased by 8%, from 31,560 in 2016 to 34,143 a year later.

The vast majority of these seized items are traditional counterfeit goods. This includes fake brand clothing, shoes, replica watches, toys, as well as consumer electronics.

What caught our eye, however, is a sharp increase in “circumvention devices” that were found to violate the DMCA. Last year, the number of these items seized by U.S. Customs and Border Protection increased by 324%.

“CBP seized 297 shipments of circumvention devices for violations of the Digital Millennium Copyright Act (DMCA), a 324 percent increase from 70 such seizures in FY 2016,” the report reads.

While the relative increase is quite dramatic, the absolute numbers are perhaps not as impressive, with less than one seized device per day. The report gives no explanation for the surge, nor is there an estimate of how many devices slip through.

What we did notice is that the International Intellectual Property Alliance (IIPA) recently framed streaming boxes as possible circumvention tools. The strong enforcement focus of rightsholders on these devices may have been communicated to border patrols as well.

When we previously reached out to Customs and Border Protection (CBP) to find out more about what type of circumvention devices are seized under the DMCA, a spokesperson provided us with the following definition.

“[P]roducts, devices, components, or parts thereof that are primarily designed or produced for the purpose of circumventing protection afforded by a technological measure that effectively protects a right of a copyright owner, and have only limited commercially significant purposes or uses other than to circumvent such protection measures.”

TorrentFreak reached out to CBP again this week to ask if streaming boxes are seen as circumvention devices, but at the time of writing, we have yet to receive a response.

In a press release commenting on the news, CBP Acting Commissioner Kevin McAleenan said that his organization is happy with last year’s results.

“The theft of intellectual property and trade in counterfeit and pirated goods causes harm to an innovation-based economy by threatening the competitiveness of businesses and the livelihoods of workers,” McAleenan said.

“Another record-breaking year of IPR seizures highlights the vigilance of CBP and ICE personnel in preventing counterfeit goods from entering our stream of commerce and their dedication to protecting the American people,” he added.

Coding is for girls

Post Syndicated from magda original https://www.raspberrypi.org/blog/coding-is-for-girls/

Less than four years ago, Magda Jadach was convinced that programming wasn’t for girls. On International Women’s Day, she tells us how she discovered that it definitely is, and how she embarked on the new career that has brought her to Raspberry Pi as a software developer.

“Coding is for boys”, “in order to be a developer you have to be some kind of super-human”, and “it’s too late to learn how to code” – none of these three things is true, and I am going to prove that to you in this post. By doing this I hope to help some people to get involved in the tech industry and digital making. Programming is for anyone who loves to create and loves to improve themselves.

In the summer of 2014, I started the journey towards learning how to code. I attended my first coding workshop at the recommendation of my boyfriend, who had constantly told me about the skill and how great it was to learn. I was convinced that, at 28 years old, I was already too old to learn. I didn’t have a technical background, I was under the impression that “coding is for boys”, and I lacked the superpowers I was sure I needed. I decided to go to the workshop only to prove him wrong.

Later on, I realised that coding is a skill like any other. You can compare it to learning any language: there’s grammar, vocabulary, and other rules to acquire.

Alien message in console

To my surprise, the workshop was completely inspiring. Within six hours I was able to create my first web page. It was a really simple page with a few cats, some colours, and ‘Hello world’ text. This was a few years ago, but I still remember when I first clicked “view source” to inspect the page. It looked like some strange alien message, as if I’d somehow broken the computer.

I wanted to learn more, but with so many options, I found myself a little overwhelmed. I’d never taught myself any technical skill before, and there was a lot of confusing jargon and new terms to get used to. What was HTML? CSS and JavaScript? What were databases, and how could I connect together all the dots and choose what I wanted to learn? Luckily I had support and was able to keep going.

At times, I felt very isolated. Was I the only girl learning to code? I wasn’t aware of many female role models until I started going to more workshops. I met a lot of great female developers, and thanks to their support and help, I kept coding.

Another struggle I faced was the language barrier. I am not a native speaker of English, and diving into English technical documentation wasn’t easy. The learning curve is daunting in the beginning, but it’s completely normal to feel uncomfortable and to think that you’re really bad at coding. Don’t let this bring you down. Everyone thinks this from time to time.

Play with Raspberry Pi and quit your job

I kept on improving my skills, and my interest in developing grew. However, I had no idea that I could do this for a living; I simply enjoyed coding. Since I had a day job as a journalist, I was learning in the evenings and during the weekends.

I spent long hours playing with a Raspberry Pi and setting up so many different projects to help me understand how the internet and computers work, and get to grips with the basics of electronics. I built my first ever robot buggy, retro game console, and light switch. For the first time in my life, I had a soldering iron in my hand. Day after day I became more obsessed with digital making.

Magdalena Jadach on Twitter

solderingiron Where have you been all my life? Weekend with #raspberrypi + @pimoroni + @Pololu + #solder = best time! #electricity

One day I realised that I couldn’t wait to finish my job and go home to finish some project that I was working on at the time. It was then that I decided to hand over my resignation letter and dive deep into coding.

For the next few months I completely devoted my time to learning new skills and preparing myself for my new career path.

I went for an interview and got my first ever coding internship. Two years, hundreds of lines of code, and thousands of hours spent in front of my computer later, I have landed my dream job at the Raspberry Pi Foundation as a software developer, which proves that dreams come true.

Where to start?

I recommend starting with HTML & CSS – the same path that I chose. It is a relatively straightforward introduction to web development. You can follow my advice or choose a different approach. There is no “right” or “best” way to learn.

Below is a collection of free coding resources, both from Raspberry Pi and from elsewhere, that I think are useful for beginners to know about. There are other tools that you are going to want in your developer toolbox aside from HTML.

  • HTML and CSS are languages for describing, structuring, and styling web pages
  • You can learn JavaScript here and here
  • Raspberry Pi (obviously!) and our online learning projects
  • Scratch is a graphical programming language that lets you drag and combine code blocks to make a range of programs. It’s a good starting point
  • Git is version control software that helps you to work on your own projects and collaborate with other developers
  • Once you’ve got started, you will need a code editor. Sublime Text or Atom are great options for starting out

Coding gives you so much new inspiration, you learn new stuff constantly, and you meet so many amazing people who are willing to help you develop your skills. You can volunteer to help at a Code Club or CoderDojo to increase your exposure to code, or attend a Raspberry Jam to meet other like-minded makers and start your own journey towards becoming a developer.

The post Coding is for girls appeared first on Raspberry Pi.

Improve the Operational Efficiency of Amazon Elasticsearch Service Domains with Automated Alarms Using Amazon CloudWatch

Post Syndicated from Veronika Megler original https://aws.amazon.com/blogs/big-data/improve-the-operational-efficiency-of-amazon-elasticsearch-service-domains-with-automated-alarms-using-amazon-cloudwatch/

A customer has been successfully creating and running multiple Amazon Elasticsearch Service (Amazon ES) domains to support their business users’ search needs across products, orders, support documentation, and a growing suite of similar needs. The service has become heavily used across the organization.  This led to some domains running at 100% capacity during peak times, while others began to run low on storage space. Because of this increased usage, the technical teams were in danger of missing their service level agreements.  They contacted me for help.

This post shows how you can set up automated alarms to warn when domains need attention.

Solution overview

Amazon ES is a fully managed service that delivers Elasticsearch’s easy-to-use APIs and real-time analytics capabilities along with the availability, scalability, and security that production workloads require.  The service offers built-in integrations with a number of other components and AWS services, enabling customers to go from raw data to actionable insights quickly and securely.

One of these other integrated services is Amazon CloudWatch. CloudWatch is a monitoring service for AWS Cloud resources and the applications that you run on AWS. You can use CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.

CloudWatch collects metrics for Amazon ES. You can use these metrics to monitor the state of your Amazon ES domains, and set alarms to notify you about high utilization of system resources.  For more information, see Amazon Elasticsearch Service Metrics and Dimensions.

While the metrics are automatically collected, the missing piece is how to set alarms on these metrics at appropriate levels for each of your domains. This post includes sample Python code to evaluate the current state of your Amazon ES environment, and to set up alarms according to AWS recommendations and best practices.

There are two components to the sample solution:

  • es-check-cwalarms.py: This Python script checks the CloudWatch alarms that have been set for all Amazon ES domains in a given account and region.
  • es-create-cwalarms.py: This Python script sets up a set of CloudWatch alarms for a single given domain.

The sample code can also be found in the amazon-es-check-cw-alarms GitHub repo. The scripts are easy to extend or combine, as described in the section “Extensions and Adaptations”.

Assessing the current state

The first script, es-check-cwalarms.py, is used to give an overview of the configurations and alarm settings for all the Amazon ES domains in the given region. The script takes the following parameters:

python es-check-cwalarms.py -h
usage: es-check-cwalarms.py [-h] [-e ESPREFIX] [-n NOTIFY] [-f FREE] [-p PROFILE] [-r REGION]
Checks a set of recommended CloudWatch alarms for Amazon Elasticsearch Service domains (optionally, those beginning with a given prefix).
optional arguments:
  -h, --help                          show this help message and exit
  -e ESPREFIX, --esprefix ESPREFIX    Only check Amazon Elasticsearch Service domains that begin with this prefix.
  -n NOTIFY, --notify NOTIFY          List of CloudWatch alarm actions; e.g. ['arn:aws:sns:xxxx']
  -f FREE, --free FREE                Minimum free storage (MB) on which to alarm
  -p PROFILE, --profile PROFILE       IAM profile name to use
  -r REGION, --region REGION          AWS region for the domain. Default: us-east-1

The script first identifies all the domains in the given region (or, optionally, limits them to the subset that begins with a given prefix). It then starts running a set of checks against each one.

The script can be run from the command line or set up as a scheduled Lambda function. For example, for one customer, it was deemed appropriate to regularly run the script to check that alarms were correctly set for all domains. In addition, because configuration changes—cluster size increases to accommodate larger workloads being a common change—might require updates to alarms, this approach allowed the automatic identification of alarms no longer appropriately set as the domain configurations changed.

The output shown below is for one domain in my account.

Starting checks for Elasticsearch domain iotfleet , version is 53
Iotfleet Automated snapshot hour (UTC): 0
Iotfleet Instance configuration: 1 instances; type:m3.medium.elasticsearch
Iotfleet Instance storage definition is: 4 GB; free storage calced to: 819.2 MB
iotfleet Desired free storage set to (in MB): 819.2
iotfleet WARNING: Not using VPC Endpoint
iotfleet WARNING: Does not have Zone Awareness enabled
iotfleet WARNING: Instance count is ODD. Best practice is for an even number of data nodes and zone awareness.
iotfleet WARNING: Does not have Dedicated Masters.
iotfleet WARNING: Neither index nor search slow logs are enabled.
iotfleet WARNING: EBS not in use. Using instance storage only.
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-ClusterStatus.yellow-Alarm ClusterStatus.yellow
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-ClusterStatus.red-Alarm ClusterStatus.red
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-CPUUtilization-Alarm CPUUtilization
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-JVMMemoryPressure-Alarm JVMMemoryPressure
iotfleet WARNING: Missing alarm!! ('ClusterIndexWritesBlocked', 'Maximum', 60, 5, 'GreaterThanOrEqualToThreshold', 1.0)
iotfleet Alarm ok; definition matches. Test-Elasticsearch-iotfleet-AutomatedSnapshotFailure-Alarm AutomatedSnapshotFailure
iotfleet Alarm: Threshold does not match: Test-Elasticsearch-iotfleet-FreeStorageSpace-Alarm Should be:  819.2 ; is 3000.0

The output messages fall into the following categories:

  • System overview, Informational: The Amazon ES version and configuration, including instance type and number, storage, automated snapshot hour, etc.
  • Free storage: A calculation for the appropriate amount of free storage, based on the recommended 20% of total storage (reproduced in the sketch after this list).
  • Warnings: best practices that are not being followed for this domain. (For more about this, read on.)
  • Alarms: An assessment of the CloudWatch alarms currently set for this domain, against a recommended set.
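
The 819.2 MB figure in the sample output above follows directly from that 20% free-storage rule; a quick check:

total_storage_mb = 4 * 1024               # 4 GB of instance storage
desired_free_mb = 0.20 * total_storage_mb
print(desired_free_mb)                    # 819.2, matching the output above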

The script contains an array of recommended CloudWatch alarms, based on best practices for these metrics and statistics. Using the array allows alarm parameters (such as free space) to be updated within the code based on current domain statistics and configurations.

For a given domain, the script checks if each alarm has been set. If the alarm is set, it checks whether the values match those in the array esAlarms. In the output above, you can see three different situations being reported:

  • Alarm ok; definition matches. The alarm set for the domain matches the settings in the array.
  • Alarm: Threshold does not match. An alarm exists, but the threshold value at which the alarm is triggered does not match.
  • WARNING: Missing alarm!! The recommended alarm is missing.

All in all, the list above shows that this domain does not have a configuration that adheres to best practices, nor does it have all the recommended alarms.
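
To give a feel for how such a check can work, here is a minimal sketch in the spirit of the script, assuming an esAlarms-style list of tuples like the one printed in the missing-alarm warning above; it is not the script’s actual code:

import boto3

cloudwatch = boto3.client('cloudwatch')

# (MetricName, Statistic, Period, EvaluationPeriods, ComparisonOperator, Threshold)
esAlarms = [
    ('ClusterIndexWritesBlocked', 'Maximum', 60, 5, 'GreaterThanOrEqualToThreshold', 1.0),
    ('FreeStorageSpace', 'Minimum', 60, 5, 'LessThanOrEqualToThreshold', 819.2),
]

def check_domain_alarms(domain, account_id):
    for metric, stat, period, evals, op, threshold in esAlarms:
        # Look up any alarms already set on this metric for this domain
        alarms = cloudwatch.describe_alarms_for_metric(
            MetricName=metric,
            Namespace='AWS/ES',
            Dimensions=[{'Name': 'DomainName', 'Value': domain},
                        {'Name': 'ClientId', 'Value': account_id}],
        )['MetricAlarms']
        if not alarms:
            print(domain, 'WARNING: Missing alarm!!', (metric, stat, period, evals, op, threshold))
        elif alarms[0]['Threshold'] != threshold:
            print(domain, 'Alarm: Threshold does not match:', alarms[0]['AlarmName'])
        else:
            print(domain, 'Alarm ok; definition matches.', alarms[0]['AlarmName'])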

Setting up alarms

Now that you know that the domains in their current state are missing critical alarms, you can correct the situation.

To demonstrate the script, set up a new domain named “ver”, in us-west-2. Specify 1 node, and a 10-GB EBS disk. Also, create an SNS topic in us-west-2 with a name of “sendnotification”, which sends you an email.

Run the second script, es-create-cwalarms.py, from the command line. This script creates (or updates) the desired CloudWatch alarms for the specified Amazon ES domain, “ver”.

python es-create-cwalarms.py -r us-west-2 -e test -c ver -n "['arn:aws:sns:us-west-2:xxxxxxxxxx:sendnotification']"
EBS enabled: True type: gp2 size (GB): 10 No Iops 10240  total storage (MB)
Desired free storage set to (in MB): 2048.0
Creating  Test-Elasticsearch-ver-ClusterStatus.yellow-Alarm
Creating  Test-Elasticsearch-ver-ClusterStatus.red-Alarm
Creating  Test-Elasticsearch-ver-CPUUtilization-Alarm
Creating  Test-Elasticsearch-ver-JVMMemoryPressure-Alarm
Creating  Test-Elasticsearch-ver-FreeStorageSpace-Alarm
Creating  Test-Elasticsearch-ver-ClusterIndexWritesBlocked-Alarm
Creating  Test-Elasticsearch-ver-AutomatedSnapshotFailure-Alarm
Successfully finished creating alarms!

As with the first script, this script contains an array of recommended CloudWatch alarms, based on best practices for these metrics and statistics. This approach allows you to add or modify alarms based on your use case (more on that below).
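
For reference, creating one of these alarms from Python boils down to a boto3 put_metric_alarm call; this sketch mirrors the FreeStorageSpace alarm from the “ver” run above, with the account ID left as a placeholder:

import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-west-2')

cloudwatch.put_metric_alarm(
    AlarmName='Test-Elasticsearch-ver-FreeStorageSpace-Alarm',
    Namespace='AWS/ES',
    MetricName='FreeStorageSpace',
    Dimensions=[{'Name': 'DomainName', 'Value': 'ver'},
                {'Name': 'ClientId', 'Value': '<account-id>'}],
    Statistic='Minimum',
    Period=60,
    EvaluationPeriods=5,
    ComparisonOperator='LessThanOrEqualToThreshold',
    Threshold=2048.0,  # the desired free storage (MB) computed above
    AlarmActions=['arn:aws:sns:us-west-2:xxxxxxxxxx:sendnotification'],
)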

After running the script, navigate to Alarms on the CloudWatch console. You can see the set of alarms set up on your domain.

Because the “ver” domain has only a single node, cluster status is yellow, and that alarm is in an “ALARM” state. It’s already sent a notification that the alarm has been triggered.

What to do when an alarm triggers

After alarms are set up, you need to identify the correct action to take for each alarm, which depends on the alarm triggered. For ideas, guidance, and additional pointers to supporting documentation, see Get Started with Amazon Elasticsearch Service: Set CloudWatch Alarms on Key Metrics. For information about common errors and recovery actions to take, see Handling AWS Service Errors.

In most cases, the alarm triggers due to an increased workload. The likely action is to reconfigure the system to handle the increased workload, rather than reducing the incoming workload. Reconfiguring any backend store—a category of systems that includes Elasticsearch—is best performed when the system is quiescent or lightly loaded. Reconfigurations such as setting zone awareness or modifying the disk type cause Amazon ES to enter a “processing” state, potentially disrupting client access.

Other changes, such as increasing the number of data nodes, may cause Elasticsearch to begin moving shards, potentially impacting search performance on these shards while this is happening. These actions should be considered in the context of your production usage. For the same reason I also do not recommend running a script that resets all domains to match best practices.

Avoid the need to reconfigure during heavy workload by setting alarms at a level that allows a considered approach to making the needed changes. For example, if you identify that each weekly peak is increasing, you can reconfigure during a weekly quiet period.

While Elasticsearch can be reconfigured without being quiesced, it is not a best practice to automatically scale it up and down based on usage patterns. Unlike some other AWS services, I recommend against setting a CloudWatch action that automatically reconfigures the system when alarms are triggered.

There are other situations where the planned reconfiguration approach may not work, such as low or zero free disk space causing the domain to reject writes. If the business is dependent on the domain continuing to accept incoming writes and deleting data is not an option, the team may choose to reconfigure immediately.

Extensions and adaptations

You may wish to modify the best practices encoded in the scripts for your own environment or workloads. It’s always better to avoid situations where alerts are generated but routinely ignored. All alerts should trigger a review and one or more actions, either immediately or at a planned date. The following is a list of common situations where you may wish to set different alarms for different domains:

  • Dev/test vs. production
    You may have a different set of configuration rules and alarms for your dev and test environments than for production. For example, you may require zone awareness and dedicated masters for your production environment, but not for your development domains. Or, you may not have any alarms set in dev. For test environments that mirror your potential peak load, test to ensure that the alarms are appropriately triggered.
  • Differing workloads or SLAs for different domains
    You may have one domain with a requirement for superfast search performance, and another domain with a heavy ingest load that tolerates slower search response. Your reaction to slow response for these two workloads is likely to be different, so perhaps the thresholds for these two domains should be set at a different level. In this case, you might add a “max CPU utilization” alarm at 100% for 1 minute for the fast search domain, while the other domain only triggers an alarm when the average has been higher than 60% for 5 minutes. You might also add a “free space” rule with a higher threshold to reflect the need for more space for the heavy ingest load if there is danger that it could fill the available disk quickly.
  • “Normal” alarms versus “emergency” alarms
    If, for example, free disk space drops to 25% of total capacity, an alarm is triggered that indicates action should be taken as soon as possible, such as cleaning up old indexes or reconfiguring at the next quiet period for this domain. However, if free space drops below a critical level (20% free space), action must be taken immediately in order to prevent Amazon ES from setting the domain to read-only. Similarly, if the “ClusterIndexWritesBlocked” alarm triggers, the domain has already stopped accepting writes, so immediate action is needed. In this case, you may wish to set “laddered” alarms, where one threshold causes an alarm to be triggered to review the current workload for a planned reconfiguration, but a different threshold raises a “DefCon 3” alarm indicating that immediate action is required.

The sample scripts provided here are a starting point, intended for you to adapt to your own environment and needs.

Running the scripts once can identify how far your current state is from your desired state and create an initial set of alarms. Regularly re-running them can capture changes in your environment over time and adjust your alarms to match your changing configurations. One customer has set them up to run nightly, automatically creating and updating alarms to match their preferred settings.

Removing unwanted alarms

Each CloudWatch alarm costs approximately $0.10 per month. You can remove unwanted alarms in the CloudWatch console, under Alarms. If you set up a “ver” domain above, remember to remove it to avoid continuing charges.


Setting CloudWatch alarms appropriately for your Amazon ES domains can help you avoid suboptimal performance and allow you to respond to workload growth or configuration issues well before they become urgent. This post gives you a starting point for doing so. The additional sleep you’ll get knowing you don’t need to be concerned about Elasticsearch domain performance will allow you to focus on building creative solutions for your business and solving problems for your customers.


Additional Reading

If you found this post useful, be sure to check out Analyzing Amazon Elasticsearch Service Slow Logs Using Amazon CloudWatch Logs Streaming and Kibana and Get Started with Amazon Elasticsearch Service: How Many Shards Do I Need?


About the Author

Dr. Veronika Megler is a senior consultant at Amazon Web Services. She works with our customers to implement innovative big data, AI and ML projects, helping them accelerate their time-to-value when using AWS.




HDD vs SSD: What Does the Future for Storage Hold?

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/ssd-vs-hdd-future-of-storage/

SSD 60 TB drive

This is part one of a series. Use the Join button above to receive notification of future posts on this and other topics.

Customers frequently ask us whether and when we plan to move our cloud backup and data storage to SSDs (Solid-State Drives). That’s not a surprising question considering the many advantages SSDs have over magnetic platter type drives, also known as HDDs (Hard-Disk Drives).

We’re a large user of HDDs in our data centers (currently 100,000 hard drives holding over 500 petabytes of data). We want to provide the best performance, reliability, and economy for our cloud backup and cloud storage services, so we continually evaluate which drives to use for operations and in our data centers. While we use SSDs for some applications, which we’ll describe below, there are reasons why HDDs will continue to be the primary drives of choice for us and other cloud providers for the foreseeable future.

HDDs vs SSDs


The laptop computer I am writing this on has a single 512GB SSD, which has become a common feature in higher-end laptops. The SSD’s advantages for a laptop are easy to understand: it is smaller than an HDD, faster, quieter, lasts longer, and is not susceptible to vibration and magnetic fields. It also has much lower latency and access times.

Today’s typical online price for a 2.5” 512GB SSD is $140 to $170. The typical online price for a 3.5” 512GB HDD is $44 to $65. That’s a pretty significant difference in price, but since the SSD helps make the laptop lighter, enables it to be more resistant to the inevitable shocks and jolts it will experience in daily use, and adds the benefits of faster booting, faster waking from sleep, and faster launching of applications and handling of big files, the extra cost for the SSD in this case is worth it.
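
Taking the midpoints of those price ranges gives a rough feel for the per-gigabyte gap; this is a back-of-the-envelope calculation for these particular drives, not a market survey:

ssd_per_gb = ((140 + 170) / 2) / 512      # ~$0.30 per GB
hdd_per_gb = ((44 + 65) / 2) / 512        # ~$0.11 per GB
print(round(ssd_per_gb / hdd_per_gb, 1))  # ~2.8x the cost per gigabyte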

Some of these SSD advantages, chiefly speed, also will apply to a desktop computer, so desktops are increasingly outfitted with SSDs, particularly to hold the operating system, applications, and data that is accessed frequently. Replacing a boot drive with an SSD has become a popular upgrade option to breathe new life into a computer, especially one that seems to take forever to boot or is used for notoriously slow-loading applications such as Photoshop.

We covered upgrading your computer with an SSD in our blog post SSD 101: How to Upgrade Your Computer With An SSD.

Data centers are an entirely different kettle of fish. The primary concerns for data center storage are reliability, storage density, and cost. While SSDs are strong in the first two areas, it’s the third where they are not yet competitive. At Backblaze we adopt higher density HDDs as they become available — we’re currently using both 10TB and 12TB drives (among other capacities) in our data centers. Higher density drives provide greater storage density per Storage Pod and Vault and reduce our overhead cost through less required maintenance and lower total power requirements. Comparable SSDs in those sizes would cost roughly $1,000 per terabyte, considerably higher than the corresponding HDD. Simply put, SSDs are not yet in the price range to make their use economical for the benefits they provide, which is the reason why we expect to be using HDDs as our primary storage media for the foreseeable future.

What Are HDDs?

HDDs have been around for over 60 years, since IBM introduced them in 1956. The first disk drive was the size of a car, stored a mere 3.75 megabytes, and cost $300,000 in today’s dollars.

IBM 350 Disk Storage System — 3.75MB in 1956

The 350 Disk Storage System was a major component of the IBM 305 RAMAC (Random Access Method of Accounting and Control) system, which was introduced in September 1956. It consisted of 40 platters and a dual read/write head on a single arm that moved up and down the stack of magnetic disk platters.

The basic mechanism of an HDD remains unchanged since then, though it has undergone continual refinement. An HDD uses magnetism to store data on a rotating platter. A read/write head is affixed to an arm that floats above the spinning platter reading and writing data. The faster the platter spins, the faster an HDD can perform. Typical laptop drives today spin at either 5400 RPM (revolutions per minute) or 7200 RPM, though some server-based platters spin at even higher speeds.

Exploded drawing of a hard drive

The platters inside the drives are coated with a magnetically sensitive film consisting of tiny magnetic grains. Data is recorded when a magnetic write-head flies just above the spinning disk; the write head rapidly flips the magnetization of one magnetic region of grains so that its magnetic pole points up or down, to encode a 1 or a 0 in binary code. If all this sounds like an HDD is vulnerable to shocks and vibration, you’d be right. They also are vulnerable to magnets, which is one way to destroy the data on an HDD if you’re getting rid of it.

The major advantage of an HDD is that it can store lots of data cheaply. One and two terabyte (1,024 and 2,048 gigabytes) hard drives are not unusual for a laptop these days, and 10TB and 12TB drives are now available for desktops and servers. Densities and rotation speeds continue to grow. However, if you compare the cost of common HDDs vs SSDs for sale online, the SSDs are roughly 3-5x the cost per gigabyte. So if you want cheap storage and lots of it, using a standard hard drive is definitely the more economical way to go.

What are the best uses for HDDs?

  • Disk arrays (NAS, RAID, etc.) where high capacity is needed
  • Desktops when low cost is priority
  • Media storage (photos, videos, audio not currently being worked on)
  • Drives with an extreme number of reads and writes

What Are SSDs?

SSDs go back almost as far as HDDs, with the first semiconductor storage device compatible with a hard drive interface introduced in 1978, the StorageTek 4305.

Storage Technology 4305 SSD

The StorageTek was an SSD aimed at the IBM mainframe compatible market. The STC 4305 was seven times faster than IBM’s popular 2305 HDD system (and also about half the price). It consisted of a cabinet full of charge-coupled devices and cost $400,000 for 45MB capacity with throughput speeds up to 1.5 MB/sec.

SSDs are based on a type of non-volatile memory called NAND (named for the Boolean operator “NOT AND,” and one of two main types of flash memory). Flash memory stores data in individual memory cells, which are made of floating-gate transistors. Though they are semiconductor-based memory, they retain their information when no power is applied to them — a feature that’s obviously a necessity for permanent data storage.

Samsung SSD 850 Pro

Compared to an HDD, SSDs have higher data-transfer rates, higher areal storage density, better reliability, and much lower latency and access times. For most users, it’s the speed of an SSD that primarily attracts them. When discussing the speed of drives, what we are referring to is the speed at which they can read and write data.

For HDDs, the speed at which the platters spin strongly determines the read/write times. When data on an HDD is accessed, the read/write head must physically move to the location where the data was encoded on a magnetic section on the platter. If the file being read was written sequentially to the disk, it will be read quickly. As more data is written to the disk, however, it’s likely that the file will be written across multiple sections, resulting in fragmentation of the data. Fragmented data takes longer to read with an HDD as the read head has to move to different areas of the platter(s) to completely read all the data requested.

Because SSDs have no moving parts, they can operate at speeds far above those of a typical HDD. Fragmentation is not an issue for SSDs. Files can be written anywhere with little impact on read/write times, resulting in read times far faster than any HDD, regardless of fragmentation.

Samsung SSD 850 Pro (back)

Due to the way data is written and read to the drive, however, SSD cells can wear out over time. SSD cells push electrons through a gate to set their state. This process wears on the cell and over time reduces its performance until the SSD wears out. This effect takes a long time, and SSDs have mechanisms to minimize it, such as the TRIM command. Flash memory writes an entire block of storage no matter how few pages within the block are updated. This requires reading and caching the existing data, erasing the block, and rewriting the block. If an empty block is available, a write operation is much faster. The TRIM command, which must be supported in both the OS and the SSD, enables the OS to inform the drive which blocks are no longer needed. It allows the drive to erase the blocks ahead of time in order to make empty blocks available for subsequent writes.

The effect of repeated reading and erasing on an SSD is cumulative and an SSD can slow down and even display errors with age. It’s more likely, however, that the system using the SSD will be discarded for obsolescence before the SSD begins to display read/write errors. Hard drives eventually wear out from constant use as well, since they use physical recording methods, so most users won’t base their selection of an HDD or SSD drive based on expected longevity.

SSD circuit board

Overall, SSDs are considered far more durable than HDDs due to a lack of mechanical parts. The moving mechanisms within an HDD are susceptible to not only wear and tear over time, but to damage due to movement or forceful contact. If one were to drop a laptop with an HDD, there is a high likelihood that all those moving parts will collide, resulting in potential data loss and even destructive physical damage that could kill the HDD outright. SSDs have no moving parts so, while they hold the risk of a potentially shorter life span due to high use, they can survive the rigors we impose upon our portable devices and laptops.

What are the best uses for SSDs?

  • Notebooks and laptops, where performance, light weight, areal storage density, resistance to shock, and general ruggedness are desirable
  • Boot drives holding operating system and applications, which will speed up booting and application launching
  • Working files (media that is being edited: photos, video, audio, etc.)
  • Swap drives where SSD will speed up disk paging
  • Cache drives
  • Database servers
  • Revitalizing an older computer. If you’ve got a computer that seems slow to start up and slow to load applications and files, updating the boot drive with an SSD could make it seem, if not new, at least as if it just came back refreshed from spending some time on the beach.

Stay Tuned for Part 2 of HDD vs SSD

That’s it for part 1. In our second part we’ll take a deeper look at the differences between HDDs and SSDs, how both HDD and SSD technologies are evolving, and how Backblaze takes advantage of SSDs in our operations and data centers.

Here’s a tip on finding all the posts tagged with SSD on our blog: just follow https://www.backblaze.com/blog/tag/ssd/.

Don’t miss future posts on HDDs, SSDs, and other topics, including hard drive stats, cloud storage, and tips and tricks for backing up to the cloud. Use the Join button above to receive notification of future posts on our blog.

The post HDD vs SSD: What Does the Future for Storage Hold? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Welcome Lin – Our Newest Support Tech!

Post Syndicated from Yev original https://www.backblaze.com/blog/welcome-lin-newest-support-tech/

As Backblaze continues to grow, a couple of our departments need to grow right along with it. One of the quickest-growing departments we have at Backblaze is Customer Support. We do all of our support in-house and the team grows to accommodate our growing customer base! We have a new person joining us in support, Lin! Let’s take a moment to learn a bit more about her, shall we?

What is your Backblaze Title?
Jr. Support Technician.

Where are you originally from?
Ventura, CA. It’s okay if you haven’t heard of it, it is very, very, small.

What attracted you to Backblaze?
The company culture, the delightful ads on Critical Role, and how immediately genuinely friendly everyone I met was.

Where else have you worked?
I previously did content management at Wish, and an awful lot of temp gigs. I did a few years at a coffee shop in the beginning of college, but my first job ever was at a JoAnn’s Fabrics.

Where did you go to school?
San Francisco State University

What’s your dream job?
Magical Girl!

Favorite place you’ve traveled?
Tokyo, but Disneyworld is a real close second.

Favorite hobby?
I spend an awful lot of time playing video games, and possibly even more making silly costumes.

Star Trek or Star Wars?
Truthfully I love both. But I was raised on original series and next generation Trek.

Coke or Pepsi?
Coke … definitely Coke.

Favorite food?
Cupcakes. Especially funfetti cupcakes.

Anything else you’d like to tell us?
I discovered Sailor Moon as a child and it possibly influenced my life way too much. Like many people here I am a huge Disney fan. Anyone who spends longer than a few hours with me will probably tell you I can go on for hours about my cat (but in my defense he’s adorable and fluffy and I have the pictures to prove it).

We keep hiring folks that love Disney! It’s kind of amazing. It’s also nice to have folks in the office that can chat about the latest Critical Role episode! Welcome aboard Lin, we’ll try to get some funfetti stocked for the cupcakes that come in!

The post Welcome Lin – Our Newest Support Tech! appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Spotify Owned uTorrent Before BitTorrent Inc. Acquired It

Post Syndicated from Ernesto original https://torrentfreak.com/spotify-owned-utorrent-before-bittorrent-acquired-it-180305/

When Spotify launched its first beta in the fall of 2008, we described it as “an alternative to music piracy.”

From the start, the Swedish company set out to compete with pirate services by offering a better user experience. Now, a decade later, it has come a long way.

The company successfully transformed into a billion-dollar enterprise and is planning to go public with a listing on the New York Stock Exchange. While it hasn’t made music piracy evaporate completely, it has converted tens of millions of people into paying customers.

While Spotify sees itself as a piracy remedy, backed by the major labels, its piracy roots are undeniable.

In a detailed feature, Swedish newspaper Breakit put a spotlight on one of Spotify’s earliest employees, developer Ludvig Strigeus.

With a significant stake in the company, he is about to become a multi-millionaire, one with a noteworthy file-sharing past. It’s unclear what his current stake in Spotify is, but according to Swedish media it’s worth more than a billion kronor, which is over $100 million.

Strigeus was the one who launched uTorrent in September 2005, when the BitTorrent protocol was still fairly new. Where most BitTorrent clients at the time were bloatware, uTorrent chose a minimalist approach, but with all essential features.

This didn’t go unnoticed. In just a few months, millions of torrent users downloaded the application which quickly became the dominant file-sharing tool.

Little more than a year after its launch the application was acquired by BitTorrent Inc., which still owns it today. While that part of history is commonly known, there’s a step missing.

Strigeus’ coding talent also piqued the interest of Spotify, which reportedly beat BitTorrent Inc. by a few months. Multiple sources confirm that the streaming startup, which had yet to release its service at the time, bought uTorrent in 2006.

While some thought that Spotify was mainly interested in the technology, others see Strigeus as the target.

“Spotify bought μTorrent, but what we really wanted was Ludvig Strigeus,” former Spotify CEO Andreas Ehn told Breakit.

This indeed sounds plausible, as Spotify sold uTorrent to BitTorrent Inc. after a few months while keeping the developer on board. Not a bad decision for the latter, as his Spotify stake makes him a billionaire in Swedish kronor. At the same time, it was an important move for Spotify too.

Ludvig (Ludde) is still credited in recent uTorrent releases

In addition to having a very talented developer on board, who helped to implement the much needed P2P technology into Spotify, the deal with BitTorrent Inc. brought in cash that funded the development of the tiny, but ambitious, streaming service.

It might be too much to argue that Spotify wouldn’t be where it is without uTorrent and its creator, but their impact on the young company was significant.

The file-sharing angle was also very prominent in the early releases of Spotify. At the time, the majority of all tracks streamed by Spotify users were delivered via P2P connections.

And we haven’t even mentioned that Spotify reportedly used pirate MP3s for its Beta release, including some tracks that were only available on The Pirate Bay.

Spotify’s brief ownership of uTorrent is hardly common knowledge, to put it mildly. When BitTorrent Inc. announced that it had acquired “uTorrent AB,” there was no mention of Spotify, which was still an unknown company at the time.

Times change.
