Tag Archives: uploaded

Pioneers winners: only you can save us

Post Syndicated from Erin Brindley original https://www.raspberrypi.org/blog/pioneers-winners-only-you-can-save-us/

She asked for help, and you came to her aid. Pioneers, the winners of the Only you can save us challenge have been picked!

Can you see me? Only YOU can save us!

I need your help. This is a call out for those aged 11 to 16 in the UK and Republic of Ireland. Something has gone very, very wrong and only you can save us. I’ve collected together as much information for you as I can. You’ll find it at http://www.raspberrypi.org/pioneers.

The challenge

In August we intercepted an emergency communication from a lonesome survivor. She seemed to be in quite a bit of trouble, and asked all you young people aged 11 to 16 to come up with something to help tackle the oncoming crisis, using whatever technology you had to hand. You had ten weeks to work in teams of two to five with an adult mentor to fulfil your mission.

The judges

We received your world-saving ideas, and our savvy survivor pulled together a ragtag bunch of apocalyptic experts to help us judge which ones would be the winning entries.

Dr Shini Somara

Dr Shini Somara is an advocate for STEM education and a mechanical engineer. She was host of The Health Show and has appeared in documentaries for the BBC, PBS Digital, and Sky. You can check out her work hosting Crash Course Physics on YouTube.

Prof Lewis Dartnell is an astrobiologist and author of the book The Knowledge: How to Rebuild Our World From Scratch.

Emma Stephenson has a background in aeronautical engineering and currently works in the Shell Foundation’s Access to Energy and Sustainable Mobility portfolio.

Currently sifting through the entries with the other judges of #makeyourideas with @raspberrypifoundation @_raspberrypi_


The winners

Our survivor is currently putting your entries to good use repairing, rebuilding, and defending her base. Our judges chose the following projects as outstanding examples of world-saving digital making.

Theme winner: Computatron

Raspberry Pioneers 2017 – Nerfus Dislikus Killer Robot

This is our entry to the pioneers ‘Only you can save us’ competition. Our team name is Computatrum. Hope you enjoy!

Are you facing an unknown enemy whose only weakness is Nerf bullets? Then this is the robot for you! We loved the especially apocalyptic feel of the Computatron’s cleverly hacked and repurposed elements. The team even used an old floppy disc mechanism to help fire their bullets!

Technically brilliant: Robot Apocalypse Committee

Pioneers Apocalypse 2017 – RationalPi

Thousands of lines of code… Many sheets of acrylic… A camera, touchscreen and fingerprint scanner… This is our entry into the Raspberry Pi Pioneers 2017 ‘Only YOU can Save Us’ theme. When zombies or other survivors break into your base, you want a secure way of storing your crackers.

The Robot Apocalypse Committee is back, and this time they’ve brought cheese! The crew designed a cheese- and cracker-dispensing machine complete with face and fingerprint recognition to ensure those rations last until the next supply drop.

Best explanation: Pi Chasers

Tala – Raspberry Pi Pioneers Project

Hi! We are PiChasers and we entered the Raspberry Pi Pioneers challenge last time when the theme was “Make it Outdoors!” but now we’ve been faced with another theme, “Apocalypse”. We spent a while thinking of an original thing that would help in an apocalypse and decided upon a ‘text-only phone’ which uses local radio communication rather than cellular.

This text-based communication device encased in a tupperware container could be a lifesaver in a crisis! And luckily, the Pi Chasers produced an excellent video and amazing GitHub repo, ensuring that any and all survivors will be able to build their own in the safety of their base.

Most inspiring journey: Three Musketeers

Pioneers Entry – The Apocalypse

Pioneers Entry. Team Name: The Three Musketeers. Team Participants: James, Zach and Tom

We all know that zombies are terrible at geometry, and the Three Musketeers used this fact to their advantage when building their zombie security system. We were impressed to see the team working together to overcome the roadblocks they faced along the way.

We appreciate what you’re trying to do: Zombie Trolls

Zombie In The Middle


Playing piggy in the middle with zombies sure is a unique way of saving humankind from total extinction! We loved this project idea, and although the Zombie Trolls had a little trouble with their motors, we’re sure with a little more tinkering this zombie-fooling contraption could save us all.

Most awesome

Our judges also wanted to give a special commendation to the following teams for their equally awesome apocalypse-averting ideas:

  • PiRates, for their multifaceted zombie-proofing defence system and the high production value of their video
  • Byte them Pis, for their beautiful zombie-detecting doormat
  • Unatecxon, for their impressive bunker security system
  • Team Crompton, for their pressure-activated door system
  • Team Ernest, for their adventures in LEGO

The prizes

All our winning teams have secured exclusive digital maker boxes, jam-packed with tantalising tech to satisfy all their tinkering needs.

Our theme winners have also secured themselves a place at Coolest Projects 2018 in Dublin, Ireland!

Thank you to everyone who got involved in this round of Pioneers. Look out for your awesome submission swag arriving in the mail!

The post Pioneers winners: only you can save us appeared first on Raspberry Pi.

ETTV: How an Upload Bot Became a Pirate Hero

Post Syndicated from Ernesto original https://torrentfreak.com/ettv-how-an-upload-bot-became-a-pirate-hero-171210/

Earlier this year, the torrent community was hit hard when another major torrent site suddenly shut its doors.

Just a few months after celebrating its tenth anniversary, ExtraTorrent’s operator threw in the towel. While an official explanation was never provided, it’s likely that he was pressed to make this decision.

The ExtraTorrent site was a safe harbor for millions of regular users, who became homeless overnight. But it was more than that. It was also the birthplace of several popular releasers and distribution groups.

ETTV and ETHD turned into well-known brands themselves. While the “ET” derives from ExtraTorrent, the groups have shared TV and movie torrents on several other large torrent sites, and they still do. They even have their own site now.

With millions of people sharing their uploads every week, they’ve become icons and heroes to many. But how did this all come to be? We sat down with the team, virtually, to find out more.

“The idea for ettv/ethd was brought up by ExtraTorrent users,” the ETTV team says.

There was demand for a new group that would upload scene releases faster than the original EZTV, which was the dominant TV-torrent distribution group around 2011, when it all started.

“At the time the real EZTV was still active. They released stuff hours after it was released from the scene, leaving sites to wait very long for shows to arrive in public. In no way was ettv intended for competitive purposes. We had a lot of respect for Nova and the original EZTV operators.”

While ETTV is regularly referred to as a “group,” it was a one-person operation initially. Just a guy with a seedbox, grabbing scene releases and posting them on torrent sites.

It didn’t take long before people got wind of the new distribution ‘group,’ and interest in the torrents quickly exploded. This meant that a single seedbox was no longer sufficient, but help was not far away.

“It started off with one operator and a seedbox, but it became popular too fast. That’s when former ExtraTorrent owners stepped in to give ETTV the support and funding it needed to keep the story going.”

One of the earliest ETTV uploads on ExtraTorrent

In addition to the available disk space and bandwidth, the team itself expanded as well. At its height, a handful of people were working on the group. However, as things became more and more automated, that number shrank again.

What many people don’t realize is that ETTV and ETHD are mostly run by lines of code. The entire distribution process is automated and requires minimal intervention from the people behind it.

“Ettv/Ethd is a bot, it doesn’t require human attention. It grabs what you tell the script to,” the team tells us.

The bot is set up to grab the latest copies of predefined shows from private servers where the latest scene releases are posted. These are transferred to the seedbox and the torrents are then pushed out to the public – on ETTV.tv, but also on The Pirate Bay and elsewhere. Everything is automated.

Even most of the maintenance is taken care of by the ‘bot’ itself. When disk space runs out, older content is purged, allowing fresh releases to come through.

“The only persons involve with the bots are the bill payers of our new home ettv.tv. All they do check bot logs to see if it has any errors and correct them,” the team explains.

One problem that couldn’t be easily solved with some code was the shutdown of ExtraTorrent. While the bills for the seedboxes were paid in advance until the end of 2017, the groups had to find a new home.

“The shutdown of ExtraTorrent didn’t affect the bots from running, it just left ettv/ethd homeless and caused fans to lose their way trying to find us. Not many knew where else we uploaded or didn’t like the other sites we uploaded to.”

After a few months had passed it became clear that they were not going anywhere. Quite the contrary, they started their very own site, ETTV.tv, where all the latest releases are published.

ETTV.tv

In the near future, the team will focus on turning the site into a new home for its followers. Just a few weeks ago, for example, it launched a new release “tag,” ETMovies, which specializes in lower-resolution films with smaller file sizes.

“We recently introduced ETMovies which is basically for SD Movies, other than that the only plan ettv/ethd has is to give a home to the members that suffered from the sudden shut down of ExtraTorrent.”

Just this week, the site also expanded its reach by adding new categories such as music, games, software, and books, where approved uploaders will publish content.

While they are doing their best to keep the site up and running, it’s not a given that ETTV will be around forever. As long as there are plenty of funds and no concrete legal pressure, they might be. But if recent history has shown us anything, it’s that there are no guarantees.

“No one is here seeking to be a millionaire, if the traffic pays the bills we keep going, if not then all we can say is (sorry we tried) we will not be the heroes that saved the day.

“Again and again, the troublesome history of torrent sites is clear. It’s a war no site owner can win. If we are ever in danger, we will choose freedom. It’s not like followers can bail you out if the worst were to happen,” the ETTV team concludes.

For now, however, the bot keeps on running.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

The re:Invent 2017 Containers After-party Guide

Post Syndicated from Tiffany Jernigan original https://aws.amazon.com/blogs/compute/the-reinvent-2017-containers-after-party-guide/

Feeling uncontainable? re:Invent 2017 might be over, but the containers party doesn’t have to stop. Here are some ways you can keep learning about containers on AWS.

Learn about containers in Austin and New York

Come join AWS this week at KubeCon in Austin, Texas! We’ll be sharing best practices for running Kubernetes on AWS and talking about Amazon ECS, AWS Fargate, and Amazon EKS. Want to take Amazon EKS for a test drive? Sign up for the preview.

We’ll also be talking Containers at the NYC Pop-up Loft during AWS Compute Evolved: Containers Day on December 13th. Register to attend.

Join an upcoming webinar

Didn’t get to attend re:Invent or want to hear a recap? Join our upcoming webinar, What You Missed at re:Invent 2017, on December 11th from 12:00 PM – 12:40 PM PT (3:00 PM – 3:40 PM ET). Register to attend.

Start (or finish) a workshop

All of the containers workshops given at re:Invent are available online. Get comfortable, fire up your browser, and start building!

re:Watch your favorite talks

All of the keynotes and breakout sessions from re:Invent are available to watch on our YouTube playlist. Slides can be found as they are uploaded on the AWS Slideshare. Just slip into your pajamas, make some popcorn, and start watching!

Learn more about what’s new

Andy Jassy announced two big updates to the container landscape at re:Invent, AWS Fargate and Amazon EKS. Here are some resources to help you learn more about all the new features and products we announced, why we built them, and how they work.

AWS Fargate

AWS Fargate is a technology that allows you to run containers without having to manage servers or clusters.

Amazon Elastic Container Service for Kubernetes (Amazon EKS)

Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to configure and operate your own Kubernetes clusters.

We hope you had a great re:Invent and look forward to seeing what you build on AWS in 2018!

– The AWS Containers Team

Eevee mugshot set for Doom

Post Syndicated from Eevee original https://eev.ee/release/2017/11/23/eevee-mugshot-set-for-doom/

Screenshot of Industrial Zone from Doom II, with an Eevee face replacing the usual Doom marine in the status bar

A full replacement of Doomguy’s vast array of 42 expressions.

You can get it yourself if you want to play Doom as me, for some reason? It does nothing but replace a few sprites, so it works with any Doom flavor (including vanilla) on 1, 2, or Final. Just run Doom with -file eeveemug.wad. With GZDoom, you can load it automatically.
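
One way to do the autoload (a sketch; the file location is just a hypothetical example) is to add the WAD to the Autoload block in your GZDoom configuration file, typically gzdoom.ini:

[Global.Autoload]
# Point this at wherever you saved the WAD -- the path below is hypothetical
Path=/home/you/wads/eeveemug.wad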


I don’t entirely know why I did this. I drew the first one on a whim, then realized there was nothing really stopping me from making a full set, so I spent a day doing that.

The funny thing is that I usually play Doom with ZDoom’s “alternate” HUD. It’s a full-screen overlay rather than a huge bar, and — crucially — it does not show the mugshot. It can’t even be configured to show the mugshot. As far as I’m aware, it can’t even be modded to show the mugshot. So I have to play with the OG status bar if I want to actually use the thing I made.

Preview of the Eevee mugshot sprites arranged in a grid, where the Eevee becomes more beaten up in each subsequent column

I’m pretty happy with the results overall! I think I did a decent job emulating the Doom “surreal grit” style. I did the shading with Aseprite’s shading mode — instead of laying down a solid color, it shifts pixels along a ramp of colors you select every time you draw over them. Doom’s palette has a lot of browns, so I made a ramp out of all of them and kept going over furry areas, nudging pixels into being lighter or darker, until I liked the texture. It was a lot like making a texture in a sketch with a lot of scratchy pencil strokes.

I also gleaned some interesting things about smoothness and how the eye interprets contours? I tried to explain this on Twitter and had a hell of a time putting it into words, but the short version is that it’s amazing to see the difference a single misplaced pixel can make, especially as you slide that pixel between dark and light.


Doom's palette of 256 colors, many of which are very long gradients of reds and browns

Speaking of which, Doom’s palette is incredibly weird to work with. Thank goodness Eevees are brown! The game does have to draw arbitrary levels of darkness all with the same palette, which partly explains the number of dark colors and gradients — but I believe a number of the colors are exact duplicates, so close they might as well be duplicates, or completely unused in stock Doom assets. I guess they had no reason to optimize for people trying to add arbitrary art to the game 25 years later, though. (And nowadays, GZDoom includes a truecolor software renderer, so the palette is becoming less and less important.)

I originally wanted the god mode sprite to be a Sylveon, but Sylveon is made of pink and azure and blurple, and I don’t think I could’ve pulled it off with this set of colors. I even struggled with the color of the mane a bit — I usually color it with pretty pale colors, but Doom only has a couple of those, and they’re very saturated. I ended up using a lot more dark yellows than I would normally, and thankfully it worked out pretty well.

The most significant change I made between the original sprite and the final set was the eye color:

A comparison between an original Doom mugshot sprite, the first sprite I drew, and how it ended up

(This is STFST20, a frame from the default three-frame “glancing around” animation that plays when the player has between 40 and 59 health. Doom Wiki has a whole article on the mugshot if you’re interested.)

The blue eyes in my original just do not work at all. The Doom palette doesn’t have a lot of subtle colors, and its blues in particular are incredibly bad. In the end, I made the eyes basically black, though with a couple pixels of very dark blue in them.

After I decided to make the full set, I started by making a neutral and completely healthy front pose, then derived the others from that (with a very complicated system of layers). You can see some of the side effects of that here: the face doesn’t actually turn when glancing around, because hoo boy that would’ve been a lot of work, and so the cheek fluff is visible on both sides.

I also notice that there are two columns of identical pixels in each eye! I fixed that in the glance to the right, but must’ve forgotten about it here. Oh, well; I didn’t even notice until I zoomed in just now.

A general comparison between the Doom mugshots and my Eevee ones, showing each pose in its healthy state plus the neutral pose in every state of deterioration

The original sprites might not be quite aligned correctly in the above image. The available space in the status bar is 35×31, of which a couple pixels go to an inset border, leaving 33×30. I drew all of my sprites at that size, but the originals are all cropped and have varying offsets (part of the Doom sprite format). I extremely can’t be assed to check all of those offsets for over a dozen sprites, so I just told ImageMagick to center them. (I only notice right now that some of the original sprites are even a full 31 pixels tall and draw over the top border that I was so careful to stay out of!)
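
For the curious, the ImageMagick centering step can be a one-liner along these lines (filenames are hypothetical):

convert STFST20.png -background none -gravity center -extent 33x30 STFST20_centered.png

Here -gravity center with -extent pads (or crops) the sprite onto a centered 33×30 canvas, and -background none preserves the transparency.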

Anyway, this is a representative sample of the Doom mugshot poses.

The top row shows all eight frames at full health. The first three are the “idle” state, drawn when nothing else is going on; the sprite usually faces forwards, but glances around every so often at random. The forward-facing sprite is the one I finalized first.

I tried to take a lot of cues from the original sprite, seeing as I wanted to match the style. I’d never tried drawing a sprite with a large palette and a small resolution before, and the first thing that struck me was Doomguy’s lips — the upper lip, lips themselves, and shadow under the lower lip are all created with only one row of pixels each. I thought that was amazing. Now I even kinda wish I’d exaggerated that effect a bit more, but I was wary of going too dark when there’s a shadow only a couple pixels away. I suppose Doomguy has the advantage of having, ah, a chin.

I did much the same for the eyebrows, which was especially necessary because Doomguy has more of a forehead than my Eevee does. I probably could’ve exaggerated those a bit more, as well! Still, I love how they came out — especially in the simple looking-around frames, where even a two-pixel eyebrow raise is almost comically smug.

The fourth frame is a wild-ass grin (even named STFEVL0), which shows for a short time after picking up a new weapon. Come to think of it, that’s a pretty rare occurrence when playing straight through one of the Doom games; you keep your weapons between levels.

The fifth through seventh are also a set. If the player takes damage, the status bar will briefly show one of these frames to indicate where the damage is coming from. You may notice that where Doomguy bravely faces the source of the pain, I drew myself wincing and recoiling away from it.

The middle frame of that set also appears while the player is firing continuously (regardless of damage), so I couldn’t really make it match the left and right ones. I like the result anyway. It was also great fun figuring out the expressions with the mouth — that’s another place where individual pixels make a huge difference.

Finally, the eighth column is the legendary “ouch” face, which appears when the player takes more than 20 damage at once. It may look completely alien to you, because vanilla Doom has a bug that only shows this face when the player gains 20 or more health while taking damage. This is vanishingly rare (though possible!), so the frame virtually never appears in vanilla Doom. Lots of source ports have fixed this bug, making the ouch face a bit better known, but I usually play without the mugshot visible so it still looks super weird to me. I think my own spin on it is a bit less, ah, body horror?

The second row shows deterioration. It is pretty weird drawing yourself getting beaten up.

A lot of Doomguy’s deterioration is in the form of blood dripping from under his hair, which I didn’t think would translate terribly well to a character without hair. Instead, I went a little cartoony with it, adding bandages here and there. I had a little bit of a hard time with the bloodshot eyes at this resolution, which I realize as I type it is a very poor excuse when I had eyes three times bigger than Doomguy’s. I do love the drooping ears, with the possible exception of the fifth state, which I’m not sure is how that would actually look…? Oh well. I also like the bow becoming gradually unravelled, eventually falling off entirely when you die.

Oh, yes, the sixth frame there (before the gap) is actually for a dead player. Doomguy’s bleeding becomes markedly more extreme here, but again that didn’t really work for me, so I went a little sillier with it. A little. It’s still pretty weird drawing yourself dead.

That leaves only god mode, which is incredible. I love that glow. I love the faux whisker shapes it makes. I love how it fades into the background. I love that 100% pure “oh this is pretty good” smile. It all makes me want to just play Doom in god mode forever.

Now that I’ve looked closely at these sprites again, I spy a good half dozen little inconsistencies and nitpicks, which I’m going to refrain from spelling out. I did do this in only a day, and I think it came out pretty dang well considering.

Maybe I’ll try something else like this in the future. Not quite sure what, though; there aren’t many small and self-contained sets of sprites like this in Doom. Monsters are several times bigger and have a zillion different angles. Maybe some pickups, which only have one frame?


Hmm. Parting thought: I’m not quite sure where I should host this sort of one-off thing. It arguably belongs on Itch, but seems really out of place alongside entire released games. It also arguably belongs on the idgames archive, but I’m hesitant to put it there because it’s such an obscure thing of little interest to a general audience. At the moment it’s just a file I’ve uploaded to wherever on my own space, but I now have three little Doom experiments with no real permanent home.

How to Enable Caching for AWS CodeBuild

Post Syndicated from Karthik Thirugnanasambandam original https://aws.amazon.com/blogs/devops/how-to-enable-caching-for-aws-codebuild/

AWS CodeBuild is a fully managed build service. There are no servers to provision and scale, or software to install, configure, and operate. You just specify the location of your source code, choose your build settings, and CodeBuild runs build scripts for compiling, testing, and packaging your code.

A typical application build process includes phases like preparing the environment, updating the configuration, downloading dependencies, running unit tests, and finally, packaging the built artifact.

Downloading dependencies is a critical phase in the build process. These dependent files can range in size from a few KBs to multiple MBs. Because most of the dependent files do not change frequently between builds, you can noticeably reduce your build time by caching dependencies.

In this post, I will show you how to enable caching for AWS CodeBuild.

Requirements

  • Create an Amazon S3 bucket for storing cache archives (you can use an existing S3 bucket as well).
  • Create a GitHub account (if you don’t have one).

Create a sample build project:

1. Open the AWS CodeBuild console at https://console.aws.amazon.com/codebuild/.

2. If a welcome page is displayed, choose Get started.

If a welcome page is not displayed, on the navigation pane, choose Build projects, and then choose Create project.

3. On the Configure your project page, for Project name, type a name for this build project. Build project names must be unique across each AWS account.

4. In Source: What to build, for Source provider, choose GitHub.

5. In Environment: How to build, for Environment image, select Use an image managed by AWS CodeBuild.

  • For Operating system, choose Ubuntu.
  • For Runtime, choose Java.
  • For Version, choose aws/codebuild/java:openjdk-8.
  • For Build specification, select Insert build commands.

Note: The build specification file (buildspec.yml) can be configured in two ways. You can package it along with your source root directory, or you can override it by using a project environment configuration. In this example, I will use the override option and will use the console editor to specify the build specification.

6. Under Build commands, click Switch to editor to enter the build specification.

Copy the following text.

version: 0.2

phases:
  build:
    commands:
      - mvn install
      
cache:
  paths:
    - '/root/.m2/**/*'

Note: The cache section in the build specification instructs AWS CodeBuild about the paths to be cached. Like the artifacts section, the cache paths are relative to $CODEBUILD_SRC_DIR and specify the directories to be cached. In this example, Maven stores the downloaded dependencies to the /root/.m2/ folder, but other tools use different folders. For example, pip uses the /root/.cache/pip folder, and Gradle uses the /root/.gradle/caches folder. You might need to configure the cache paths based on your language platform.
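
For instance, a buildspec for a pip-based Python project (running on a Python build image rather than the Java image used above) follows the same pattern, with the cache section pointed at pip’s cache folder; the commands here are illustrative:

version: 0.2

phases:
  build:
    commands:
      - pip install -r requirements.txt
      - python -m pytest

cache:
  paths:
    # pip stores downloaded packages here; a Gradle project would cache '/root/.gradle/caches/**/*' instead
    - '/root/.cache/pip/**/*'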

7. In Artifacts: Where to put the artifacts from this build project:

  • For Type, choose No artifacts.

8. In Cache:

  • For Type, choose Amazon S3.
  • For Bucket, choose your S3 bucket.
  • For Path prefix, type cache/archives/

9. In Service role, the Create a service role in your account option will display a default role name. You can accept the default name or type your own.

If you already have an AWS CodeBuild service role, choose Choose an existing service role from your account.

10. Choose Continue.

11. On the Review page, to run a build, choose Save and build.

Review build and cache behavior:

Let us review our first build for the project.

In the first run, where no cache exists, note the overall build time, and in particular the durations of the DOWNLOAD_SOURCE, BUILD, and POST_BUILD phases.

If you check the build logs, you will see log entries for dependency downloads. The dependencies are downloaded directly from configured external repositories. At the end of the log, you will see an entry for the cache uploaded to your S3 bucket.

Let’s review the S3 bucket for the cached archive. You’ll see the cache from our first successful build is uploaded to the configured S3 path.

Let’s try another build with the same CodeBuild project. This time the build should pick up the dependencies from the cache.

In the second run, there was a cache hit (the cache was generated by the first run).

You’ll notice a few things:

  1. DOWNLOAD_SOURCE took slightly longer, because in addition to the source code, the build also downloaded the cache from your S3 bucket.
  2. BUILD time was faster, because the dependencies didn’t need to be downloaded; they were reused from the cache.
  3. POST_BUILD took slightly longer, but was roughly the same.

Overall, build duration was improved with cache.

Best practices for cache

  • By default, the cache archive is encrypted on the server side with the customer’s artifact KMS key.
  • You can expire the cache by manually removing the cache archive from S3. Alternatively, you can expire the cache by using an S3 lifecycle policy.
  • You can override cache behavior by updating the project. You can use the AWS CodeBuild console, AWS CLI, or AWS SDKs to update the project. You can also invalidate the cache by using the new InvalidateProjectCache API. This API forces a new InvalidationKey to be generated, ensuring that future builds receive an empty cache. This API does not remove the existing cache, because this could cause inconsistencies with builds currently in flight. (See the CLI sketch after this list.)
  • The cache can be enabled for any folders in the build environment, but we recommend you only cache dependencies/files that will not change frequently between builds. Also, to avoid unexpected application behavior, don’t cache configuration and sensitive information.
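
As a sketch of the expiration and invalidation options above, the following AWS CLI commands add a seven-day lifecycle rule for the cache prefix and force the next build to start with an empty cache (the bucket and project names are hypothetical):

# Expire cache archives under the configured path prefix after 7 days (bucket name is hypothetical)
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-codebuild-cache-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-codebuild-cache",
      "Filter": {"Prefix": "cache/archives/"},
      "Status": "Enabled",
      "Expiration": {"Days": 7}
    }]
  }'

# Force future builds of the project to start with an empty cache (project name is hypothetical)
aws codebuild invalidate-project-cache --project-name my-sample-build-project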

Conclusion

In this blog post, I showed you how to enable and configure caching for AWS CodeBuild. As you can see, this can save considerable build time. It also improves resiliency by avoiding external network connections to an artifact repository.

I hope you found this post useful. Feel free to leave your feedback or suggestions in the comments.

Pirate Site Owner Found Guilty, But He Can Keep The Profits

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-site-owner-found-guilty-can-keep-profits/

Traditionally, Sweden has been rather tough on people who operate file-sharing sites, with The Pirate Bay case as the prime example.

In 2009, four people connected to the torrent site were found guilty of assisting copyright infringement. They all received stiff prison sentences and millions of dollars in fines.

The guilty sentence was upheld in an appeal. While the prison terms of Peter Sunde, Fredrik Neij and Carl Lundström were reduced to eight, ten and four months respectively, the fines swelled to $6.5 million.

This week another torrent-related file-sharing case concluded in Sweden, but with an entirely different outcome. IDG reports that the 47-year-old operator of Filmfix was sentenced to 120 hours of community service.

Filmfix.se offered community-curated links to a wide variety of pirated content hosted by external sources, including torrent sites. The operator charged users 10 Swedish krona per month to access the service, which is a little over a dollar at the current exchange rate.

With thousands of users, Filmfix provided a decent income. The site was active for more than six years and between April 2012 and October 2013 alone it generated over $88,000 in revenue. Interestingly, the court decided that the operator can keep this money.

Filmfix

While the District Court convicted the man for facilitating copyright infringement, there was no direct link between the subscription payments and pirated downloads. The paying members also had access to other unrelated features, such as the forums and chat.

Henrik Pontén, head of the local Rights Alliance, which reported the site to the police, stated that copyright holders have not demanded any damages. They may, however, launch a separate civil lawsuit in the future.

The man’s partner, who was suspected of helping out and owned the company that received Filmfix’s money, was acquitted entirely by the District Court.

The 120 hours of community service stand in stark contrast to the prison sentences and millions of dollars in fines in The Pirate Bay case, despite there being quite a few similarities. Both sites relied on content uploaded by third parties and didn’t host any infringing files directly.

The lower sentence may in part be due to a fresh Supreme Court ruling in Sweden. In the case against an operator of the now-defunct private torrent tracker Swepirate, the Court recently ruled that prison sentences should not automatically be presumed in file-sharing cases.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Event-Driven Computing with Amazon SNS and AWS Compute, Storage, Database, and Networking Services

Post Syndicated from Christie Gifrin original https://aws.amazon.com/blogs/compute/event-driven-computing-with-amazon-sns-compute-storage-database-and-networking-services/

Contributed by Otavio Ferreira, Manager, Software Development, AWS Messaging

Like other developers around the world, you may be tackling increasingly complex business problems. A key success factor, in that case, is the ability to break down a large project scope into smaller, more manageable components. A service-oriented architecture guides you toward designing systems as a collection of loosely coupled, independently scaled, and highly reusable services. Microservices take this even further. To improve performance and scalability, they promote fine-grained interfaces and lightweight protocols.

However, the communication among isolated microservices can be challenging. Services are often deployed onto independent servers and don’t share any compute or storage resources. Also, you should avoid hard dependencies among microservices, to preserve maintainability and reusability.

If you apply the pub/sub design pattern, you can effortlessly decouple and independently scale out your microservices and serverless architectures. A pub/sub messaging service, such as Amazon SNS, promotes event-driven computing that statically decouples event publishers from subscribers, while dynamically allowing for the exchange of messages between them. An event-driven architecture also introduces the responsiveness needed to deal with complex problems, which are often unpredictable and asynchronous.

What is event-driven computing?

Given the context of microservices, event-driven computing is a model in which subscriber services automatically perform work in response to events triggered by publisher services. This paradigm can be applied to automate workflows while decoupling the services that collectively and independently work to fulfil these workflows. Amazon SNS is an event-driven computing hub, in the AWS Cloud, that has native integration with several AWS publisher and subscriber services.

Which AWS services publish events to SNS natively?

Several AWS services have been integrated as SNS publishers and, therefore, can natively trigger event-driven computing for a variety of use cases. In this post, I specifically cover AWS compute, storage, database, and networking services, as described below.

Compute services

  • Auto Scaling: Helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You can configure Auto Scaling lifecycle hooks to trigger events, as Auto Scaling resizes your EC2 cluster. As an example, you may want to warm up the local cache store on newly launched EC2 instances, and also download log files from other EC2 instances that are about to be terminated. To make this happen, set an SNS topic as your Auto Scaling group’s notification target, then subscribe two Lambda functions to this SNS topic. The first function is responsible for handling scale-out events (to warm up cache upon provisioning), whereas the second is in charge of handling scale-in events (to download logs upon termination). A sketch of this pattern follows this list.

  • AWS Elastic Beanstalk: An easy-to-use service for deploying and scaling web applications and web services developed in a number of programming languages. You can configure event notifications for your Elastic Beanstalk environment so that notable events can be automatically published to an SNS topic, then pushed to topic subscribers. As an example, you may use this event-driven architecture to coordinate your continuous integration pipeline (such as Jenkins CI). That way, whenever an environment is created, Elastic Beanstalk publishes this event to an SNS topic, which triggers a subscribing Lambda function, which then kicks off a CI job against your newly created Elastic Beanstalk environment.

  • Elastic Load Balancing: Automatically distributes incoming application traffic across Amazon EC2 instances, containers, or other resources identified by IP addresses. You can configure CloudWatch alarms on Elastic Load Balancing metrics, to automate the handling of events derived from Classic Load Balancers. As an example, you may leverage this event-driven design to automate latency profiling in an Amazon ECS cluster behind a Classic Load Balancer. In this example, whenever your ECS cluster breaches your load balancer latency threshold, an event is posted by CloudWatch to an SNS topic, which then triggers a subscribing Lambda function. This function runs a task on your ECS cluster to trigger a latency profiling tool, hosted on the cluster itself. This can enhance your latency troubleshooting exercise by making it timely.
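
To make the Auto Scaling example above concrete, here is a minimal sketch of a Lambda handler subscribed to the lifecycle-hook SNS topic. The example above describes two separate Lambda functions; for brevity, this sketch folds both branches into a single handler, and the cache-warming and log-collection helpers are hypothetical placeholders:

import json
import boto3

autoscaling = boto3.client('autoscaling')

def warm_cache(instance_id):
    # Hypothetical: prime the local cache on the newly launched instance.
    print(f"Warming cache on {instance_id}")

def collect_logs(instance_id):
    # Hypothetical: pull log files off the instance before it is terminated.
    print(f"Collecting logs from {instance_id}")

def handler(event, context):
    # SNS delivers the lifecycle hook notification as a JSON string in the message body.
    message = json.loads(event['Records'][0]['Sns']['Message'])
    transition = message.get('LifecycleTransition')
    instance_id = message.get('EC2InstanceId')

    if transition == 'autoscaling:EC2_INSTANCE_LAUNCHING':
        warm_cache(instance_id)      # scale-out event
    elif transition == 'autoscaling:EC2_INSTANCE_TERMINATING':
        collect_logs(instance_id)    # scale-in event
    else:
        return  # e.g. test notifications

    # Tell Auto Scaling the hook work is done so the instance can proceed.
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=message['LifecycleHookName'],
        AutoScalingGroupName=message['AutoScalingGroupName'],
        LifecycleActionToken=message['LifecycleActionToken'],
        LifecycleActionResult='CONTINUE',
    )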

Storage services

  • Amazon S3: Object storage built to store and retrieve any amount of data. You can enable S3 event notifications, and automatically get them posted to SNS topics, to automate a variety of workflows. For instance, imagine that you have an S3 bucket to store incoming resumes from candidates, and a fleet of EC2 instances to encode these resumes from their original format (such as Word or text) into a portable format (such as PDF). In this example, whenever new files are uploaded to your input bucket, S3 publishes these events to an SNS topic, which in turn pushes these messages into subscribing SQS queues. Then, encoding workers running on EC2 instances poll these messages from the SQS queues; retrieve the original files from the input S3 bucket; encode them into PDF; and finally store them in an output S3 bucket. A sketch of such an encoding worker follows this list.

  • Amazon EFS: Provides simple and scalable file storage, for use with Amazon EC2 instances, in the AWS Cloud. You can configure CloudWatch alarms on EFS metrics, to automate the management of your EFS systems. For example, consider a highly parallelized genomics analysis application that runs against an EFS system. By default, this file system is instantiated on the “General Purpose” performance mode. Although this performance mode allows for lower latency, it might eventually impose a scaling bottleneck. Therefore, you may leverage an event-driven design to handle it automatically. Basically, as soon as the EFS metric “Percent I/O Limit” breaches 95%, CloudWatch could post this event to an SNS topic, which in turn would push this message into a subscribing Lambda function. This function automatically creates a new file system, this time on the “Max I/O” performance mode, then switches the genomics analysis application to this new file system. As a result, your application starts experiencing higher I/O throughput rates.

  • Amazon Glacier: A secure, durable, and low-cost cloud storage service for data archiving and long-term backup. You can set a notification configuration on an Amazon Glacier vault so that when a job completes, a message is published to an SNS topic. Retrieving an archive from Amazon Glacier is a two-step asynchronous operation, in which you first initiate a job, and then download the output after the job completes. Therefore, SNS helps you eliminate polling your Amazon Glacier vault to check whether your job has been completed, or not. As usual, you may subscribe SQS queues, Lambda functions, and HTTP endpoints to your SNS topic, to be notified when your Amazon Glacier job is done.

  • AWS Snowball: A petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data. You can leverage Snowball notifications to automate workflows related to importing data into and exporting data from AWS. More specifically, whenever your Snowball job status changes, Snowball can publish this event to an SNS topic, which in turn can broadcast the event to all its subscribers. As an example, imagine a Geographic Information System (GIS) that distributes high-resolution satellite images to users via Web browser. In this example, the GIS vendor could capture up to 80 TB of satellite images; create a Snowball job to import these files from an on-premises system to an S3 bucket; and provide an SNS topic ARN to be notified upon job status changes in Snowball. After Snowball changes the job status from “Importing” to “Completed”, Snowball publishes this event to the specified SNS topic, which delivers this message to a subscribing Lambda function, which finally creates a CloudFront web distribution for the target S3 bucket, to serve the images to end users.
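
A stripped-down version of the resume-encoding worker from the S3 example above might look like this (the queue URL, bucket names, and the conversion step are all hypothetical placeholders):

import json
import shutil
import boto3

s3 = boto3.client('s3')
sqs = boto3.client('sqs')

QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/resume-encoding'  # hypothetical
OUTPUT_BUCKET = 'resumes-output'                                                # hypothetical

def encode_to_pdf(path):
    # Placeholder for the real Word/text-to-PDF conversion step.
    out = path + '.pdf'
    shutil.copyfile(path, out)
    return out

def poll_once():
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get('Messages', []):
        envelope = json.loads(msg['Body'])          # SNS notification wrapping the S3 event
        s3_event = json.loads(envelope['Message'])
        for record in s3_event.get('Records', []):
            bucket = record['s3']['bucket']['name']
            key = record['s3']['object']['key']
            local_path = '/tmp/' + key.split('/')[-1]
            s3.download_file(bucket, key, local_path)
            s3.upload_file(encode_to_pdf(local_path), OUTPUT_BUCKET, key + '.pdf')
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg['ReceiptHandle'])

if __name__ == '__main__':
    while True:
        poll_once()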

Database services

  • Amazon RDS: Makes it easy to set up, operate, and scale a relational database in the cloud. RDS leverages SNS to broadcast notifications when RDS events occur. As usual, these notifications can be delivered via any protocol supported by SNS, including SQS queues, Lambda functions, and HTTP endpoints. As an example, imagine that you own a social network website that has experienced organic growth, and needs to scale its compute and database resources on demand. In this case, you could provide an SNS topic to listen to RDS DB instance events. When the “Low Storage” event is published to the topic, SNS pushes this event to a subscribing Lambda function, which in turn leverages the RDS API to increase the storage capacity allocated to your DB instance. The provisioning itself takes place within the specified DB maintenance window. A sketch of such a function follows this list.

  • Amazon ElastiCache: A web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. ElastiCache can publish messages using Amazon SNS when significant events happen on your cache cluster. This feature can be used to refresh the list of servers on client machines connected to individual cache node endpoints of a cache cluster. For instance, an ecommerce website fetches product details from a cache cluster, with the goal of offloading a relational database and speeding up page load times. Ideally, you want to make sure that each web server always has an updated list of cache servers to which to connect. To automate this node discovery process, you can get your ElastiCache cluster to publish events to an SNS topic. Thus, when ElastiCache event “AddCacheNodeComplete” is published, your topic then pushes this event to all subscribing HTTP endpoints that serve your ecommerce website, so that these HTTP servers can update their list of cache nodes.

  • Amazon Redshift: A fully managed data warehouse that makes it simple to analyze data using standard SQL and BI (Business Intelligence) tools. Amazon Redshift uses SNS to broadcast relevant events so that data warehouse workflows can be automated. As an example, imagine a news website that sends clickstream data to a Kinesis Firehose stream, which then loads the data into Amazon Redshift, so that popular news and reading preferences might be surfaced on a BI tool. At some point though, this Amazon Redshift cluster might need to be resized, and the cluster enters a read-only mode. Hence, this Amazon Redshift event is published to an SNS topic, which delivers this event to a subscribing Lambda function, which finally deletes the corresponding Kinesis Firehose delivery stream, so that clickstream data uploads can be put on hold. At a later point, after Amazon Redshift publishes the event that the maintenance window has been closed, SNS notifies a subscribing Lambda function accordingly, so that this function can re-create the Kinesis Firehose delivery stream, and resume clickstream data uploads to Amazon Redshift.

  • AWS DMS: Helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. DMS also uses SNS to provide notifications when DMS events occur, which can automate database migration workflows. As an example, you might create data replication tasks to migrate an on-premises MS SQL database, composed of multiple tables, to MySQL. Thus, if replication tasks fail due to incompatible data encoding in the source tables, these events can be published to an SNS topic, which can push these messages into a subscribing SQS queue. Then, encoders running on EC2 can poll these messages from the SQS queue, encode the source tables into a compatible character set, and restart the corresponding replication tasks in DMS. This is an event-driven approach to a self-healing database migration process.
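
As a rough sketch of the RDS example above (the event message fields are simplified assumptions, and the 1.5x growth policy is hypothetical, not production guidance), the subscribing Lambda function could look like this:

import json
import boto3

rds = boto3.client('rds')

def handler(event, context):
    message = json.loads(event['Records'][0]['Sns']['Message'])
    # RDS event notifications include the source instance and an event description;
    # treat the field names and the text match below as an illustrative sketch.
    if 'low storage' not in message.get('Event Message', '').lower():
        return

    instance_id = message['Source ID']
    current = rds.describe_db_instances(DBInstanceIdentifier=instance_id)['DBInstances'][0]
    new_size = int(current['AllocatedStorage'] * 1.5)  # hypothetical growth policy

    rds.modify_db_instance(
        DBInstanceIdentifier=instance_id,
        AllocatedStorage=new_size,
        ApplyImmediately=False,  # defer to the DB maintenance window, as described above
    )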

Networking services

  • Amazon Route 53: A highly available and scalable cloud-based DNS (Domain Name System). Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. You can set CloudWatch alarms and get automated Amazon SNS notifications when the status of your Route 53 health check changes. As an example, imagine an online payment gateway that reports the health of its platform to merchants worldwide, via a status page. This page is hosted on EC2 and fetches platform health data from DynamoDB. In this case, you could configure a CloudWatch alarm for your Route 53 health check, so that when the alarm threshold is breached, and the payment gateway is no longer considered healthy, then CloudWatch publishes this event to an SNS topic, which pushes this message to a subscribing Lambda function, which finally updates the DynamoDB table that populates the status page. This event-driven approach avoids any kind of manual update to the status page visited by merchants. A sketch of this status-page function follows this list.

  • AWS Direct Connect (AWS DX): Makes it easy to establish a dedicated network connection from your premises to AWS, which can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. You can monitor physical DX connections using CloudWatch alarms, and send SNS messages when alarms change their status. As an example, when a DX connection state shifts to 0 (zero), indicating that the connection is down, this event can be published to an SNS topic, which can fan out this message to impacted servers through HTTP endpoints, so that they might reroute their traffic through a different connection instead. This is an event-driven approach to connectivity resilience.
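
For the Route 53 example above, a minimal sketch of the status-page Lambda function might look like the following (the DynamoDB table name and item schema are hypothetical):

import json
import boto3

table = boto3.resource('dynamodb').Table('platform-status')  # hypothetical table

def handler(event, context):
    # CloudWatch alarms deliver their state change as a JSON string via SNS.
    alarm = json.loads(event['Records'][0]['Sns']['Message'])
    healthy = alarm.get('NewStateValue') == 'OK'  # ALARM means the health check failed

    table.update_item(
        Key={'component': 'payment-gateway'},     # hypothetical partition key
        UpdateExpression='SET healthy = :h, updated_at = :t',
        ExpressionAttributeValues={
            ':h': healthy,
            ':t': alarm.get('StateChangeTime', ''),
        },
    )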

More event-driven computing on AWS

In addition to SNS, event-driven computing is also addressed by Amazon CloudWatch Events, which delivers a near real-time stream of system events that describe changes in AWS resources. With CloudWatch Events, you can route each event type to one or more targets, including:

Many AWS services publish events to CloudWatch. As an example, you can get CloudWatch Events to capture events on your ETL (Extract, Transform, Load) jobs running on AWS Glue and push failed ones to an SQS queue, so that you can retry them later.
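
As a sketch of that AWS Glue example (the rule name, target ID, and queue ARN are hypothetical, and the SQS queue would also need a policy allowing CloudWatch Events to send messages to it):

import json
import boto3

events = boto3.client('events')

# Match failed Glue job runs.
events.put_rule(
    Name='glue-etl-failures',  # hypothetical rule name
    EventPattern=json.dumps({
        'source': ['aws.glue'],
        'detail-type': ['Glue Job State Change'],
        'detail': {'state': ['FAILED']},
    }),
)

# Send matching events to an SQS queue for later retry.
events.put_targets(
    Rule='glue-etl-failures',
    Targets=[{
        'Id': 'retry-queue',
        'Arn': 'arn:aws:sqs:us-east-1:123456789012:glue-retry',  # hypothetical queue ARN
    }],
)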

Conclusion

Amazon SNS is a pub/sub messaging service that can be used as an event-driven computing hub to AWS customers worldwide. By capturing events natively triggered by AWS services, such as EC2, S3 and RDS, you can automate and optimize all kinds of workflows, namely scaling, testing, encoding, profiling, broadcasting, discovery, failover, and much more. Business use cases presented in this post ranged from recruiting websites, to scientific research, geographic systems, social networks, retail websites, and news portals.

Start now by visiting Amazon SNS in the AWS Management Console, or by trying the AWS 10-Minute Tutorial, Send Fan-out Event Notifications with Amazon SNS and Amazon SQS.

 

B2 Cloud Storage Roundup

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/b2-cloud-storage-roundup/

B2 Integrations
Over the past several months, B2 Cloud Storage has continued to grow like we planted magic beans. During that time we have added a B2 Java SDK, and certified integrations with GoodSync, Arq, Panic, UpdraftPlus, Morro Data, QNAP, Archiware, Restic, and more. In addition, B2 customers like Panna Cooking, Sermon Audio, and Fellowship Church are happy they chose B2 as their cloud storage provider. If any of that sounds interesting, read on.

The B2 Java SDK

While the Backblaze B2 API is well documented and straightforward to implement, we were asked by a few of our Integration Partners if we had an SDK they could use. So we developed one as an open-source project on GitHub, where we hope interested parties will not only use our Java SDK, but make it better for everyone else.

There are different reasons one might use the Java SDK, but a couple of areas where the SDK can simplify the coding process are:

Expiring Authorization — B2 requires the authorization for a given account to be renewed about once a day when using the API. If the authorization expires while you are in the middle of transferring files or some other B2 activity (listing buckets, etc.), the SDK can detect this and renew the authorization on the fly. Your B2-related activities will continue without incident and without you having to capture and code your own exception case.

Error Handling — There are different types of error codes B2 will return, from expired application keys to detecting malformed requests to command time-outs. The SDK can dramatically simplify the coding needed to capture and account for the various things that can happen.
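
As a rough, unofficial illustration of these two points, here is a sketch against the raw B2 HTTP API in Python (not the Java SDK itself); the endpoint version, error-code check, and single-retry policy are simplified assumptions:

import requests

B2_AUTH_URL = 'https://api.backblazeb2.com/b2api/v2/b2_authorize_account'

def authorize(key_id, app_key):
    # Re-authorizing returns a fresh API URL and authorization token.
    resp = requests.get(B2_AUTH_URL, auth=(key_id, app_key))
    resp.raise_for_status()
    data = resp.json()
    return data['apiUrl'], data['authorizationToken']

def b2_call(api_url, token, operation, payload, key_id, app_key):
    resp = requests.post(f'{api_url}/b2api/v2/{operation}',
                         headers={'Authorization': token}, json=payload)
    if resp.status_code == 401 and resp.json().get('code') == 'expired_auth_token':
        # Authorization expired mid-session: re-authorize and retry once (sketch policy).
        api_url, token = authorize(key_id, app_key)
        resp = requests.post(f'{api_url}/b2api/v2/{operation}',
                             headers={'Authorization': token}, json=payload)
    resp.raise_for_status()
    return resp.json()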

While Backblaze has created the Java SDK, developers in the GitHub community have also created other SDKs for B2, for example for PHP (https://github.com/cwhite92/b2-sdk-php) and Go (https://github.com/kurin/blazer). Let us know in the comments about other SDKs you’d like to see, or perhaps start your own GitHub project. We will publish any updates in our next B2 roundup.

What You Can Do with Affordable and Available Cloud Storage

You’re probably aware that B2 is up to 75% less expensive than other similar cloud storage services like Amazon S3 and Microsoft Azure. Businesses and organizations are finding that projects that previously weren’t economically feasible with other Cloud Storage services are now not only possible, but a reality with B2. Here are a few recent examples:

SermonAudio wanted their media files to be readily available, but didn’t want to build and manage their own internal storage farm. Until B2, cloud storage was just too expensive to use. Now they use B2 to store their audio and video files, and also as the primary source of downloads and streaming requests from their subscribers.
Fellowship Church wanted to escape from the ever increasing amount of time they were spending saving their data to their LTO-based system. Using B2 saved countless hours of personnel time versus LTO, fit easily into their video processing workflow, and provided instant access at any time to their media library.
Panna Cooking replaced their closet full of archive hard drives with a cost-efficient hybrid-storage solution combining 45Drives and Backblaze B2 Cloud Storage. Archived media files that used to take hours to locate are now readily available regardless of whether they reside in local storage or in the B2 Cloud.

B2 Integrations

Leading companies in backup, archive, and sync continue to add B2 Cloud Storage as a storage destination for their customers. These companies realize that by offering B2 as an option, they can dramatically lower the total cost of ownership for their customers — and that’s always a good thing.

If your favorite application is not integrated to B2, you can do something about it. One integration partner told us they received over 200 customer requests for a B2 integration. The partner got the message and the integration is currently in beta test.

Below are some of the partner integrations completed in the past few months. You can check the B2 Partner Integrations page for a complete list.

Archiware — Both P5 Archive and P5 Backup can now store data in the B2 Cloud making your offsite media files readily available while keeping your off-site storage costs predictable and affordable.

Arq — Combine Arq and B2 for amazingly affordable backup of external drives, network drives, NAS devices, Windows PCs, Windows Servers, and Macs to the cloud.

GoodSync — Automatically synchronize and back up all your photos, music, email, and other important files between all your desktops, laptops, servers, external drives, and sync, or back up to B2 Cloud Storage for off-site storage.

QNAP — QNAP Hybrid Backup Sync consolidates backup, restoration, and synchronization functions into a single QTS application to easily transfer your data to local, remote, and cloud storage.

Morro Data — Their CloudNAS solution stores files in the cloud, caches them locally as needed, and syncs files globally among other CloudNAS systems in an organization.

Restic — Restic is a fast, secure, multi-platform command-line backup program. Files are uploaded to a B2 bucket as de-duplicated, encrypted chunks. Each backup is a snapshot of only the data that has changed, making restores of a specific date or time easy.

Transmit 5 by Panic — Transmit 5, the gold standard for macOS file transfer apps, now supports B2. Upload, download, and manage files on tons of servers with an easy, familiar, and powerful UI.

UpdraftPlus — WordPress developers and admins can now use the UpdraftPlus Premium WordPress plugin to affordably back up their data to the B2 Cloud.

Getting Started with B2 Cloud Storage

If you’re using B2 today, thank you. If you’d like to try B2, but don’t know where to start, here’s a guide to getting started with the B2 Web Interface — no programming or scripting is required. You get 10 gigabytes of free storage and 1 gigabyte a day in free downloads. Give it a try.

The post B2 Cloud Storage Roundup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Facebook Fingerprinting Photos to Prevent Revenge Porn

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/facebook_finger.html

This is a pilot project in Australia:

Individuals who have shared intimate, nude or sexual images with partners and are worried that the partner (or ex-partner) might distribute them without their consent can use Messenger to send the images to be “hashed.” This means that the company converts the image into a unique digital fingerprint that can be used to identify and block any attempts to re-upload that same image.

I’m not sure I like this. It doesn’t prevent revenge porn in general; it only prevents the same photos being uploaded to Facebook in particular. And it requires the person to send Facebook copies of all their intimate photos.

Facebook will store these images for a short period of time before deleting them to ensure it is enforcing the policy correctly, the company said.

At least there’s that.

More articles.

Book Author Trolled Pirates With Fake Leak to Make a Point

Post Syndicated from Ernesto original https://torrentfreak.com/book-author-trolled-pirates-with-fake-leak-to-make-a-point-171104/

When it comes to how piracy affects sales, there are thousands of different opinions. This applies to music, movies, software and many other digital products, including ebooks.

When we interviewed Paulo Coelho nearly ten years ago, he pointed out how piracy helped him to sell more books. While a lot has changed since then, he still sees the benefits of piracy today.

However, for many other authors, piracy is a menace. They cringe at the sight of their book being shared online and believe that hurts their bottom line. This includes Maggie Stiefvater, who’s known for The Raven Cycle books, among others.

This week she responded to a tweet from a self-confessed pirate, stating that piracy got the box set of the Raven Cycle canceled. As is usual on social media, it quickly turned into a mess.

Instead of debating the controversial issue indefinitely in 140 character tweets, Stiefvater did what authors do best. She put her thoughts on paper. In a Tumblr post, she countered the belief that piracy doesn’t hurt authors and that pirates wouldn’t pay for a book anyway.

The story shared by Stiefvater isn’t hypothetical, it’s real-world experience. She had noticed that the third book in the Raven Cycle wasn’t doing as well as earlier editions. While this is not uncommon for a series, the sales drop was not equal across all formats, but mostly driven by a lack of eBook sales.

While her publisher wasn't certain that piracy was to blame, Stiefvater was convinced it played an important role. After all, the interest in her book tours was growing and there was plenty of talk about the books online as well. So when the publisher said that the print run of her new book The Raven King would be cut in half compared to a previous release, she came up with a plan.

Instead of trying to take all pirated copies down following the new release, she created her own, with help from her brother. But one with a twist.

“It was impossible to take down every illegal pdf; I’d already seen that. So we were going to do the opposite. We created a pdf of the Raven King. It was the same length as the real book, but it was just the first four chapters over and over again,” Stiefvater writes.

“I knew we wouldn’t be able to hold the fort for long — real versions would slowly get passed around by hand through forum messaging — but I told my brother: I want to hold the fort for one week. Enough to prove a point. Enough to show everyone that this is no longer 2004. This is the smart phone generation, and a pirated book sometimes is a lost sale.”

And so it happened. When the book came out April last year, customized pirated copies were planted all over the Internet by the author’s brother. People were stumbling all over them, making it near impossible to find a real pirated copy.

“He uploaded dozens and dozens and dozens of these pdfs of The Raven King. You couldn’t throw a rock without hitting one of his pdfs. We sailed those epub seas with our own flag shredding the sky.”

This paid off. Many people could only find the “troll” copies and saw no other option than to buy the real deal.

“The effects were instant. The forums and sites exploded with bewildered activity. Fans asked if anyone had managed to find a link to a legit pdf. Dozens of posts appeared saying that since they hadn’t been able to find a pdf, they’d been forced to hit up Amazon and buy the book.”

As a result, the first print of the book sold out in two days. Stiefvater was on tour and at some stores she visited, the books were no longer available. The publisher had to print more and more until… the inevitable happened.

“Then the pdfs hit the forums and e-sales sagged and it was business as usual, but it didn’t matter: I’d proven the point. Piracy has consequences,” Stiefvater writes, summarizing the moral of her story.

While this is unlikely to change the minds of undeterred pirates, it might strike a chord with some people.

Of course, Stiefvater's anecdote is no better than Coelho's, who argued the opposite in the past. Perhaps the real takeaway is that piracy doesn't have any fixed effects, and it certainly can't be captured in one-liners either. It's a complex puzzle of dozens of constantly changing factors, which will likely never be solved.

Maggie Stiefvater’s full Tumblr post is a recommended read and can be found here, or below.

http://maggie-stiefvater.tumblr.com/post/166952028861/ive-decided-to-tell-you-guys-a-story-about

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Prank your friends with the WhooPi Cushion

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/whoopi-cushion/

Learn about using switches and programming GPIO pins while you prank your friends with the WhooPi Cushion, a Raspberry Pi-powered whoopee cushion!

Whoopee cushion PRANK with a Raspberry Pi: HOW-TO

Explore the world of Raspberry Pi physical computing with our free FutureLearn courses (http://rpf.io/futurelearn), and find our free make-your-own WhooPi Cushion resource at http://rpf.io/whoopi. For more information on Raspberry Pi and the charitable work of the Raspberry Pi Foundation, including Code Club and CoderDojo, visit http://rpf.io. Our resources are free to use in schools, clubs, at home and at events.

The WhooPi Cushion

You might remember Carrie Anne and me showing off the WhooPi Cushion live on Facebook last year. The project was created as a simple proof of concept during a Pi Towers maker day. However, our viewers responded so enthusiastically that we set about putting together a how-to resource for it.

A cartoon of a man sitting on a whoopee cushion - Raspberry Pi WhooPi Cushion Resource

When we made the resource available, it turned out to be so popular that we decided to include the project in one of our first FutureLearn courses and produced a WhooPi Cushion video tutorial to go with it.

A screen shot from our Raspberry Pi WhooPi Cushion Resource video

Our FutureLearn course attendees love the video, so last week we uploaded it to YouTube! Now everyone can follow along with James Robinson to make their own WhooPi Cushion out of easy-to-gather household items such as tinfoil, paper plates, and spongy material.

Build upon the WhooPi Cushion

Once you’ve completed your prank cushion, you’ll have learnt new skills that you can incorporate into other projects.

For example, you’ll know how to program an action in response to a button press — so how about playing a sound when the button is released instead? Just like that, you’ll have created a simple pressure-based alarm system. Or you could upgrade the functionality of the cushion by including a camera that takes a photo of your unwitting victim’s reaction!
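As a hedged illustration of that release-triggered idea, the sketch below uses the gpiozero library to run a function when the button is released; the GPIO pin number, the WAV file, and the use of pygame for playback are assumptions for the example rather than the resource's own code.

```python
from gpiozero import Button
from signal import pause
import pygame

# Assumed wiring: the cushion's switch is connected to GPIO 17 and GND.
button = Button(17)

pygame.mixer.init()
alarm = pygame.mixer.Sound("alarm.wav")  # any short sound file will do

def raise_alarm():
    # Fires when the victim stands up, i.e. when the switch is released.
    alarm.play()

button.when_released = raise_alarm
pause()  # keep the script running and waiting for events
```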

A cartoon showing the stages of the Raspberry Pi Digital Curriculum from Creator to Builder, Developer and Maker

Building upon your skills to increase your knowledge of programming constructs and manufacturing techniques is key to becoming a digital maker. When you use the free Raspberry Pi resources, you’re also working through our digital curriculum, which guides you on this learning journey.

FutureLearn courses for free

Our FutureLearn courses are completely free and cover a variety of topics and skills, including object-oriented programming and teaching physical computing.

A GIF of a man on an island learning with FutureLearn

Regardless of your location, you can learn with us online to improve your knowledge of teaching digital making as well as your own hands-on digital skill set.

The post Prank your friends with the WhooPi Cushion appeared first on Raspberry Pi.

Artists Highlight YouTube Piracy and Poor Payments in New Ad Campaign

Post Syndicated from Ernesto original https://torrentfreak.com/artists-highlight-youtube-piracy-and-poor-payments-in-new-ad-campaign-171026/

YouTube is the world’s leading video and music service and has partnerships with thousands of artists and other publishers around the globe.

While many are happy with the revenue they’re generating from the Google-owned platform, there has been a lot of negative commentary as well.

Several major record labels are complaining about the so-called ‘value gap‘ and the low payouts per streaming view, for example. This view is shared by the Content Creators Coalition (c3), an artist-run advocacy organization for musicians.

The group has just released two new ads calling on the streaming service to give artists more options to prevent piracy while calling on Congress to update the DMCA.

Rehashing the old Apple vs. Microsoft ad theme, the first video depicts an artist who is trying to get pirated content removed from the site. In the ad, YouTube is not particularly helpful, suggesting that pirated content is quickly re-uploaded after it’s removed.

Interestingly, there is no mention of the Content-ID program which many creators successfully use to prevent pirated content from reappearing. The vast majority (98%) of all copyright complaints are currently handled automatically through the Content-ID system.

Takedown Whack-a-Mole?

The second ad complains about poor payments. In this video, the artist gets paid more from all smaller streaming services, even though these generated only a fraction of the views compared to YouTube.

This complaint is not new either. Over the past several years, YouTube has been called out repeatedly for not paying enough. Not only that, the streaming service has also been accused of running a DMCA protection racket, profiting from pirated streams while hiding behind the DMCA’s safe harbor protections.

Pennies?

The Content Creators Coalition says that the advertisements will run on YouTube and other digital platforms as part of a significant new ad buy.

“Google’s YouTube has shortchanged artists while earning billions of dollars of our music. Artists know YouTube can do better,” c3 President and award-winning bassist Melvin Gibbs says.

“So, rather than hiding behind outdated laws, YouTube and Google should work to give artists more control over our music and pay music creators fairly when our songs are played on their platform.”

While these complaints are nothing new for YouTube, they are also intended to rally support from the public and lawmakers.

“Our ads send a message to the executives in Mountain View that artists are fighting back and mobilizing fans to push Congress to update the DMCA and end the legal neglect that has given Big Tech too much power over our work and society,” Gibbs adds.

YouTube itself paints an entirely different picture. The company previously stated that it goes above and beyond what it’s required to do by law, while paying billions to copyright holders.

“Content ID goes beyond a simple ‘notice-and-takedown’ system to provide a set of automated tools that empowers rightsholders to automatically claim their content and choose whether to track, block or monetize it on YouTube,” senior policy counsel Katherine Oyama noted.

“YouTube has paid out over $2 billion to rightsholders who have monetized their content through Content ID since it first launched. In fact, today well over 90% of all Content ID claims across the platform result in monetization.”

This music industry vs YouTube battle is far from over.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Grafana 4.6 Released

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/10/26/grafana-4.6-released/

Release Highlights

The Grafana 4.6 release contains some exciting and much-anticipated additions: annotation editing right from the graph panel, alerting for the Cloudwatch data source, a built-in PostgreSQL data source, and Prometheus query improvements.

This is a big release so check out the other features and fixes in the Changelog section below.

Annotations

Annotations provide a way to mark points on the graph with rich events. You can now add annotation events and regions right from the graph panel! Just hold CTRL/CMD and click, or drag a region, to open the Add Annotation view. The Annotations documentation has been updated with details on this exciting new feature.

Cloudwatch

Cloudwatch now supports alerting. You can set up alert rules for any Cloudwatch metric!

Postgres

Grafana v4.6 now ships with a built-in datasource plugin for PostgreSQL. Have logs or metric data in Postgres? You can now visualize that data and define alert rules on it like any of our other data sources.

Prometheus

New enhancements include support for instant queries (for a single point in time instead of a time range) and improvements to the query editor in the form of autocomplete for label names and label values.

This makes exploring and filtering Prometheus data much easier.
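For context, an instant query asks Prometheus for the value of an expression at a single evaluation timestamp rather than over a range. The sketch below, which is illustrative rather than anything Grafana-specific, issues one through Prometheus' standard HTTP API, assuming a server at localhost:9090.

```python
import requests

# Instant query: evaluate the expression at one point in time.
resp = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": "up"},  # optionally add "time": <unix timestamp>
)
result = resp.json()["data"]["result"]

for series in result:
    # Each entry carries a label set and a single [timestamp, value] sample.
    print(series["metric"], series["value"])
```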

Changelog

Here are just a few highlights from the Changelog.

New Features

  • Annotations: Add support for creating annotations from graph panel #8197
  • GCS: Adds support for Google Cloud Storage #8370 thx @chuhlomin
  • Prometheus: Adds /metrics endpoint for exposing Grafana metrics. #9187
  • Jaeger: Add support for open tracing using jaeger in Grafana. #9213
  • Unit types: New date & time unit types added, useful in singlestat to show dates & times. #3678, #6710, #2764
  • Prometheus: Add support for instant queries #5765, thx @mtanda
  • Cloudwatch: Add support for alerting using the cloudwatch datasource #8050, thx @mtanda
  • Pagerduty: Include triggering series in pagerduty notification #8479, thx @rickymoorhouse
  • Prometheus: Align $__interval with the step parameters. #9226, thx @alin-amana
  • Prometheus: Autocomplete for label name and label value #9208, thx @mtanda
  • Postgres: New Postgres data source #9209, thx @svenklemm
  • Datasources: Make datasource HTTP requests verify TLS by default. closes #9371, #5334, #8812, thx @mattbostock

Minor

  • SMTP: Make it possible to set specific HELO for smtp client. #9319
  • Alerting: Add diff and percent diff as series reducers #9386, thx @shanhuhai5739
  • Slack: Allow images to be uploaded to slack when Token is present #7175, thx @xginn8
  • Table: Add support for displaying the timestamp with milliseconds #9429, thx @s1061123
  • Hipchat: Add metrics, message and image to hipchat notifications #9110, thx @eloo
  • Kafka: Add support for sending alert notifications to kafka #7104, thx @utkarshcmu

Tech

  • Webpack: Changed from systemjs to webpack (see readme or building from source guide for new build instructions). Systemjs is still used to load plugins but now plugins can only import a limited set of dependencies. See PLUGIN_DEV.md for more details on how this can affect some plugins.

Download

Head to the v4.6 download page for download links & instructions.

Thanks

A big thanks to all the Grafana users who contribute by submitting PRs, bug reports, helping out on our community site and providing feedback!

Some notes about the Kaspersky affair

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/10/some-notes-about-kaspersky-affair.html

I thought I’d write up some notes about Kaspersky, the Russian anti-virus vendor that many believe has ties to Russian intelligence.

There are two angles to this story. One is whether the accusations are true. The second is the poor way the press has handled the story, with mainstream outlets like the New York Times more intent on pushing government propaganda than on informing us about what's going on.

The press

Before we address Kaspersky, we need to talk about how the press covers this.
The mainstream media’s stories have been pure government propaganda, like this one from the New York Times. It garbles the facts of what happened, and relies primarily on anonymous government sources that cannot be held accountable. It’s so messed up that we can’t easily challenge it because we aren’t even sure exactly what it’s claiming.
The Society of Professional Journalists has a name for this abuse of anonymous sources: the “Washington Game”. Journalists can identify this as bad journalism, but big newspapers like The New York Times continue to do it anyway, because how dare anybody criticize them?
For all that I hate the anti-American bias of The Intercept, at least they’ve had stories that de-garble what’s going on, that explain things so that we can challenge them.

Our Government

Our government can't tell us everything, of course. But at the same time, they need to tell us something, to at least be clear about what their accusations are. These vague insinuations through the media hurt their credibility, not help it. The obvious craptitude is making us in the cybersecurity community come to Kaspersky's defense, which is not the government's aim at all.
There are lots of issues involved here, but let’s consider the major one insinuated by the NYTimes story, that Kaspersky was getting “data” files along with copies of suspected malware. This is troublesome if true.
But, as Kaspersky claims today, it’s because they had detected malware within a zip file, and uploaded the entire zip — including the data files within the zip.
This is reasonable. This is indeed how anti-virus generally works. It completely defeats the NYTimes insinuations.
This isn’t to say Kaspersky is telling the truth, of course, but that’s not the point. The point is that we are getting vague propaganda from the government further garbled by the press, making Kaspersky’s clear defense the credible party in the affair.
It's certainly possible for Kaspersky to write signatures to look for strings like “TS//SI/OC/REL TO USA” that appear in secret US documents, then upload them to Russia. If that's what our government believes is happening, they need to come out and be explicit about it. They can easily set up honeypots, in the way described in today's story, to confirm it. However, it seems the government's description of honeypots is that Kaspersky only uploaded files that were clearly viruses, not data.

Kaspersky

I believe Kaspersky is guilty; that the company, and Eugene himself, works directly with Russian intelligence.
That’s because on a personal basis, people in government have given me specific, credible stories — the sort of thing they should be making public. And these stories are wholly unrelated to stories that have been made public so far.
You shouldn’t believe me, of course, because I won’t go into details you can challenge. I’m not trying to convince you, I’m just disclosing my point of view.
But there are some public reasons to doubt Kaspersky. For example, when trying to sell to our government, they’ve claimed they can help us against terrorists. The translation of this is that they could help our intelligence services. Well, if they are willing to help our intelligence services against customers who are terrorists, then why wouldn’t they likewise help Russian intelligence services against their adversaries?
Then there is how Russia works. It’s a violent country. Most of the people mentioned in that “Steele Dossier” have died. In the hacker community, hackers are often coerced to help the government. Many have simply gone missing.
Being rich doesn’t make Kaspersky immune from this — it makes him more of a target. Russian intelligence knows he’s getting all sorts of good intelligence, such as malware written by foreign intelligence services. It’s unbelievable they wouldn’t put the screws on him to get this sort of thing.
Russia is our adversary. It’d be foolish of our government to buy anti-virus from Russian companies. Likewise, the Russian government won’t buy such products from American companies.

Conclusion

I have enormous disrespect for mainstream outlets like The New York Times and the way they’ve handled the story. It makes me want to come to Kaspersky’s defense.

I have enormous respect for Kaspersky technology. They do good work.

But I hear stories. I don’t think our government should be trusting Kaspersky at all. For that matter, our government shouldn’t trust any cybersecurity products from Russia, China, Iran, etc.

Backing Up Linux to Backblaze B2 with Duplicity and Restic

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backing-linux-backblaze-b2-duplicity-restic/

Linux users have a variety of options for handling data backup. The choices range from free and open-source programs to paid commercial tools, and include applications that are purely command-line based (CLI) and others that have a graphical interface (GUI), or both.

If you take a look at our Backblaze B2 Cloud Storage Integrations page, you will see a number of offerings that enable you to back up your Linux desktops and servers to Backblaze B2. These include CloudBerry, Duplicity, Duplicacy, 45 Drives, GoodSync, HashBackup, QNAP, Restic, and Rclone, plus other choices for NAS and hybrid uses.

In this post, we’ll discuss two popular command line and open-source programs: one older, Duplicity, and a new player, Restic.

Old School vs. New School

We’re highlighting Duplicity and Restic today because they exemplify two different philosophical approaches to data backup: “Old School” (Duplicity) vs “New School” (Restic).

Old School (Duplicity)

In the old school model, data is written sequentially to the storage medium. Once a section of data is recorded, new data is written starting where that section of data ends. It’s not possible to go back and change the data that’s already been written.

This old-school model has long been associated with the use of magnetic tape, a prime example of which is the LTO (Linear Tape-Open) standard. In this “write once” model, files are always appended to the end of the tape. If a file is modified and overwritten or removed from the volume, the associated tape blocks used are not freed up: they are simply marked as unavailable, and the used volume capacity is not recovered. Data is deleted and capacity recovered only if the whole tape is reformatted. As a Linux/Unix user, you undoubtedly are familiar with the TAR archive format, which is an acronym for Tape ARchive. TAR has been around since 1979 and was originally developed to write data to sequential I/O devices with no file system of their own.
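As a tiny illustration of that append-only model, Python's standard tarfile module keeps adding members to the end of an existing archive; nothing written earlier is rewritten or reclaimed. The file names below are just examples.

```python
import tarfile

# Append mode ("a") mirrors the tape model: new members are written after
# the existing ones, and removing or replacing a file does not free the
# space its earlier copy occupies in the archive.
with tarfile.open("backup.tar", "a") as tar:   # created if it doesn't exist
    tar.add("notes.txt")            # example paths; use files that exist
    tar.add("photos/holiday.jpg")
```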

It is from the use of tape that we get the full backup/incremental backup approach to backups. A backup sequence begins with a full backup of data. Each incremental backup then contains only what has changed since the previous backup, until the next full backup is made and the process starts over, filling more and more tape or whatever medium is being used.

This is the model used by Duplicity: full and incremental backups. Duplicity backs up files by producing encrypted, digitally signed, versioned, TAR-format volumes and uploading them to a remote location, including Backblaze B2 Cloud Storage. Released under the terms of the GNU General Public License (GPL), Duplicity is free software.

With Duplicity, the first archive is a complete (full) backup, and subsequent (incremental) backups only add differences from the latest full or incremental backup. Chains consisting of a full backup and a series of incremental backups can be recovered to the point in time that any of the incremental steps were taken. If any of the incremental backups are missing, then reconstructing a complete and current backup is much more difficult and sometimes impossible.
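That dependency is easy to see in a small conceptual sketch (this is not Duplicity's own code): restoring to a point in time needs the most recent full backup plus every incremental up to that point, and a single missing link breaks the chain.

```python
# Conceptual sketch of a full + incremental chain, not Duplicity internals.
backups = [
    {"time": 1, "type": "full"},
    {"time": 2, "type": "incremental"},
    {"time": 3, "type": "incremental"},
    {"time": 4, "type": "incremental"},
]

def chain_for_restore(backups, target_time):
    """Return the backups needed to restore the state at target_time."""
    usable = [b for b in backups if b["time"] <= target_time]
    # Find the most recent full backup at or before the target time...
    fulls = [b for b in usable if b["type"] == "full"]
    if not fulls:
        raise RuntimeError("no full backup before the requested time")
    start = max(fulls, key=lambda b: b["time"])["time"]
    # ...then every incremental after it, in order. If any one of these
    # volumes is lost, the restore cannot be completed.
    return [b for b in usable if b["time"] >= start]

print(chain_for_restore(backups, target_time=3))
# -> the full backup at time 1 plus the incrementals at times 2 and 3
```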

Duplicity is available under many Unix-like operating systems (such as Linux, BSD, and Mac OS X) and ships with many popular Linux distributions including Ubuntu, Debian, and Fedora. It also can be used with Windows under Cygwin.

We recently published a KB article on How to configure Backblaze B2 with Duplicity on Linux that demonstrates how to set up Duplicity with B2 and back up and restore a directory from Linux.

New School (Restic)

With the arrival of non-sequential storage media, such as disk drives, and new ideas such as deduplication, comes the new-school approach used by Restic. Data can be written and changed anywhere on the storage medium. Much of this efficiency comes from deduplication, a process that eliminates redundant copies of data and reduces storage overhead. Data deduplication techniques ensure that only one unique instance of data is retained on storage media, greatly increasing storage efficiency and flexibility.
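In practice, deduplication of this kind amounts to splitting data into chunks, hashing each chunk, and storing a chunk only the first time its hash is seen. The simplified sketch below uses fixed-size chunks; real tools such as Restic layer content-defined chunking and encryption on top.

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB; an arbitrary example size

store = {}  # hash -> chunk bytes; stands in for the backup repository

def backup(data):
    """Split data into chunks, storing only chunks not seen before."""
    manifest = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:      # deduplication happens here
            store[digest] = chunk
        manifest.append(digest)      # enough to reassemble the file later
    return manifest

def restore(manifest):
    return b"".join(store[d] for d in manifest)
```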

Restic is a newer multi-platform command-line backup program designed to be fast, efficient, and secure. Restic supports a variety of backends for storing backups, including a local server, SFTP server, HTTP REST server, and a number of cloud storage providers, including Backblaze B2.

Files are uploaded to a B2 bucket as deduplicated, encrypted chunks. Each time a backup runs, only changed data is backed up and a snapshot is created, enabling restores to a specific date or time.

Restic assumes that the storage location for the repository is shared, so it always encrypts the backed-up data. This is in addition to any encryption and security provided by the storage provider.

Restic is free, open-source software licensed under the BSD 2-Clause License and actively developed on GitHub.

There’s a lot more you can do with Restic, including adding tags, mounting a repository locally, and scripting. To learn more, you can review the documentation at https://restic.readthedocs.io.

To coincide with this blog post, we published a KB article, How to configure Backblaze B2 with Restic on Linux, in which we show how to set up Restic for use with B2 and how to back up and restore a home directory from Linux to B2.

Which is Right for You?

While Duplicity is a popular, widely available, and useful program, many users of cloud storage services such as B2 are moving to new-school tools like Restic that take better advantage of the non-sequential access capabilities and speed of the modern storage media used by cloud storage providers.

Tell us how you’re backing up Linux

Please let us know in the comments what you’re using for Linux backups, and if you have experience using Duplicity, Restic, or other backup software with Backblaze B2.

The post Backing Up Linux to Backblaze B2 with Duplicity and Restic appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Abandon Proactive Copyright Filters, Huge Coalition Tells EU Heavyweights

Post Syndicated from Andy original https://torrentfreak.com/abandon-proactive-copyright-filters-huge-coalition-tells-eu-heavyweights-171017/

Last September, EU Commission President Jean-Claude Juncker announced plans to modernize copyright law in Europe.

The proposals (pdf) are part of the Digital Single Market reforms, which have been under development for the past several years.

One of the proposals is causing significant concern. Article 13 would require some online service providers to become ‘Internet police’, proactively detecting and filtering allegedly infringing copyright works uploaded to their platforms by users.

Currently, users are generally able to share whatever they like but should a copyright holder take exception to their upload, mechanisms are available for that content to be taken down. It’s envisioned that proactive filtering, whereby user uploads are routinely scanned and compared to a database of existing protected content, will prevent content becoming available in the first place.

These proposals are of great concern to digital rights groups, who believe that such filters will not only undermine users’ rights but will also place unfair burdens on Internet platforms, many of which will struggle to fund such a program. Yesterday, in the latest wave of opposition to Article 13, a huge coalition of international rights groups came together to underline their concerns.

Headed up by Civil Liberties Union for Europe (Liberties) and European Digital Rights (EDRi), the coalition is formed of dozens of influential groups, including Electronic Frontier Foundation (EFF), Human Rights Watch, Reporters without Borders, and Open Rights Group (ORG), to name just a few.

In an open letter to European Commission President Jean-Claude Juncker, President of the European Parliament Antonio Tajani, President of the European Council Donald Tusk and a string of others, the groups warn that the proposals undermine the trust established between EU member states.

“Fundamental rights, justice and the rule of law are intrinsically linked and constitute core values on which the EU is founded,” the letter begins.

“Any attempt to disregard these values undermines the mutual trust between member states required for the EU to function. Any such attempt would also undermine the commitments made by the European Union and national governments to their citizens.”

Those citizens, the letter warns, would have their basic rights undermined, should the new proposals be written into EU law.

“Article 13 of the proposal on Copyright in the Digital Single Market include obligations on internet companies that would be impossible to respect without the imposition of excessive restrictions on citizens’ fundamental rights,” it notes.

A major concern is that by placing new obligations on Internet service providers that allow users to upload content – think YouTube, Facebook, Twitter and Instagram – they will be forced to err on the side of caution. Should there be any concern whatsoever that content might be infringing, fair use considerations and exceptions will be abandoned in favor of staying on the right side of the law.

“Article 13 appears to provoke such legal uncertainty that online services will have no other option than to monitor, filter and block EU citizens’ communications if they are to have any chance of staying in business,” the letter warns.

But while the potential problems for service providers and users are numerous, the groups warn that Article 13 could also be illegal since it contradicts case law of the Court of Justice.

According to the E-Commerce Directive, platforms are already required to remove infringing content, once they have been advised it exists. The new proposal, should it go ahead, would force the monitoring of uploads, something which goes against the ‘no general obligation to monitor‘ rules present in the Directive.

“The requirement to install a system for filtering electronic communications has twice been rejected by the Court of Justice, in the cases Scarlet Extended (C70/10) and Netlog/Sabam (C 360/10),” the rights groups warn.

“Therefore, a legislative provision that requires internet companies to install a filtering system would almost certainly be rejected by the Court of Justice because it would contravene the requirement that a fair balance be struck between the right to intellectual property on the one hand, and the freedom to conduct business and the right to freedom of expression, such as to receive or impart information, on the other.”

Specifically, the groups note that the proactive filtering of content would violate freedom of expression set out in Article 11 of the Charter of Fundamental Rights. That being the case, the groups expect national courts to disapply it and the rule to be annulled by the Court of Justice.

The latest protests against Article 13 come in the wake of large-scale objections earlier in the year, voicing similar concerns. However, despite the groups’ fears, they have powerful adversaries, each determined to stop the flood of copyrighted content currently being uploaded to the Internet.

Front and center in support of Article 13 is the music industry and its current hot-topic, the so-called Value Gap(1,2,3). The industry feels that platforms like YouTube are able to avoid paying expensive licensing fees (for music in particular) by exploiting the safe harbor protections of the DMCA and similar legislation.

They believe that proactively filtering uploads would significantly help to diminish this problem, which may very well be the case. But at what cost to the general public and the platforms they rely upon? Citizens and scholars feel that freedoms will be affected and it’s likely the outcry will continue.

The ball is now with the EU, whose members will soon have to make what could be the most important decision in recent copyright history. The rights groups, who are urging for Article 13 to be deleted, are clear where they stand.

The full letter is available here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Popcorn Time Creator Readies BitTorrent & Blockchain-Powered Video Platform

Post Syndicated from Andy original https://torrentfreak.com/popcorn-time-creator-readies-bittorrent-blockchain-powered-youtube-competitor-171012/

Without a doubt, YouTube is one of the most important websites available on the Internet today.

Its massive archive of videos brings pleasure to millions on a daily basis but its centralized nature means that owner Google always exercises control.

Over the years, people have looked to decentralize the YouTube concept and the latest project hoping to shake up the market has a particularly interesting player onboard.

Until 2015, only insiders knew that Argentinian designer Federico Abad was actually ‘Sebastian’, the shadowy figure behind notorious content sharing platform Popcorn Time.

Now he’s part of the team behind Flixxo, a BitTorrent and blockchain-powered startup hoping to wrestle a share of the video market from YouTube. Here’s how the team, which features blockchain startup RSK Labs, hope things will play out.

The Flixxo network will have no centralized storage of data, eliminating the need for expensive hosting along with associated costs. Instead, transfers will take place between peers using BitTorrent, meaning video content will be stored on the machines of Flixxo users. In practice, the content will be downloaded and uploaded in much the same way as users do on The Pirate Bay or indeed Abad’s baby, Popcorn Time.

However, there’s a twist to the system that envisions content creators, content consumers, and network participants (seeders) making revenue from their efforts.

At the heart of the Flixxo system are digital tokens (think virtual currency), called Flixx. These Flixx ‘coins’, which will go on sale in 12 days, can be used to buy access to content. Creators can also opt to pay consumers when those people help to distribute their content to others.

“Free from structural costs, producers can share the earnings from their content with the network that supports them,” the team explains.

“This way you get paid for helping us improve Flixxo, and you earn credits (in the form of digital tokens called Flixx) for watching higher quality content. Having no intermediaries means that the price you pay for watching the content that you actually want to watch is lower and fairer.”

The Flixxo team

In addition to earning tokens from helping to distribute content, people in the Flixxo ecosystem can also earn currency by watching sponsored content, i.e. advertisements. While adverts are often considered a nuisance in a traditional system, Flixx tokens have real value, with a promise that users will be able to trade their Flixx not only for videos, but also for tangible and semi-tangible goods.

“Use your Flixx to reward the producers you follow, encouraging them to create more awesome content. Or keep your Flixx in your wallet and use them to buy a movie ticket, a pair of shoes from an online retailer, a chest of coins in your favourite game or even convert them to old-fashioned cash or up-and-coming digital assets, like Bitcoin,” the team explains.

The Flixxo team have big plans. Founded in early 2016, the project completed a functional alpha release in the second quarter of 2017. In a little under two weeks, the project will begin its token generation event, with new offices in Los Angeles planned for the first half of 2018, alongside a premiere of the Flixxo platform.

“A total of 1,000,000,000 (one billion) Flixx tokens will be issued. A maximum of 300,000,000 (three hundred million) tokens will be sold. Some of these tokens (not more than 33% or 100,000,000 Flixx) may be sold with anticipation of the token allocation event to strategic investors,” Flixxo states.

Like all content platforms, Flixxo will live or die by the quality of the content it provides and whether, at least in the first instance, it can persuade people to part with their hard-earned cash. Only time will tell whether its content will be worth a premium over readily accessible YouTube content but with much-reduced costs, it may tempt creators seeking a bigger piece of the pie.

“Flixxo will also educate its community, teaching its users that in this new internet era value can be held and transferred online without intermediaries, a value that can be earned back by participating in a community, by contributing, being rewarded for every single social interaction,” the team explains.

Of course, the elephant in the room is what will happen when people begin sharing copyrighted content via Flixxo. Certainly, the fact that Popcorn Time’s founder is a key player and rival streaming platform Stremio is listed as a partner means that things could get a bit spicy later on.

Nevertheless, the team suggests that piracy and spam content distribution will be limited by mechanisms already built into the system.

“[A]uthors have to time-block tokens in a smart contract (set as a warranty) in order to upload content. This contract will also handle and block their earnings for a certain period of time, so that in the case of a dispute the unfair-uploader may lose those tokens,” they explain.

That being said, Flixxo also says that “there is no way” for third parties to censor content “which means that anyone has the chance of making any piece of media available on the network.” However, Flixxo says it will develop tools for filtering what it describes as “inappropriate content.”

At this point, things start to become a little unclear. On the one hand Flixxo says it could become a “revolutionary tool for uncensorable and untraceable media” yet on the other it says that it’s necessary to ensure that adult content, for example, isn’t seen by kids.

“We know there is a thin line between filtering or curating content and censorship, and it is a fact that we have an open network for everyone to upload any content. However, Flixxo as a platform will apply certain filtering based on clear rules – there should be a behavior-code for uploaders in order to offer the right content to the right user,” Flixxo explains.

To this end, Flixxo says it will deploy a centralized curation function, carried out by 101 delegates elected by the community, which will become progressively decentralized over time.

“This curation will have a cost, paid in Flixx, and will be collected from the warranty blocked by the content uploaders,” they add.

There can be little doubt that if Flixxo begins ‘curating’ unsuitable content, copyright holders will call on it to do the same for their content too. And, if the platform really takes off, 101 curators probably won’t scratch the surface. There’s also the not inconsiderable issue of what might happen to curators’ judgment when they’re incentivized to ‘curate’ content.

Finally, for those sick of “not available in your region” messages, there’s good and bad news. Flixxo insists there will be no geo-blocking of content on its part but individual creators will still have that feature available to them, should they choose.

The Flixx whitepaper can be downloaded here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Private Torrent Sites Allow Users to Mine Cryptocurrency for Upload Credit

Post Syndicated from Andy original https://torrentfreak.com/private-torrent-sites-allow-users-to-mine-cryptocurrency-for-upload-credit-171008/

Ever since The Pirate Bay crew added a cryptocurrency miner to their site last month, the debate over user mining has sizzled away in the background.

The basic premise is that a piece of software embedded in a website runs on a user’s machine, utilizing its CPU cycles in order to generate revenue for the site in question. But not everyone likes it.

The main problem has centered around consent. While some sites are giving users the option of whether to be involved or not, others simply run the miner without asking. This week, one site operator suggested to TF that since no one asks whether they can run “shitty” ads on a person’s machine, why should they ask permission to mine?

It’s a controversial point, but it would be hard to find users agreeing on either front. They almost universally insist on consent, wherever possible. That’s why when someone comes up with something innovative to solve a problem, it catches the eye.

Earlier this week a user on Reddit posted a screenshot of a fairly well known private tracker. The site had implemented a mining solution not dissimilar to that appearing on other similar platforms. This one, however, gives the user something back.

Mining for coins – with a twist

First of all, it’s important to note the implementation. The decision to mine is completely under the control of the user, with buttons to start or stop mining. There are even additional controls for how many CPU threads to commit alongside a percentage utilization selector. While still early days, that all sounds pretty fair.

Where this gets even more interesting is how this currency mining affects so-called “upload credit”, an important commodity on a private tracker without which users can be prevented from downloading any content at all.

Very quickly: when BitTorrent users download content, they simultaneously upload to other users too. The idea is that they download X megabytes and upload the same number (at least) to other users, to ensure that everyone in a torrent swarm (a number of users sharing together) gets a piece of the action, aka the content in question.

The amount of content downloaded and uploaded on a private tracker is monitored and documented by the site. If a user has 1TB downloaded and 2TB uploaded, for example, he has 1TB in credit. In basic terms, this means he can download at least 1TB of additional content before he goes into deficit, a position undesirable on a private tracker.
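Here is that bookkeeping as a quick worked example, using the figures above; the variable names are illustrative rather than any particular tracker's schema.

```python
TB = 1024 ** 4  # bytes in one terabyte (binary)

downloaded = 1 * TB
uploaded = 2 * TB

credit = uploaded - downloaded   # the buffer before going into deficit
ratio = uploaded / downloaded    # many trackers display this figure too

print("credit: %.1f TB, ratio: %.2f" % (credit / TB, ratio))
# -> credit: 1.0 TB, ratio: 2.00
```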

Now, getting more “upload credit” can be as simple as uploading more, but some users find that difficult, either due to the way a tracker’s economy works or simply due to not having resources. If this is the case, some sites allow people to donate real money to receive “upload credit”. On the tracker highlighted in the mining example above, however, it’s possible to virtually ‘trade-in’ some of the mining effort instead.

Tracker politics aside (some people believe this is simply a cash grab opportunity), from a technical standpoint the prospect is quite intriguing.

In a way, the current private tracker system allows users to “mine” upload credits by donating bandwidth to other users of the site. Now they have the opportunity to mine an actual cryptocurrency on the tracker and have some of it converted back into the tracker’s native ‘currency’ – upload credit – which can only be ‘spent’ on the site. Meanwhile, the site’s operator can make a few bucks towards site maintenance.
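How a trade-in like that is priced is entirely up to each site. As a purely hypothetical sketch, a tracker might credit mined hashes at a fixed exchange rate; the rate and function below are invented for illustration.

```python
# Hypothetical conversion of mining effort into upload credit.
# The exchange rate is invented for illustration; real sites set their own terms.
BYTES_PER_MEGAHASH = 50 * 1024 ** 2   # say, 50 MiB of credit per million hashes

def mined_credit(total_hashes):
    """Upload credit (in bytes) granted for the hashes a user has submitted."""
    return int(total_hashes / 1_000_000 * BYTES_PER_MEGAHASH)

print(mined_credit(2_000_000_000) / 1024 ** 3)  # ~97.7 GiB for 2 billion hashes
```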

Another example showing how innovative these mining implementations can be was posted by a member of a second private tracker. Although it’s unclear whether mining is forced or optional, there appears to be complete transparency for the benefit of the user.

The mining ‘Top 10’ on a private tracker

In addition to displaying the total number of users mining and the hashes solved per second, the site publishes a ‘Top 10’ list of users mining the most currently, and overall. Again, some people might not like the concept of users mining at all, but psychologically this is a particularly clever implementation.

Utilizing the desire of many private tracker users to be recognizable among their peers due to their contribution to the platform, the charts give a user a measurable status in the community, at least among those who care about such things. Previously these charts would list top uploaders of content but the addition of a ‘Top miner’ category certainly adds some additional spice to the mix.

Mining is a controversial topic which isn't likely to go away anytime soon. But, for all its faults, it's still a way for sites to generate revenue, away from the pitfalls of increasingly hostile and easy-to-trace alternative payment systems. The Pirate Bay may have set the cat among the pigeons last month, but it also gave the old gray matter a boost.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.