GoDaddy to Suspend ‘Pirate’ Domain Following Music Industry Complaints

Post Syndicated from Andy original https://torrentfreak.com/godaddy-to-suspend-pirate-domain-following-music-industry-complaints-180601/

Most piracy-focused sites online conduct their business with minimal interference from outside parties. In many cases, a heap of DMCA notices filed with Google represents the most visible irritant.

Others, particularly those with large audiences, can find themselves on the receiving end of a web blockade. Mostly court-ordered, these blocking measures restrict Internet users’ ability to visit a site by requiring ISPs to restrict its traffic.

In some regions, where copyright holders have the means to do so, they choose to tackle a site’s infrastructure instead, which can mean complaints to web hosts or other service providers. At times, this has included domain registries and registrars, which are asked to disable domains on copyright grounds.

This is exactly what has happened to Fox-MusicaGratis.com, a Spanish-language music piracy site that incurred the wrath of IFPI member UNIMPRO – the Peruvian Union of Phonographic Producers.

Pirate music, suspended domain

In a process that’s becoming more common in the region, UNIMPRO initially filed a complaint with the Copyright Commission (Comisión de Derecho de Autor (CDA)) which conducted an investigation into the platform’s activities.

“The CDA considered, among other things, the irreparable damage that would have been caused to the legitimate rights owners, taking into account the large number of users who could potentially have visited said website, which was making available endless musical recordings for commercial purposes, without authorization of the holders of rights,” a statement from CDA reads.

The administrative process was carried out locally with the involvement of the National Institute for the Defense of Competition and the Protection of Intellectual Property (Indecopi), an autonomous public body tasked with handling anti-competitive behavior, unfair competition, and intellectual property matters.

Indecopi HQ

The matter was decided in favor of the rightsholders and a subsequent ruling included an instruction for US-based domain registrar GoDaddy to suspend Fox-MusicaGratis.com. According to the copyright protection entity, GoDaddy agreed to comply in order to prevent further infringement.

This latest action involving a music piracy site registered with GoDaddy follows on the heels of a similar enforcement process back in March.

Mp3Juices-Download-Free.com, Melodiavip.net, Foxmusica.site and Fulltono.me were all music sites offering MP3 content without copyright holders’ permission. They too were the subject of an UNIMPRO complaint which resulted in orders for GoDaddy to suspend their domains.

In the cases of all five websites, GoDaddy was given the chance to appeal but there is no indication that the company has done so. GoDaddy did not respond to a request for comment.

C is too low level

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/05/c-is-too-low-level.html

I’m in danger of contradicting myself, after previously pointing out that x86 machine code is a high-level language, but this article claiming C is not a low-level language is bunk. C certainly has some problems, but it’s still the closest language to assembly. This is obvious from the fact that it’s still the fastest compiled language. What we see is a typical academic out of touch with the real world.

The author makes the (wrong) observation that we’ve been stuck emulating the PDP-11 for the past 40 years. C was written for the PDP-11, and since then CPUs have been designed to make C run faster. The author imagines a different world, such as one where CPU designers instead target a language like LISP or Erlang. This misunderstands the state of the market. CPUs do indeed support lots of different abstractions, and C has evolved to accommodate this.


The author criticizes things like “out-of-order” execution, which has led to the Spectre side-channel vulnerabilities. Out-of-order execution is necessary to make C run faster. The author claims instead that those resources should be spent on having more, slower CPUs with more threads. This sacrifices single-threaded performance in exchange for a lot more threads executing in parallel. The author cites Sparc Tx CPUs as his ideal processor.

But here’s the thing: the Sparc Tx was a failure. To be fair, it’s mostly a failure because most of the time, people wanted to run old C code instead of new Erlang code. But it was still a failure at running Erlang.

Time after time, engineers keep finding that “out-of-order”, single-threaded performance is still the winner. A good example is ARM processors for both mobile phones and servers. All the theory points to in-order CPUs as being better, but all the products are out-of-order, because the theory is wrong. The custom ARM cores from Apple and Qualcomm used in most high-end phones are so deeply out-of-order that they give Intel CPUs competition. The same is true on the server front with the latest Qualcomm Centriq and Cavium ThunderX2 processors, which are deeply out-of-order, supporting more than 100 instructions in flight.

The Cavium is especially telling. Its ThunderX CPU had 48 simple cores; it was replaced with the ThunderX2, which has 32 complex, deeply out-of-order cores. The performance increase was massive, even on multithread-friendly workloads. Every competitor to Intel’s dominance in the server space has learned the lesson of the Sparc Tx: many wimpy cores is a failure; you need fewer beefy cores. Yes, they don’t need to be as beefy as Intel’s processors, but they need to be close.

Even Intel’s “Xeon Phi” custom chip learned this lesson. This is their GPU-like chip, running 60 cores with 512-bit wide “vector” (sic) instructions, designed for supercomputer applications. Its first version was purely in-order; its current version is slightly out-of-order. It supports four threads per core and focuses on basic number crunching, so in-order cores seem like the right approach, but Intel found that even here out-of-order processing still provided a benefit. Practice is different from theory.

As an academic, the author of the above article focuses on abstractions. The criticism of C is that it has the wrong abstractions which are hard to optimize, and that if we instead expressed things in the right abstractions, it would be easier to optimize.

This is an intellectually compelling argument, but so far bunk.

The reason is that while the theoretical base language has issues, everyone programs using extensions to the language, like “intrinsics” (C ‘functions’ that map directly to assembly instructions). Programmers write libraries using these intrinsics, which the rest of the programming world then uses. In other words, even if your criticism is that C is not itself low level enough, it still provides the best access to low-level capabilities.
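
To make this concrete, here is a minimal sketch (my own, not from the article) of what an intrinsic looks like in practice, assuming an x86-64 target where SSE is always available. _mm_add_ps compiles down to a single ADDPS vector instruction that performs four float additions at once:

#include <stdio.h>
#include <xmmintrin.h> /* SSE intrinsics */

int main(void) {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float out[4];

    __m128 va = _mm_loadu_ps(a);      /* load four floats into a 128-bit register */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vsum = _mm_add_ps(va, vb); /* one instruction, four additions */
    _mm_storeu_ps(out, vsum);         /* store the result back to memory */

    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}

Nothing about __m128 or _mm_add_ps is in the C standard, yet every mainstream compiler on x86 provides them. This is the escape hatch that keeps C “low level enough” in practice.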

Given that C can access new CPU functionality through such extensions, CPU designers keep adding new paradigms, from SIMD to transactional memory. In other words, while in the 1980s CPUs were designed to optimize C (stacks, scaled pointers), these days CPUs are designed to optimize tasks regardless of language.

The author of that article criticizes the memory/cache hierarchy, claiming it has problems. Yes, it has problems, but only compared to how well it normally works. The author praises the many simple cores/threads idea as hiding memory latency with little caching, but misses the point that caches also dramatically increase memory bandwidth. Intel processors are optimized to read a whopping 256 bits every clock cycle from L1 cache. Main memory bandwidth is orders of magnitude slower.
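
To put rough numbers on that claim (assuming a clock around 3 GHz for illustration): 256 bits is 32 bytes, so a single core can read roughly 96 GB/s from L1 cache, and that bandwidth is replicated per core, giving a many-core chip aggregate cache bandwidth in the terabytes per second. A DDR4 memory channel, by contrast, delivers on the order of 20 GB/s, shared among all cores.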

The author goes on to criticize cache coherency as a problem. C uses it, but other languages like Erlang don’t need it. But that’s largely due to the problems each language solves. Erlang solves the problem where a large number of threads work on largely independent tasks, needing to send only small messages to each other across threads. The problem C solves is when you need many threads working on a huge, common set of data.

For example, consider the “intrusion prevention system”. Any thread can process any incoming packet that corresponds to any region of memory. There’s no practical way of solving this problem without a huge coherent cache. It doesn’t matter which language or abstractions you use; it’s the fundamental constraint of the problem being solved. RDMA is an important concept that’s moved from supercomputer applications to the data center, such as with memcached. Again, we have the problem of huge quantities of data (terabytes’ worth) shared among threads rather than small quantities (kilobytes).

The fundamental issue the author of the paper is ignoring is decreasing marginal returns. Moore’s Law has gifted us more transistors than we can usefully use. We can’t apply those additional transistors to just one thing, because the useful returns we get diminish.

For example, Intel CPUs have two hardware threads per core. That’s because there are good returns from adding a single additional thread. However, the usefulness of adding a third or fourth thread decreases. That’s why many CPUs have only two threads per core, or sometimes four, but no CPU has 16 threads per core.

You can apply the same discussion to any aspect of the CPU, from register count, to SIMD width, to cache size, to out-of-order depth, and so on. Rather than focusing on one of these things and increasing it to the extreme, CPU designers make each a bit larger with every process shrink that adds more transistors to the chip.

The same applies to cores. It’s why the “more, simpler cores” strategy fails: more cores have their own decreasing marginal returns. Instead of adding cores tied to limited memory bandwidth, it’s better to add more cache. Such cache already increases the size of each core, so at some point it’s more effective to add a few out-of-order features to each core rather than add more cores. And so on.

The question isn’t whether we can change this paradigm and radically redesign CPUs to match some academic’s view of the perfect abstraction. Instead, the goal is to find new uses for those additional transistors. For example, “message passing” is an abstraction in languages like Go and Erlang that’s often more useful than sharing memory directly. It’s implemented with shared memory and atomic instructions, but I can’t help thinking it could be done better with direct hardware support.

Of course, as soon as they do that, it’ll become an intrinsic in C, then added to languages like Go and Erlang.
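
As an illustration of what “implemented with shared memory and atomic instructions” means today, here is a minimal sketch (my own; the names and sizes are illustrative) of a single-producer/single-consumer queue in C11, using atomics and POSIX threads. This is roughly the primitive a Go channel or an Erlang mailbox is built on:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define QSIZE 256 /* power of two, so indices can be masked instead of modulo'd */

static int buf[QSIZE];
static atomic_size_t head = 0; /* next slot the producer writes */
static atomic_size_t tail = 0; /* next slot the consumer reads */

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        size_t h = atomic_load_explicit(&head, memory_order_relaxed);
        /* spin while the queue is full */
        while (h - atomic_load_explicit(&tail, memory_order_acquire) == QSIZE)
            ;
        buf[h & (QSIZE - 1)] = i;
        /* release: the consumer must see the write to buf before the new head */
        atomic_store_explicit(&head, h + 1, memory_order_release);
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    long sum = 0;
    for (int i = 0; i < 1000; i++) {
        size_t t = atomic_load_explicit(&tail, memory_order_relaxed);
        /* spin while the queue is empty */
        while (atomic_load_explicit(&head, memory_order_acquire) == t)
            ;
        sum += buf[t & (QSIZE - 1)];
        atomic_store_explicit(&tail, t + 1, memory_order_release);
    }
    printf("sum = %ld\n", sum); /* 0 + 1 + ... + 999 = 499500 */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Every “message” here rides on the cache-coherency protocol: the release store to head and the matching acquire load are what make the data visible across cores. Hardware that performed this handoff directly could, in principle, skip both the polling and the coherency traffic.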

Summary

Academics live in an ideal world of abstractions; the rest of us live in practical reality. The reality is that the vast majority of programmers work with the C family of languages (JavaScript, Go, etc.), whereas academics love the epiphanies they learned using other languages, especially functional languages. CPUs are only superficially designed to run C and maintain “PDP-11 compatibility”. Instead, they keep adding features to support other abstractions, abstractions that are then made available to C. They are driven by decreasing marginal returns — they would love to add new abstractions to the hardware because it’s a cheap way to make use of additional transistors. Academics are wrong in believing that the entire system needs to be redesigned from scratch. Instead, they just need to come up with new abstractions CPU designers can add.

This is a really lovely Raspberry Pi tricorder

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/raspberry-pi-tricorder-prop/

At the moment I’m spending my evenings watching all of Star Trek in order. Yes, I have watched it before (but with some really big gaps). Yes, including the animated series (I’m up to The Terratin Incident). So I’m gratified to find this beautiful Original Series–style tricorder build.

Star Trek Tricorder with Working Display!

At this year’s Replica Prop Forum showcase, we meet up once again with Brian Mix, who brought his new Star Trek TOS Tricorder. This beautiful replica captures the weight and finish of the filming hand prop, and Brian has taken it one step further with some modern-day electronics!

A what now?

If you don’t know what a tricorder is, which I guess is faintly possible, the easiest way I can explain is to steal words that Liz wrote when Recantha made one back in 2013. It’s “a made-up thing used by the crew of the Enterprise to measure stuff, store data, and scout ahead remotely when exploring strange new worlds, seeking out new life and new civilisations, and all that jazz.”

A brief history of Picorders

We’ve seen other Raspberry Pi–based realisations of this iconic device. Recantha’s LEGO-cased tricorder delivered some authentic functionality, including temperature sensors, an ultrasonic distance sensor, a photosensor, and a magnetometer. Michael Hahn’s tricorder for element14’s Sci-Fi Your Pi competition in 2015 packed some similar functions, along with Original Series audio effects, into a neat (albeit non-canon) enclosure.

Brian Mix’s Original Series tricorder

Brian Mix’s tricorder, seen in the video above from Tested at this year’s Replica Prop Forum showcase, is based on a high-quality kit into which, he discovered, a Raspberry Pi just fits. He explains that the kit is the work of the late Steve Horch, a special effects professional who provided props for later Star Trek series, including the classic Deep Space Nine episode Trials and Tribble-ations.

A still from an episode of Star Trek: Deep Space Nine: Jadzia Dax, holding an Original Series–style tricorder, speaks with Benjamin Sisko

Dax, equipped for time travel

This episode’s plot required sets and props — including tricorders — replicating the USS Enterprise of The Original Series, and Steve Horch provided many of these. Thus, a tricorder kit from him is about as close to authentic as you can possibly find unless you can get your hands on a screen-used prop. The Pi allows Brian to drive a real display and a speaker: “Being the geek that I am,” he explains, “I set it up to run every single Original Series Star Trek episode.”

Even more wonderful hypothetical tricorders that I would like someone to make

This tricorder is beautiful, and it makes me think how amazing it would be to squeeze in some of the sensor functionality of the devices depicted in the show. Space in the case is tight, but it looks like there might be a little bit of depth to spare — enough for an IMU, maybe, or a temperature sensor. I’m certain the future will bring more Pi tricorder builds, and I, for one, can’t wait. Please tell us in the comments if you’re planning something along these lines, and, well, I suppose some other sci-fi franchises have decent Pi project potential too, so we could probably stand to hear about those.

If you’re commenting, no spoilers please past The Animated Series S1 E11. Thanks.

Stream to Twitch with the push of a button

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/tinkernut-twitch-streaming/

Stream your video gaming exploits to the internet at the touch of a button with the Twitch-O-Matic. Everyone else is doing it, so you should too.

Twitch-O-Matic: Raspberry Pi Twitch Streaming Device – Weekend Hacker #1804

Some gaming consoles make it easy to stream to Twitch, some gaming consoles don’t (come on, Nintendo). So for those that don’t, I’ve made this beta version of the “Twitch-O-Matic”. No it doesn’t chop onions or fold your laundry, but what it DOES do is stream anything with HDMI output to your Twitch channel with the simple push of a button!

eSports and online game streaming

Interest in eSports has skyrocketed over the last few years, with viewership numbers in the hundreds of millions, sponsorship deals increasing in value and prestige, and tournament prize funds reaching millions of dollars. So it’s no wonder that more and more gamers are starting to stream live to online platforms in order to boost their fanbase and try to cash in on this growing industry.

Streaming to Twitch

Launched in 2011, Twitch.tv is an online live-streaming platform with a primary focus on video gaming. Users can create accounts to contribute their comments and content to the site, as well as to watch live-streamed gaming competitions and broadcasts. With a staggering fifteen million daily users, Twitch is accessible via smartphone and gaming console apps, smart TVs, computers, and tablets. But if you want to stream to Twitch, you may find yourself using third-party software to do so. And with more buttons to click and more wires to plug in for older, app-less consoles, streaming can get confusing.

Enter Tinkernut.

Side note: we ❤ Tinkernut

We’ve featured Tinkernut a few times on the Raspberry Pi blog – his tutorials are clear, his projects are interesting and useful, and his live-streamed comment videos for every build are a nice touch for sharing homebrew builds on the internet.

Tinkernut Raspberry Pi Zero W Twitch-O-Matic

So, yes, we love him. [This is true. Alex never shuts up about him. – Ed.] And since he has over 500K subscribers on YouTube, we’re obviously not the only ones. We wave our Tinkernut flags with pride.

Twitch-O-Matic

With a Raspberry Pi Zero W, an HDMI to CSI adapter, and a case to fit it all in, Tinkernut’s Twitch-O-Matic allows easy connection to the Twitch streaming service. You’ll also need a button – the bigger, the better in our opinion, though Tinkernut has opted for the Adafruit 16mm Illuminated Pushbutton for his build, and not the 100mm Massive Arcade Button that, sadly, we still haven’t found a reason to use yet.

Adafruit massive button

“I’m sorry, Dave…”

For added frills and pizzazz, Tinkernut has also incorporated Adafruit’s White LED Backlight Module into the case, though you only need to do so if you’re feeling super fancy.

The setup

The Raspberry Pi Zero W is connected to the HDMI to CSI adapter via the camera connector, in the same way you’d attach the camera ribbon. Tinkernut uses a standard Raspbian image on an 8GB SD card, with SSH enabled for remote access from his laptop, and tests the HDMI connection with the simple raspivid command, recording ten seconds of video footage from his console.

Tinkernut Raspberry Pi Zero W Twitch-O-Matic

One lead is all you need

Once you have the Pi receiving video from your console, you can connect to Twitch using your Twitch stream key, which you can find by logging in to your account at Twitch.tv. Tinkernut’s tutorial gives you all the commands you need to stream from your Pi.

The frills

To up the aesthetic impact of your project, adding buttons and backlights is fairly straightforward.

Tinkernut Raspberry Pi Zero W Twitch-O-Matic

Pretty LED frills

To run the stream command, Tinkernut uses a button: press once to start the stream, press again to stop. Pressing the button also turns on the LED backlight, so it’s obvious when streaming is in progress.

The tutorial

For the full code and 3D-printable case STL file, head to Tinkernut’s hackster.io project page. And if you’re already using a Raspberry Pi for Twitch streaming, share your build setup with us. Cheers!

Under-Fire “Kodi Box” Company “Sold to Chinese Investor” For US$8.82m

Post Syndicated from Andy original https://torrentfreak.com/under-fire-kodi-box-company-sold-to-chinese-investor-for-us8-82m-180426/

Back in 2016, an article appeared in Kiwi media discussing the rise of a new company pledging to beat media giant Sky TV at its own game.

My Box NZ owner Krish Reddy told the publication he was selling Android boxes loaded with Kodi software and augmented with third-party addons.

Without any hint of fear, he stated that these devices enabled customers to access movies, TV shows and live channels for free, after shelling out a substantial US$182 for the box first, that is.

“Why pay $80 minimum per month for Sky when for one payment you can have it free for good?” a claim on the company’s website asked.

Noting that he’d been importing the boxes from China, Reddy suggested that his lawyers hadn’t found any problem with the business plan.

“I don’t see why [Sky] would contact me but if they do contact me and … if there’s something of theirs that they feel I’ve unlawfully taken then yeah … but as it stands I don’t [have any concerns],” he said.

At this point, Reddy said he’d been selling the boxes for just six weeks and had shifted around 80 units. Getting coverage from a national newspaper at this stage of the game must’ve been very much appreciated, but Reddy didn’t stop there.

In a bulk advertising email sent out to 50,000 people, Reddy described his boxes as “better than Sky”. However, by design or misfortune, the email managed to land in the inboxes of 50 Sky TV staff and directors, something that didn’t go unnoticed by the TV giant.

With Reddy claiming sales of 8,000 units, Sky ran out of patience last April. In a letter from its lawyers, the pay-TV company said Reddy’s devices breached copyright law and the Fair Trading Act. Reddy responded by calling the TV giant “a playground bully”, again denying that he was breaking the law.

“From a legal perspective, what we do is completely within the law. We advertise Sky television channels being available through our website and social media platforms as these are available via streams which you can find through My Box,” he said.

“The content is already available, I’m not going out there and bringing the content so how am I infringing the copyright… the content is already there, if someone uses the box to search for the content, that’s what it is.”

The initial compensation demand from Sky against Reddy’s company My Box ran to NZD$1.4m, around US$1m. It was an amount that had the potential to rise by millions if matters got drawn out and/or escalated. But despite picking a terrible opponent in a battle he was unlikely to win, Reddy refused to give up.

“[Sky’s] point of view is they own copyright and I’m destroying the market by giving people content for free. To me it is business; I have got something that is new … that’s competition,” he said.

The Auckland High Court heard the case against My Box last month with Judge Warwick Smith reserving his judgment and Reddy still maintaining that his business is entirely legal. Sales were fantastic, he said, with 20,000 devices sold to customers in 12 countries.

Then something truly amazing happened.

A company up to its eyeballs in litigation, selling a commodity product that an amateur can buy and configure at home for US$40, reportedly got the chance of a lifetime. Reddy revealed to Stuff that a Chinese investor had offered to buy his company for an eye-watering NZ$10 million (US$7.06m).

“We have to thank Sky,” he said. “If they had left us alone we would just have been selling a few boxes, but the controversy made us world famous.”

Reddy noted he’d been given 21 days to respond to the offer, but refused to name the company. Interestingly, he also acknowledged that if My Box lost its case, the company would be liable for damages. However, that wouldn’t bother the potential investor.

“It makes no difference to them whether we win or lose, because their operations won’t be in New Zealand,” Reddy said.

According to the entrepreneur, that’s how things are playing out.

The Chinese firm – which Reddy is still refusing to name – has apparently accepted a counter offer from Reddy of US$8.8m for My Box. As a result, Reddy will wrap up his New Zealand operations within the next 90 days and his six employees will be rendered unemployed.

Given that anyone with the ability to install Kodi and a few addons before putting a box in the mail could replicate Reddy’s business model, the multi-million dollar offer for My Box was never anything less than a bewildering business proposition. That someone carried it through at an even higher price is so fantastic as to be almost unbelievable.

In a sea of unhappy endings for piracy-enabled Kodi box sellers globally, this is the only big win to ever grace the headlines. Assuming this really is the end of the story (and that might not be the case) it will almost certainly be the last.

Congratulations to Oracle on MySQL 8.0

Post Syndicated from Michael "Monty" Widenius original http://monty-says.blogspot.com/2018/04/congratulations-to-oracle-on-mysql-80.html

Last week, Oracle announced the general availability of MySQL 8.0. This is good news for database users, as it means Oracle is still developing MySQL.

I decided to celebrate the event by doing a quick test of MySQL 8.0. Here follows a step-by-step description of my first experience with MySQL 8.0.
Note that I did the following without reading the release notes, as I have done with every MySQL / MariaDB release to date; in this case it was not the right thing to do.

I pulled MySQL 8.0 from git@github.com:mysql/mysql-server.git
I was pleasantly surprised that ‘cmake . ; make’ worked without any compiler warnings! I even checked the compiler options used and noticed that MySQL was compiled with -Wall + several other warning flags. Good job MySQL team!

I did have a little trouble finding the mysqld binary, as Oracle had moved it to ‘runtime_output_directory’; unexpected, but no big thing.

Now it was time to install MySQL 8.0.

I did know that MySQL 8.0 has removed mysql_install_db, so I had to use the mysqld binary directly to install the default databases:
(I have specified datadir=/my/data3 in the /tmp/my.cnf file)

> cd runtime_output_directory
> mkdir /my/data3
> ./mysqld --defaults-file=/tmp/my.cnf --install

2018-04-22T12:38:18.332967Z 1 [ERROR] [MY-011011] [Server] Failed to find valid data directory.
2018-04-22T12:38:18.333109Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
2018-04-22T12:38:18.333135Z 0 [ERROR] [MY-010119] [Server] Aborting

A quick look at the mysqld --help --verbose output showed that the right option is --initialize. My bad, let’s try again:

> ./mysqld --defaults-file=/tmp/my.cnf --initialize

2018-04-22T12:39:31.910509Z 0 [ERROR] [MY-010457] [Server] --initialize specified but the data directory has files in it. Aborting.
2018-04-22T12:39:31.910578Z 0 [ERROR] [MY-010119] [Server] Aborting

Now I used the right options, but it still didn’t work.
I took a quick look around:

> ls /my/data3/
binlog.index

So even though mysqld noticed that the data3 directory was wrong, it still wrote things into it. This despite the fact that I didn’t have --log-bin enabled in the my.cnf file. Strange, but easy to fix:

> rm /my/data3/binlog.index
> ./mysqld --defaults-file=/tmp/my.cnf --initialize

2018-04-22T12:40:45.633637Z 0 [ERROR] [MY-011071] [Server] unknown variable ‘max-tmp-tables=100’
2018-04-22T12:40:45.633657Z 0 [Warning] [MY-010952] [Server] The privilege system failed to initialize correctly. If you have upgraded your server, make sure you’re executing mysql_upgrade to correct the issue.
2018-04-22T12:40:45.633663Z 0 [ERROR] [MY-010119] [Server] Aborting

The warning about the privilege system confused me a bit, but I ignored it for the time being and removed from my configuration files the variables that MySQL 8.0 no longer supports. I couldn’t find a list of the removed variables anywhere, so this was done by trial and error.

> ./mysqld --defaults-file=/tmp/my.cnf

2018-04-22T12:42:56.626583Z 0 [ERROR] [MY-010735] [Server] Can’t open the mysql.plugin table. Please run mysql_upgrade to create it.
2018-04-22T12:42:56.827685Z 0 [Warning] [MY-010015] [Repl] Gtid table is not ready to be used. Table ‘mysql.gtid_executed’ cannot be opened.
2018-04-22T12:42:56.838501Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2018-04-22T12:42:56.848375Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables
2018-04-22T12:42:56.848863Z 0 [ERROR] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we’re sending the information to the error-log instead: MY-001146 – Table ‘mysql.component’ doesn’t exist
2018-04-22T12:42:56.848916Z 0 [Warning] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we’re sending the information to the error-log instead: MY-003543 – The mysql.component table is missing or has an incorrect definition.
….
2018-04-22T12:42:56.854141Z 0 [System] [MY-010931] [Server] /home/my/mysql-8.0/runtime_output_directory/mysqld: ready for connections. Version: ‘8.0.11’ socket: ‘/tmp/mysql.sock’ port: 3306 Source distribution.

I figured out that if there is a single wrong variable in the configuration file, running mysqld --initialize will leave the database in an inconsistent state. NOT GOOD! I am happy I didn’t try this on a production system!

Time to start over from the beginning:

> rm -r /my/data3/*
> ./mysqld --defaults-file=/tmp/my.cnf --initialize

2018-04-22T12:44:45.548960Z 5 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: px)NaaSp?6um
2018-04-22T12:44:51.221751Z 0 [System] [MY-013170] [Server] /home/my/mysql-8.0/runtime_output_directory/mysqld (mysqld 8.0.11) initializing of server has completed

Success!

I wonder why the temporary password is so complex; it could have been something one could easily remember without decreasing security; it’s temporary, after all. No big deal, one can always paste it from the logs. (Side note: MariaDB uses socket authentication on many systems and thus doesn’t need temporary installation passwords.)

Now let’s start the MySQL server for real to do some testing:

> ./mysqld --defaults-file=/tmp/my.cnf

2018-04-22T12:45:43.683484Z 0 [System] [MY-010931] [Server] /home/my/mysql-8.0/runtime_output_directory/mysqld: ready for connections. Version: ‘8.0.11’ socket: ‘/tmp/mysql.sock’ port: 3306 Source distribution.

And then let’s start the client:

> ./client/mysql --socket=/tmp/mysql.sock --user=root --password="px)NaaSp?6um"
ERROR 2059 (HY000): Plugin caching_sha2_password could not be loaded: /usr/local/mysql/lib/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory

Apparently MySQL 8.0 doesn’t work with old MySQL / MariaDB clients by default 🙁

I was testing this on a system with MariaDB installed, like all modern Linux systems today, and didn’t want to use the MySQL clients or libraries.

I decided to try to fix this by changing the authentication to the native (original) MySQL authentication method.

> mysqld --skip-grant-tables

> ./client/mysql --socket=/tmp/mysql.sock --user=root
ERROR 1045 (28000): Access denied for user ‘root’@’localhost’ (using password: NO)

Apparently --skip-grant-tables is not good enough anymore. Let’s try again with:

> mysqld --skip-grant-tables --default_authentication_plugin=mysql_native_password

> ./client/mysql --socket=/tmp/mysql.sock --user=root mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 8.0.11 Source distribution

Great, we are getting somewhere. Now let’s fix “root” to work with the old authentication:

MySQL [mysql]> update mysql.user set plugin="mysql_native_password",authentication_string=password("test") where user="root";
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ‘(“test”) where user=”root”‘ at line 1

A quick look at the MySQL 8.0 release notes told me that the PASSWORD() function is removed in 8.0. Why???? I don’t know how one is supposed to generate passwords in MySQL 8.0 that are compatible with old installations. One could of course start an old MySQL or MariaDB version, execute the password() function, and copy the result.

I decided to fix this the easy way and use an empty password:

(Update: I later discovered that the right way would have been to use FLUSH PRIVILEGES; followed by ALTER USER 'root'@'localhost' IDENTIFIED BY 'test'. However, I dislike this syntax, as it has the password in clear text, which is easy to grab, and the command can’t be used to easily update the mysql.user table. One must also disable --skip-grant-tables mode to use this.)

MySQL [mysql]> update mysql.user set plugin="mysql_native_password",authentication_string="" where user="root";
Query OK, 1 row affected (0.077 sec)
Rows matched: 1 Changed: 1 Warnings: 0
 
I restarted mysqld:
> mysqld --default_authentication_plugin=mysql_native_password

> ./client/mysql --user=root --password="" mysql
ERROR 1862 (HY000): Your password has expired. To log in you must change it using a client that supports expired passwords.

Ouch, I forgot about that. Let’s try again:

> mysqld --skip-grant-tables --default_authentication_plugin=mysql_native_password

> ./client/mysql --user=root --password="" mysql
MySQL [mysql]> update mysql.user set password_expired="N" where user="root";

Now the restart and test worked:

> ./mysqld --default_authentication_plugin=mysql_native_password

> ./client/mysql --user=root --password="" mysql

Finally I had a working account that I can use to create other users!

When looking at the mysqld --help --verbose output again, I noticed the option:

--initialize-insecure
Create the default database and exit. Create a super user
with empty password.

I decided to check if this would have made things easier:

> rm -r /my/data3/*
> ./mysqld --defaults-file=/tmp/my.cnf --initialize-insecure

2018-04-22T13:18:06.629548Z 5 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.

Hm. I don’t understand the warning, as --initialize-insecure is not an option one would use more than once, and thus nothing one would ‘switch off’.

> ./mysqld --defaults-file=/tmp/my.cnf

> ./client/mysql --user=root --password="" mysql
ERROR 2059 (HY000): Plugin caching_sha2_password could not be loaded: /usr/local/mysql/lib/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory

Back to the beginning 🙁

To get things to work with old clients, one has to initialize the database with:
> ./mysqld --defaults-file=/tmp/my.cnf --initialize-insecure --default_authentication_plugin=mysql_native_password

Now I finally had MySQL 8.0 up and running and thought I would take it for a spin by running the “standard” MySQL/MariaDB sql-bench test suite. This was removed in MySQL 5.7, but as I happened to have MariaDB 10.3 installed, I decided to run it from there.

sql-bench is a single-threaded benchmark that measures the “raw” speed of some common operations. It gives you the ‘maximum’ performance for a single query. It’s different from benchmarks that measure maximum throughput with a lot of concurrent users, but sql-bench still tells you a lot about what kind of performance to expect from the database.

I first tried to be clever and create the “test” database that I needed for sql-bench with
> mkdir /my/data3/test

but when I tried to run the benchmark, MySQL 8.0 complained that the test database didn’t exist.

MySQL 8.0 has gone away from the original concept of MySQL, where the user can easily create directories and copy databases into the database directory. This may have serious implications for anyone doing backups of databases and/or trying to restore a backup with normal OS commands.

I created the ‘test’ database with mysqladmin and then tried to run sql-bench:

> ./run-all-tests --user=root

The first run failed in test-ATIS:

Can’t execute command ‘create table class_of_service (class_code char(2) NOT NULL,rank tinyint(2) NOT NULL,class_description char(80) NOT NULL,PRIMARY KEY (class_code))’
Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ‘rank tinyint(2) NOT NULL,class_description char(80) NOT NULL,PRIMARY KEY (class_’ at line 1

This happened because ‘rank’ is now a reserved word in MySQL 8.0. It is also reserved in ANSI SQL, but I don’t know of any other database that has failed to run test-ATIS before. In the past I have run it against Oracle, PostgreSQL, Mimer, MSSQL, etc. without any problems.

MariaDB also has ‘rank’ as a keyword in 10.2 and 10.3 but one can still use it as an identifier.

I fixed test-ATIS and then managed to run all tests on MySQL 8.0.

I ran the test with both MySQL 8.0 and MariaDB 10.3, using the InnoDB storage engine and identical values for all InnoDB variables, table-definition-cache, and table-open-cache. I turned off the performance schema for both databases. All tests were run by a user with an empty password (to keep things comparable, and because it was too complex to generate a password in MySQL 8.0).

The results are as follows.
Results per test in seconds:

Operation         |MariaDB|MySQL-8|
-----------------------------------
ATIS              | 153.00| 228.00|
alter-table       |  92.00| 792.00|
big-tables        | 990.00|2079.00|
connect           | 186.00| 227.00|
create            | 575.00|4465.00|
insert            |4552.00|8458.00|
select            | 333.00| 412.00|
table-elimination |1900.00|3916.00|
wisconsin         | 272.00| 590.00|
-----------------------------------

This is of course just a first view of the performance of MySQL 8.0 in a single user environment. Some reflections about the results:

  • The alter-table test is slower (as expected) in 8.0, as some of the alter tests benefit from the instant ADD COLUMN in MariaDB 10.3.
  • The connect test is also better for MariaDB, as we put a lot of effort into speeding this up in MariaDB 10.2.
  • table-elimination shows an optimization in MariaDB for the Anchor table model, which MySQL doesn’t have.
  • CREATE and DROP TABLE are almost 8 times slower in MySQL 8.0 than in MariaDB 10.3. I assume this is the cost of ‘atomic DDL’. This may also cause performance problems for any thread using the data dictionary while another thread is creating/dropping tables.
  • Looking at the individual test results, MySQL 8.0 was slower in almost every test, in many cases significantly so.
  • The only test where MySQL was faster was “update_with_key_prefix”. I checked this and noticed that there was a bug in the test: the column was updated to its original value (which should be instant with any storage engine). This is an old bug that MySQL has found and fixed and that we had not been aware of in the test or in MariaDB.
  • While writing this, I noticed that MySQL 8.0 now uses utf8mb4 as the default character set instead of latin1. This may affect some of the benchmarks slightly (not much, as most tests work with numbers, and Oracle claims that utf8mb4 is only 20% slower than latin1), but this needs to be verified.
  • Oracle claims that MySQL 8.0 is much faster on multi-user benchmarks. The above test indicates that they may have achieved this by sacrificing single-user performance.
  • We need to run more, and many different, benchmarks to better understand exactly what is going on. Stay tuned!

Short summary of my first run with MySQL 8.0:

  • Using the new caching_sha2_password authentication as the default for new installations is likely to cause a lot of problems for users. No old application will be able to use MySQL 8.0, installed with default options, without moving to MySQL’s client libraries. While working on this blog I saw MySQL users complain on IRC that not even MySQL Workbench can authenticate with MySQL 8.0. This is the first time in MySQL’s history that such an incompatible change has been made!
  • Atomic DDL is a good thing (we plan to have this in MariaDB 10.4), but it should not have such a drastic impact on performance. I am also a bit skeptical of MySQL 8.0 having just one copy of the data dictionary, as if this gets corrupted you will lose all your data (a single point of failure).
  • MySQL 8.0 has several new reserved words and has removed a lot of variables, which makes upgrades hard. Before upgrading to MySQL 8.0 one has to check all one’s databases and applications to ensure that there are no conflicts.
  • As my test above shows, if you have a single deprecated variable in your configuration files, the installation of MySQL will abort and can leave the database in an inconsistent state. I did of course do my tests by installing into an empty data directory, but one can assume that some of these problems may also happen when upgrading an old installation.

Conclusions:
In many ways, MySQL 8.0 has caught up with some earlier versions of MariaDB. For instance, in MariaDB 10.0, we introduced roles (four years ago). In MariaDB 10.1, we introduced encrypted redo/undo logs (three years ago). In MariaDB 10.2, we introduced window functions and CTEs (a year ago). However, some catch-up of MariaDB Server 10.2 features still remains for MySQL (such as check constraints, binlog compression, and log-based rollback).

MySQL 8.0 has a few interesting new features (mostly atomic DDL and the JSON_TABLE function), but at the same time MySQL has strayed away from some of the fundamental cornerstone principles of MySQL:

From the start of the first version of MySQL in 1995, all development has been focused around 3 core principles:

  • Ease of use
  • Performance
  • Stability

With MySQL 8.0, Oracle has sacrificed two of these three.

In addition (as part of ease of use), while I was working on MySQL, we did our best to ensure that the following should hold:

  • Upgrades should be trivial
  • Things should be kept compatible, if possible (don’t remove features/options/functions that are used)
  • Minimize reserved words, don’t remove server variables
  • One should be able to use normal OS commands to create and drop databases, and to copy and move tables around within the same system or between different systems. With 8.0 and the data dictionary, taking backups of specific tables will be hard, even if the server is not running.
  • mysqldump should always be usable for backups and for moving to new releases
  • Old clients and applications should be able to use ‘any’ MySQL server version unchanged. (Some Oracle client libraries, like C++, by default only support the new X protocol and can thus not be used with older MySQL or any MariaDB version)

We plan to add a data dictionary to MariaDB 10.4 or MariaDB 10.5, but in a way to not sacrifice any of the above principles!

The competition between MySQL and MariaDB is not just about a tactical arms race on features. It’s about design philosophy, or strategic vision, if you will.

This shows in two main ways: our respective view of the Storage Engine structure, and of the top-level direction of the roadmap.

On the Storage Engine side, MySQL is converging on InnoDB, even for clustering and partitioning. In doing so, they are abandoning the advantages of multiple ways of storing data. By contrast, MariaDB sees lots of value in the Storage Engine architecture: MariaDB Server 10.3 will see the general availability of MyRocks (for write-intensive workloads) and Spider (for scalable workloads). On top of that, we have ColumnStore for analytical workloads. One can use the CONNECT engine to join with other databases. The use of different storage engines for different workloads and different hardware is a competitive differentiator, now more than ever.

On the roadmap side, MySQL is carefully steering clear of features that close the gap between MySQL and Oracle. MariaDB has no such constraints. With MariaDB 10.3, we are introducing PL/SQL compatibility (Oracle’s stored procedures) and AS OF (built-in system-versioned tables with point-in-time querying). For both of those features, MariaDB is the first open source database to do so. I don’t expect Oracle to provide any of the above features in MySQL!

Also on the roadmap side, MySQL is not working with the ecosystem to extend its functionality. In 2017, MariaDB accepted more code contributions in one year than MySQL has during its entire lifetime, and the rate is increasing!

I am sure that the experience I had testing MySQL 8.0 would have been significantly better if MySQL had an open development model, where the community could easily participate in developing and testing MySQL continuously. Most of the confusing error messages and strange behavior would have been found and fixed long before the GA release.

Before upgrading to MySQL 8.0, please read https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html to see what problems you can run into! Don’t expect old installations or applications to work out of the box without testing, as a lot of features and options have been removed (the query cache, partitioning of MyISAM tables, etc.)! You probably also have to revise your backup methods, especially if you ever want to restore just a few tables. (With 8.0, I don’t know how this can be easily done.)

According to the MySQL 8.0 release notes, one can’t use mysqldump to copy a database to MySQL 8.0. One first has to move to a MySQL 5.7 GA version (with mysqldump, as recommended by Oracle) and then to MySQL 8.0 with an in-place update. I assume this means that all old mysqldump backups are useless for MySQL 8.0?

MySQL 8.0 seems to be a one-way street to an unknown future. Up to MySQL 5.7 it has been trivial to move to MariaDB, and one could always move back to MySQL with mysqldump. All MySQL client libraries have worked with MariaDB, and all MariaDB client libraries have worked with MySQL. With MySQL 8.0 this has changed in the wrong direction.

As long as you are using MySQL 5.7 and below you have choices for your future, after MySQL 8.0 you have very little choice. But don’t despair, as MariaDB will always be able to load a mysqldump file and it’s very easy to upgrade your old MySQL installation to MariaDB 🙂

I wish you good luck to try MySQL 8.0 (and also the upcoming MariaDB 10.3)!

Announcing Coolest Projects North America

Post Syndicated from Courtney Lentz original https://www.raspberrypi.org/blog/coolest-projects-north-america/

The Raspberry Pi Foundation loves to celebrate people who use technology to solve problems and express themselves creatively, so we’re proud to expand the incredibly successful event Coolest Projects to North America. This free event will be held on Sunday 23 September 2018 at the Discovery Cube Orange County in Santa Ana, California.

Coolest Projects North America logo Raspberry Pi CoderDojo

What is Coolest Projects?

Coolest Projects is a world-leading showcase that empowers and inspires the next generation of digital creators, innovators, changemakers, and entrepreneurs. The event is both a competition and an exhibition to give young digital makers aged 7 to 17 a platform to celebrate their successes, creativity, and ingenuity.

showcase crowd — Coolest Projects North America

In 2012, Coolest Projects was conceived as an opportunity for CoderDojo Ninjas to showcase their work and for supporters to acknowledge these achievements. Week after week, Ninjas would meet up to work diligently on their projects, hacks, and code; however, it can be difficult for them to see their long-term progress on a project when they’re concentrating on its details on a weekly basis. Coolest Projects became a dedicated time each year for Ninjas and supporters to reflect, celebrate, and share both the achievements and challenges of the maker’s journey.

three female coolest projects attendees — Coolest Projects North America

Coolest Projects North America

Not only is Coolest Projects expanding to North America, it’s also expanding its participant pool! Members of our team have met so many amazing young people creating in all areas of the world that it simply made sense to widen our outreach to include Code Clubs, students of Raspberry Pi Certified Educators, and members of the Raspberry Jam community at large, as well as CoderDojo attendees.

 a boy showing a technology project to an old man, with a girl playing on a laptop on the floor — Coolest Projects North America

Exhibit and attend Coolest Projects

Coolest Projects is a free, family- and educator-friendly event. Young people can apply to exhibit their projects, and the general public can register to attend this one-day event. Be sure to register today, because you make Coolest Projects what it is: the coolest.

Reddit Copyright Complaints Jump 138% But Almost Half Get Rejected

Post Syndicated from Andy original https://torrentfreak.com/reddit-copyright-complaints-jump-138-but-almost-half-get-rejected-180411/

So-called ‘transparency reports’ are becoming increasingly popular with Internet-based platforms and their users. Among other things, they provide much-needed insight into how outsiders attempt to censor content published online and what actions are taken in response.

Google first started publishing its report in 2010, Twitter followed in 2012, and they’ve now been joined by a multitude of major companies including Microsoft, Facebook and Cloudflare.

As one of the world’s most recognized sites, Reddit joined the transparency party fairly late, publishing its first report in early 2015. While light on detail, it revealed that in the previous year the site received just 218 requests to remove content, 81% of which were DMCA-style copyright notices. A significant 62% of those copyright-related requests were rejected.

Over time, Reddit’s reporting has become a little more detailed. Last April it revealed that in 2016, the platform received ‘just’ 3,294 copyright removal requests for the entire year. However, what really caught the eye was how many notices were rejected. In just 610 instances, Reddit was required to remove content from the site, a rejection rate of 81%.

A year on from Reddit’s last report, the company has just published its latest edition, covering the period January 1, 2017 to December 31, 2017.

“Reddit publishes this transparency report every year as part of our ongoing commitment to keep you aware of the trends on the various requests regarding private Reddit user account information or removal of content posted to Reddit,” the company said in a statement.

“Reddit believes that maintaining this transparency is extremely important. We want you to be aware of this information, consider it carefully, and ask questions to keep us accountable.”

The detailed report covers a wide range of topics, including government requests for the preservation or production of user information (there were 310) and even an instruction to monitor one Reddit user’s activities in real time via a so-called ‘Trap and Trace’ order.

In copyright terms, there has been significant movement. In 2017, Reddit received 7,825 notifications of alleged copyright infringement under the Digital Millennium Copyright Act, that’s up roughly 138% over the 3,294 notifications received in 2016.

For a platform of Reddit’s unquestionable size, these volumes are not big. While the massive percentage increase is notable, the site still receives less than 10 complaints each day. For comparison, Google receives millions every week.

But perhaps most telling is that despite receiving more than 7,800 DMCA-style takedown notices, these resulted in Reddit carrying out just 4,352 removals. This means that for whatever reasons (Reddit doesn’t specify), 3,473 requests were denied, a rejection rate of 44.38%. Google, on the other hand, removes around 90% of content reported.

DMCA notices can be declared invalid for a number of reasons, from incorrect formatting through to flat-out abuse. In many cases, copyright law is incorrectly applied and it’s not unknown for complainants to attempt a DMCA takedown to stifle speech or perceived competition.

Reddit says it tries to take all things into consideration before removing content.

“Reddit reviews each DMCA takedown notice carefully, and removes content where a valid report is received, as required by the law,” the company says.

“Reddit considers whether the reported content may fall under an exception listed in the DMCA, such as ‘fair use,’ and may ask for clarification that will assist in the review of the removal request.”

Considering the number of community-focused “subreddits” dedicated to piracy (not just general discussion, but actual links to content), the low volume of copyright notices received by Reddit continues to baffle.

There are sections in existence right now offering many links to movies and TV shows hosted on various file-hosting sites. They’re the type of links that are targeted all the time whenever they appear in Google search, but copyright owners don’t appear to notice or care about them on Reddit.

Finally, it would be nice if Reddit could provide more information in next year’s report, including detail on why so many requests are rejected. Perhaps regular submission of notices to the Lumen Database would be something Reddit would consider for the future.

Reddit’s Transparency Report for 2017 can be found here.

Backblaze Announces B2 Compute Partnerships

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/introducing-cloud-compute-services/

In 2015, we announced Backblaze B2 Cloud Storage — the most affordable, high performance storage cloud on the planet. The decision to release B2 as a service was in direct response to customers asking us if they could use the same cloud storage infrastructure we use for our Computer Backup service. With B2, we entered a market in direct competition with Amazon S3, Google Cloud Services, and Microsoft Azure Storage. Today, we have over 500 petabytes of data from customers in over 150 countries. At $0.005 / GB / month for storage (1/4th of S3) and $0.01 / GB for downloads (1/5th of S3), it turns out there’s a healthy market for cloud storage that’s easy and affordable.
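
To make the pricing concrete with a small worked example (using the rates and ratios stated above): storing 10 TB in B2 costs 10,000 GB × $0.005 = $50 per month, and downloading all 10 TB once costs 10,000 GB × $0.01 = $100. At the stated 1/4th and 1/5th ratios, the equivalent on S3 would run roughly $200 per month for storage and $500 for the download.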

As B2 has grown, customers have wanted to use our cloud storage for a variety of use cases that require not only storage but compute. Today, through partnerships with Packet and ServerCentral, we’re happy to announce that compute is now available for B2 customers.

Cloud Compute and Storage

Backblaze has directly connected B2 with the compute servers of Packet and ServerCentral, thereby allowing near-instant (< 10 ms) data transfers between services. Also, transferring data between B2 and both our compute partners is free.

  • Storing data in B2 and want to run an AI analysis on it? — There are no fees to move the data to our compute partners.
  • Generating data in an application? — Run the application with one of our partners and store it in B2.
  • Transfers are free and you’ll save more than 50% off of the equivalent set of services from AWS.

These partnerships enable B2 customers to use compute, give our compute partners’ customers access to cloud storage, and introduce new customers to industry-leading storage and compute — all with high-performance, low-latency, and low-cost.

Is This a Big Deal? We Think So

Compute is one of the most requested services from our customers. Why? Because it unlocks a number of use cases for them. Let’s look at three popular examples:

Transcoding Media Files

B2 has earned wide adoption in the Media & Entertainment (“M&E”) industry. Our affordable storage and download pricing make B2 great for a wide variety of M&E use cases. But many M&E workflows require compute. Content syndicators, like American Public Television, need the ability to transcode files to meet localization and distribution management requirements.

There are a multitude of reasons that transcoding is needed — thumbnail and proxy generation enable M&E professionals to work efficiently. Without compute, the act of transcoding files remains cumbersome. Either the files need to be brought down from the cloud, transcoded, and then pushed back up, or they must be kept locally until the project is complete. Both scenarios are inefficient.

Starting today, any content producer can spin up compute with one of our partners, pay by the hour for their transcode processing, and return the new media files to B2 for storage and distribution. The company saves money, moves faster, and ensures their files are safe and secure.

Disaster Recovery

Backblaze’s heritage is based on providing outstanding backup services. When you have incredibly affordable cloud storage, it ends up being a great destination for your backup data.

Most enterprises have virtual machines (“VMs”) running in their infrastructure and those VMs need to be backed up. In a disaster scenario, a business wants to know they can get back up and running quickly.

With all data stored in B2, a business can get up and running quickly. Simply restore your backed-up VM to one of our compute providers, and your business will be back online.

Since B2 does not place restrictions, delays, or penalties on getting data out, customers can get back up and running quickly and affordably.

Saving $74 Million (aka “The Dropbox Effect”)

Ten years ago, Backblaze decided that S3 was too costly a platform on which to build its cloud storage business. Instead, we created the Backblaze Storage Pod and our own cloud storage infrastructure. That decision enabled us to offer our customers storage at a previously unavailable price point and to maintain those prices for over a decade. It also laid the foundation for Netflix Open Connect and Facebook Open Compute.

Dropbox recently migrated the majority of its cloud services off of AWS and onto its own infrastructure. Even after paying to build out its own data centers, Dropbox saved over $74 million by leaving AWS. Those savings came from avoiding the fees AWS charges for storing and downloading data, fees which, incidentally, are five times higher than Backblaze B2’s.

Dropbox could realize those savings because it has access to enough capital and expertise to build out its own infrastructure. For companies with such resources and scale, that’s a great answer.

“Before this offering, the economics of the cloud would have made our business simply unviable.” — Gabriel Menegatti, SlicingDice

The question Backblaze and our compute partners pondered was: “How can we democratize the Dropbox effect for our storage and compute customers? How can we help customers do more and pay less?” The answer we came up with was to connect Backblaze’s B2 storage with strategic compute partners and remove any transfer fees between them. You may not save $74 million as Dropbox did, but you can choose the optimal providers for your use case and realize significant savings in the process.

This Sounds Good — Tell Me More About Your Partners

We’re very fortunate to be launching our compute program with two fantastic partners in Packet and ServerCentral. These partners allow us to offer a range of computing services.

Packet

We recommend Packet for customers that need on-demand, high-performance, bare metal servers available by the hour. They also have robust offerings for private / customized deployments. Their servers end up costing 50-75% of the equivalent offerings from EC2.

To get started with Packet and B2, visit our partner page on Packet.net.

ServerCentral

ServerCentral is the right partner for customers whose business and IT challenges require more than “just” hardware. They specialize in fully managed, custom cloud solutions that solve such challenges, and they also have expertise in managed network solutions to address global connectivity and content delivery.

To get started with ServerCentral and B2, visit our partner page on ServerCentral.com.

What’s Next?

We’re excited to find out. The combination of B2 and compute unlocks use cases that were previously impossible or at least unaffordable.

“The combination of performance and price offered by this partnership enables me to create an entirely new business line. Before this offering, the economics of the cloud would have made our business simply unviable,” noted Gabriel Menegatti, co-founder at SlicingDice, a serverless data warehousing service. “Knowing that transfers between compute and B2 are free means I don’t have to worry about my business being successful. And, with download pricing from B2 at just $0.01/GB, I know I’m avoiding a 400% tax from AWS on data I retrieve.”

What can you do with B2 & compute? Please share your ideas with us in the comments. And, for those attending NAB 2018 in Las Vegas next week, please come by and say hello!

The post Backblaze Announces B2 Compute Partnerships appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

MagPi 68: an in-depth look at the new Raspberry Pi 3B+

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-68/

Hi folks, Rob from The MagPi here! You may remember that a couple of weeks ago, the Raspberry Pi 3 Model B+ was released, the updated version of the Raspberry Pi 3 Model B. It’s better, faster, and stronger than the original and it’s also the main topic in The MagPi issue 68, out now!

Everything you need to know about the new Raspberry Pi 3B+

What goes into ‘plussing’ a Raspberry Pi? We talked to Eben Upton and Roger Thornton about the work that went into making the Raspberry Pi 3B+, and we also have all the benchmarks to show you just how much the new Pi 3B+ has been improved.

Super fighting robots

Did you know that the next Pi Wars is soon? The 2018 Raspberry Pi robotics competition is taking place later in April, and we’ve got a full feature on what to expect, as well as top tips on how to make your own kick-punching robot for the next round.

More to read

Still want more after all that? Well, we have our usual excellent selection of outstanding project showcases, reviews, and tutorials to keep you entertained.

See pictures from Raspberry Pi’s sixth birthday, celebrated around the world!

This includes amazing projects like a custom Pi-powered, Switch-esque retro games console, a Minecraft Pi hack that creates a house at the touch of a button, and the Matrix Voice.

With a Pi and a 3D printer, you can make something as cool as this!

Get The MagPi 68

Issue 68 is available today from WHSmith, Tesco, Sainsbury’s, and Asda. If you live in the US, head over to your local Barnes & Noble or Micro Center in the next few days for a print copy. You can also get the new issue online from our store, or digitally via our Android and iOS apps. And don’t forget, there’s always the free PDF as well.

New subscription offer!

Want to support the Raspberry Pi Foundation and the magazine? We’ve launched a new way to subscribe to the print version of The MagPi: you can now take out a monthly £4 subscription to the magazine, effectively creating a rolling pre-order system that saves you money on each issue.

You can also take out a twelve-month print subscription and get a Pi Zero W, Pi Zero case, and adapter cables absolutely free! This offer does not currently have an end date.

That’s it for now. See you next month!

The post MagPi 68: an in-depth look at the new Raspberry Pi 3B+ appeared first on Raspberry Pi.

GoDaddy Ordered to Suspend Four Music Piracy Domains

Post Syndicated from Andy original https://torrentfreak.com/godaddy-ordered-to-suspend-four-music-piracy-domains-180327/

There are many methods used by copyright holders and the authorities in their quest to disable access to pirate sites.

Site blocking is one of the most popular but pressure can also be placed on web hosts to prevent them from doing business with questionable resources. A skip from one host to another usually solves the problem, however.

Another option is to target sites’ domains directly, by putting pressure on their registrars. It’s a practice that has famously seen The Pirate Bay burn through numerous domains in recent years, only for it to end up back on its original domain, apparently unscathed. Other sites, it appears, aren’t always so lucky.

As a full member of IFPI, the Peruvian Union of Phonographic Producers (UNIMPRO) protects the rights of record labels and musicians. Like its counterparts all over the world, UNIMPRO has a piracy problem and a complaint filed against four ‘pirate’ sites will now force the world’s largest domain registrar into action.

Mp3Juices-Download-Free.com, Melodiavip.net, Foxmusica.site and Fulltono.me were all music sites offering MP3 content without the copyright holders’ permission. None are currently available but the screenshot below shows how the first platform appeared before it was taken offline.

MP3 Juices Download Free

Following a complaint against the sites by UNIMPRO, the Copyright Commission (Comisión de Derecho de Autor) conducted an investigation into the platforms’ activities. The Commission found that the works they facilitated access to infringed copyright. It was also determined that each site generated revenue from advertising.

Given the illegal nature of the sites and the high volume of visitors they attract, the Commission determined that they were causing “irreparable damage” to legitimate copyright holders. Something, therefore, needed to be done.

The action against the sites involved the National Institute for the Defense of Competition and the Protection of Intellectual Property (Indecopi), an autonomous public body of the Peruvian state tasked with handling anti-competitive behavior, unfair competition, and intellectual property matters.

Indecopi HQ

After assessing the evidence, Indecopi, through the Copyright Commission, issued precautionary (interim) measures compelling US-based GoDaddy, the world’s largest domain registrar which handles the domains for all four sites, to suspend them with immediate effect.

“The Copyright Commission of INDECOPI issued four precautionary measures in order that the US company Godaddy.com, LLC (in its capacity as registrar of domain names) suspend the domains of four websites, through which it would have infringed the legislation on Copyright and Related Rights, by making available a large number of musical phonograms without the corresponding authorization, to the detriment of its legitimate owners,” Indecopi said in a statement.

“The suspension was based on the great evidence that was provided by the Commission, on the four websites that infringe copyright, and in the framework of the policy of support for the protection of intellectual property.”

Indecopi says that GoDaddy can file an appeal against the decision. At the time of writing, none of the four domains currently returns a working website.

TorrentFreak has requested a comment from GoDaddy but at the time of publication, we were yet to receive a response.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Friday Squid Blogging: Giant Squid Stealing Food from Each Other

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/friday_squid_bl_617.html

An interesting hunting strategy:

Off of northern Spain, giant squid often feed on schools of fish called blue whiting. The schools swim 400 meters or less below the surface, while the squid prefer to hang out around a mile deep. The squid must ascend to hunt, probably seizing fish from below with their tentacles, then descend again. In this scenario, a squid could save energy by pirating food from its neighbor rather than hunting its own fish, Guerra says: If the target squid has already carried its prey back to the depths to eat, the pirate could save itself a trip up to the shallow water. Staying below would also protect a pirate from predators such as dolphins and sperm whales that hang around the fish schools.

If a pirate happened to kill its victim, it would also reduce competition. The scientists think that’s what happened with the Bares squid: Its tentacles were ripped off in the fight over food. “The victim, disoriented and wounded, could enter a warmer mass of water in which the efficiency of their blood decreases markedly,” the authors write in a recent paper in the journal Ecology. “In this way, the victim, almost asphyxiated, would be at the mercy of the marine currents, being dragged toward the coast.”

It’s called “food piracy.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

EFF Urges US Copyright Office To Reject Proactive ‘Piracy’ Filters

Post Syndicated from Andy original https://torrentfreak.com/eff-urges-us-copyright-office-to-reject-proactive-piracy-filters-180213/

Faced with millions of individuals consuming unlicensed audiovisual content from a variety of sources, entertainment industry groups have been seeking solutions closer to the roots of the problem.

As widespread site-blocking attempts to tackle ‘pirate’ sites in the background, greater attention has turned to legal platforms that host both licensed and unlicensed content.

Under current legislation, these sites and services can do business relatively comfortably due to the so-called safe harbor provisions of the US Digital Millennium Copyright Act (DMCA) and the European Union Copyright Directive (EUCD).

Both sets of legislation ensure that Internet platforms can avoid being held liable for the actions of others provided they themselves address infringement when they are made aware of specific problems. If a video hosting site has a copy of an unlicensed movie uploaded by a user, for example, it must be removed within a reasonable timeframe upon request from the copyright holder.

However, in both the US and EU there is mounting pressure to make it more difficult for online services to achieve ‘safe harbor’ protections.

Entertainment industry groups believe that platforms use the law to turn a blind eye to infringing content uploaded by users, content that is often monetized before being taken down. With this in mind, copyright holders on both sides of the Atlantic are pressing for more proactive regimes, ones that will see Internet platforms install filtering mechanisms to spot and discard infringing content before it can reach the public.

While such a system would be welcomed by rightsholders, Internet companies are fearful of a future in which they could be held more liable for the infringements of others. They’re supported by the EFF, who yesterday presented a petition to the US Copyright Office urging caution over potential changes to the DMCA.

“As Internet users, website owners, and online entrepreneurs, we urge you to preserve and strengthen the Digital Millennium Copyright Act safe harbors for Internet service providers,” the EFF writes.

“The DMCA safe harbors are key to keeping the Internet open to all. They allow anyone to launch a website, app, or other service without fear of crippling liability for copyright infringement by users.”

It is clear that pressure to introduce mandatory filtering is a concern to the EFF. Filters are blunt instruments that cannot fathom the intricacies of fair use and are liable to stifle free speech and stymie innovation, they argue.

“Major media and entertainment companies and their surrogates want Congress to replace today’s DMCA with a new law that would require websites and Internet services to use automated filtering to enforce copyrights.

“Systems like these, no matter how sophisticated, cannot accurately determine the copyright status of a work, nor whether a use is licensed, a fair use, or otherwise non-infringing. Simply put, automated filters censor lawful and important speech,” the EFF warns.

While its introduction was voluntary and doesn’t affect the company’s safe harbor protections, YouTube already has its own content filtering system in place.

ContentID is able to detect the nature of some content uploaded by users and give copyright holders a chance to remove or monetize it. The company says that the majority of copyright disputes are now handled by ContentID but the system is not perfect and mistakes are regularly flagged by users and mentioned in the media.
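
To make the “blunt instrument” argument concrete, consider the simplest possible filter. The sketch below is deliberately naive and is not how ContentID works internally (ContentID uses perceptual matching rather than exact hashes), but it shares the key limitation: a match tells you what an upload contains, never whether the use is licensed or fair.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Stand-in for a perceptual fingerprint: here, just an exact hash.
    return hashlib.sha256(data).hexdigest()

# Reference fingerprints supplied by rightsholders (hypothetical catalog).
REFERENCE = {
    fingerprint(b"<bytes of Studio X feature film>"): "Studio X - Feature Film",
}

def filter_upload(upload: bytes) -> str:
    title = REFERENCE.get(fingerprint(upload))
    if title is None:
        return "publish"
    # The matcher only knows the upload matches known content. It cannot
    # know whether the uploader holds a license, or whether this is fair
    # use: criticism, parody, a short clip in a review, and so on.
    return f"block: matches {title}"

print(filter_upload(b"<bytes of Studio X feature film>"))  # block
print(filter_upload(b"original home video"))               # publish
```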

However, ContentID was also very expensive to implement, so expecting smaller companies to deploy something similar on much more limited budgets could be a burden too far, the EFF warns.

“What’s more, even deeply flawed filters are prohibitively expensive for all but the largest Internet services. Requiring all websites to implement filtering would reinforce the market power wielded by today’s large Internet services and allow them to stifle competition. We urge you to preserve effective, usable DMCA safe harbors, and encourage Congress to do the same,” the EFF notes.

The same arguments, for and against, are currently raging in Europe where the EU Commission proposed mandatory upload filtering in 2016. Since then, opposition to the proposals has been fierce, with warnings of potential human rights breaches and conflicts with existing copyright law.

Back in the US, there are additional requirements for a provider to qualify for safe harbor, including having a named designated agent tasked with receiving copyright infringement notifications. This person’s name must be listed on a platform’s website and submitted to the US Copyright Office, which maintains a centralized online directory of designated agents’ contact information.

Under new rules, agents must be re-registered with the Copyright Office every three years, despite that not being a requirement under the DMCA. The EFF is concerned that by simply failing to re-register an agent, an otherwise responsible website could lose its safe harbor protections, even if the agent’s details have remained the same.

“We’re concerned that the new requirement will particularly disadvantage small and nonprofit websites. We ask you to reconsider this rule,” the EFF concludes.

The EFF’s letter to the Copyright Office can be found here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Post-Quantum Algorithms

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/12/post-quantum_al.html

NIST has organized a competition for public-key algorithms secure against a quantum computer. It recently published all of its Round 1 submissions. (Details of the NIST efforts are here. A timeline for the new algorithms is here.)

UEFA Obtains High Court Injunction To Block Live Soccer Streaming

Post Syndicated from Andy original https://torrentfreak.com/uefa-obtains-high-court-injunction-to-block-live-soccer-streaming-171226/

Earlier this year the English Premier League (EPL) obtained a unique High Court injunction which required ISPs including Sky, BT, and Virgin to block ‘pirate’ football streams in real-time.

When that temporary injunction ran out, the EPL went back to court for a new one, valid for the season that began in August. After what appeared to be a slow start, the effort began to produce significant results, preventing thousands of Internet subscribers from accessing illicit streams via websites, Kodi add-ons, and premium IPTV services.

Encouraged by its successes, the EPL has now been joined by an even bigger soccer organization. The Union of European Football Associations (UEFA) is the governing body of soccer in Europe, and it too will jump onto the site- and server-blocking bandwagon, almost certainly utilizing the same system being deployed by the Premier League.

UEFA first had to obtain permission from the High Court. That came in the form of an application for injunction filed by the organization against ISPs BT, EE, Plusnet, Sky, TalkTalk, and Virgin Media. It demanded that they “take measures to block, or at least impede, access by their customers to streaming servers which deliver infringing live streams of UEFA Competition matches to UK consumers.”

In other countries, ISPs have defended such cases but in the UK, the position is very different. All providers except TalkTalk actually supported the application, with BT, Sky, and Virgin filing evidence in its favor.

The application seemed somewhat academic. All parties previously agreed to its terms and it was supported by the Premier League and the Formula One World Championship, whose content is also streamed illegally by some of the same servers.

The High Court found that the application was broadly similar to that previously filed by the Premier League so the legal basis for granting the injunction remained the same.

Citing two big rulings from the EU Court this year (one involving The Pirate Bay, the other cloud-recording service VCAST), Mr Justice Arnold said that evidence filed by the Premier League showed that a similar order had proven “very effective”.

The Judge also noted that no evidence of over-blocking as a result of the previous injunctions had been presented and that this injunction would contain “an additional safeguard” in that respect. Details of this measure and almost every other technical aspect of the injunction remain confidential, as is the case with the Premier League’s efforts.

Justice Arnold’s order will take effect on 13 February 2018 and last until 26 May 2018. People reliant on pirate streams for their football/soccer fix will continue to experience issues, with many having no other choice than to resort to VPNs to access blocked streams.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Pioneers winners: only you can save us

Post Syndicated from Erin Brindley original https://www.raspberrypi.org/blog/pioneers-winners-only-you-can-save-us/

She asked for help, and you came to her aid. Pioneers, the winners of the Only you can save us challenge have been picked!

Can you see me? Only YOU can save us!

I need your help. This is a call out for those between 11- and 16-years-old in the UK and Republic of Ireland. Something has gone very, very wrong and only you can save us. I’ve collected together as much information for you as I can. You’ll find it at http://www.raspberrypi.org/pioneers.

The challenge

In August we intercepted an emergency communication from a lonesome survivor. She seemed to be in quite a bit of trouble, and asked all you young people aged 11 to 16 to come up with something to help tackle the oncoming crisis, using whatever technology you had to hand. You had ten weeks to work in teams of two to five with an adult mentor to fulfil your mission.

The judges

We received your world-saving ideas, and our savvy survivor pulled together a ragtag bunch of apocalyptic experts to help us judge which ones would be the winning entries.

Dr Shini Somara

Dr Shini Somara is an advocate for STEM education and a mechanical engineer. She was host of The Health Show and has appeared in documentaries for the BBC, PBS Digital, and Sky. You can check out her work hosting Crash Course Physics on YouTube.

Prof Lewis Dartnell is an astrobiologist and author of the book The Knowledge: How to Rebuild Our World From Scratch.

Emma Stephenson has a background in aeronautical engineering and currently works in the Shell Foundation’s Access to Energy and Sustainable Mobility portfolio.

The winners

Our survivor is currently putting your entries to good use repairing, rebuilding, and defending her base. Our judges chose the following projects as outstanding examples of world-saving digital making.

Theme winner: Computatron

Raspberry Pioneers 2017 – Nerfus Dislikus Killer Robot

This is our entry to the pioneers ‘Only you can save us’ competition. Our team name is Computatrum. Hope you enjoy!

Are you facing an unknown enemy whose only weakness is Nerf bullets? Then this is the robot for you! We loved the especially apocalyptic feel of the Computatron’s cleverly hacked and repurposed elements. The team even used an old floppy disc mechanism to help fire their bullets!

Technically brilliant: Robot Apocalypse Committee

Pioneers Apocalypse 2017 – RationalPi

Thousands of lines of code… Many sheets of acrylic… A camera, touchscreen and fingerprint scanner… This is our entry into the Raspberry Pi Pioneers2017 ‘Only YOU can Save Us’ theme. When zombies or other survivors break into your base, you want a secure way of storing your crackers.

The Robot Apocalypse Committee is back, and this time they’ve brought cheese! The crew designed a cheese- and cracker-dispensing machine complete with face and fingerprint recognition to ensure those rations last until the next supply drop.

Best explanation: Pi Chasers

Tala – Raspberry Pi Pioneers Project

Hi! We are PiChasers and we entered the Raspberry Pi Pioneers challenge last time when the theme was “Make it Outdoors!” but now we’ve been faced with another theme, “Apocalypse”. We spent a while thinking of an original thing that would help in an apocalypse and decided upon a ‘text-only phone’ which uses local radio communication rather than cellular.

This text-based communication device encased in a tupperware container could be a lifesaver in a crisis! And luckily, the Pi Chasers produced an excellent video and amazing GitHub repo, ensuring that any and all survivors will be able to build their own in the safety of their base.

Most inspiring journey: Three Musketeers

Pioneers Entry – The Apocalypse

Pioneers Entry Team Name: The Three Musketeers. Team Participants: James, Zach and Tom.

We all know that zombies are terrible at geometry, and the Three Musketeers used this fact to their advantage when building their zombie security system. We were impressed to see the team working together to overcome the roadblocks they faced along the way.

We appreciate what you’re trying to do: Zombie Trolls

Zombie In The Middle

Playing piggy in the middle with zombies sure is a unique way of saving humankind from total extinction! We loved this project idea, and although the Zombie Trolls had a little trouble with their motors, we’re sure with a little more tinkering this zombie-fooling contraption could save us all.

Most awesome

Our judges also wanted to give a special commendation to the following teams for their equally awesome apocalypse-averting ideas:

  • PiRates, for their multifaceted zombie-proofing defence system and the high production value of their video
  • Byte them Pis, for their beautiful zombie-detecting doormat
  • Unatecxon, for their impressive bunker security system
  • Team Crompton, for their pressure-activated door system
  • Team Ernest, for their adventures in LEGO

The prizes

All our winning teams have secured exclusive digital maker boxes, jam-packed with tantalising tech to satisfy all tinkering needs.

Our theme winners have also secured themselves a place at Coolest Projects 2018 in Dublin, Ireland!

Thank you to everyone who got involved in this round of Pioneers. Look out for your awesome submission swag arriving in the mail!

The post Pioneers winners: only you can save us appeared first on Raspberry Pi.

YouTuber Convicted For Publishing Video Piracy ‘Tutorials’

Post Syndicated from Andy original https://torrentfreak.com/youtuber-convicted-for-publishing-video-piracy-tutorials-171212/

While piracy-focused tutorials have been around for many years, the advent of streaming piracy coupled with the rise of the YouTube star created a perfect storm online.

Even a cursory search on YouTube now turns up thousands of Kodi addon and IPTV-focused channels, each vying to become the ultimate location for the latest and hottest piracy tips. While these videos don’t appear to be a priority for copyright holders, a channel operator in Brazil has just discovered that they aren’t without consequences.

The case involves Marcelo Otto Nascimento, the operator of the YouTube channel Café Tecnológico. The channel began, strangely, with videos about baking bread, but later branched out into technological topics, including observations on streaming content without paying for it.

In time, this attracted the negative attention of local TV industry group Associação Brasileira de Televisões por Assinatura (Brazilian Association of Subscription Television / ABTA). The group eventually took legal action, complaining about the nature of Nascimento’s YouTube and Facebook pages.

ABTA told the court that Nascimento had been posting tutorials that “encourage the use of equipment and applications designed to allow access to services and content” of its members, despite that content being protected by copyright. The trade group called for the removal of the content, an injunction against Nascimento, an apology, plus compensation for “material and moral damages.”

In his defense, Nascimento said that he merely commented on IPTV systems, did not breach copyright, did not engage in unfair competition, and did not cause the TV companies to incur any losses. Overall, Judge Fernando Henrique de Oliveira Biolcati did not agree with his assertions.

“[T]he plain intention of the defendant was to guide users in order for them to obtain access to the restricted content of the applicant’s associates….while gaining advantages for this, especially via remuneration from the providers of the mentioned applications (YouTube and Facebook), proportional to the volumes of visitors,” the Judge wrote in his ruling.

“This is not a question of mere disinterested comments, in the exercise of freedom of expression,” he added.

As a result, Nascimento was ordered to remove all of his online content that could be deemed instructional for pirates, in order to protect the interests of ABTA’s members and their ability to earn revenue from their content. In addition, the channel operator was forbidden from publishing any more videos of a similar nature.

On top, Nascimento must now pay the copyright holders for material damages, yet to be determined, measured from the posting of the first ‘pirate’ tutorial until such a date when all of the tutorials have been removed.

The ruling (PDF via Mg, Portuguese) also requires Nascimento to pay the equivalent of US$7,600 for “moral damages”, plus legal costs, within the next 15 days.

In a statement, ABTA said that following this conviction, more people could fall under the spotlight.

“ABTA is also monitoring the activities of other channels on YouTube and on social networks that publish illegal content such as channel lists, movies and ‘free’ access TV series, as well as tutorials and comparisons of devices or applications intended for illicit use (such as Megabox, HtvBox, Kodi, Dejavu, IPTV, ITVGo, etc.),” the group said.

Meanwhile, Nascimento says that he would’ve taken the videos down if only ABTA had asked him to. He will be appealing the decision, claiming that the videos did not teach people about piracy, they only demonstrated functionality. YouTube declined to comment.

Update: Following publication, a spokesperson for TVAddons – which has previously published instructional videos for Kodi – commented to TorrentFreak on the apparent urgency to take this matter to court, rather than handle via YouTube’s established complaints procedure.

“Taking the matter to courts rather than going through YouTube’s takedown system is part of an increasing pattern of legal bullying in the realm of intellectual property enforcement. Fighting a lawsuit against a major corporation can cost more than buying a house, it’s not a fair playing field for your average individual,” he said.

One of the remaining IPTV-focused videos

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Libertarians are against net neutrality

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/12/libertarians-are-against-net-neutrality.html

This post claims to be by a libertarian in support of net neutrality. As a libertarian, I need to debunk this. “Net neutrality” is a case of one hand clapping: you rarely hear the competing side, and thus the side you do hear may sound attractive. This post is about the other side, from a libertarian point of view.

That post just repeats the common, and wrong, left-wing talking points. I mean, there might be a libertarian case for some broadband regulation, but this isn’t it.

This thing they call “net neutrality” is just left-wing politics masquerading as some sort of principle. It’s no different than how people claim to be “pro-choice”, yet demand forced vaccinations. Or, it’s no different than how people claim to believe in “traditional marriage” even while they are on their third “traditional marriage”.

Properly defined, “net neutrality” means no discrimination of network traffic. But nobody wants that. A classic example is how most internet connections have faster download speeds than uploads. This discriminates against upload traffic, harming innovation in upload-centric applications like DropBox’s cloud backup or BitTorrent’s peer-to-peer file transfer. Yet activists never mention this, or other types of network traffic discrimination, because they no more care about “net neutrality” than Trump or Gingrich care about “traditional marriage”.

Instead, when people say “net neutrality”, they mean “government regulation”. It’s the same old debate between who is the best steward of consumer interest: the free-market or government.

Specifically, in the current debate, they are referring to the Obama-era FCC “Open Internet” order and reclassification of broadband under “Title II” so they can regulate it. Trump’s FCC is putting broadband back to “Title I”, which means the FCC can’t regulate most of its “Open Internet” order.

Don’t be tricked into thinking the “Open Internet” order is anything but intensely political. The premise behind the order is the Democrats’ firm belief that it’s government who created the Internet, and that all innovation, advances, and investment ultimately come from the government. It sees ISPs as inherently deceitful entities who will only serve their own interests, at the expense of consumers, unless the FCC protects consumers.

It says so right in the order itself. It starts with the premise that broadband ISPs are evil, using illegitimate “tactics” to hurt consumers, and continues with similar language throughout the order.

A good contrast to this can be seen in Tim Wu’s non-political original paper in 2003 that coined the term “net neutrality”. Whereas the FCC sees broadband ISPs as enemies of consumers, Wu saw them as allies. His concern was not that ISPs would do evil things, but that they would do stupid things, such as favoring short-term interests over long-term innovation (such as having faster downloads than uploads).

The political depravity of the FCC’s order can be seen in this comment from one of the commissioners who voted for those rules:

FCC Commissioner Jessica Rosenworcel wants to increase the minimum broadband standards far past the new 25Mbps download threshold, up to 100Mbps. “We invented the internet. We can do audacious things if we set big goals, and I think our new threshold, frankly, should be 100Mbps. I think anything short of that shortchanges our children, our future, and our new digital economy,” Commissioner Rosenworcel said.

This is indistinguishable from communist rhetoric that credits the Party for everything, as this booklet from North Korea will explain to you.

But what about monopolies? After all, while the free-market may work when there’s competition, it breaks down where there are fewer competitors, oligopolies, and monopolies.

There is some truth to this: in individual cities, there’s often only a single credible high-speed broadband provider. But this isn’t the issue at stake here. The FCC isn’t proposing light-handed regulation to keep monopolies in check, but heavy-handed regulation that regulates every last decision.

Advocates of FCC regulation keep pointing out how broadband monopolies can exploit their rent-seeking positions in order to screw the customer. They keep coming up with ever more bizarre and unlikely scenarios of what monopoly power supposedly grants the ISPs.

But they never mention the simplest: that broadband monopolies can just charge customers more money. They imagine instead that these companies will pursue a string of outrageous, evil, and less profitable behaviors to exploit their monopoly position.

The FCC’s reclassification of broadband under Title II gives it full power to regulate ISPs as utilities, including setting prices. The FCC has stepped back from this, promising it won’t go so far as to set prices, that it’s only regulating these evil conspiracy theories. This is kind of bizarre: either broadband ISPs are evilly exploiting their monopoly power or they aren’t. Why stop at regulating only half the evil?

The answer is that the claim of “monopoly” power is a deception. It starts with overstating how many monopolies there are to begin with. When it issued its 2015 “Open Internet” order, the FCC simultaneously redefined what it meant by “broadband”, upping the speed from 5Mbps to 25Mbps. That’s because while most consumers have multiple choices at 5Mbps, fewer consumers have multiple choices at 25Mbps. It’s a dirty political trick to convince you there is more of a problem than there is.

In any case, their rules still apply to the slower broadband providers, and equally apply to the mobile (cell phone) providers. The US has four mobile phone providers (AT&T, Verizon, T-Mobile, and Sprint) and plenty of competition between them. That it’s monopolistic power that the FCC cares about here is a lie. As their Open Internet order clearly shows, the fundamental principle that animates the document is that all corporations, monopolies or not, are treacherous and must be regulated.

“But corporations are indeed evil”, people argue, “see here’s a list of evil things they have done in the past!”

No, those things weren’t evil. They were done because they benefited the customers, not as some sort of secret rent-seeking behavior.

For example, one of the more common “net neutrality abuses” that people mention is AT&T’s blocking of FaceTime. I’ve debunked this elsewhere on this blog, but the summary is this: there was no network blocking involved (not a “net neutrality” issue), and the FCC analyzed it and decided it was in the best interests of the consumer. It’s disingenuous to claim it’s an evil that justifies FCC action when the FCC itself declared it not evil and took no action. It’s disingenuous to cite the “net neutrality” principle that all network traffic must be treated equally when, in fact, the network did treat all the traffic equally.

Another frequently cited abuse is Comcast’s throttling of BitTorrent. Comcast did this because Netflix users were complaining. Like all streaming video, Netflix backs off to slower speeds (and poorer quality) when it experiences congestion. BitTorrent, uniquely among applications, never backs off. As most applications become slower and slower, BitTorrent just speeds up, consuming all available bandwidth. This is especially problematic when there’s limited upload bandwidth available. Thus, Comcast throttled BitTorrent during prime-time TV viewing hours, when the network was already overloaded by Netflix and other streams. BitTorrent users wouldn’t mind this throttling, because it often took days to download a big file anyway.
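
For readers unfamiliar with the mechanics, “throttling” in this sense typically means a rate limiter such as a token bucket applied to one class of traffic. The sketch below is the generic textbook mechanism, not a description of Comcast’s actual system: the bucket refills at a fixed rate, so a protocol that never backs off is simply capped at that rate while all other traffic passes untouched.

```python
import time

class TokenBucket:
    """Tokens refill at `rate` bytes/second up to `burst`; a packet passes
    only if enough tokens remain, otherwise it is delayed or dropped."""

    def __init__(self, rate: float, burst: float) -> None:
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# Applied per protocol: BitTorrent flows share one modest bucket during
# peak hours, while all other traffic is never consulted at all.
bittorrent_bucket = TokenBucket(rate=125_000, burst=250_000)  # ~1 Mbps cap

def forward(packet_bytes: int, is_bittorrent: bool) -> bool:
    if is_bittorrent:
        return bittorrent_bucket.allow(packet_bytes)
    return True  # everything else passes untouched
```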

When the FCC took action, Comcast stopped the throttling and imposed bandwidth caps instead. This was a worse solution for everyone. It penalized heavy Netflix viewers, and prevented BitTorrent users from large downloads. Even though BitTorrent users were seen as the victims of this throttling, they’d vastly prefer the throttling over the bandwidth caps.

In both the FaceTime and BitTorrent cases, the issue was “network management”. AT&T had no competing video calling service, Comcast had no competing download service. They were only reacting to the fact their networks were overloaded, and did appropriate things to solve the problem.

Mobile carriers still struggle with the “network management” issue. While their networks are fast, they are still of low capacity, and quickly degrade under heavy use. They are looking for tricks in order to reduce usage while giving consumers maximum utility.

The biggest concern is video. It’s problematic because it’s designed to consume as much bandwidth as it can, throttling itself only when it experiences congestion. This is what you probably want when watching Netflix at the highest possible quality, but it’s bad when confronted with mobile bandwidth caps.

With small mobile devices, you don’t want as much quality anyway. You want the video degraded to lower quality, and lower bandwidth, all the time.

That’s the reasoning behind T-Mobile’s offerings. They offer an unlimited video plan in conjunction with the biggest video providers (Netflix, YouTube, etc.). The catch is that when congestion occurs, they’ll throttle it to lower quality. In other words, they give their bandwidth to all the other phones in your area first, then give you as much of the leftover bandwidth as you want for video.

While it sounds like T-Mobile is doing something evil, “zero-rating” certain video providers and degrading video quality, the FCC allows this, because they recognize it’s in the customer interest.

Mobile providers especially have great interest in more innovation in this area, in order to conserve precious bandwidth, but they are finding it costly. They can’t just innovate; they must ask the FCC for permission first. And with the new heavy-handed FCC rules, the regulators have become hostile to this innovation. This attitude is highlighted by this statement from the “Open Internet” order:

And consumers must be protected, for example from mobile commercial practices masquerading as “reasonable network management.”

This is a clear declaration that the free market doesn’t work and won’t correct abuses, and that mobile companies are treacherous and will do evil things without FCC oversight.

Conclusion

Ignoring the rhetoric for the moment, the debate comes down to simple left-wing authoritarianism and libertarian principles. The Obama administration created a regulatory regime under clear Democrat principles, and the Trump administration is rolling it back to more free-market principles. There is no principle at stake here, certainly nothing to do with a technical definition of “net neutrality”.

The 2015 “Open Internet” order is not about “treating network traffic neutrally”, because it doesn’t do that. Instead, it’s purely a left-wing document that claims corporations cannot be trusted, must be regulated, and that innovation and prosperity comes from the regulators and not the free market.

It’s not about monopolistic power. The primary targets of regulation are the mobile broadband providers, where there is plenty of competition, and who have the most “network management” issues. Even if it were just about wired broadband (like Comcast), it’s still ignoring the primary ways monopolies profit (raising prices) and instead focuses on bizarre and unlikely ways of rent seeking.

If you are a libertarian who nonetheless believes in this “net neutrality” slogan, you’ve got to do better than mindlessly repeating the arguments of the left-wing. The term itself, “net neutrality”, is just a slogan, varying from person to person, from moment to moment. You have to be more specific. If you truly believe in the “net neutrality” technical principle that all traffic should be treated equally, then you’ll want a rewrite of the “Open Internet” order.

In the end, while libertarians may still support some form of broadband regulation, it’s impossible to reconcile libertarianism with the 2015 “Open Internet”, or the vague things people mean by the slogan “net neutrality”.

Google: Netflix Searches Outweigh Those For Pirate Alternatives

Post Syndicated from Andy original https://torrentfreak.com/google-netflix-searches-outweigh-those-for-pirate-alternatives-171112/

When large-scale access to online pirated content began to flourish at the turn of the century, entertainment industry groups claimed that if left to run riot, it could mean the end of their businesses.

More than seventeen years later that doomsday scenario hasn’t come to pass, not because piracy has been defeated – far from it – but because the music, movie and related industries have come to the market with their own offers.

The music industry were the quickest to respond, with services like iTunes and later Spotify making serious progress against pirate alternatives. It took the video industry far longer to attack the market but today, with platforms such as Netflix and Amazon Video, they have a real chance at scooping up what might otherwise be pirate consumption.

While there’s still a long way to go, it’s interesting to hear the progress that’s being made not only in the West but also piracy hotspots further afield. This week, Brazil’s Exame reported on a new study published by Google.

Focused on movies, one of its key findings is that local consumer interest in Netflix is now greater than pirate alternatives including torrents, streaming, and apps. As illustrated in the image below, the tipping point took place early November 2016, when searches for Netflix overtook those for unauthorized platforms.

Netflix vs Pirates (via Exame)

While the stats above don’t necessarily point to a reduction in piracy of movies and TV shows in Brazil, they show that Netflix’s library and ease of use are rewarded by widespread awareness among those seeking such content locally.

“We’re not lowering piracy but this does show how relevant the [Netflix] brand is when it comes to offering content online,” Google Brazil’s market intelligence chief Sérgio Tejido told Exame.

For Debora Bona, a director specializing in media and entertainment at Google Brazil, the success of Netflix is comparable to the rise of Spotify. In part thanks to The Pirate Bay, Sweden had a serious piracy problem in the middle of the last decade but by providing a viable alternative, the streaming service has become part of the solution.

“The event is interesting,” Bona says. “Since the launch of streaming solutions such as Netflix and Spotify, they have become alternatives to piracy. Sweden had many problems with music piracy and the arrival of Spotify reversed this curve.”

Netflix launched in Brazil back in 2011, but Exame notes that the largest increase in searches for the platform took place between 2013 and 2016, demonstrating a boost of 284%. Even more evidence of Netflix’s popularity was revealed in recent surveys which indicate that 77% of surveyed Brazilians had watched Netflix, up from 71% in 2016.

Importantly, nine out of ten users in Brazil said they were “extremely satisfied” or “very satisfied” with the service, up from 79% in the previous year. An impressive 66% of subscribers said that they were “not at all likely to cancel”, a welcome statistic for a company pumping billions into making its own content and increasingly protecting it (1,2), in the face of persistent pirate competition.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

MPAA: Almost 70% of 38 Million Kodi Users Are Pirates

Post Syndicated from Andy original https://torrentfreak.com/mpaa-almost-70-of-38-million-kodi-users-are-pirates-171104/

As torrents and other forms of file-sharing resolutely simmer away in the background, it is the streaming phenomenon that’s taking the Internet by storm.

This Tuesday, in a report by Canadian broadband management company Sandvine, it was revealed that IPTV traffic has grown to massive proportions.

Sandvine found that 6.5% of households in North America are now communicating with known TV piracy services. This translates to seven million subscribers and many more potential viewers. There’s little doubt that IPTV and all its variants, Kodi streaming included, are definitely here to stay.
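
Inverting Sandvine’s own figures gives the implied base, a quick sanity check assuming the seven million figure counts households rather than individual viewers:

```python
subscribing_households = 7_000_000  # Sandvine: homes contacting known TV piracy services
share = 0.065                       # 6.5% of North American households

implied_total = subscribing_households / share
print(f"{implied_total:,.0f}")  # ~107,692,308 households in the study's base
```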

The topic was raised again Wednesday during a panel discussion hosted by the Copyright Alliance in conjunction with the Creative Rights Caucus. Titled “Copyright Pirates’ New Strategies”, the discussion’s promotional graphic indicates some of the industry heavyweights in attendance.

The Copyright Alliance tweeted points from the discussion throughout the day and soon the conversation turned to the streaming phenomenon that has transformed piracy in recent times.

Neil Fried, the MPAA’s Senior Vice President, Government and Regulatory Affairs, was present to describe streaming devices and apps, previously dubbed “Piracy 3.0” by the MPAA, as the latest development in TV and movie piracy.

Like many before him, Fried explained that the Kodi platform in its basic form is legal. However, he noted that many of the add-ons for the media player provide access to pirated content, a point proven in a big screen demo.

Kodi demo by the MPAA via Copyright Alliance

According to the Copyright Alliance, Fried then delivered some interesting stats. The MPAA believes that there are around 38 million users of Kodi in the world, which sounds like a reasonable figure given that the system has been around for 15 years in various guises, including during its XBMC branding.

However, he also claimed that of those 38 million, a substantial 26 million users have piracy addons installed. That suggests around 68%, or roughly seven out of ten Kodi users, are pirates of movies, TV shows, and other media. Taking the MPAA statement to its conclusion, only 12 million Kodi users are operating the software legitimately.
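
For anyone checking the arithmetic behind the seven-in-ten claim:

```python
total_users = 38_000_000   # MPAA estimate of Kodi users worldwide
piracy_users = 26_000_000  # MPAA estimate of users with piracy addons

print(f"{piracy_users / total_users:.1%}")  # 68.4%, i.e. roughly seven in ten
print(f"{total_users - piracy_users:,}")    # 12,000,000 legitimate users
```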

TorrentFreak contacted XBMC Foundation President Nathan Betzen for his stance on the figures but he couldn’t shine much light on usage.

“Unfortunately I do not have an up to date number on users, and because we don’t watch what our users are doing, we have no way of knowing how many do what with regards to streaming. [The MPAA’s] numbers could be completely correct or totally made up. We have no real way to know,” Betzen said.

That being said, the team does have the capability to monitor overall Kodi usage, even if they don’t publish the stats. This was revealed back in June 2011 when Kodi was still called XBMC.

“The addon system gives us the opportunity to measure the popularity of addons, measure user base, estimate the frequency that people update their systems, and even, ultimately, help users find the more popular addons,” the team wrote.

“Most interestingly, for the purposes of this post, is that we can get a pretty good picture of how many active XBMC installs there are without having to track what each individual user does.”

Using this system, the team concluded there were roughly 435,000 active XBMC instances around the globe in April 2011, but that figure was to swell dramatically. Just three months later, 789,000 XBMC installations had been active in the previous six weeks.

What’s staggering is that in 2017, the MPAA claims that there are now 38 million users of Kodi, of which 26 million are pirates. In the absence of any figures from the Kodi team, TF asked Kodi addon repository TVAddons what they thought of the MPAA’s stats.

“We’ve always banned the use of analytics within Kodi addons, so it’s really impossible to make such an estimate. It seems like the MPAA is throwing around numbers without much statistical evidence while mislabelling Kodi users as ‘pirate’ in the same way that they have mislabelled legitimate services like CloudFlare,” a spokesperson said.

“As far as general addon use goes, before our repository server (which contained hundreds of legitimate addons) was unlawfully seized, it had about 39 million active users per month, but even we don’t know how many users downloaded which addons. We never allowed for addon statistics for users because they are invasive to privacy and breed unhealthy competition.”

So, it seems that while there is some dispute over the number of potential pirates, there does at least appear to be some consensus on the number of users overall. The big question, however, is how groups like the MPAA will deal with this kind of unauthorized infringement in future.

At the moment the big push is to paint pirate platforms as dangerous places to be. Indeed, during the discussion this week, Copyright Alliance CEO Keith Kupferschmid claimed that users of pirate services are “28 times more likely” to be infected with malware.

Whether that strategy will pay off remains unclear but it’s obvious that at least for now, Piracy 3.0 is a massive deal, one that few people saw coming half a decade ago but is destined to keep growing.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.