Tag Archives: competition

GoDaddy to Suspend ‘Pirate’ Domain Following Music Industry Complaints

Post Syndicated from Andy original https://torrentfreak.com/godaddy-to-suspend-pirate-domain-following-music-industry-complaints-180601/

Most piracy-focused sites online conduct their business with minimal interference from outside parties. In many cases, a heap of DMCA notices filed with Google represents the most visible irritant.

Others, particularly those with large audiences, can find themselves on the end of a web blockade. Mostly court-ordered, these blocking measures restrict Internet users’ ability to visit a site by requiring ISPs to block its traffic.

In some regions, where copyright holders have the means to do so, they choose to tackle a site’s infrastructure instead, which could mean complaints to webhosts or other service providers. At times, this has included domain registries, which are asked to disable domains on copyright grounds.

This is exactly what has happened to Fox-MusicaGratis.com, a Spanish-language music piracy site that incurred the wrath of IFPI member UNIMPRO – the Peruvian Union of Phonographic Producers.

Pirate music, suspended domain

In a process that’s becoming more common in the region, UNIMPRO initially filed a complaint with the Copyright Commission (Comisión de Derecho de Autor (CDA)) which conducted an investigation into the platform’s activities.

“The CDA considered, among other things, the irreparable damage that would have been caused to the legitimate rights owners, taking into account the large number of users who could potentially have visited said website, which was making available endless musical recordings for commercial purposes, without authorization of the holders of rights,” a statement from CDA reads.

The administrative process was carried out locally with the involvement of the National Institute for the Defense of Competition and the Protection of Intellectual Property (Indecopi), an autonomous public body tasked with handling anti-competitive behavior, unfair competition, and intellectual property matters.

Indecopi HQ

The matter was decided in favor of the rightsholders and a subsequent ruling included an instruction for US-based domain name registry GoDaddy to suspend Fox-MusicaGratis.com. According to the copyright protection entity, GoDaddy agreed to comply, to prevent further infringement.

This latest action involving a music piracy site registered with GoDaddy follows on the heels of a similar enforcement process back in March.

Mp3Juices-Download-Free.com, Melodiavip.net, Foxmusica.site and Fulltono.me were all music sites offering MP3 content without copyright holders’ permission. They too were the subject of an UNIMPRO complaint which resulted in orders for GoDaddy to suspend their domains.

In the cases of all five websites, GoDaddy was given the chance to appeal but there is no indication that the company has done so. GoDaddy did not respond to a request for comment.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

C is too low level

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/05/c-is-too-low-level.html

I’m in danger of contradicting myself, after previously pointing out that x86 machine code is a high-level language, but this article claiming C is not a low-level language is bunk. C certainly has some problems, but it’s still the closest language to assembly. This is obvious from the fact that it’s still the fastest compiled language. What we see is a typical academic out of touch with the real world.

The author makes the (wrong) observation that we’ve been stuck emulating the PDP-11 for the past 40 years. C was written for the PDP-11, and since then CPUs have been designed to make C run faster. The author imagines a different world, one where CPU designers instead target something like LISP or Erlang as their preferred language. This misunderstands the state of the market. CPUs do indeed support lots of different abstractions, and C has evolved to accommodate this.


The author criticizes things like “out-of-order” execution, which has led to the Spectre side-channel vulnerabilities. Out-of-order execution is necessary to make C run faster. The author claims instead that those resources should be spent on having more, slower CPUs with more threads. This sacrifices single-threaded performance in exchange for a lot more threads executing in parallel. The author cites Sparc Tx CPUs as his ideal processor.

But here’s the thing: the Sparc Tx was a failure. To be fair, it was mostly a failure because, most of the time, people wanted to run old C code instead of new Erlang code. But it was still a failure at running Erlang.

Time after time, engineers keep finding that “out-of-order”, single-threaded performance is still the winner. A good example is ARM processors for both mobile phones and servers. All the theory points to in-order CPUs as being better, but all the products are out-of-order, because this theory is wrong. The custom ARM cores from Apple and Qualcomm used in most high-end phones are so deeply out-of-order they give Intel CPUs competition. The same is true on the server front with the latest Qualcomm Centriq and Cavium ThunderX2 processors, which are deeply out-of-order, supporting more than 100 instructions in flight.

The Cavium is especially telling. Its ThunderX CPU had 48 simple cores; it was replaced by the ThunderX2 with 32 complex, deeply out-of-order cores. The performance increase was massive, even on multithread-friendly workloads. Every competitor to Intel’s dominance in the server space has learned the lesson of the Sparc Tx: many wimpy cores are a failure; you need fewer, beefier cores. Yes, they don’t need to be as beefy as Intel’s processors, but they need to be close.

Even Intel’s “Xeon Phi” custom chip learned this lesson. This is their GPU-like chip, running 60 cores with 512-bit wide “vector” (sic) instructions, designed for supercomputer applications. Its first version was purely in-order. Its current version is slightly out-of-order. It supports four threads and focuses on basic number crunching, so in-order cores seem to be the right approach, but Intel found in this case that out-of-order processing still provided a benefit. Practice is different than theory.

As an academic, the author of the above article focuses on abstractions. The criticism of C is that it has the wrong abstractions which are hard to optimize, and that if we instead expressed things in the right abstractions, it would be easier to optimize.

This is an intellectually compelling argument, but so far bunk.

The reason is that while the theoretical base language has issues, everyone programs using extensions to the language, like “intrinsics” (C ‘functions’ that map to assembly instructions). Programmers write libraries using these intrinsics, which the rest of the normal programmers then use. In other words, if your criticism is that C is not itself low-level enough, it still provides the best access to low-level capabilities.
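
To make this concrete, here is a minimal sketch of what an intrinsic looks like in practice. This is my illustration, not code from the article; it uses the standard x86 SSE intrinsics, where each _mm_* call maps more or less directly to a single SSE instruction, so the programmer stays in C while controlling the generated machine code.

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics, available on x86/x86-64 compilers */

/* Add two float arrays four elements at a time using SSE. */
static void add4(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);   /* movups: load 4 floats */
        __m128 vb = _mm_loadu_ps(b + i);
        __m128 vc = _mm_add_ps(va, vb);    /* addps: 4 additions in one instruction */
        _mm_storeu_ps(out + i, vc);        /* movups: store 4 floats */
    }
}

int main(void)
{
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
    add4(a, b, c, 4);
    printf("%.0f %.0f %.0f %.0f\n", c[0], c[1], c[2], c[3]);
    return 0;
}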

Given that C can access new functionality in CPUs, CPU designers add new paradigms, from SIMD to transaction processing. In other words, while in the 1980s CPUs were designed to optimize C (stacks, scaled pointers), these days CPUs are designed to optimize tasks regardless of language.

The author of that article criticizes the memory/cache hierarchy, claiming it has problems. Yes, it has problems, but only compared to how well it normally works. The author praises the many simple cores/threads idea as hiding memory latency with little caching, but misses the point that caches also dramatically increase memory bandwidth. Intel processors are optimized to read a whopping 256 bits every clock cycle from L1 cache. Main memory bandwidth is orders of magnitude slower.

The author goes on to criticize cache coherency as a problem. C uses it, but other languages like Erlang don’t need it. But that’s largely due to the problems each language solves. Erlang solves the problem where a large number of threads work on largely independent tasks, needing to send only small messages to each other across threads. The problem C solves is when you need many threads working on a huge, common set of data.

For example, consider the “intrusion prevention system”. Any thread can process any incoming packet that corresponds to any region of memory. There’s no practical way of solving this problem without a huge coherent cache. It doesn’t matter which language or abstractions you use, it’s the fundamental constraint of the problem being solved. RDMA is an important concept that’s moved from supercomputer applications to the data center, such as with memcached. Again, we have the problem of huge quantities (terabytes worth) shared among threads rather than small quantities (kilobytes).

The fundamental issue the author of the paper is ignoring is decreasing marginal returns. Moore’s Law has gifted us more transistors than we can usefully use. We can’t apply those additional transistors to just one thing, because the useful returns we get diminish.

For example, Intel CPUs have two hardware threads per core. That’s because there are good returns from adding a single additional thread. However, the usefulness of adding a third or fourth thread decreases. That’s why many CPUs have only two threads, or sometimes four threads, but no CPU has 16 threads per core.

You can apply the same discussion to any aspect of the CPU, from register count, to SIMD width, to cache size, to out-of-order depth, and so on. Rather than focusing on one of these things and increasing it to the extreme, CPU designers make each a bit larger with every process tick that adds more transistors to the chip.

The same applies to cores. It’s why the “more, simpler cores” strategy fails: more cores have their own decreasing marginal returns. Instead of adding cores tied to limited memory bandwidth, it’s better to add more cache. That cache already increases the size of the cores, so at some point it’s more effective to add a few out-of-order features to each core rather than more cores. And so on.

The question isn’t whether we can change this paradigm and radically redesign CPUs to match some academic’s view of the perfect abstraction. Instead, the goal is to find new uses for those additional transistors. For example, “message passing” is a useful abstraction in languages like Go and Erlang that’s often more useful than sharing memory. It’s implemented with shared memory and atomic instructions, but I can’t help thinking it could be done better with direct hardware support.
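
As a rough sketch of what that shared-memory-plus-atomics implementation typically looks like today, here is a single-producer/single-consumer message queue using C11 atomics. This is an illustrative example of the general technique, not the internals of Go or Erlang; the names and the fixed queue size are mine.

#include <stdatomic.h>
#include <stdio.h>

#define QSIZE 16                      /* must be a power of two */

struct spsc_queue {
    int buf[QSIZE];
    _Atomic unsigned head;            /* advanced by the consumer */
    _Atomic unsigned tail;            /* advanced by the producer */
};

/* Producer side: returns 1 on success, 0 if the queue is full. */
static int q_send(struct spsc_queue *q, int msg)
{
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&q->head, memory_order_acquire);
    if (tail - head == QSIZE)
        return 0;                                     /* full */
    q->buf[tail % QSIZE] = msg;
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return 1;
}

/* Consumer side: returns 1 on success, 0 if the queue is empty. */
static int q_recv(struct spsc_queue *q, int *msg)
{
    unsigned head = atomic_load_explicit(&q->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (tail == head)
        return 0;                                     /* empty */
    *msg = q->buf[head % QSIZE];
    atomic_store_explicit(&q->head, head + 1, memory_order_release);
    return 1;
}

int main(void)
{
    struct spsc_queue q = {0};
    int m;
    q_send(&q, 42);                   /* normally called from another thread */
    if (q_recv(&q, &m))
        printf("received %d\n", m);
    return 0;
}

Every q_send/q_recv pair here turns into ordinary loads, stores, and fences over cache-coherent shared memory, which is exactly the kind of pattern that could plausibly get dedicated hardware support.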

Of course, as soon as they do that, it’ll become an intrinsic in C, then added to languages like Go and Erlang.

Summary

Academics live in an ideal world of abstractions; the rest of us live in practical reality. The reality is that the vast majority of programmers work with the C family of languages (JavaScript, Go, etc.), whereas academics love the epiphanies they learned using other languages, especially functional languages. CPUs are only superficially designed to run C and maintain “PDP-11 compatibility”. Instead, they keep adding features to support other abstractions, abstractions available to C. They are driven by decreasing marginal returns: they would love to add new abstractions to the hardware because it’s a cheap way to make use of additional transistors. Academics are wrong in believing that the entire system needs to be redesigned from scratch. Instead, they just need to come up with new abstractions that CPU designers can add.

This is a really lovely Raspberry Pi tricorder

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/raspberry-pi-tricorder-prop/

At the moment I’m spending my evenings watching all of Star Trek in order. Yes, I have watched it before (but with some really big gaps). Yes, including the animated series (I’m up to The Terratin Incident). So I’m gratified to find this beautiful Original Series–style tricorder build.

Star Trek Tricorder with Working Display!

At this year’s Replica Prop Forum showcase, we meet up once again with Brian Mix, who brought his new Star Trek TOS Tricorder. This beautiful replica captures the weight and finish of the filming hand prop, and Brian has taken it one step further with some modern-day electronics!

A what now?

If you don’t know what a tricorder is, which I guess is faintly possible, the easiest way I can explain is to steal words that Liz wrote when Recantha made one back in 2013. It’s “a made-up thing used by the crew of the Enterprise to measure stuff, store data, and scout ahead remotely when exploring strange new worlds, seeking out new life and new civilisations, and all that jazz.”

A brief history of Picorders

We’ve seen other Raspberry Pi–based realisations of this iconic device. Recantha’s LEGO-cased tricorder delivered some authentic functionality, including temperature sensors, an ultrasonic distance sensor, a photosensor, and a magnetometer. Michael Hahn’s tricorder for element14’s Sci-Fi Your Pi competition in 2015 packed some similar functions, along with Original Series audio effects, into a neat (albeit non-canon) enclosure.

Brian Mix’s Original Series tricorder

Brian Mix’s tricorder, seen in the video above from Tested at this year’s Replica Prop Forum showcase, is based on a high-quality kit into which, he discovered, a Raspberry Pi just fits. He explains that the kit is the work of the late Steve Horch, a special effects professional who provided props for later Star Trek series, including the classic Deep Space Nine episode Trials and Tribble-ations.

A still from an episode of Star Trek: Deep Space Nine: Jadzia Dax, holding an Original Series–style tricorder, speaks with Benjamin Sisko

Dax, equipped for time travel

This episode’s plot required sets and props — including tricorders — replicating the USS Enterprise of The Original Series, and Steve Horch provided many of these. Thus, a tricorder kit from him is about as close to authentic as you can possibly find unless you can get your hands on a screen-used prop. The Pi allows Brian to drive a real display and a speaker: “Being the geek that I am,” he explains, “I set it up to run every single Original Series Star Trek episode.”

Even more wonderful hypothetical tricorders that I would like someone to make

This tricorder is beautiful, and it makes me think how amazing it would be to squeeze in some of the sensor functionality of the devices depicted in the show. Space in the case is tight, but it looks like there might be a little bit of depth to spare — enough for an IMU, maybe, or a temperature sensor. I’m certain the future will bring more Pi tricorder builds, and I, for one, can’t wait. Please tell us in the comments if you’re planning something along these lines, and, well, I suppose some other sci-fi franchises have decent Pi project potential too, so we could probably stand to hear about those.

If you’re commenting, no spoilers please past The Animated Series S1 E11. Thanks.

The post This is a really lovely Raspberry Pi tricorder appeared first on Raspberry Pi.

Stream to Twitch with the push of a button

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/tinkernut-twitch-streaming/

Stream your video gaming exploits to the internet at the touch of a button with the Twitch-O-Matic. Everyone else is doing it, so you should too.

Twitch-O-Matic: Raspberry Pi Twitch Streaming Device – Weekend Hacker #1804

Some gaming consoles make it easy to stream to Twitch, some gaming consoles don’t (come on, Nintendo). So for those that don’t, I’ve made this beta version of the “Twitch-O-Matic”. No it doesn’t chop onions or fold your laundry, but what it DOES do is stream anything with HDMI output to your Twitch channel with the simple push of a button!

eSports and online game streaming

Interest in eSports has skyrocketed over the last few years, with viewership numbers in the hundreds of millions, sponsorship deals increasing in value and prestige, and tournament prize funds reaching millions of dollars. So it’s no wonder that more and more gamers are starting to stream live to online platforms in order to boost their fanbase and try to cash in on this growing industry.

Streaming to Twitch

Launched in 2011, Twitch.tv is an online live-streaming platform with a primary focus on video gaming. Users can create accounts to contribute their comments and content to the site, as well as watching live-streamed gaming competitions and broadcasts. With a staggering fifteen million daily users, Twitch is accessible via smartphone and gaming console apps, smart TVs, computers, and tablets. But if you want to stream to Twitch, you may find yourself using third-party software in order to do so. And with more buttons to click and more wires to plug in for older, app-less consoles, streaming can get confusing.

Enter Tinkernut.

Side note: we ❤ Tinkernut

We’ve featured Tinkernut a few times on the Raspberry Pi blog – his tutorials are clear, his projects are interesting and useful, and his live-streamed comment videos for every build are a nice touch to sharing homebrew builds on the internet.

Tinkernut Raspberry Pi Zero W Twitch-O-Matic

So, yes, we love him. [This is true. Alex never shuts up about him. – Ed.] And since he has over 500K subscribers on YouTube, we’re obviously not the only ones. We wave our Tinkernut flags with pride.

Twitch-O-Matic

With a Raspberry Pi Zero W, an HDMI to CSI adapter, and a case to fit it all in, Tinkernut’s Twitch-O-Matic allows easy connection to the Twitch streaming service. You’ll also need a button – the bigger, the better in our opinion, though Tinkernut has opted for the Adafruit 16mm Illuminated Pushbutton for his build, and not the 100mm Massive Arcade Button that, sadly, we still haven’t found a reason to use yet.

Adafruit massive button

“I’m sorry, Dave…”

For added frills and pizzazz, Tinkernut has also incorporated Adafruit’s White LED Backlight Module into the case, though you don’t have to do so unless you’re feeling super fancy.

The setup

The Raspberry Pi Zero W is connected to the HDMI to CSI adapter via the camera connector, in the same way you’d attach the camera ribbon. Tinkernut uses a standard Raspbian image on an 8GB SD card, with SSH enabled for remote access from his laptop. He uses the simple raspivid command to test the HDMI connection by recording ten seconds of video footage from his console.

Tinkernut Raspberry Pi Zero W Twitch-O-Matic

One lead is all you need

Once you have the Pi receiving video from your console, you can connect to Twitch using your Twitch stream key, which you can find by logging in to your account at Twitch.tv. Tinkernut’s tutorial gives you all the commands you need to stream from your Pi.

The frills

To up the aesthetic impact of your project, adding buttons and backlights is fairly straightforward.

Tinkernut Raspberry Pi Zero W Twitch-O-Matic

Pretty LED frills

To run the stream command, Tinkernut uses a button: press once to start the stream, press again to stop. Pressing the button also turns on the LED backlight, so it’s obvious when streaming is in progress.

The tutorial

For the full code and 3D-printable case STL file, head to Tinkernut’s hackster.io project page. And if you’re already using a Raspberry Pi for Twitch streaming, share your build setup with us. Cheers!

The post Stream to Twitch with the push of a button appeared first on Raspberry Pi.

Under-Fire “Kodi Box” Company “Sold to Chinese Investor” For US$8.82m

Post Syndicated from Andy original https://torrentfreak.com/under-fire-kodi-box-company-sold-to-chinese-investor-for-us8-82m-180426/

Back in 2016, an article appeared in Kiwi media discussing the rise of a new company pledging to beat media giant Sky TV at its own game.

My Box NZ owner Krish Reddy told the publication he was selling Android boxes loaded with Kodi software and augmented with third-party addons.

Without any hint of fear, he stated that these devices enabled customers to access movies, TV shows and live channels for free, after shelling out a substantial US$182 for the box first, that is.

“Why pay $80 minimum per month for Sky when for one payment you can have it free for good?” a claim on the company’s website asked.

Noting that he’d been importing the boxes from China, Reddy suggested that his lawyers hadn’t found any problem with the business plan.

“I don’t see why [Sky] would contact me but if they do contact me and … if there’s something of theirs that they feel I’ve unlawfully taken then yeah … but as it stands I don’t [have any concerns],” he said.

At this point, Reddy said he’d been selling the boxes for just six weeks and had shifted around 80 units. To get coverage from a national newspaper at this stage of the game must’ve been very much appreciated but Reddy didn’t stop there.

In a bulk advertising email sent out to 50,000 people, Reddy described his boxes as “better than Sky”. However, by design or misfortune, the email managed to land in the inboxes of 50 Sky TV staff and directors, something that didn’t go unnoticed by the TV giant.

With Reddy claiming sales of 8,000 units, Sky ran out of patience last April. In a letter from its lawyers, the pay-TV company said Reddy’s devices breached copyright law and the Fair Trading Act. Reddy responded by calling the TV giant “a playground bully”, again denying that he was breaking the law.

“From a legal perspective, what we do is completely within the law. We advertise Sky television channels being available through our website and social media platforms as these are available via streams which you can find through My Box,” he said.

“The content is already available, I’m not going out there and bringing the content so how am I infringing the copyright… the content is already there, if someone uses the box to search for the content, that’s what it is.”

The initial compensation demand from Sky against Reddy’s company My Box ran to NZD$1.4m, around US$1m. It was an amount that had the potential to rise by millions if matters got drawn out and/or escalated. But despite picking a terrible opponent in a battle he was unlikely to win, Reddy refused to give up.

“[Sky’s] point of view is they own copyright and I’m destroying the market by giving people content for free. To me it is business; I have got something that is new … that’s competition,” he said.

The Auckland High Court heard the case against My Box last month with Judge Warwick Smith reserving his judgment and Reddy still maintaining that his business is entirely legal. Sales were fantastic, he said, with 20,000 devices sold to customers in 12 countries.

Then something truly amazing happened.

A company up to its eyeballs in litigation, selling a commodity product that an amateur can buy and configure at home for US$40, reportedly got the chance of a lifetime. Reddy revealed to Stuff that a Chinese investor had offered to buy his company for an eye-watering NZ$10 million (US$7.06m).

“We have to thank Sky,” he said. “If they had left us alone we would just have been selling a few boxes, but the controversy made us world famous.”

Reddy noted he’d been given 21 days to respond to the offer, but refused to name the company. Interestingly, he also acknowledged that if My Box lost its case, the company would be liable for damages. However, that wouldn’t bother the potential investor.

“It makes no difference to them whether we win or lose, because their operations won’t be in New Zealand,” Reddy said.

According to the entrepreneur, that’s how things are playing out.

The Chinese firm – which Reddy is still refusing to name – has apparently accepted a counter offer from Reddy of US$8.8m for My Box. As a result, Reddy will wrap up his New Zealand operations within the next 90 days and his six employees will be rendered unemployed.

Given that anyone with the ability to install Kodi and a few addons before putting a box in the mail could replicate Reddy’s business model, the multi-million dollar offer for My Box was never anything less than a bewildering business proposition. That someone carried it through at an even higher price is so fantastic as to be almost unbelievable.

In a sea of unhappy endings for piracy-enabled Kodi box sellers globally, this is the only big win to ever grace the headlines. Assuming this really is the end of the story (and that might not be the case) it will almost certainly be the last.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Congratulations to Oracle on MySQL 8.0

Post Syndicated from Michael "Monty" Widenius original http://monty-says.blogspot.com/2018/04/congratulations-to-oracle-on-mysql-80.html

Last week, Oracle announced the general availability of MySQL 8.0. This is good news for database users, as it means Oracle is still developing MySQL.

I decided to celebrate the event by doing a quick test of MySQL 8.0. Here follows a step-by-step description of my first experience with MySQL 8.0.
Note that I did the following without reading the release notes, which is what I have done with every MySQL / MariaDB release to date; in this case it was not the right thing to do.

I pulled MySQL 8.0 from git@github.com:mysql/mysql-server.git
I was pleasantly surprised that ‘cmake . ; make‘ worked without any compiler warnings! I even checked the used compiler options and noticed that MySQL was compiled with -Wall + several other warning flags. Good job MySQL team!

I did have a little trouble finding the mysqld binary as Oracle had moved it to ‘runtime_output_directory’; Unexpected, but no big thing.

Now it was time to install MySQL 8.0.

I did know that MySQL 8.0 has removed mysql_install_db, so I had to use the mysqld binary directly to install the default databases:
(I have specified datadir=/my/data3 in the /tmp/my.cnf file)

> cd runtime_output_directory
> mkdir /my/data3
> ./mysqld --defaults-file=/tmp/my.cnf --install

2018-04-22T12:38:18.332967Z 1 [ERROR] [MY-011011] [Server] Failed to find valid data directory.
2018-04-22T12:38:18.333109Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
2018-04-22T12:38:18.333135Z 0 [ERROR] [MY-010119] [Server] Aborting

A quick look at the mysqld --help --verbose output showed that the right command option is --initialize. My bad, let’s try again:

> ./mysqld --defaults-file=/tmp/my.cnf --initialize

2018-04-22T12:39:31.910509Z 0 [ERROR] [MY-010457] [Server] --initialize specified but the data directory has files in it. Aborting.
2018-04-22T12:39:31.910578Z 0 [ERROR] [MY-010119] [Server] Aborting

Now I was using the right option, but it still didn’t work.
I took a quick look around:

> ls /my/data3/
binlog.index

So even though mysqld noticed that the data3 directory was wrong, it still wrote things into it, and this even though I didn’t have --log-binlog enabled in the my.cnf file. Strange, but easy to fix:

> rm /my/data3/binlog.index
> ./mysqld --defaults-file=/tmp/my.cnf --initialize

2018-04-22T12:40:45.633637Z 0 [ERROR] [MY-011071] [Server] unknown variable ‘max-tmp-tables=100’
2018-04-22T12:40:45.633657Z 0 [Warning] [MY-010952] [Server] The privilege system failed to initialize correctly. If you have upgraded your server, make sure you’re executing mysql_upgrade to correct the issue.
2018-04-22T12:40:45.633663Z 0 [ERROR] [MY-010119] [Server] Aborting

The warning about the privilege system confused me a bit, but I ignored it for the time being and removed from my configuration files the variables that MySQL 8.0 doesn’t support anymore. I couldn’t find a list of the removed variables anywhere so this was done with the trial and error method.

> ./mysqld –defaults-file=/tmp/my.cnf

2018-04-22T12:42:56.626583Z 0 [ERROR] [MY-010735] [Server] Can’t open the mysql.plugin table. Please run mysql_upgrade to create it.
2018-04-22T12:42:56.827685Z 0 [Warning] [MY-010015] [Repl] Gtid table is not ready to be used. Table ‘mysql.gtid_executed’ cannot be opened.
2018-04-22T12:42:56.838501Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2018-04-22T12:42:56.848375Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables
2018-04-22T12:42:56.848863Z 0 [ERROR] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we’re sending the information to the error-log instead: MY-001146 – Table ‘mysql.component’ doesn’t exist
2018-04-22T12:42:56.848916Z 0 [Warning] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we’re sending the information to the error-log instead: MY-003543 – The mysql.component table is missing or has an incorrect definition.
….
2018-04-22T12:42:56.854141Z 0 [System] [MY-010931] [Server] /home/my/mysql-8.0/runtime_output_directory/mysqld: ready for connections. Version: ‘8.0.11’ socket: ‘/tmp/mysql.sock’ port: 3306 Source distribution.

I figured out that if there is a single wrong variable in the configuration file, running mysqld --initialize will leave the database in an inconsistent state. NOT GOOD! I am happy I didn’t try this in a production system!

Time to start over from the beginning:

> rm -r /my/data3/*
> ./mysqld --defaults-file=/tmp/my.cnf --initialize

2018-04-22T12:44:45.548960Z 5 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: px)NaaSp?6um
2018-04-22T12:44:51.221751Z 0 [System] [MY-013170] [Server] /home/my/mysql-8.0/runtime_output_directory/mysqld (mysqld 8.0.11) initializing of server has completed

Success!

I wonder why the temporary password is so complex; it could have been something that one could easily remember without decreasing security, as it’s temporary after all. No big deal, one can always paste it from the logs. (Side note: MariaDB uses socket authentication on many systems and thus doesn’t need temporary installation passwords.)

Now let’s start the MySQL server for real to do some testing:

> ./mysqld --defaults-file=/tmp/my.cnf

2018-04-22T12:45:43.683484Z 0 [System] [MY-010931] [Server] /home/my/mysql-8.0/runtime_output_directory/mysqld: ready for connections. Version: ‘8.0.11’ socket: ‘/tmp/mysql.sock’ port: 3306 Source distribution.

And then let’s start the client:

> ./client/mysql --socket=/tmp/mysql.sock --user=root --password="px)NaaSp?6um"
ERROR 2059 (HY000): Plugin caching_sha2_password could not be loaded: /usr/local/mysql/lib/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory

Apparently MySQL 8.0 doesn’t work with old MySQL / MariaDB clients by default 🙁

I was testing this on a system with MariaDB installed, like all modern Linux systems today, and didn’t want to use the MySQL clients or libraries.

I decided to try to fix this by changing the authentication to the native (original) MySQL authentication method.

> mysqld --skip-grant-tables

> ./client/mysql --socket=/tmp/mysql.sock --user=root
ERROR 1045 (28000): Access denied for user ‘root’@’localhost’ (using password: NO)

Apparently --skip-grant-tables is not good enough anymore. Let’s try again with:

> mysqld --skip-grant-tables --default_authentication_plugin=mysql_native_password

> ./client/mysql --socket=/tmp/mysql.sock --user=root mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 8.0.11 Source distribution

Great, we are getting somewhere. Now let’s fix “root” to work with the old authentication:

MySQL [mysql]> update mysql.user set plugin="mysql_native_password",authentication_string=password("test") where user="root";
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ‘(“test”) where user=”root”‘ at line 1

A quick look in the MySQL 8.0 release notes told me that the PASSWORD() function is removed in 8.0. Why???? I don’t know how one is supposed to generate passwords in MySQL 8.0 that are compatible with old installations of MySQL. One could of course start an old MySQL or MariaDB version, execute the password() function and copy the result.

I decided to fix this the easy way and use an empty password:

(Update: I later discovered that the right way would have been to use: FLUSH PRIVILEGES; ALTER USER 'root'@'localhost' IDENTIFIED BY 'test'; I however dislike this syntax as it has the password in clear text, which is easy to grab, and the command can’t be used to easily update the mysql.user table. One must also disable the --skip-grant-tables mode to use this.)

MySQL [mysql]> update mysql.user set plugin="mysql_native_password",authentication_string="" where user="root";
Query OK, 1 row affected (0.077 sec)
Rows matched: 1 Changed: 1 Warnings: 0
 
I restarted mysqld:
> mysqld --default_authentication_plugin=mysql_native_password

> ./client/mysql --user=root --password="" mysql
ERROR 1862 (HY000): Your password has expired. To log in you must change it using a client that supports expired passwords.

Ouch, forgot that. Let’s try again:

> mysqld --skip-grant-tables --default_authentication_plugin=mysql_native_password

> ./client/mysql --user=root --password="" mysql
MySQL [mysql]> update mysql.user set password_expired="N" where user="root";

Now restart and test worked:

> ./mysqld --default_authentication_plugin=mysql_native_password

> ./client/mysql --user=root --password="" mysql

Finally I had a working account that I can use to create other users!

When looking at mysqld --help --verbose again, I noticed the option:

--initialize-insecure
Create the default database and exit. Create a super user
with empty password.

I decided to check if this would have made things easier:

> rm -r /my/data3/*
> ./mysqld --defaults-file=/tmp/my.cnf --initialize-insecure

2018-04-22T13:18:06.629548Z 5 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.

Hm. I don’t understand the warning, as --initialize-insecure is not an option that one would use more than once, and thus there is nothing to ‘switch off’.

> ./mysqld --defaults-file=/tmp/my.cnf

> ./client/mysql --user=root --password="" mysql
ERROR 2059 (HY000): Plugin caching_sha2_password could not be loaded: /usr/local/mysql/lib/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory

Back to the beginning 🙁

To get things to work with old clients, one has to initialize the database with:
> ./mysqld --defaults-file=/tmp/my.cnf --initialize-insecure --default_authentication_plugin=mysql_native_password

Now I finally had MySQL 8.0 up and running and thought I would take it for a spin by running the “standard” MySQL/MariaDB sql-bench test suite. This was removed in MySQL 5.7, but as I happened to have MariaDB 10.3 installed, I decided to run it from there.

sql-bench is a single-threaded benchmark that measures the “raw” speed of some common operations. It gives you the ‘maximum’ performance for a single query. It’s different from other benchmarks that measure the maximum throughput when you have a lot of users, but sql-bench still tells you a lot about what kind of performance to expect from the database.

I first tried to be clever and create the “test” database that I needed for sql-bench with
> mkdir /my/data3/test

but when I tried to run the benchmark, MySQL 8.0 complained that the test database didn’t exist.

MySQL 8.0 has gone away from the original concept of MySQL, where the user can easily create directories and copy databases into the database directory. This may have serious implications for anyone doing backups of databases and/or trying to restore a backup with normal OS commands.

I created the ‘test’ database with mysqladmin and then tried to run sql-bench:

> ./run-all-tests --user=root

The first run failed in test-ATIS:

Can’t execute command ‘create table class_of_service (class_code char(2) NOT NULL,rank tinyint(2) NOT NULL,class_description char(80) NOT NULL,PRIMARY KEY (class_code))’
Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ‘rank tinyint(2) NOT NULL,class_description char(80) NOT NULL,PRIMARY KEY (class_’ at line 1

This happened because ‘rank‘ is now a reserved word in MySQL 8.0. This is also reserved in ANSI SQL, but I don’t know of any other database that has failed to run test-ATIS before. I have in the past run it against Oracle, PostgreSQL, Mimer, MSSQL etc without any problems.

MariaDB also has ‘rank’ as a keyword in 10.2 and 10.3 but one can still use it as an identifier.

I fixed test-ATIS and then managed to run all tests on MySQL 8.0.

I ran the test with both MySQL 8.0 and MariaDB 10.3 using the InnoDB storage engine and identical values for all InnoDB variables, table-definition-cache and table-open-cache. I turned off the performance schema for both databases. All tests were run with a user with an empty password (to keep things comparable, and because it was too complex to generate a password in MySQL 8.0).

The results are as follows.
Results per test in seconds:

Operation         |MariaDB|MySQL-8|
-----------------------------------
ATIS              | 153.00| 228.00|
alter-table       |  92.00| 792.00|
big-tables        | 990.00|2079.00|
connect           | 186.00| 227.00|
create            | 575.00|4465.00|
insert            |4552.00|8458.00|
select            | 333.00| 412.00|
table-elimination |1900.00|3916.00|
wisconsin         | 272.00| 590.00|
-----------------------------------

This is of course just a first view of the performance of MySQL 8.0 in a single user environment. Some reflections about the results:

  • The alter-table test is slower (as expected) in 8.0, as some of the alter tests benefit from the instant add column in MariaDB 10.3.
  • The connect test is also better for MariaDB, as we put a lot of effort into speeding this up in MariaDB 10.2.
  • table-elimination shows an optimization in MariaDB for the Anchor table model, which MySQL doesn’t have.
  • CREATE and DROP TABLE are almost 8 times slower in MySQL 8.0 than in MariaDB 10.3. I assume this is the cost of ‘atomic DDL’. This may also cause performance problems for any thread using the data dictionary when another thread is creating/dropping tables.
  • When looking at the individual test results, MySQL 8.0 was slower in almost every test, in many cases significantly slower.
  • The only test where MySQL was faster was “update_with_key_prefix”. I checked this and noticed that there was a bug in the test: the columns were updated to their original values (which should be instant with any storage engine). This is an old bug that MySQL has found and fixed and that we had not been aware of in the test or in MariaDB.
  • While writing this, I noticed that MySQL 8.0 is now using utf8mb4 as the default character set instead of latin1. This may affect some of the benchmarks slightly (not much, as most tests work with numbers and Oracle claims that utf8mb4 is only 20% slower than latin1), but this needs to be verified.
  • Oracle claims that MySQL 8.0 is much faster on multi-user benchmarks. The above test indicates that they may have achieved this by sacrificing single-user performance.
  • We need to do more, and many different, benchmarks to better understand exactly what is going on. Stay tuned!

Short summary of my first run with MySQL 8.0:

  • Using the new caching_sha2_password authentication as the default for new installations is likely to cause a lot of problems for users. No old application will be able to use MySQL 8.0, installed with default options, without moving to MySQL’s client libraries. While working on this blog I saw MySQL users complain on IRC that not even MySQL Workbench can authenticate with MySQL 8.0. This is the first time in MySQL’s history that such an incompatible change has ever been made!
  • Atomic DDL is a good thing (we plan to have this in MariaDB 10.4), but it should not have such a drastic impact on performance. I am also a bit skeptical of MySQL 8.0 having just one copy of the data dictionary, as if this gets corrupted you will lose all your data (a single point of failure).
  • MySQL 8.0 has several new reserved words and has removed a lot of variables, which makes upgrades hard. Before upgrading to MySQL 8.0 one has to check all one’s databases and applications to ensure that there are no conflicts.
  • As my test above shows, if you have a single deprecated variable in your configuration files, the installation of MySQL will abort and can leave the database in an inconsistent state. I did my tests, of course, by installing into an empty data directory, but one can assume that some of the problems may also happen when upgrading an old installation.

Conclusions:
In many ways, MySQL 8.0 has caught up with some earlier versions of MariaDB. For instance, in MariaDB 10.0, we introduced roles (four years ago). In MariaDB 10.1, we introduced encrypted redo/undo logs (three years ago). In MariaDB 10.2, we introduced window functions and CTEs (a year ago). However, some catch-up of MariaDB Server 10.2 features still remains for MySQL (such as check constraints, binlog compression, and log-based rollback).

MySQL 8.0 has a few new interesting features (mostly atomic DDL and JSON_TABLE functions), but at the same time MySQL has strayed away from some of the fundamental cornerstone principles of MySQL:

From the start of the first version of MySQL in 1995, all development has been focused around 3 core principles:

  • Ease of use
  • Performance
  • Stability

With MySQL 8.0, Oracle has sacrificed two of these three.

In addition (as part of ease of use), while I was working on MySQL, we did our best to ensure that the following should hold:

  • Upgrades should be trivial
  • Things should be kept compatible, if possible (don’t remove features/options/functions that are used)
  • Minimize reserved words, don’t remove server variables
  • One should be able to use normal OS commands to create and drop databases, and to copy and move tables around within the same system or between different systems. With 8.0 and the data dictionary, taking backups of specific tables will be hard, even if the server is not running.
  • mysqldump should always be usable for backups and for moving to new releases
  • Old clients and applications should be able to use ‘any’ MySQL server version unchanged. (Some Oracle client libraries, like C++, by default only support the new X protocol and can thus not be used with older MySQL or any MariaDB version)

We plan to add a data dictionary to MariaDB 10.4 or MariaDB 10.5, but in a way to not sacrifice any of the above principles!

The competition between MySQL and MariaDB is not just about a tactical arms race on features. It’s about design philosophy, or strategic vision, if you will.

This shows in two main ways: our respective view of the Storage Engine structure, and of the top-level direction of the roadmap.

On the Storage Engine side, MySQL is converging on InnoDB, even for clustering and partitioning. In doing so, they are abandoning the advantages of multiple ways of storing data. By contrast, MariaDB sees lots of value in the Storage Engine architecture: MariaDB Server 10.3 will see the general availability of MyRocks (for write-intensive workloads) and Spider (for scalable workloads). On top of that, we have ColumnStore for analytical workloads. One can use the CONNECT engine to join with other databases. The use of different storage engines for different workloads and different hardware is a competitive differentiator, now more than ever.

On the roadmap side, MySQL is carefully steering clear of features that close the gap between MySQL and Oracle. MariaDB has no such constraints. With MariaDB 10.3, we are introducing PL/SQL compatibility (Oracle’s stored procedures) and AS OF (built-in system versioned tables with point-in-time querying). For both of those features, MariaDB is the first Open Source database doing so. I don’t expect Oracle to provide any of the above features in MySQL!

Also on the roadmap side, MySQL is not working with the ecosystem to extend its functionality. In 2017, MariaDB accepted more code contributions in one year than MySQL has during its entire lifetime, and the rate is increasing!

I am sure that the experience I had with testing MySQL 8.0 would have been significantly better if MySQL had an open development model where the community could easily participate in developing and testing MySQL continuously. Most of the confusing error messages and strange behavior would have been found and fixed long before the GA release.

Before upgrading to MySQL 8.0 please read https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html to see what problems you can run into! Don’t expect that old installations or applications will work out of the box without testing, as a lot of features and options have been removed (query cache, partitioning of MyISAM tables, etc.)! You probably also have to revise your backup methods, especially if you ever want to restore just a few tables. (With 8.0, I don’t know how this can be easily done.)

According to the MySQL 8.0 release notes, one can’t use mysqldump to copy a database to MySQL 8.0. One first has to move to a MySQL 5.7 GA version (with mysqldump, as recommended by Oracle) and then to MySQL 8.0 with an in-place update. I assume this means that all old mysqldump backups are useless for MySQL 8.0?

MySQL 8.0 seems to be a one-way street to an unknown future. Up to MySQL 5.7 it has been trivial to move to MariaDB, and one could always move back to MySQL with mysqldump. All MySQL client libraries have worked with MariaDB and all MariaDB client libraries have worked with MySQL. With MySQL 8.0 this has changed in the wrong direction.

As long as you are using MySQL 5.7 and below you have choices for your future, after MySQL 8.0 you have very little choice. But don’t despair, as MariaDB will always be able to load a mysqldump file and it’s very easy to upgrade your old MySQL installation to MariaDB 🙂

I wish you good luck to try MySQL 8.0 (and also the upcoming MariaDB 10.3)!

Announcing Coolest Projects North America

Post Syndicated from Courtney Lentz original https://www.raspberrypi.org/blog/coolest-projects-north-america/

The Raspberry Pi Foundation loves to celebrate people who use technology to solve problems and express themselves creatively, so we’re proud to expand the incredibly successful event Coolest Projects to North America. This free event will be held on Sunday 23 September 2018 at the Discovery Cube Orange County in Santa Ana, California.

Coolest Projects North America logo Raspberry Pi CoderDojo

What is Coolest Projects?

Coolest Projects is a world-leading showcase that empowers and inspires the next generation of digital creators, innovators, changemakers, and entrepreneurs. The event is both a competition and an exhibition to give young digital makers aged 7 to 17 a platform to celebrate their successes, creativity, and ingenuity.

showcase crowd — Coolest Projects North America

In 2012, Coolest Projects was conceived as an opportunity for CoderDojo Ninjas to showcase their work and for supporters to acknowledge these achievements. Week after week, Ninjas would meet up to work diligently on their projects, hacks, and code; however, it can be difficult for them to see their long-term progress on a project when they’re concentrating on its details on a weekly basis. Coolest Projects became a dedicated time each year for Ninjas and supporters to reflect, celebrate, and share both the achievements and challenges of the maker’s journey.

three female coolest projects attendees — Coolest Projects North America

Coolest Projects North America

Not only is Coolest Projects expanding to North America, it’s also expanding its participant pool! Members of our team have met so many amazing young people creating in all areas of the world that it simply made sense to widen our outreach to include Code Clubs, students of Raspberry Pi Certified Educators, and members of the Raspberry Jam community at large as well as CoderDojo attendees.

 a boy showing a technology project to an old man, with a girl playing on a laptop on the floor — Coolest Projects North America

Exhibit and attend Coolest Projects

Coolest Projects is a free, family- and educator-friendly event. Young people can apply to exhibit their projects, and the general public can register to attend this one-day event. Be sure to register today, because you make Coolest Projects what it is: the coolest.

The post Announcing Coolest Projects North America appeared first on Raspberry Pi.

Backblaze Announces B2 Compute Partnerships

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/introducing-cloud-compute-services/

Backblaze Announces B2 Compute Partnerships

In 2015, we announced Backblaze B2 Cloud Storage — the most affordable, high performance storage cloud on the planet. The decision to release B2 as a service was in direct response to customers asking us if they could use the same cloud storage infrastructure we use for our Computer Backup service. With B2, we entered a market in direct competition with Amazon S3, Google Cloud Services, and Microsoft Azure Storage. Today, we have over 500 petabytes of data from customers in over 150 countries. At $0.005 / GB / month for storage (1/4th of S3) and $0.01 / GB for downloads (1/5th of S3), it turns out there’s a healthy market for cloud storage that’s easy and affordable.

As B2 has grown, customers wanted to use our cloud storage for a variety of use cases that required not only storage but compute. We’re happy to say that through partnerships with Packet & ServerCentral, today we’re announcing that compute is now available for B2 customers.

Cloud Compute and Storage

Backblaze has directly connected B2 with the compute servers of Packet and ServerCentral, thereby allowing near-instant (< 10 ms) data transfers between services. Also, transferring data between B2 and both our compute partners is free.

  • Storing data in B2 and want to run an AI analysis on it? — There are no fees to move the data to our compute partners.
  • Generating data in an application? — Run the application with one of our partners and store it in B2.
  • Transfers are free and you’ll save more than 50% off of the equivalent set of services from AWS.

These partnerships enable B2 customers to use compute, give our compute partners’ customers access to cloud storage, and introduce new customers to industry-leading storage and compute — all with high-performance, low-latency, and low-cost.

Is This a Big Deal? We Think So

Compute is one of the most requested services from our customers. Why? Because it unlocks a number of use cases for them. Let’s look at three popular examples:

Transcoding Media Files

B2 has earned wide adoption in the Media & Entertainment (“M&E”) industry. Our affordable storage and download pricing make B2 great for a wide variety of M&E use cases. But many M&E workflows require compute. Content syndicators, like American Public Television, need the ability to transcode files to meet localization and distribution management requirements.

There are a multitude of reasons that transcode is needed — thumbnail and proxy generation enable M&E professionals to work efficiently. Without compute, the act of transcoding files remains cumbersome. Either the files need to be brought down from the cloud, transcoded, and then pushed back up or they must be kept locally until the project is complete. Both scenarios are inefficient.

Starting today, any content producer can spin up compute with one of our partners, pay by the hour for their transcode processing, and return the new media files to B2 for storage and distribution. The company saves money, moves faster, and ensures their files are safe and secure.

Disaster Recovery

Backblaze’s heritage is based on providing outstanding backup services. When you have incredibly affordable cloud storage, it ends up being a great destination for your backup data.

Most enterprises have virtual machines (“VMs”) running in their infrastructure and those VMs need to be backed up. In a disaster scenario, a business wants to know they can get back up and running quickly.

With all data stored in B2, a business can get up and running quickly. Simply restore your backed up VM to one of our compute providers, and your business will be able to get back online.

Since B2 does not place restrictions, delays, or penalties on getting data out, customers can get back up and running quickly and affordably.

Saving $74 Million (aka “The Dropbox Effect”)

Ten years ago, Backblaze decided that S3 was too costly a platform to build its cloud storage business. Instead, we created the Backblaze Storage Pod and our own cloud storage infrastructure. That decision enabled us to offer our customers storage at a previously unavailable price point and maintain those prices for over a decade. It also laid the foundation for Netflix Open Connect and Facebook Open Compute.

Dropbox recently migrated the majority of their cloud services off of AWS and onto Dropbox’s own infrastructure. By leaving AWS, Dropbox was able to build out their own data centers and still save over $74 Million. They achieved those savings by avoiding the fees AWS charges for storing and downloading data, which, incidentally, are five times higher than Backblaze B2.

For Dropbox, being able to realize savings was possible because they have access to enough capital and expertise that they can build out their own infrastructure. For companies that have such resources and scale, that’s a great answer.

“Before this offering, the economics of the cloud would have made our business simply unviable.” — Gabriel Menegatti, SlicingDice

The questions Backblaze and our compute partners pondered were: “How can we democratize the Dropbox effect for our storage and compute customers? How can we help customers do more and pay less?” The answer we came up with was to connect Backblaze’s B2 storage with strategic compute partners and remove any transfer fees between them. You may not save $74 million as Dropbox did, but you can choose the optimal providers for your use case and realize significant savings in the process.

This Sounds Good — Tell Me More About Your Partners

We’re very fortunate to be launching our compute program with two fantastic partners in Packet and ServerCentral. These partners allow us to offer a range of computing services.

Packet

We recommend Packet for customers that need on-demand, high performance, bare metal servers available by the hour. They also have robust offerings for private / customized deployments. Their offerings end up costing 50-75% of the equivalent offerings from EC2.

To get started with Packet and B2, visit our partner page on Packet.net.

ServerCentral

ServerCentral is the right partner for customers that have business and IT challenges that require more than “just” hardware. They specialize in fully managed, custom cloud solutions that solve complex business and IT challenges. ServerCentral also has expertise in managed network solutions to address global connectivity and content delivery.

To get started with ServerCentral and B2, visit our partner page on ServerCentral.com.

What’s Next?

We’re excited to find out. The combination of B2 and compute unlocks use cases that were previously impossible or at least unaffordable.

“The combination of performance and price offered by this partnership enables me to create an entirely new business line. Before this offering, the economics of the cloud would have made our business simply unviable,” noted Gabriel Menegatti, co-founder at SlicingDice, a serverless data warehousing service. “Knowing that transfers between compute and B2 are free means I don’t have to worry about my business being successful. And, with download pricing from B2 at just $0.01 GB, I know I’m avoiding a 400% tax from AWS on data I retrieve.”

What can you do with B2 & compute? Please share your ideas with us in the comments. And, for those attending NAB 2018 in Las Vegas next week, please come by and say hello!

The post Backblaze Announces B2 Compute Partnerships appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

MagPi 68: an in-depth look at the new Raspberry Pi 3B+

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-68/

Hi folks, Rob from The MagPi here! You may remember that a couple of weeks ago, the Raspberry Pi 3 Model B+ was released, the updated version of the Raspberry Pi 3 Model B. It’s better, faster, and stronger than the original and it’s also the main topic in The MagPi issue 68, out now!

Everything you need to know about the new Raspberry Pi 3B+

What goes into ‘plussing’ a Raspberry Pi? We talked to Eben Upton and Roger Thornton about the work that went into making the Raspberry Pi 3B+, and we also have all the benchmarks to show you just how much the new Pi 3B+ has been improved.

Super fighting robots

Did you know that the next Pi Wars is soon? The 2018 Raspberry Pi robotics competition is taking place later in April, and we’ve got a full feature on what to expect, as well as top tips on how to make your own kick-punching robot for the next round.

More to read

Still want more after all that? Well, we have our usual excellent selection of outstanding project showcases, reviews, and tutorials to keep you entertained.

See pictures from Raspberry Pi’s sixth birthday, celebrated around the world!

This includes amazing projects like a custom Pi-powered, Switch-esque retro games console, a Minecraft Pi hack that creates a house at the touch of a button, and the Matrix Voice.

With a Pi and a 3D printer, you can make something as cool as this!

Get The MagPi 68

Issue 68 is available today from WHSmith, Tesco, Sainsbury’s, and Asda. If you live in the US, head over to your local Barnes & Noble or Micro Center in the next few days for a print copy. You can also get the new issue online from our store, or digitally via our Android and iOS apps. And don’t forget, there’s always the free PDF as well.

New subscription offer!

Want to support the Raspberry Pi Foundation and the magazine? We’ve launched a new way to subscribe to the print version of The MagPi: you can now take out a monthly £4 subscription to the magazine, effectively creating a rolling pre-order system that saves you money on each issue.

You can also take out a twelve-month print subscription and get a Pi Zero W, Pi Zero case, and adapter cables absolutely free! This offer does not currently have an end date.

That’s it for now. See you next month!

The post MagPi 68: an in-depth look at the new Raspberry Pi 3B+ appeared first on Raspberry Pi.

Friday Squid Blogging: Giant Squid Stealing Food from Each Other

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/friday_squid_bl_617.html

An interesting hunting strategy:

Off of northern Spain, giant squid often feed on schools of fish called blue whiting. The schools swim 400 meters or less below the surface, while the squid prefer to hang out around a mile deep. The squid must ascend to hunt, probably seizing fish from below with their tentacles, then descend again. In this scenario, a squid could save energy by pirating food from its neighbor rather than hunting its own fish, Guerra says: If the target squid has already carried its prey back to the depths to eat, the pirate could save itself a trip up to the shallow water. Staying below would also protect a pirate from predators such as dolphins and sperm whales that hang around the fish schools.

If a pirate happened to kill its victim, it would also reduce competition. The scientists think that’s what happened with the Bares squid: Its tentacles were ripped off in the fight over food. “The victim, disoriented and wounded, could enter a warmer mass of water in which the efficiency of their blood decreases markedly,” the authors write in a recent paper in the journal Ecology. “In this way, the victim, almost asphyxiated, would be at the mercy of the marine currents, being dragged toward the coast.”

It’s called “food piracy.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Pioneers winners: only you can save us

Post Syndicated from Erin Brindley original https://www.raspberrypi.org/blog/pioneers-winners-only-you-can-save-us/

She asked for help, and you came to her aid. Pioneers, the winners of the Only you can save us challenge have been picked!

Can you see me? Only YOU can save us!

I need your help. This is a call-out for those between 11 and 16 years old in the UK and Republic of Ireland. Something has gone very, very wrong and only you can save us. I’ve collected together as much information for you as I can. You’ll find it at http://www.raspberrypi.org/pioneers.

The challenge

In August we intercepted an emergency communication from a lonesome survivor. She seemed to be in quite a bit of trouble, and asked all you young people aged 11 to 16 to come up with something to help tackle the oncoming crisis, using whatever technology you had to hand. You had ten weeks to work in teams of two to five with an adult mentor to fulfil your mission.

The judges

We received your world-saving ideas, and our savvy survivor pulled together a ragtag bunch of apocalyptic experts to help us judge which ones would be the winning entries.

Dr Shini Somara

Dr Shini Somara is an advocate for STEM education and a mechanical engineer. She was host of The Health Show and has appeared in documentaries for the BBC, PBS Digital, and Sky. You can check out her work hosting Crash Course Physics on YouTube.

Prof Lewis Dartnell is an astrobiologist and author of the book The Knowledge: How to Rebuild Our World From Scratch.

Emma Stephenson has a background in aeronautical engineering and currently works in the Shell Foundation’s Access to Energy and Sustainable Mobility portfolio.

Currently sifting through the entries with the other judges of #makeyourideas with @raspberrypifoundation @_raspberrypi_


The winners

Our survivor is currently putting your entries to good use repairing, rebuilding, and defending her base. Our judges chose the following projects as outstanding examples of world-saving digital making.

Theme winner: Computatron

Raspberry Pioneers 2017 – Nerfus Dislikus Killer Robot

This is our entry to the pioneers ‘Only you can save us’ competition. Our team name is Computatrum. Hope you enjoy!

Are you facing an unknown enemy whose only weakness is Nerf bullets? Then this is the robot for you! We loved the especially apocalyptic feel of the Computatron’s cleverly hacked and repurposed elements. The team even used an old floppy disc mechanism to help fire their bullets!

Technically brilliant: Robot Apocalypse Committee

Pioneers Apocalypse 2017 – RationalPi

Thousands of lines of code… Many sheets of acrylic… A camera, touchscreen and fingerprint scanner… This is our entry into the Raspberry Pi Pioneers2017 ‘Only YOU can Save Us’ theme. When zombies or other survivors break into your base, you want a secure way of storing your crackers.

The Robot Apocalypse Committee is back, and this time they’ve brought cheese! The crew designed a cheese- and cracker-dispensing machine complete with face and fingerprint recognition to ensure those rations last until the next supply drop.

Best explanation: Pi Chasers

Tala – Raspberry Pi Pioneers Project

Hi! We are PiChasers and we entered the Raspberry Pi Pioneers challenge last time when the theme was “Make it Outdoors!” but now we’ve been faced with another theme: “Apocalypse”. We spent a while thinking of an original thing that would help in an apocalypse and decided upon a ‘text-only phone’ which uses local radio communication rather than cellular.

This text-based communication device encased in a tupperware container could be a lifesaver in a crisis! And luckily, the Pi Chasers produced an excellent video and amazing GitHub repo, ensuring that any and all survivors will be able to build their own in the safety of their base.

Most inspiring journey: Three Musketeers

Pioneers Entry – The Apocalypse

Pioneers Entry Team Name: The Three Musketeers Team Participants: James, Zach and Tom

We all know that zombies are terrible at geometry, and the Three Musketeers used this fact to their advantage when building their zombie security system. We were impressed to see the team working together to overcome the roadblocks they faced along the way.

We appreciate what you’re trying to do: Zombie Trolls

Zombie In The Middle


Playing piggy in the middle with zombies sure is a unique way of saving humankind from total extinction! We loved this project idea, and although the Zombie Trolls had a little trouble with their motors, we’re sure with a little more tinkering this zombie-fooling contraption could save us all.

Most awesome

Our judges also wanted to give a special commendation to the following teams for their equally awesome apocalypse-averting ideas:

  • PiRates, for their multifaceted zombie-proofing defence system and the high production value of their video
  • Byte them Pis, for their beautiful zombie-detecting doormat
  • Unatecxon, for their impressive bunker security system
  • Team Crompton, for their pressure-activated door system
  • Team Ernest, for their adventures in LEGO

The prizes

All our winning teams have secured exclusive digital maker boxes, jam-packed with tantalising tech to satisfy all tinkering needs.

Our theme winners have also secured themselves a place at Coolest Projects 2018 in Dublin, Ireland!

Thank you to everyone who got involved in this round of Pioneers. Look out for your awesome submission swag arriving in the mail!

The post Pioneers winners: only you can save us appeared first on Raspberry Pi.

Libertarians are against net neutrality

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/12/libertarians-are-against-net-neutrality.html

This post claims to be by a libertarian in support of net neutrality. As a libertarian, I need to debunk this. “Net neutrality” is a case of one-hand clapping, you rarely hear the competing side, and thus, that side may sound attractive. This post is about the other side, from a libertarian point of view.

That post just repeats the common, and wrong, left-wing talking points. I mean, there might be a libertarian case for some broadband regulation, but this isn’t it.

This thing they call “net neutrality” is just left-wing politics masquerading as some sort of principle. It’s no different than how people claim to be “pro-choice”, yet demand forced vaccinations. Or, it’s no different than how people claim to believe in “traditional marriage” even while they are on their third “traditional marriage”.

Properly defined, “net neutrality” means no discrimination of network traffic. But nobody wants that. A classic example is how most internet connections have faster download speeds than uploads. This discriminates against upload traffic, harming innovation in upload-centric applications like DropBox’s cloud backup or BitTorrent’s peer-to-peer file transfer. Yet activists never mention this, or other types of network traffic discrimination, because they no more care about “net neutrality” than Trump or Gingrich care about “traditional marriage”.

Instead, when people say “net neutrality”, they mean “government regulation”. It’s the same old debate between who is the best steward of consumer interest: the free-market or government.

Specifically, in the current debate, they are referring to the Obama-era FCC “Open Internet” order and reclassification of broadband under “Title II” so they can regulate it. Trump’s FCC is putting broadband back to “Title I”, which means the FCC can’t regulate most of its “Open Internet” order.

Don’t be tricked into thinking the “Open Internet” order is anything but intensely political. The premise behind the order is the Democrats’ firm belief that it’s the government who created the Internet, and that all innovation, advances, and investment ultimately come from the government. It sees ISPs as inherently deceitful entities who will only serve their own interests, at the expense of consumers, unless the FCC protects consumers.

It says so right in the order itself. It starts with the premise that broadband ISPs are evil, using illegitimate “tactics” to hurt consumers, and continues with similar language throughout the order.

A good contrast to this can be seen in Tim Wu’s non-political original paper in 2003 that coined the term “net neutrality”. Whereas the FCC sees broadband ISPs as enemies of consumers, Wu saw them as allies. His concern was not that ISPs would do evil things, but that they would do stupid things, such as favoring short-term interests over long-term innovation (such as having faster downloads than uploads).

The political depravity of the FCC’s order can be seen in this comment from one of the commissioners who voted for those rules:

FCC Commissioner Jessica Rosenworcel wants to increase the minimum broadband standards far past the new 25Mbps download threshold, up to 100Mbps. “We invented the internet. We can do audacious things if we set big goals, and I think our new threshold, frankly, should be 100Mbps. I think anything short of that shortchanges our children, our future, and our new digital economy,” Commissioner Rosenworcel said.

This is indistinguishable from communist rhetoric that credits the Party for everything, as this booklet from North Korea will explain to you.

But what about monopolies? After all, while the free-market may work when there’s competition, it breaks down where there are fewer competitors, oligopolies, and monopolies.

There is some truth to this: in individual cities, there’s often only a single credible high-speed broadband provider. But this isn’t the issue at stake here. The FCC isn’t proposing light-handed regulation to keep monopolies in check, but heavy-handed regulation that regulates every last decision.

Advocates of FCC regulation keep pointing out how broadband monopolies can exploit their rent-seeking positions in order to screw the customer. They keep coming up with ever more bizarre and unlikely scenarios for what monopoly power grants the ISPs.

But they never mention the simplest: that broadband monopolies can just charge customers more money. They imagine instead that these companies will pursue a string of outrageous, evil, and less profitable behaviors to exploit their monopoly position.

The FCC’s reclassification of broadband under Title II gives it full power to regulate ISPs as utilities, including setting prices. The FCC has stepped back from this, promising it won’t go so far as to set prices, and that it will only regulate against these imagined abuses. This is kind of bizarre: either broadband ISPs are evilly exploiting their monopoly power or they aren’t. Why stop at regulating only half the evil?

The answer is that the claim of “monopoly” power is a deception. It starts with overstating how many monopolies there are to begin with. When it issued its 2015 “Open Internet” order, the FCC simultaneously redefined what it meant by “broadband”, upping the speed from 5 Mbps to 25 Mbps. That’s because while most consumers have multiple choices at 5 Mbps, fewer consumers have multiple choices at 25 Mbps. It’s a dirty political trick to convince you there is more of a problem than there is.

In any case, their rules still apply to the slower broadband providers, and equally apply to the mobile (cell phone) providers. The US has four mobile phone providers (AT&T, Verizon, T-Mobile, and Sprint) and plenty of competition between them. That it’s monopolistic power that the FCC cares about here is a lie. As their Open Internet order clearly shows, the fundamental principle that animates the document is that all corporations, monopolies or not, are treacherous and must be regulated.

“But corporations are indeed evil”, people argue, “see here’s a list of evil things they have done in the past!”

No, those things weren’t evil. They were done because they benefited the customers, not as some sort of secret rent seeking behavior.

For example, one of the more common “net neutrality abuses” that people mention is AT&T’s blocking of FaceTime. I’ve debunked this elsewhere on this blog, but the summary is this: there was no network blocking involved (not a “net neutrality” issue), and the FCC analyzed it and decided it was in the best interests of the consumer. It’s disingenuous to claim it’s an evil that justifies FCC actions when the FCC itself declared it not evil and took no action. It’s disingenuous to cite the “net neutrality” principle that all network traffic must be treated equally when, in fact, the network did treat all the traffic equally.

Another frequently cited abuse is Comcast’s throttling of BitTorrent. Comcast did this because Netflix users were complaining. Like all streaming video, Netflix backs off to a slower speed (and poorer quality) when it experiences congestion. BitTorrent, uniquely among applications, never backs off. As most applications become slower and slower, BitTorrent just speeds up, consuming all available bandwidth. This is especially problematic when there’s limited upload bandwidth available. Thus, Comcast throttled BitTorrent during prime time TV viewing hours when the network was already overloaded by Netflix and other streams. BitTorrent users wouldn’t mind this throttling, because it often took days to download a big file anyway.

When the FCC took action, Comcast stopped the throttling and imposed bandwidth caps instead. This was a worse solution for everyone. It penalized heavy Netflix viewers, and prevented BitTorrent users from large downloads. Even though BitTorrent users were seen as the victims of this throttling, they’d vastly prefer the throttling over the bandwidth caps.

In both the FaceTime and BitTorrent cases, the issue was “network management”. AT&T had no competing video calling service, Comcast had no competing download service. They were only reacting to the fact their networks were overloaded, and did appropriate things to solve the problem.

Mobile carriers still struggle with the “network management” issue. While their networks are fast, they are still of low capacity, and quickly degrade under heavy use. They are looking for tricks in order to reduce usage while giving consumers maximum utility.

The biggest concern is video. It’s problematic because it’s designed to consume as much bandwidth as it can, throttling itself only when it experiences congestion. This is what you probably want when watching Netflix at the highest possible quality, but it’s bad when confronted with mobile bandwidth caps.

With small mobile devices, you don’t want as much quality anyway. You want the video degraded to lower quality, and lower bandwidth, all the time.

That’s the reasoning behind T-Mobile’s offerings. They offer an unlimited video plan in conjunction with the biggest video providers (Netflix, YouTube, etc.). The catch is that when congestion occurs, they’ll throttle it to lower quality. In other words, they give their bandwidth to all the other phones in your area first, then give you as much of the leftover bandwidth as you want for video.

While it sounds like T-Mobile is doing something evil, “zero-rating” certain video providers and degrading video quality, the FCC allows this, because they recognize it’s in the customer interest.

Mobile providers especially have great interest in more innovation in this area, in order to conserve precious bandwidth, but they are finding it costly. They can’t just innovate, but must ask the FCC permission first. And with the new heavy handed FCC rules, they’ve become hostile to this innovation. This attitude is highlighted by the statement from the “Open Internet” order:

And consumers must be protected, for example from mobile commercial practices masquerading as “reasonable network management.”

This is a clear declaration that the free market doesn’t work and won’t correct abuses, and that mobile companies are treacherous and will do evil things without FCC oversight.

Conclusion

Ignoring the rhetoric for the moment, the debate comes down to simple left-wing authoritarianism and libertarian principles. The Obama administration created a regulatory regime under clear Democrat principles, and the Trump administration is rolling it back to more free-market principles. There is no principle at stake here, certainly nothing to do with a technical definition of “net neutrality”.

The 2015 “Open Internet” order is not about “treating network traffic neutrally”, because it doesn’t do that. Instead, it’s purely a left-wing document that claims corporations cannot be trusted, must be regulated, and that innovation and prosperity comes from the regulators and not the free market.

It’s not about monopolistic power. The primary targets of regulation are the mobile broadband providers, where there is plenty of competition, and who have the most “network management” issues. Even if it were just about wired broadband (like Comcast), it’s still ignoring the primary ways monopolies profit (raising prices) and instead focuses on bizarre and unlikely ways of rent seeking.

If you are a libertarian who nonetheless believes in this “net neutrality” slogan, you’ve got to do better than mindlessly repeating the arguments of the left-wing. The term itself, “net neutrality”, is just a slogan, varying from person to person, from moment to moment. You have to be more specific. If you truly believe in the “net neutrality” technical principle that all traffic should be treated equally, then you’ll want a rewrite of the “Open Internet” order.

In the end, while libertarians may still support some form of broadband regulation, it’s impossible to reconcile libertarianism with the 2015 “Open Internet”, or the vague things people mean by the slogan “net neutrality”.

5 Reasons Why AWS Leads the Cloud Market

Post Syndicated from Chris De Santis original https://www.anchor.com.au/blog/2017/10/5-reasons-aws-leads-cloud/

There is no doubt that in the cloud computing market, there is a lot of competition, but there is also a clear market leader. Amazon Web Services (AWS) leads the charge among other web services from similar tech giants such as Microsoft, IBM, and Google, but how did they get there and what’s taking so long for someone of the likes of Google to knock them off their pedestal?


Credit: Synergy Research Group

Recent research from Synergy Research shows that Amazon has a seemingly unbeatable lead. John Dinsdale, chief analyst at Synergy Research, told TechCrunch that, on paper, AWS is too far ahead for any competitor to gain a short-term advantage. The reason behind their spectacular lead is simple:

AWS was first.

If you start the race before everyone else and keep at the pace they’re running, you’re going to win, and that’s exactly what Amazon are doing. Yet, instead of sitting on their colossal market share like a throne, they’re continuing to rapidly innovate and differentiate.

Dinsdale continues to explain that AWS does five things continuously that allow it to stay on top of the cloud market:

  1. Invests considerable amounts in infrastructure
  2. Expands its fleet of services
  3. Executes it all well
  4. Grows its business with enterprises
  5. Has the full long-term backing of Amazon

What can we take from this?

Well, according to Dinsdale, the Amazon formula involves:

  • Investing in your innovation
  • Constantly broadening your product/service range
  • Performing with minimal error
  • Aiming for high-profile customers
  • Securing stable funding and support

The post 5 Reasons Why AWS Leads the Cloud Market appeared first on AWS Managed Services by Anchor.

5 years with home NAS/RAID

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/09/5-years-with-home-nasraid.html

I have lots of data-sets (packet-caps, internet-scans), so I need a large RAID system to hold it all. As I described in 2012, I bought a home “NAS” system. I thought I’d give the 5-year perspective.

Reliability. I had two drives fail, which is about what’s to be expected. Buying a new drive, swapping it in, and rebuilding the RAID was painless, though that’s because I used RAID6 (two-drive redundancy). RAID5 (one-drive redundancy) is for chumps.

Speed. I’ve been unhappy with the speed, but there’s not much I can do about it. Mechanical drive access times are slow, and I don’t see any way of fixing that.

Cost. It’s been $3000 over 5 years (including the two replacement drives). That comes out to $50/month. Amazon’s “Glacier” service is $108/month. Since we all have the same hardware costs, it’s unlikely that any online cloud storage can do better than doing it yourself.
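To make that comparison concrete, here is the back-of-the-envelope arithmetic using the numbers above; swap in your own hardware cost, lifespan, and cloud quote.

    # Back-of-the-envelope: amortized NAS cost vs. a flat monthly cloud bill.
    # Figures are illustrative; plug in your own.
    nas_total_cost = 3000.0      # hardware plus replacement drives over the period ($)
    months = 5 * 12              # 5-year lifespan
    cloud_monthly = 108.0        # quoted cloud price for equivalent capacity ($/month)

    nas_monthly = nas_total_cost / months
    print("NAS:   ${:.2f}/month".format(nas_monthly))
    print("Cloud: ${:.2f}/month".format(cloud_monthly))
    print("Cloud premium: {:.1f}x".format(cloud_monthly / nas_monthly))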

Moore’s Law. For the same price as I spent 5 years ago, I can now get three times the storage, including faster processors in the NAS box. From that perspective, I’ve only spent $33/month on storage, as the remaining third still has value.

Ease-of-use: The reason to go with a NAS is ease-of-use, so I don’t have to mess with it. Yes, I’m a Linux sysadmin, but I have more than enough Linux boxen needing my attention. The NAS has been extremely easy to use, even dealing with the two disk failures.

Battery backup. The cheap $50 CyberPower UPS I bought never worked well and completely failed recently, so I’ve ordered a $150 APC unit to replace it.

Vendor. I chose Synology, and have no reason to complain. Of course they’ve had security vulnerabilities, but then, so have all of their competitors.

DLNA. This is a standard for streaming music among home devices. It never worked well. I suspect partly it’s Synology’s fault that they can’t transcode well. I suspect it’s also the apps I tried on the iPad which have obvious problems. I end up streaming to the iPad by simply using the SMB protocol to serve files rather than a video protocol.

Consumer vs. enterprise drives. I chose consumer rather than enterprise drives. I think this is always the best choice (RAID means inexpensive drives). But very smart people with experience in recovering data disagree with me.

If you are in the market. If you are building your own NAS, get a 4 or 5 bay device and RAID6. Two-drive redundancy is really important.

Announcing the 2017-18 European Astro Pi challenge!

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/announcing-2017-18-astro-pi/

Astro Pi is back! Today we’re excited to announce the 2017-18 European Astro Pi challenge in partnership with the European Space Agency (ESA). We are searching for the next generation of space scientists.


Astro Pi is an annual science and coding competition where student-written code is run on the International Space Station under the oversight of an ESA astronaut. The challenge is open to students from all 22 ESA member countries, including — for the first time — associate members Canada and Slovenia.

The format of the competition is changing slightly this year, and we also have a brand-new non-competitive mission in which participants are guaranteed to have their code run on the ISS for 30 seconds!

Mission Zero

Until now, students have worked on Astro Pi projects in an extra-curricular context and over multiple sessions. For teachers and students who don’t have much spare capacity, we wanted to provide an accessible activity that teams can complete in just one session.

So we came up with Mission Zero for young people no older than 14. To complete it, form a team of two to four people and use our step-by-step guide to help you write a simple Python program that shows your personal message and the ambient temperature on the Astro Pi. If you adhere to a few rules, your code is guaranteed to run in space for 30 seconds, and you’ll receive a certificate showing the exact time period during which your code has run in space. No special hardware is needed for this mission, since everything is done in a web browser.
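To give you a feel for the scale of the task, here is a minimal sketch of the kind of program Mission Zero asks for, using the sense_hat Python library (the browser-based emulator mimics broadly the same API). The message text and colours are placeholders; the official step-by-step guide is the reference to follow.

    # Minimal Mission Zero-style sketch: scroll a message and the ambient temperature.
    from sense_hat import SenseHat

    sense = SenseHat()
    temperature = sense.get_temperature()      # degrees Celsius from the on-board sensor
    sense.show_message(
        "Hello from Team Example! Temp: {:.1f} C".format(temperature),
        scroll_speed=0.08,
        text_colour=[255, 255, 0],
    )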

Mission Zero is open until 26 November 2017! Find out more.

Mission Space Lab

Students aged up to 19 can take part in Mission Space Lab. Form a team of two to six people, and work like real space scientists to design your own experiment. Receive free kit to work with, and write the Python code to carry out your experiment.

There are two themes for Mission Space Lab teams to choose from for their projects:

  • Life in space
    You will make use of Astro Pi Vis (“Ed”) in the European Columbus module. You can use all of its sensors, but you cannot record images or videos.
  • Life on Earth
    You will make use of Astro Pi IR (“Izzy”), which will be aimed towards the Earth through a window. You can use all of its sensors and its camera.

The Astro Pi kit, delivered to Space Lab teams by ESA

If you achieve flight status, your code will be uploaded to the ISS and run for three hours (two orbits). All the data that your code records in space will be downloaded and returned to you for analysis. Then submit a short report on your findings to be in with a chance to win exclusive, money-can’t-buy prizes! You can also submit your project for a Bronze CREST Award.

Mission Space Lab registration is open until 29 October 2017, and accepted teams will continue working on their experiments into spring 2018. Find out more.

How do I get started?

There are loads of materials available that will help you begin your Astro Pi journey — check out the Getting started with the Sense HAT resource and this video explaining how to build the flight case.

Questions?

If you have any questions, please post them in the comments below. We’re standing by to answer them!

The post Announcing the 2017-18 European Astro Pi challenge! appeared first on Raspberry Pi.

FRED-209 Nerf gun tank

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/nerf-gun-tank-fred-209/

David Pride, known to many of you as an active member of our maker community, has done it again! His FRED-209 build combines a Nerf gun, 3D printing, a Raspberry Pi Zero, and robotics to make one neat remotely controlled Nerf tank.

FRED-209 – 3D printed Raspberry Pi Nerf Tank


A Nerf gun for FRED-209

David says he worked on FRED-209 over the summer in order to have some fun with Nerf guns, which weren’t around when he was a kid. He purchased an Elite Stryfe model at a car boot sale, and took it apart to see what made it tick. Then he set about figuring out how to power it with motors and a servo.

Nerf Elite Stryfe components for the FRED-209 Nerf tank of David Pride

To control the motors, David used a ZeroBorg add-on board for the Pi Zero, and he set up a PlayStation 3 controller to pilot his tank. These components were also part of a robot that David entered into the Pi Wars competition, so he had already written code for them.
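To give a flavour of how those pieces talk to each other, here is a rough drive-loop sketch in the spirit of that setup. It assumes PiBorg's ZeroBorg Python library and pygame for reading the controller; the axis numbers and motor mapping are guesses, not David's actual code.

    # Rough sketch of a tank-style drive loop: one stick per track.
    # Assumes PiBorg's ZeroBorg library and a pygame-visible gamepad; not David's code.
    import time
    import pygame
    import ZeroBorg

    zb = ZeroBorg.ZeroBorg()
    zb.Init()

    pygame.init()
    pygame.joystick.init()
    pad = pygame.joystick.Joystick(0)
    pad.init()

    try:
        while True:
            pygame.event.pump()
            left = -pad.get_axis(1)    # sticks report -1..1; invert so up means forward
            right = -pad.get_axis(3)   # axis numbers vary by controller
            zb.SetMotor1(left)         # motor power is also in the range -1..1
            zb.SetMotor2(right)
            time.sleep(0.05)
    finally:
        zb.MotorsOff()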

3D printing for FRED-209

During prototyping for his Nerf tank, which David named after ED-209 from RoboCop, he used lots of eBay loot and several 3D-printed parts. He used the free OpenSCAD software package to design the parts he wanted to print. If you’re a novice at 3D printing, you might find the printing advice he shares in the write-up on his blog very useful.

3D-printed lid of FRED-209 nerf gun tank by David Pride

David found the 3D printing of the 24cm-long lid of FRED-209 tricky

On eBay, David found some cool-looking chunky wheels, but these turned out to be too heavy for the motors. In the end, he decided to use a Rover 5 chassis, which changed the look of FRED-209 from ‘monster truck’ to ‘tank’.

FRED-209 Nerf tank by David Pride

Next step: teach it to use stairs

The final result looks awesome, and David’s video demonstrates that it shoots very accurately as well. A make like this might be a great defensive project for our new apocalypse-themed Pioneers challenge!

Taking FRED-209 further

David will be uploading code and STL files for FRED-209 soon, so keep an eye on his blog or Twitter for updates. He’s also bringing the Nerf tank to the Cotswold Raspberry Jam this weekend. If you’re attending the event, make sure you catch him and try FRED-209 out yourself.

Never one to rest on his laurels, David is already working on taking his build to the next level. He wants to include a web interface controller and a camera, and is working on implementing OpenCV to give the Nerf tank the ability to autonomously detect targets.
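If you're curious what that last step might look like, a common starting point is simple colour thresholding with OpenCV; the sketch below hunts for a brightly coloured blob in a single camera frame. The HSV range, camera index, and OpenCV 4-style findContours signature are assumptions, and this is not David's implementation.

    # Sketch: find a brightly coloured target in one camera frame via colour thresholding.
    # HSV range and camera index are illustrative, not tuned values.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array([5, 120, 120]), np.array([20, 255, 255]))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            target = max(contours, key=cv2.contourArea)
            x, y, w, h = cv2.boundingRect(target)
            print("Target centre at ({}, {})".format(x + w // 2, y + h // 2))  # aim here
    cap.release()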

Pi Wars 2018

I have a feeling we might get to see an advanced version of David’s project at next year’s Pi Wars!

The 2018 Pi Wars have just been announced. They will take place on 21-22 April at the Cambridge Computer Laboratory, and you have until 3 October to apply to enter the competition. What are you waiting for? Get making! And as always, do share your robot builds with us via social media.

The post FRED-209 Nerf gun tank appeared first on Raspberry Pi.

Choosing a Backup Provider (An Intro to Backblaze)

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/an-intro-to-backblaze/

Backblaze storage pods

Hi! We’re Backblaze — a backup and cloud storage company in sunny San Mateo, California. We’ve been in business since 2007, have a great track record, and have been on a mission to make backing up simple, inexpensive, and unobtrusive.

This post hopes to serve as an introduction to Backblaze for folks that might not be familiar with us. If you’re an avid reader already, you’ll note that we’ve written about many of these stories before. We won’t be offended if you tune back in for the next post. For everyone else, we thought we’d give you a look at who we are, how we’ve remained committed to unlimited backup, and why we think you should give us a shot.

A Bit About our Background

“We never had deep VC pockets to burn cash. If we were unsustainable, we would have gone out of business 9 years ago.” — Gleb Budman, Backblaze CEO and cofounder

Backblaze just turned 10 years old (thanks for the birthday wishes), and we have a solid track record as a successful company. Backblaze was started by five founders who went without salaries for two years until they got the company profitable. That’s an accomplishment in and of itself. A decade later, we’ve “only” raised $5.3 Million in funding. Don’t get us wrong, $5M is a lot of money, but we do think it shows that we run a responsible company by providing industry leading backup solutions at fair prices.

Backblaze is Committed To Customers & Unlimited Data Backup

Since 2007, many companies have come into the backup space. Many of those, at some point or another, offered an unlimited data storage plan. In 2017, Backblaze stands alone as the remaining player offering truly unlimited data backup.

What is “truly unlimited?” To us, that means getting our customers backed up as quickly as possible — with no limits on file types or sizes. While there are other backup companies out there, few of them, if any, offer unlimited services at a flat rate. Many force customers to choose between service tiers, leading to confusion and customer apprehension about how much data they have now, or will have later. By contrast, we are focused on making Backblaze easy to use, and easy to understand.

At Backblaze, backup means running efficiently in the background to get a copy of your data securely into the cloud. Because we’re truly unlimited, we operate on an “exclusion” model. That means, by default, we backup all of the user data on your computer. Of course, you can exclude anything you don’t want backed up. Other companies operate on an “inclusion” model — you need to proactively select folders and files to be backed up. Why did we choose “exclusion” over “inclusion?” Because in our model, if you do nothing, you are fully covered. The alternative may leave you forgetting that new folder you created or those important files on your desktop.
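To make the distinction concrete, here is a toy sketch of what an exclusion model looks like in code; it is purely illustrative and not how the Backblaze client is actually implemented.

    # Toy illustration of an "exclusion" backup model: everything is included
    # unless it matches an exclusion rule. Not the actual Backblaze client logic.
    import os

    EXCLUDED_DIRS = {"/tmp", "/var/cache"}        # user-chosen exclusions (hypothetical)
    EXCLUDED_EXTENSIONS = {".iso", ".vmdk"}

    def files_to_back_up(root="/home"):
        for dirpath, dirnames, filenames in os.walk(root):
            if any(dirpath.startswith(excluded) for excluded in EXCLUDED_DIRS):
                dirnames[:] = []                  # don't descend into excluded trees
                continue
            for name in filenames:
                if os.path.splitext(name)[1].lower() in EXCLUDED_EXTENSIONS:
                    continue
                yield os.path.join(dirpath, name) # by default, everything else is covered

An inclusion model would be the inverse: nothing is backed up until the user adds it to a list, which is exactly where forgotten folders slip through.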

Operating under the “inclusion model” would mean we would store less data (which would reduce our costs), but we’re not interested in reducing our costs if it means leaving our customers unprotected. Because of decisions like that, we’re currently storing over 350PB of our customer data.

Recently, we released version 5.0 of our industry leading computer backup product. Among other things in that release, we introduced file sharing via URL and faster backups. Through something called auto-threading, we’ve increased the speed at which your data gets backed up. Our internal tests have us at over 10x the speed of the competition. That’s how one Reddit user backed up almost one terabyte of data in less than 24 hours.

Not only are we committed to our Personal Backup users, but we’re also a leading destination for businesses as well. Our latest Backblaze for Business update gives businesses of any size all of the same great backup and security, while also adding an administrative console and tools through our Backblaze Groups feature.

Best of all, our Backblaze Groups feature is available to every Backblaze user, so if you’re the “Head of I.T.” for your household and managing a few computers, you can manage your family’s backups with Groups as well.

How We Do It

The question often comes up, “How do you do it? How can you continue offering unlimited backup in an era where most everyone else has stopped?” The answer lies in our origins — because we didn’t have a lot of cash, we had to create a sustainable business. Among other things, we created our own Storage Pods, Storage Vaults, and software. Our purpose-built infrastructure is what gives us incredibly low cloud storage costs. That same storage architecture is the basis for B2 Cloud Storage, the most affordable object storage on the planet (B2 is ¼ of the price of the offerings from Amazon, Microsoft and Google). Backblaze B2’s APIs, CLIs, and integration partners also give users the flexibility of backing up Macs, PCs, Linux, and servers their own way, if they want to take control.

We think that kind of dedication, innovation, and frugality supports our claim to be a trustworthy caretaker of your data — videos, photos, business docs, and other precious memories.

Give Us a Try!

Give us a try with our free 15-day trial. We’d love to welcome you to your new backup home.

Have questions? Sound off in the comments below! We love hearing from current customers as well as those looking to come aboard.

The post Choosing a Backup Provider (An Intro to Backblaze) appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How Aussie ecommerce stores can compete with the retail giant Amazon

Post Syndicated from chris desantis original https://www.anchor.com.au/blog/2017/08/aussie-ecommerce-stores-vs-amazon/

The powerhouse Amazon retail store is set to launch in Australia toward the end of 2018 and Aussie ecommerce retailers need to ready themselves for the competition storm ahead.

2018 may seem a while away, but getting your ecommerce site into tip-top shape and ready to compete can take time. Check out these helpful hints from the Anchor crew.

Speed kills

If you’ve ever heard the tale of the tortoise and the hare, the moral is that “slow and steady wins the race”. This is definitely not the place for that phrase, because if your site loads as slowly as a 1995 dial-up connection, your ecommerce store will not, I repeat, will not win the race.

Site speed can be impacted by a number of factors, and the challenge is getting the balance right between a site that loads at lightning speed and one that delivers engaging content to your audience. There are many ways to check the performance of your site, including Anchor’s free hosting check-up or Pingdom.
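If you want a quick, scriptable number to track alongside those tools, a few lines of Python will time a page fetch; treat it as a rough gauge rather than a substitute for a proper performance audit, and the URL is obviously a placeholder.

    # Quick-and-dirty page fetch timing; a rough gauge, not a full performance audit.
    import requests

    response = requests.get("https://www.example-store.com.au/", timeout=10)
    print("Status:", response.status_code)
    print("Time until response: {:.2f}s".format(response.elapsed.total_seconds()))
    print("Page size: {:.0f} KB".format(len(response.content) / 1024))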

Taking action on what you find can boost the performance of your site.

Here’s an interesting blog from the WebCEO team about site speed’s impact on conversion rates on-page, or check out our previous blog on maximising site performance.

Show me the money

As an ecommerce store, getting credit card details as fast as possible is probably at the top of your list, but it’s important to remember that it’s an actual person that needs to hand over the details.

Consider the customer’s experience whilst checking out. Making people log in to their account before checkout can lead to abandoned carts as customers try to remember the vital details. Similarly, making a customer enter all their details before displaying shipping costs is more of an annoyance than a benefit.

Built for growth

Before you blast out a promo email to your entire database or spend up big on PPC, consider what happens when a five-fold increase in traffic all jumps onto your site at around the same time.

Will your site come screeching to a sudden halt with a 504 or 408 error message, or ride high on the wave of increased traffic? If you have fixed infrastructure such as a dedicated server, or are utilising a VPS, then consider the maximum number of concurrent users your site can handle.
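One way to find that ceiling before a big promotion finds it for you is a small load test; the sketch below uses the open-source Locust library, and the host, paths, and user counts are all placeholders to adjust for your own store.

    # Minimal Locust load-test sketch (save as locustfile.py). Example run:
    #   locust -f locustfile.py --host https://www.example-store.com.au
    # Host, paths, and user numbers are placeholders.
    from locust import HttpUser, task, between

    class Shopper(HttpUser):
        wait_time = between(1, 5)          # seconds between simulated user actions

        @task(3)
        def browse_homepage(self):
            self.client.get("/")

        @task(1)
        def view_product(self):
            self.client.get("/products/example-widget")   # hypothetical product page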

Consider this. Amazon.com.au will be built on the scalable cloud infrastructure of Amazon Web Services and will utilise all the microservices and data mining technology to offer customers a seamless, personalised shopping experience. How will your business compete?

Search ready

Being found online is important for any business, but for ecommerce sites, it’s essential. Gaining results from SEO practices can take time, so beware of ‘quick fix’ guarantees from outsourced agencies.

Search Engine Optimisation (SEO) practices can have lasting effects. Good practices can ensure your site is found via organic search without huge advertising budgets; on the other hand, ‘black hat’ practices can push your ecommerce store into search oblivion.

SEO takes discipline and focus to get right. Here are some of our favourite hints for SEO greatness from those who live and breathe SEO:

  • Optimise your site for mobile
  • Use Meta Tags wisely
  • Leverage Descriptive alt tags and image file names
  • Create content for people, not bots (keyword stuffing is a no no!)

SEO best practices are continually evolving, but the goal stays the same: create a site that gives users a great experience and the content they expect to find.

Google My Business is a free service that EVERY business should take advantage of. It is a listing service where your business can provide details such as address, phone number, website, and trading hours. It’s easy to update and manage: you can add photos and a physical address (if applicable), and display shopper reviews.

Get your site ship shape

Overwhelmed by these starter tips? If you are ready to get your site into tip-top shape, get in touch. We work with awesome partners like eWave who can help create a seamless online shopping experience.

 

The post How Aussie ecommerce stores can compete with the retail giant Amazon appeared first on AWS Managed Services by Anchor.

Empowerment, Engagement, and Education for Women in Tech

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/empowerment-engagement-and-education-for-women-in-tech/

I’ve been earning a living in the technology industry since 1977, when I worked in one of the first computer stores in the country as a teenager. Looking back over the past 40 years, and realizing that the Altair, IMSAI, Sol-20, and North Star Horizon machines that I learned about, built, debugged, programmed, sold, and supported can now be seen in museums (Seattle’s own Living Computer Museum is one of the best), helps me to appreciate that the world I live in changes quickly, and to understand that I need to do the same. This applies to technology, to people, and to attitudes.

I lived in a suburb of Boston in my early teens. At that time, diversity meant that one person in my public school had come all the way from (gasp) England a few years earlier. When I went to college I began to meet people from other countries and continents and to appreciate the fresh vantage points and approaches that they brought to the workplace and to the problems that we tackled together.

Back in those days, there were virtually no women working as software engineers, managers, or entrepreneurs. Although the computer store was owned by a couple and the wife did all of the management, this was the exception rather than the rule at that time, and for too many years after that. Today, I am happy to be part of a team that brings together the most capable people, regardless of their gender, race, background, or anything other than their ability to do a kick-ass job (Ana, Tara, Randall, Tina, Devin, and Sara, I’m talking about all of you).

We want to do all that we can to encourage young women to prepare to become the next generation of engineers, managers, and entrepreneurs. AWS is proud to support Girls Who Code (including the Summer Immersion Program), Girls in Tech, and other organizations supporting women and underrepresented communities in tech. I sincerely believe that these organizations will be able to move the needle in the right direction. However, like any large-scale social change, this is going to take some time with results visible in years and decades, and only with support & participation from those of us already in the industry.

In conjunction with me&Eve, we were able to speak with some of the attendees at the most recent Girls in Tech Catalyst conference (that’s our booth in the picture). Click through to see what the attendees had to say:

I’m happy to be part of an organization that supports such a worthwhile cause, and that challenges us to make our organization ever-more diverse. While reviewing this post with my colleagues I learned about We Power Tech, an AWS program designed to build skills and foster community and to provide access to Amazon executives who are qualified to speak about the program and about diversity. In conjunction with our friends at Accenture, we have assembled a strong Diversity at re:Invent program.

Jeff;

PS – I did my best to convince Ana, Tara, Tina, or Sara to write this post. Tara finally won the day when she told me “You have raised girls into women, and you are passionate in seeing them succeed in their chosen fields with respect and equity. Your post conveying that could be powerful.”