Tag Archives: CAD

New National Academies Report on Crypto Policy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/new_national_ac.html

The National Academies has just published “Decrypting the Encryption Debate: A Framework for Decision Makers.” It looks really good, although I have not read it yet.

Not much news or analysis yet. Please post any links you find in the comments, and I will summarize them here.

Weekly roundup: Lost time

Post Syndicated from Eevee original https://eev.ee/dev/2018/02/13/weekly-roundup-lost-time/

I ran out of brain pills near the end of January due to some regulatory kerfuffle, and spent something like a week and a half basically in a daze. I have incredibly a lot of stuff to do right now, too, so not great timing… but, well, I guess no time would be especially good. Oh well. I got a forced vacation and played some Avernum.

Anyway, in the last three weeks, the longest span I’ve ever gone without writing one of these:

  • anise: I added a ✨ completely new menu feature ✨ that looks super cool and amazing and will vastly improve the game.

  • blog: I wrote SUPER game night 3, featuring a bunch of games from GAMES MADE QUICK??? 2.0! It’s only a third of them though, oh my god, there were just so many.

    I also backfilled some release posts, including one for Strawberry Jam 2 — more on that momentarily.

  • ???: Figured out a little roadmap and started on an ???.

  • idchoppers: Went down a whole rabbit hole trying to port some academic C++ to Rust, ultimately so I could intersect arbitrary shapes, all so I could try out this ridiculous idea to infer the progression through a Doom map. This was kind of painful, and is basically the only useful thing I did while unmedicated. I might write about it.

  • misc: I threw together a little PICO-8 prime sieve inspired by this video. It’s surprisingly satisfying.

    (Hmm, does this deserve a release post? Where should its permanent home be? Argh.)

  • art: I started to draw my Avernum party but only finished one of them. I did finish a comic celebrating the return of my brain pills.

  • neon vn: I contributed some UI and bugfixing to a visual novel that’ll be released on Floraverse tomorrow.

  • alice vn: For Strawberry Jam 2, glip and I are making a ludicrously ambitious horny visual novel in Ren’Py. Turns out Ren’Py is impressively powerful, and I’ve been having a blast messing with it. But also our idea requires me to write about sixty zillion words by the end of the month. I guess we’ll see how that goes.

    I have a (NSFW) progress thread going on my smut alt, but honestly, most of the progress for the next week will be “did more writing”.

I’m behind again! Sorry. I still owe a blog post for last month, and a small project for last month, and now blog posts for this month, and Anise game is kinda in limbo, and I don’t know how any of this will happen with this huge jam game taking priority over basically everything else. I’ll see if I can squeeze other stuff in here and there. I intended to draw more regularly this month, too, but wow I don’t think I can even spare an hour a day.

The jam game is forcing me to do a lot of writing that I’d usually dance around and avoid, though, so I think I’ll come out the other side of it much better and faster and more confident.

Welp. Back to writing!

Kim Dotcom Begins New Fight to Avoid Extradition to United States

Post Syndicated from Andy original https://torrentfreak.com/kim-dotcom-begins-new-fight-to-avoid-extradition-to-united-states-180212/

More than six years ago in January 2012, file-hosting site Megaupload was shut down by the United States government and founder Kim Dotcom and his associates were arrested in New Zealand.

What followed was an epic legal battle to extradite Dotcom, Mathias Ortmann, Finn Batato, and Bram van der Kolk to the United States to face several counts including copyright infringement, racketeering, and money laundering. Dotcom has battled the US government every inch of the way.

Among the most significant matters has been the validity of the search warrants used to raid Dotcom’s Coatesville home on January 20, 2012. After a prolonged trip through the legal system, in 2014 the Supreme Court dismissed Dotcom’s appeals and upheld the validity of the warrants.

In 2015, the District Court ruled that Dotcom and his associates are eligible for extradition. A subsequent appeal to the High Court failed in February 2017 when, despite a finding that communicating copyright-protected works to the public is not a criminal offense in New Zealand, the judge also ruled in favor of extradition.

Of course, Dotcom and his associates immediately filed appeals and today in the Court of Appeal in Wellington, their hearing got underway.

Lawyer Grant Illingworth, representing Van der Kolk and Ortmann, told the Court that the case had “gone off the rails” during the initial 10-week extradition hearing in 2015, arguing that the case had merited “meaningful” consideration by a judge, something which failed to happen.

“It all went wrong. It went absolutely, totally wrong,” Mr. Illingworth said. “We were not heard.”

As expected, Illingworth underlined the belief that under New Zealand law, a person may only be extradited for an offense that could be tried in a criminal court locally. His clients’ cases do not meet that standard, the lawyer argued.

Turning back the clocks more than six years, Illingworth again raised the thorny issue of the warrants used to authorize the raids on the Megaupload defendants.

It had previously been established that New Zealand’s GCSB intelligence service had illegally spied on Dotcom and his associates in the lead up to their arrests. However, that fact was not disclosed to the District Court judge who authorized the raids.

“We say that there was misleading conduct at this stage because there was no reference to the fact that information had been gathered illegally by the GCSB,” he said.

But according to Justice Forrest Miller, even if this defense argument holds up, the High Court had already found there was a prima facie case to answer “with bells on”.

“The difficulty that you face here ultimately is whether the judicial process that has been followed in both of the courts below was meaningful, to use the Canadian standard,” Justice Miller said.

“You’re going to have to persuade us that what Justice Gilbert [in the High Court] ended up with, even assuming your interpretation of the legislation is correct, was wrong.”

Although the US seeks to extradite Dotcom and his associates on 13 charges, including racketeering, copyright infringement, money laundering and wire fraud, the Court of Appeal previously confirmed that extradition could be granted based on just some of the charges.

The stakes couldn’t be much higher. The FBI says that the “Megaupload Conspiracy” earned the quartet $175m and if extradited to the US, they could face decades in jail.

While Dotcom was not in court today, he has been active on Twitter.

“The court process went ‘off the rails’ when the only copyright expert Judge in NZ was >removed< from my case and replaced by a non-tech Judge who asked if Mega was ‘cow storage’. He then simply copy/pasted 85% of the US submissions into his judgment," Dotcom wrote.

Dotcom also appeared to question the suitability of judges at both the High Court and Court of Appeal for the task in hand.

“Justice Miller and Justice Gilbert (he wrote that High Court judgment) were business partners at the law firm Chapman Tripp which represents the Hollywood Studios in my case. Both Judges are now at the Court of Appeal. Gilbert was promoted shortly after ruling against me,” Dotcom added.

Dotcom is currently suing the New Zealand government for billions of dollars in damages over the warrant which triggered his arrest and the demise of Megaupload.

The hearing is expected to last up to two-and-a-half weeks.


US Online Piracy Lawsuits Skyrocket in the New Year

Post Syndicated from Ernesto original https://torrentfreak.com/u-s-online-piracy-lawsuits-skyrocket-in-the-new-year-180211/

Since the turn of the last decade, numerous people have been sued for illegal file-sharing in US courts.

Initially, these lawsuits targeted hundreds or thousands of BitTorrent users per case, but this practice has been rooted out since. Now, most file-sharing cases target a single person, up to a dozen or two at most.

While there may be fewer defendants, there are still plenty of lawsuits filed every month. These generally come from a small group of companies, regularly referred to as “copyright trolls,” who are looking to settle with the alleged pirates.

According to Lex Machina, there were 1,019 file-sharing cases filed in the United States last year, which is an average of 85 per month. More than half of these came from adult entertainment outfit Malibu Media (X-Art), which alone was good for 550 lawsuits.

While those are decent numbers, they could easily be shattered this year. Data collected by TorrentFreak shows that during the first month of 2018, three copyright holders filed a total of 286 lawsuits against alleged pirates. That’s more than three times the monthly average for 2017.

As expected, Malibu Media takes the crown with 138 lawsuits, but not by a large margin. Strike 3 Holdings, which distributes its adult videos via the Blacked, Tushy, and Vixen websites, comes in second place with 133 cases.

Some Malibu Media cases

While Strike 3 Holdings is a relative newcomer, their cases follow a similar pattern. There are also clear links to Malibu Media, as one of the company’s former lawyers, Emilie Kennedy, now works as in-house counsel at Strike 3.

The only non-adult copyright holder that filed cases against alleged BitTorrent pirates was Bodyguard Productions. The company filed 15 cases against downloaders of The Hitman’s Bodyguard, totaling a few dozen defendants.

While these numbers are significant, it’s hard to predict whether the increase will persist. Lawsuits targeted at BitTorrent users often come in waves, and the same companies that flooded the courts with cases last month could easily take a break the next.

While copyright holders have every right to go after people who share their work without permission, these types of cases are not without controversy.

Several judges have used strong terms, including “harassment,” to describe some of the tactics involved, and the IP-address evidence is not always trusted either.

That said, there’s no evidence that Malibu Media and others are done yet.


Voksi Releases Detailed Denuvo-Cracking Video Tutorial

Post Syndicated from Andy original https://torrentfreak.com/voksi-releases-detailed-denuvo-cracking-video-tutorial-180210/

Earlier this week, version 4.9 of the Denuvo anti-tamper system, which had protected Assassin’s Creed Origins for the past several months, was defeated by Italian cracking group CPY.

While Denuvo would probably paint four months of protection as a success, the company would certainly have preferred for things to have gone on a bit longer, not least following publisher Ubisoft’s decision to use VMProtect technology on top.

But while CPY do their thing in Italy, there’s another rival whittling away at whatever the giants at Denuvo (and new owner Irdeto) can come up with. The cracker – known only as Voksi – hails from Bulgaria, and this week he took the unusual step of releasing a 90-minute video in which he details how to defeat Denuvo’s V4 anti-tamper technology.

The video is not for the faint-hearted so those with an aversion to issues of a highly technical nature might feel the urge to look away. However, it may surprise readers to learn that not so long ago, Voksi knew absolutely nothing about coding.

“You will find this very funny and unbelievable,” Voksi says, recalling the events of 2012.

“There was one game called Sanctum and on one free [play] weekend [on Steam], I and my best friend played through it and saw how great the cooperative action was. When the free weekend was over, we wanted to keep playing, but we didn’t have any money to buy the game.

“So, I started to look for alternative ways, LAN emulators, anything! Then I decided I need to crack it. That’s how I got into reverse engineering. I started watching some shitty YouTube videos with bad quality and doing some tutorials. Then I found about Steam exploits and that’s how I got into making Steamworks fixes, allowing cracked multiplayer between players.”

Voksi says his entire cracking career began with this one indie game and his desire to play it with his best friend. Prior to that, he had absolutely no experience at all. He says he’s taken no university courses or any course at all for that matter. Everything he knows has come from material he’s found online. But the intrigue doesn’t stop there.

“I don’t even know how to code properly in high-level language like C#, C++, etc. But I understand assembly [language] perfectly fine,” he explains.

For those who code, that’s generally a little bit back to front, with low-level languages usually posing the most difficulties. But Voksi says that with assembly, everything “just clicked.”

Of course, it’s been six years since the 21-year-old was first motivated to crack a game due to lack of funds. In the more than half decade since, have his motivations changed at all? Is it the thrill of solving the puzzle or are there other factors at play?

“I just developed an urge to provide paid stuff for free for people who can’t afford it and specifically, co-op and multiplayer cracks. Of course, i’m not saying don’t support the developers if you have the money and like the game. You should do that,” he says.

“The challenge of cracking also motivates me, especially with an abomination like Denuvo. It is pure cancer for the gaming industry, it doesn’t help and it only causes issues for the paying customers.”

Those who follow Voksi online will know that as well as being known in his own right, he’s part of the REVOLT group, a collective that has Voksi’s core interests and goals as their own.

“REVOLT started as a group with one and only goal – to provide multiplayer support for cracked games. No other group was doing it until that day. It was founded by several members, from which I’m currently the only one active, still releasing cracks.

“Our great achievements are in first place, of course, cracking Denuvo V4, making us one of the four groups/people who were able to break the protection. In second place are our online fixes for several AAA games, allowing you to play on legit servers with legit players. In third place, our ordinary Steamworks fixes allowing you to play multiplayer between cracked users.”

In communities like /r/crackwatch on Reddit and those less accessible, Voksi and others doing similar work are often held up as Internet heroes, cracking games in order to give the masses access to something that might’ve been otherwise inaccessible. But how does this fame sit with him?

“Well, I don’t see myself as a hero, just another ordinary person doing what he loves. I love seeing people happy because of my work, that’s also a big motivation, but nothing more than that,” he says.

Finally, what’s up next for Voksi and what are his hopes for the rest of the year?

“In an ideal world, Denuvo would die. As for me, I don’t know, time will tell,” he concludes.


Water Utility Infected by Cryptocurrency Mining Software

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/water_utility_i.html

A water utility in Europe has been infected by cryptocurrency mining software. This is a relatively new attack: hackers compromise computers and force them to mine cryptocurrency for them. This is the first time I’ve seen it infect SCADA systems, though.

It seems that this mining software is benign, and doesn’t affect the performance of the hacked computer. (A smart virus doesn’t kill its host.) But that’s not going to always be the case.

[$] A cyborg’s journey

Post Syndicated from jake original https://lwn.net/Articles/745942/rss

Karen Sandler has been giving conference talks about free software and open medical devices for the better part of a decade at this point. LWN briefly covered a 2010 LinuxCon talk and a 2012 linux.conf.au (LCA) talk; her talk at LCA 2012 was her first full-length keynote, she said. In this year’s edition, she reviewed her history (including her love for LCA, based in part on that 2012 visit) and gave an update on the status of the source code for the device implanted in her heart.

Astro Pi celebrates anniversary of ISS Columbus module

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/astro-pi-celebrates-anniversary/

Right now, 400km above the Earth aboard the International Space Station, are two very special Raspberry Pi computers. They were launched into space on 6 December 2015 and are, most assuredly, the farthest-travelled Raspberry Pi computers in existence. Each year they run experiments that school students create in the European Astro Pi Challenge.

Raspberry Astro Pi units on the International Space Station

Left: Astro Pi Vis (Ed); right: Astro Pi IR (Izzy). Image credit: ESA.

The European Columbus module

Today marks the tenth anniversary of the launch of the European Columbus module. The Columbus module is the European Space Agency’s largest single contribution to the ISS, and it supports research in many scientific disciplines, from astrobiology and solar science to metallurgy and psychology. More than 225 experiments have been carried out inside it during the past decade. It’s also home to our Astro Pi computers.

Here’s a video from 7 February 2008, when Space Shuttle Atlantis went skywards carrying the Columbus module in its cargo bay.

STS-122 Launch NASA TV Coverage


Today, coincidentally, is also the deadline for the European Astro Pi Challenge: Mission Space Lab. Participating teams have until midnight tonight to submit their experiments.

Anniversary celebrations

At 16:30 GMT today there will be a live event on NASA TV for the Columbus module anniversary with NASA flight engineers Joe Acaba and Mark Vande Hei.

Our Astro Pi computers will be joining in the celebrations by displaying a digital birthday candle that the crew can blow out. It works by detecting an increase in humidity when someone blows on it. The video below demonstrates the concept.

AstroPi candle


Do try this at home

The exact Astro Pi code that will run on the ISS today is available for you to download and run on your own Raspberry Pi and Sense HAT. You’ll notice that the program includes code to make it stop automatically when the date changes to 8 February. This is just to save time for the ground control team.

If you have a Raspberry Pi and a Sense HAT, you can use the terminal commands below to download and run the code yourself:

wget http://rpf.io/colbday -O birthday.py
chmod +x birthday.py
./birthday.py

When you see a blank blue screen with the brightness increasing, the Sense HAT is measuring the baseline humidity. It does this every 15 minutes so it can recalibrate to take account of natural changes in background humidity. A humidity increase of 2% is needed to blow out the candle, so if the background humidity changes by more than 2% in 15 minutes, it’s possible to get a false positive. Press Ctrl + C to quit.
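If you just want to see the humidity trick in miniature, here is a rough sketch of the idea using the sense_hat Python library. It is not the flight code linked above: the colour, threshold, and timings here are illustrative, and the real program also handles the brightness ramp and the automatic stop on 8 February.

# Illustrative sketch of the humidity "candle" idea -- not the ISS flight code.
import time
from sense_hat import SenseHat

sense = SenseHat()
FLAME = (255, 140, 0)        # assumed candle colour, not the real program's palette
THRESHOLD = 2.0              # percentage-point rise treated as a "blow"
BASELINE_INTERVAL = 15 * 60  # re-measure the baseline every 15 minutes

try:
    while True:
        baseline = sense.get_humidity()   # baseline humidity for this window
        sense.clear(FLAME)                # light the "candle"
        window_start = time.time()
        while time.time() - window_start < BASELINE_INTERVAL:
            if sense.get_humidity() - baseline >= THRESHOLD:
                sense.clear()             # candle blown out
                time.sleep(5)             # pause before relighting
                break
            time.sleep(0.1)
except KeyboardInterrupt:
    sense.clear()                         # Ctrl + C tidies up the display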

Please tweet pictures of your candles to @astro_pi – we might share yours! And if we’re lucky, we might catch a glimpse of the candle on the ISS during the NASA TV event at 16:30 GMT today.


Cabinet of Secret Documents from Australia

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/cabinet_of_secr.html

This story of leaked Australian government secrets is unlike any other I’ve heard:

It begins at a second-hand shop in Canberra, where ex-government furniture is sold off cheaply.

The deals can be even cheaper when the items in question are two heavy filing cabinets to which no-one can find the keys.

They were purchased for small change and sat unopened for some months until the locks were attacked with a drill.

Inside was the trove of documents now known as The Cabinet Files.

The thousands of pages reveal the inner workings of five separate governments and span nearly a decade.

Nearly all the files are classified, some as “top secret” or “AUSTEO”, which means they are to be seen by Australian eyes only.

Yes, that really happened. The person who bought and opened the file cabinets contacted the Australian Broadcasting Corp, who is now publishing a bunch of it.

There’s lots of interesting (and embarrassing) stuff in the documents, although most of it is local politics. I am more interested in the government’s reaction to the incident: they’re pushing for a law making it illegal for the press to publish government secrets it received through unofficial channels.

“The one thing I would point out about the legislation that does concern me particularly is that classified information is an element of the offence,” he said.

“That is to say, if you’ve got a filing cabinet that is full of classified information … that means all the Crown has to prove if they’re prosecuting you is that it is classified – nothing else.

“They don’t have to prove that you knew it was classified, so knowledge is beside the point.”

[…]

Many groups have raised concerns, including media organisations, who say the proposed laws unfairly target journalists trying to do their job.

But really anyone could be prosecuted just for possessing classified information, regardless of whether they know about it.

That might include, for instance, if you stumbled across a folder of secret files in a regular skip bin while walking home and handed it over to a journalist.

This illustrates a fundamental misunderstanding of the threat. The Australian Broadcasting Corp gets their funding from the government, and was very restrained in what they published. They waited months before publishing as they coordinated with the Australian government. They allowed the government to secure the files, and then returned them. From the government’s perspective, they were the best possible media outlet to receive this information. If the government makes it illegal for the Australian press to publish this sort of material, the next time it will be sent to the BBC, the Guardian, the New York Times, or Wikileaks. And since people no longer read their news from newspapers sold in stores but on the Internet, the result will be just as many people reading the stories with far fewer redactions.

The proposed law is older than this leak, but the leak is giving it new life. The Australian opposition party is being cagey on whether they will support the law. They don’t want to appear weak on national security, so I’m not optimistic.

EDITED TO ADD (2/8): The Australian government backed down on that new security law.

EDITED TO ADD (2/13): Excellent political cartoon.

Jailed Streaming Site Operator Hit With Fresh $3m Damages Lawsuit

Post Syndicated from Andy original https://torrentfreak.com/jailed-streaming-site-operator-hit-with-fresh-3m-damages-lawsuit-180207/

After being founded more than half a decade ago, Swefilmer grew to become Sweden’s most popular movie and TV show streaming site. It was only a question of time before authorities stepped in to bring the show to an end.

In 2015, a Swedish operator of the site in his early twenties was raided by local police. A second man, Turkish and in his late twenties, was later arrested in Germany.

The pair, who hadn’t met in person, appeared before the Varberg District Court in January 2017, accused of making more than $1.5m from their activities between November 2013 and June 2015.

The prosecutor described Swefilmer as “organized crime”, painting the then 26-year-old as the main brains behind the site and the 23-year-old as playing a much smaller role. The former was said to have led a luxury lifestyle after benefiting from $1.5m in advertising revenue.

The sentences eventually handed down matched the defendants’ alleged level of participation. While the younger man received probation and community service, the Turk was sentenced to serve three years in prison and ordered to forfeit $1.59m.

Very quickly it became clear there would be an appeal, with plaintiffs represented by anti-piracy outfit RightsAlliance complaining that their 10m krona ($1.25m) claim for damages over the unlawful distribution of local movie Johan Falk: Kodnamn: Lisa had been ruled out by the Court.

With the appeal hearing now just a couple of weeks away, Swedish outlet Breakit is reporting that media giant Bonnier Broadcasting has launched an action of its own against the now 27-year-old former operator of Swefilmer.

According to the publication, Bonnier’s pay-TV company C More, which distributes for Fox, MGM, Paramount, Universal, Sony and Warner, is set to demand around 24m krona ($3.01m) via anti-piracy outfit RightsAlliance.

“This is about organized crime and grossly criminal individuals who earned huge sums on our and others’ content. We want to take every opportunity to take advantage of our rights,” says Johan Gustafsson, Head of Corporate Communications at Bonnier Broadcasting.

C More reportedly filed its lawsuit at the Stockholm District Court on January 30, 2018. At its core are four local movies said to have been uploaded and made available via Swefilmer.

“C More would probably never even have granted a license to [the operator] to make or allow others to make the films available to the public in a similar way as [the operator] did, but if that had happened, the fee would not be less than 5,000,000 krona ($628,350) per film or a total of 20,000,000 krona ($2,513,400),” C More’s claim reads.

Speaking with Breakit, lawyer Ansgar Firsching said he couldn’t say much about C More’s claims against his client.

“I am very surprised that two weeks before the main hearing [C More] comes in with this requirement. If you open another front, we have two trials that are partly about the same thing,” he said.

Firsching said he couldn’t elaborate at this stage but expects his client to deny the claim for damages. C More sees things differently.

“Many people live under the illusion that sites like Swefilmer are driven by idealistic teens in their parents’ basements, which is completely wrong. This is about organized crime where our content is used to generate millions and millions in revenue,” the company notes.

The appeal in the main case is set to go ahead February 20th.


Build a Multi-Tenant Amazon EMR Cluster with Kerberos, Microsoft Active Directory Integration and EMRFS Authorization

Post Syndicated from Songzhi Liu original https://aws.amazon.com/blogs/big-data/build-a-multi-tenant-amazon-emr-cluster-with-kerberos-microsoft-active-directory-integration-and-emrfs-authorization/

One of the challenges faced by our customers—especially those in highly regulated industries—is balancing the need for security with flexibility. In this post, we cover how to enable multi-tenancy and increase security by using EMRFS (EMR File System) authorization, the Amazon S3 storage-level authorization on Amazon EMR.

Amazon EMR is an easy, fast, and scalable analytics platform enabling large-scale data processing. EMRFS authorization provides Amazon S3 storage-level authorization by configuring EMRFS with multiple IAM roles. With this functionality enabled, different users and groups can share the same cluster and assume their own IAM roles respectively.

Simply put, on Amazon EMR, we can now have an Amazon EC2 role per user assumed at run time instead of one general EC2 role at the cluster level. When the user is trying to access Amazon S3 resources, Amazon EMR evaluates against a predefined mappings list in EMRFS configurations and picks up the right role for the user.

In this post, we will discuss what EMRFS authorization is (Amazon S3 storage-level access control) and show how to configure the role mappings with detailed examples. You will then have the desired permissions in a multi-tenant environment. We also demo Amazon S3 access from the HDFS command line, Apache Hive on Hue, and Apache Spark.

EMRFS authorization for Amazon S3

There are two prerequisites for using this feature:

  1. Users must be authenticated, because EMRFS needs to map the current user/group/prefix to a predefined user/group/prefix. There are several authentication options. In this post, we launch a Kerberos-enabled cluster that manages the Key Distribution Center (KDC) on the master node, and enable a one-way trust from the KDC to a Microsoft Active Directory domain.
  2. The application must support accessing Amazon S3 via EMRFS. Applications that have their own S3FileSystem APIs (for example, Presto) are not supported at this time.

EMRFS supports three types of mapping entries: user, group, and Amazon S3 prefix. Let’s use an example to show how this works.

Assume that you have the following three identities in your organization, defined in Active Directory: an admin user (admin1), a data engineering group (grp_data_engineering, with users such as dataeng1), and a data science group (grp_data_science, with users such as datascientist1).

To enable all these groups and users to share the EMR cluster, you need to define a set of IAM roles: a restricted base role attached to the cluster’s EC2 instances (EMR_EC2_RestrictedRole), a user role for the admin (emrfs_auth_user_role_admin_user), group roles for data engineering and data science (emrfs_auth_group_role_data_eng and emrfs_auth_group_role_data_sci), and a prefix role for the shared default bucket (emrfs_auth_prefix_role_default_s3_prefix).

In this case, you create a separate Amazon EC2 role that doesn’t grant any permission to Amazon S3. Let’s call it the base role (the EC2 role attached to the EMR cluster), which in this example is named EMR_EC2_RestrictedRole. You then define all the Amazon S3 permissions for each specific user or group in their own roles. The restricted role serves as the fallback when the user doesn’t match any user or group mapping and isn’t trying to access any of the Amazon S3 prefixes defined in the list.

Important: For all other roles, like emrfs_auth_group_role_data_eng, you need to add the base role (EMR_EC2_RestrictedRole) as the trusted entity so that it can assume other roles. See the following example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::511586466501:role/EMR_EC2_RestrictedRole"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

The following is an example policy for the admin user role (emrfs_auth_user_role_admin_user):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}

We are assuming the admin user has access to all buckets in this example.

The following is an example policy for the data science group role (emrfs_auth_group_role_data_sci):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::emrfs-auth-data-science-bucket-demo/*",
                "arn:aws:s3:::emrfs-auth-data-science-bucket-demo"
            ],
            "Action": [
                "s3:*"
            ]
        }
    ]
}

This role grants all Amazon S3 permissions to the emrfs-auth-data-science-bucket-demo bucket and all the objects in it. Similarly, the policy for the role emrfs_auth_group_role_data_eng is shown below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo/*",
                "arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo"
            ],
            "Action": [
                "s3:*"
            ]
        }
    ]
}

Example role mappings configuration

To configure EMRFS authorization, you use an EMR security configuration. The role mappings we use in this post are sketched below.
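A minimal sketch of the role-mappings portion of that security configuration, assuming the RoleMappings format of the EMR security configuration schema and reusing the role, user, group, and bucket names defined above (the account ID is the one from the trust-policy example; verify the key names against the current AWS documentation):

{
  "AuthorizationConfiguration": {
    "EmrFsConfiguration": {
      "RoleMappings": [
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_user_role_admin_user",
          "IdentifierType": "User",
          "Identifiers": [ "admin1" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_group_role_data_sci",
          "IdentifierType": "Group",
          "Identifiers": [ "grp_data_science" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_group_role_data_eng",
          "IdentifierType": "Group",
          "Identifiers": [ "grp_data_engineering" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_prefix_role_default_s3_prefix",
          "IdentifierType": "Prefix",
          "Identifiers": [ "s3://emrfs-auth-default-bucket-demo/" ]
        }
      ]
    }
  }
}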

Consider the following scenario.

First, the admin user admin1 tries to log in and run a command to access Amazon S3 data through EMRFS. The first role emrfs_auth_user_role_admin_user on the mapping list, which is a user role, is mapped and picked up. Then admin1 has access to the Amazon S3 locations that are defined in this role.

Then a user from the data engineer group (grp_data_engineering) tries to access a data bucket to run some jobs. When EMRFS sees that the user is a member of the grp_data_engineering group, the group role emrfs_auth_group_role_data_eng is assumed, and the user has proper access to Amazon S3 that is defined in the emrfs_auth_group_role_data_eng role.

Next, the third user comes, who is not an admin and doesn’t belong to any of the groups. After failing evaluation of the top three entries, EMRFS evaluates whether the user is trying to access a certain Amazon S3 prefix defined in the last mapping entry. This type of mapping entry is called the prefix type. If the user is trying to access s3://emrfs-auth-default-bucket-demo/, then the prefix mapping is in effect, and the prefix role emrfs_auth_prefix_role_default_s3_prefix is assumed.

If the user is not trying to access any of the Amazon S3 paths that are defined on the list—which means it failed the evaluation of all the entries—it only has the permissions defined in the EMR_EC2_RestrictedRole. This role is assumed by the EC2 instances in the cluster.

In this process, all the mappings defined are evaluated in the defined order, and the first role that is mapped is assumed, and the rest of the list is skipped.
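To make that order of evaluation concrete, here is a small, purely illustrative Python sketch of the first-match-wins lookup. It is not EMRFS source code; the function, its arguments, and the mapping dictionaries are hypothetical.

# Illustration of the first-match-wins evaluation order described above.
# This is NOT EMRFS source code; the helper and its arguments are hypothetical.
def resolve_role(user, groups, s3_path, mappings, base_role):
    for m in mappings:  # mappings are checked in the order they are configured
        if m["type"] == "user" and user in m["identifiers"]:
            return m["role"]
        if m["type"] == "group" and any(g in m["identifiers"] for g in groups):
            return m["role"]
        if m["type"] == "prefix" and any(s3_path.startswith(p) for p in m["identifiers"]):
            return m["role"]
    return base_role  # nothing matched: fall back to EMR_EC2_RestrictedRole

# Example: a user in grp_data_engineering listing the data engineering bucket
# would get emrfs_auth_group_role_data_eng; admin1 would match the user entry first.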

Setting up an EMR cluster and mapping Active Directory users and groups

Now that we know how EMRFS authorization role mapping works, the next thing we need to think about is how we can use this feature in an easy and manageable way.

Active Directory setup

Many customers manage their users and groups using Microsoft Active Directory or other tools like OpenLDAP. In this post, we create the Active Directory on an Amazon EC2 instance running Windows Server and create the users and groups we will be using in the example below. After setting up Active Directory, we use the Amazon EMR Kerberos auto-join capability to establish a one-way trust from the KDC running on the EMR master node to the Active Directory domain on the EC2 instance. You can use your own directory service as long as it speaks LDAP (Lightweight Directory Access Protocol).

To create and join Active Directory to Amazon EMR, follow the steps in the blog post Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory.

After configuring Active Directory, you can create all the users and groups using the Active Directory tools and add users to the appropriate groups. In this example, we created the users admin1, dataeng1, and datascientist1, created the groups grp_data_engineering and grp_data_science, and then added the users to the right groups.

Join the EMR cluster to an Active Directory domain

For clusters with Kerberos, Amazon EMR now supports automated Active Directory domain joins. You can use the security configuration to configure the one-way trust from the KDC to the Active Directory domain. You also configure the EMRFS role mappings in the same security configuration.

The EMR security configuration for this post combines the trusted Active Directory domain (EMRKRB.TEST.COM) with the EMRFS role mappings discussed earlier; the role-mapping sketch shown above illustrates the general shape of that part of the configuration. An example AWS CLI command for launching a cluster with this security configuration is shown later in this post.

Launching the EMR cluster and running the tests

Now you have configured Kerberos and EMRFS authorization for Amazon S3.

Additionally, you need to configure Hue with Active Directory using the Amazon EMR configuration API in order to log in using the AD users created before. The following is an example of Hue AD configuration.

[
  {
    "Classification":"hue-ini",
    "Properties":{

    },
    "Configurations":[
      {
        "Classification":"desktop",
        "Properties":{

        },
        "Configurations":[
          {
            "Classification":"ldap",
            "Properties":{

            },
            "Configurations":[
              {
                "Classification":"ldap_servers",
                "Properties":{

                },
                "Configurations":[
                  {
                    "Classification":"AWS",
                    "Properties":{
                      "base_dn":"DC=emrkrb,DC=test,DC=com",
                      "ldap_url":"ldap://emrkrb.test.com",
                      "search_bind_authentication":"false",
                      "bind_dn":"CN=adjoiner,CN=users,DC=emrkrb,DC=test,DC=com",
                      "bind_password":"Abc123456",
                      "create_users_on_login":"true",
                      "nt_domain":"emrkrb.test.com"
                    },
                    "Configurations":[

                    ]
                  }
                ]
              }
            ]
          },
          {
            "Classification":"auth",
            "Properties":{
              "backend":"desktop.auth.backend.LdapBackend"
            },
            "Configurations":[

            ]
          }
        ]
      }
    ]
  }

Note: In the preceding configuration JSON file, change the values as required before pasting it into the software setting section in the Amazon EMR console.

Now let’s use this configuration and the security configuration you created before to launch the cluster.

In the Amazon EMR console, choose Create cluster. Then choose Go to advanced options. On the Step1: Software and Steps page, under Edit software settings (optional), paste the configuration in the box.

The rest of the setup is the same as an ordinary cluster setup, except in the Security Options section. In Step 4: Security, under Permissions, choose Custom, and then choose the EMR_EC2_RestrictedRole that you created earlier.

Choose the appropriate subnets (these should meet the base requirement in order for a successful Active Directory join—see the Amazon EMR Management Guide for more details), and choose the appropriate security groups to make sure it talks to the Active Directory. Choose a key so that you can log in and configure the cluster.

Most importantly, choose the security configuration that you created earlier to enable Kerberos and EMRFS authorization for Amazon S3.

You can use the following AWS CLI command to create a cluster.

aws emr create-cluster --name "TestEMRFSAuthorization" \
  --release-label emr-5.10.0 \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --ec2-attributes InstanceProfile=EMR_EC2_DefaultRole,KeyName=MyEC2KeyPair \
  --service-role EMR_DefaultRole \
  --security-configuration MyKerberosConfig \
  --configurations file://hue-config.json \
  --applications Name=Hadoop Name=Hive Name=Hue Name=Spark \
  --kerberos-attributes Realm=EC2.INTERNAL,KdcAdminPassword=<YourClusterKDCAdminPassword>,ADDomainJoinUser=<YourADUserLogonName>,ADDomainJoinPassword=<YourADUserPassword>,CrossRealmTrustPrincipalPassword=<MatchADTrustPwd>

Note: If you create the cluster using CLI, you need to save the JSON configuration for Hue into a file named hue-config.json and place it on the server where you run the CLI command.

After the cluster gets into the Waiting state, try to connect by using SSH into the cluster using the Active Directory user name and password.

ssh -l <YourADUserName>@emrkrb.test.com <EMR IP or DNS name>

Quickly run two commands to show that the Active Directory join is successful:

  1. id [user name] shows the mapped AD users and groups in Linux.
  2. hdfs groups [user name] shows the mapped group in Hadoop.

Both should return the current Active Directory user and group information if the setup is correct.

Now, you can test the user mapping first. Log in with the admin1 user, and run a Hadoop list directory command:

hadoop fs -ls s3://emrfs-auth-data-science-bucket-demo/

Now switch to a user from the data engineer group.

Retry the previous command, which lists the data science bucket. It should throw an Amazon S3 Access Denied exception.

When you instead list the Amazon S3 bucket that the data engineer group does have access to, the group mapping is triggered.

hadoop fs -ls s3://emrfs-auth-data-engineering-bucket-demo/

It successfully returns the listing results. Next we will test Apache Hive and then Apache Spark.

 

To run jobs successfully, you need to create a home directory for every user in HDFS for staging data under /user/<username>. Users can configure a step to create a home directory at cluster launch time for every user who has access to the cluster. In this example, you use Hue since Hue will create the home directory in HDFS for the user at the first login. Here Hue also needs to be integrated with the same Active Directory as explained in the example configuration described earlier.

First, log in to Hue as a data engineer user, and open a Hive Notebook in Hue. Then run a query to create a new table pointing to the data engineer bucket, s3://emrfs-auth-data-engineering-bucket-demo/table1_data_eng/.

You can see that the table was created successfully. Now try to create another table pointing to the data science group’s bucket, where the data engineer group doesn’t have access.

It failed and threw an Amazon S3 Access Denied error.

Now insert one line of data into the successfully created table.

Next, log out, switch to a data science group user, and create another table, test2_datasci_tb.

The creation is successful.

The last task is to test Spark (it requires the user directory, but Hue created one in the previous step).

Now let’s come back to the command line and run some Spark commands.

Log in to the master node as the datascientist1 user.

Start the SparkSQL interactive shell by typing spark-sql, and run the show tables command. It should list the tables that you created using Hive.

As a data science group user, try select on both tables. You will find that you can only select the table defined in the location that your group has access to.

Conclusion

EMRFS authorization for Amazon S3 enables you to have multiple roles on the same cluster, providing flexibility to configure a shared cluster for different teams to achieve better efficiency. The Active Directory integration and group mapping make it much easier for you to manage your users and groups, and provide better auditability in a multi-tenant environment.


Additional Reading

If you found this post useful, be sure to check out Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory and Launching and Running an Amazon EMR Cluster inside a VPC.


About the Authors

Songzhi Liu is a Big Data Consultant with AWS Professional Services. He works closely with AWS customers to provide them Big Data & Machine Learning solutions and best practices on the Amazon cloud.

 

 

 

 

Cloudflare Terminates Service to Sci-Hub Domain Names

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-terminates-service-to-sci-hub-domain-names-180205/

While Sci-Hub is praised by thousands of researchers and academics around the world, copyright holders are doing everything in their power to wipe the site from the web.

Following a $15 million defeat against Elsevier last June, the American Chemical Society (ACS) won a default judgment of $4.8 million in copyright damages a few months later.

The publisher was further granted a broad injunction, requiring various third-party services to stop providing access to the site. This includes domain registries, hosting companies and search engines.

Soon after the order was signed, several of Sci-Hub’s domain names became unreachable as domain registries complied with the court order. This resulted in a domain name whack-a-mole, but all this time Sci-Hub remained available.

Last weekend another problem appeared for Sci-Hub. This time ACS went after CDN provider Cloudflare, which informed the site that a court order requires the company to disconnect several domain names.

“Cloudflare has received the attached court order, Case 1:17-cv-OO726-LMB-JFA,” the company writes. “Cloudflare will terminate your service for the following domains sci-hub.la, sci-hub.tv, and sci-hub.tw by disabling our authoritative DNS in 24 hours.”

According to Sci-Hub’s operator, losing access to Cloudflare is not “critical,” but it may “cause a short pause in website operation.”

Sci-Hub’s Cloudflare tweet

Cloudflare’s actions are significant because the company previously protested a similar order. When the RIAA used the permanent injunction in the MP3Skull case to compel Cloudflare to disconnect the site, the CDN provider refused.

The RIAA argued that Cloudflare was operating “in active concert or participation” with the pirates. The CDN provider objected, but the court eventually ordered Cloudflare to take action, although it did not rule on the “active concert or participation” part.

In the Sci-Hub case “active concert or participation” is also a requirement for the injunction to apply. While it specifically mentions ISPs and search engines, ACS Director Glenn Ruskin previously stressed that companies won’t be targeted for simply linking users to Sci-Hub.

“The court’s affirmative ruling does not apply to search engines writ large, but only to those entities who have been in active concert or participation with Sci-Hub, such as websites that host ACS content stolen by Sci-Hub,” Ruskin told us at the time.

Cloudflare does more than linking of course, but the company doesn’t see itself as a web hosting service either. While it still may not agree with the “active concert” classification, there’s no evidence that Cloudflare objected in court this time.

As for Sci-Hub, they have to look elsewhere if they want another CDN provider. For now, however, the site remains widely available.


Success at Apache: A Newbie’s Narrative

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/170536010891


Kuhu Shukla (bottom center) and team at the 2017 DataWorks Summit


By Kuhu Shukla

This post first appeared here on the Apache Software Foundation blog as part of ASF’s “Success at Apache” monthly blog series.

As I sit at my desk on a rather frosty morning with my coffee, looking up new JIRAs from the previous day in the Apache Tez project, I feel rather pleased. The latest community release vote is complete, the bug fixes that we so badly needed are in and the new release that we tested out internally on our many thousand strong cluster is looking good. Today I am looking at a new stack trace from a different Apache project process and it is hard to miss how much of the exceptional code I get to look at every day comes from people all around the globe. A contributor leaves a JIRA comment before he goes on to pick up his kid from soccer practice while someone else wakes up to find that her effort on a bug fix for the past two months has finally come to fruition through a binding +1.

Yahoo – which joined AOL, HuffPost, Tumblr, Engadget, and many more brands to form the Verizon subsidiary Oath last year – has been at the frontier of open source adoption and contribution since before I was in high school. So while I have no historical trajectories to share, I do have a story on how I found myself in an epic journey of migrating all of Yahoo’s jobs from Apache MapReduce to Apache Tez, a then-new DAG-based execution engine.

Oath grid infrastructure is through and through driven by Apache technologies, be it storage through HDFS, resource management through YARN, job execution frameworks with Tez, or user interface engines such as Hive, Hue, Pig, Sqoop, Spark, and Storm. Our grid solution is specifically tailored to Oath’s business-critical data pipeline needs, using the polymorphic technologies hosted, developed, and maintained by the Apache community.

On the third day of my job at Yahoo in 2015, I received a YouTube link on An Introduction to Apache Tez. I watched it carefully trying to keep up with all the questions I had and recognized a few names from my academic readings of Yarn ACM papers. I continued to ramp up on YARN and HDFS, the foundational Apache technologies Oath heavily contributes to even today. For the first few weeks I spent time picking out my favorite (necessary) mailing lists to subscribe to and getting started on setting up on a pseudo-distributed Hadoop cluster. I continued to find my footing with newbie contributions and being ever more careful with whitespaces in my patches. One thing was clear – Tez was the next big thing for us. By the time I could truly call myself a contributor in the Hadoop community nearly 80-90% of the Yahoo jobs were now running with Tez. But just like hiking up the Grand Canyon, the last 20% is where all the pain was. Being a part of the solution to this challenge was a happy prospect and thankfully contributing to Tez became a goal in my next quarter.

The next sprint planning meeting ended with me getting my first major Tez assignment – progress reporting. The progress reporting in Tez was non-existent – “Just needs an API fix,”  I thought. Like almost all bugs in this ecosystem, it was not easy. How do you define progress? How is it different for different kinds of outputs in a graph? The questions were many.

I, however, did not have to go far to get answers. The Tez community actively came to a newbie’s rescue, finding answers and posing important questions. I started attending the bi-weekly Tez community sync up calls and asking existing contributors and committers for course correction. Suddenly the team was much bigger, the goals much more chiseled. This was new to anyone like me who came from the networking industry, where the most open part of the code are the RFCs and the implementation details are often hidden. These meetings served as a clean room for our coding ideas and experiments. Ideas were shared, to the extent of which data structure we should pick and what a future user of Tez would take from it. In between the usual status updates and extensive knowledge transfers were made.

Oath uses Apache Pig and Apache Hive extensively and most of the urgent requirements and requests came from Pig and Hive developers and users. Each issue led to a community JIRA and as we started running Tez at Oath scale, new feature ideas and bugs around performance and resource utilization materialized. Every year most of the Hadoop team at Oath travels to the Hadoop Summit where we meet our cohorts from the Apache community and we stand for hours discussing the state of the art and what is next for the project. One such discussion set the course for the next year and a half for me.

We needed an innovative way to shuffle data. Frameworks like MapReduce and Tez have a shuffle phase in their processing lifecycle wherein the data from upstream producers is made available to downstream consumers. Even though Apache Tez was designed with a feature set corresponding to optimization requirements in Pig and Hive, the Shuffle Handler Service was retrofitted from MapReduce at the time of the project’s inception. With several thousands of jobs on our clusters leveraging these features in Tez, the Shuffle Handler Service became a clear performance bottleneck. So as we stood talking about our experience with Tez with our friends from the community, we decided to implement a new Shuffle Handler for Tez. All the conversation points were tracked now through an umbrella JIRA TEZ-3334 and the to-do list was long. I picked a few JIRAs and as I started reading through I realized, this is all new code I get to contribute to and review. There might be a better way to put this, but to be honest it was just a lot of fun! All the whiteboards were full, the team took walks post lunch and discussed how to go about defining the API. Countless hours were spent debugging hangs while fetching data and looking at stack traces and Wireshark captures from our test runs. Six months in and we had the feature on our sandbox clusters. There were moments ranging from sheer frustration to absolute exhilaration with high fives as we continued to address review comments and fixing big and small issues with this evolving feature.

As much as owning your code is valued everywhere in the software community, I would never go on to say “I did this!” In fact, “we did!” It is this strong sense of shared ownership and fluid team structure that makes the open source experience at Apache truly rewarding. This is just one example. A lot of the work that was done in Tez was leveraged by the Hive and Pig community and cross Apache product community interaction made the work ever more interesting and challenging. Triaging and fixing issues with the Tez rollout led us to hit a 100% migration score last year and we also rolled the Tez Shuffle Handler Service out to our research clusters. As of last year we have run around 100 million Tez DAGs with a total of 50 billion tasks over almost 38,000 nodes.

In 2018 as I move on to explore Hadoop 3.0 as our future release, I hope that if someone outside the Apache community is reading this, it will inspire and intrigue them to contribute to a project of their choice. As an astronomy aficionado, going from a newbie Apache contributor to a newbie Apache committer was very much like looking through my telescope - it has endless possibilities and challenges you to be your best.

About the Author:

Kuhu Shukla is a software engineer at Oath and did her Masters in Computer Science at North Carolina State University. She works on the Big Data Platforms team on Apache Tez, YARN and HDFS with a lot of talented Apache PMCs and Committers in Champaign, Illinois. A recent Apache Tez Committer herself she continues to contribute to YARN and HDFS and spoke at the 2017 Dataworks Hadoop Summit on “Tez Shuffle Handler: Shuffling At Scale With Apache Hadoop”. Prior to that she worked on Juniper Networks’ router and switch configuration APIs. She likes to participate in open source conferences and women in tech events. In her spare time she loves singing Indian classical and jazz, laughing, whale watching, hiking and peering through her Dobsonian telescope.

EU Anti-Piracy Agreement Has Little Effect on Advertising, Research Finds

Post Syndicated from Ernesto original https://torrentfreak.com/eu-anti-piracy-agreement-has-little-effect-on-advertising-research-finds-180204/

In recent years various copyright holder groups have adopted a “follow-the-money” approach in the hope of cutting off funding to so-called pirate sites.

Thus far this has resulted in some notable developments. In the UK, hundreds of advertising agencies began banning pirate sites in 2014 and similar initiatives have popped up elsewhere too.

One of the more prominent plans was orchestrated by the European Commission. In October 2016, this resulted in a voluntary self-regulation agreement signed by leading EU advertising organizations, which promised to reduce ad placement on pirate sites. The question is, how effective is this agreement?

To find out, researchers from European universities in Munich, Copenhagen, and Lisbon conducted an extensive study. They collected data on the prevalence of ads from various advertisers on hundreds of pirate sites. The data were collected on several occasions, both before and after the agreement.

The findings are published in the article “Follow The Money: Online Piracy and Self-Regulation in the Advertising Industry.” Christian Peukert, one of the authors, informs TF that the latest version of the working paper was published last month and is currently under review at an academic journal.

The results show that the effects of the anti-piracy agreement are fairly minimal. On the whole, there is no significant change in the number of piracy sites that ad agencies serve. Only when looking at the larger ad networks in isolation is a downward trend visible.

“Our results suggests that the presence of advertising services on piracy websites does not change significantly, at least not on average,” the researchers write in their paper.

“Once we allow for heterogeneity in terms of size, we show that more popular advertising services, i.e. those that are overall more diffused on the Internet, reduce their presence on piracy websites significantly more.”

When larger advertising companies are given more weight in the analysis, the average effect equates to a 17% drop in pirate site connections.
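To illustrate the difference weighting makes, here is a small, purely illustrative Python sketch; the numbers are invented and the paper’s actual econometric analysis is more involved than a simple weighted average. The point is only that a handful of small ad services that barely change can mask a large cutback by the big networks until each service is weighted by its size.

```python
# Illustrative only -- invented numbers, not the paper's data.
# (ad_service, relative size weight, change in presence on pirate sites)
ad_services = [
    ("large_network_a", 10.0, -0.25),   # big networks cut back sharply
    ("large_network_b",  8.0, -0.20),
    ("small_network_c",  1.0, +0.20),   # small services pick up some slack
    ("small_network_d",  1.0, +0.25),
]

unweighted = sum(change for _, _, change in ad_services) / len(ad_services)

total_weight = sum(weight for _, weight, _ in ad_services)
weighted = sum(weight * change for _, weight, change in ad_services) / total_weight

print(f"unweighted average change: {unweighted:+.1%}")   # 0.0% -- looks like no effect
print(f"size-weighted average change: {weighted:+.1%}")  # about -18% -- a clear drop
```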

That larger companies are more likely to comply with the agreement can be explained in several ways: they may simply be more aware of it, or they may feel more pressure to take appropriate steps in response.

Interestingly, there are also advertising companies that began advertising on pirate sites after the agreement was signed.

“We further provide some evidence that ad services that were not active in the piracy market before the self-regulation agreement increase their presence on piracy websites afterwards,” the researchers write.

This may have been partly triggered by site owners looking for alternatives, or by advertising companies looking for new opportunities. However, the effect is not statistically significant, so people shouldn’t read too much into it.

Overall, however, the researchers conclude that the voluntary agreement had only a relatively small impact on EU advertising as a whole, and that there’s room for improvement.

“These results raise concerns about the overall effectiveness of the self-regulation effort with respect to reducing incentives for publishers to supply unlicensed content,” they write.

The EU agreement coincided with a series of similar agreements which, according to this data, also had little effect on EU advertisers over the period studied. And a look at the average pirate site today makes it instantly clear that there are still plenty of advertisers willing to work with these sites.


Blizzard Targets Fan-Created ‘World of Warcraft’ Legacy Server

Post Syndicated from Ernesto original https://torrentfreak.com/blizzard-targets-fan-created-world-of-warcraft-legacy-server-180203/

Over the years video game developer Blizzard Entertainment has published many popular game titles, including World of Warcraft (WoW).

First released in 2004, the multiplayer online role-playing game has been a massive success. It holds the record for the most popular MMORPG in history, with over 100 million subscribers.

While the current game looks entirely different from its first release, there are many nostalgic gamers who still enjoy the earlier editions. Unfortunately, however, they can’t play them. At least not legally.

The only option WoW fans have is to go to unauthorized fan projects which recreate the early gaming experience, such as Light’s Hope.

“We are what’s known as a ‘Legacy Server’ project for World of Warcraft, which seeks to emulate the experience of playing the game in its earliest iterations, including advancing through early expansions,” the project explains.

“If you’ve ever wanted to see what World of Warcraft was like back in 2004 then this is the place to be. Our goal is to maintain the same feel and structure as the realms back then while maintaining an open platform for development and operation.”

In recent years the project has captured the hearts of tens of thousands of die-hard WoW fans. At the time of writing, the most popular realm has more than 6,000 people playing from all over the world. Blizzard, however, is less excited.

The company has asked the developer platform GitHub to remove the code repository published by Light’s Hope. Blizzard’s notice targets several SQL database files, stating that their layout and structure are nearly identical to those of the early WoW databases.

“The LightsHope spell table has identical layout and typically identical field names as the table from early WoW. We use database tables to represent game data, like spells, in WoW,” Blizzard writes.

“In our code, we use .sql files to represent the data layout of each table […]. MaNGOS, the platform off of which Light’s Hope appears to be built, uses a similar structure. The LightsHope spell_template table matches almost exactly the layout and field names of early WoW client database tables.”

This takedown notice had some effect, as people now see a “repository unavailable due to DMCA takedown” message when they access it in their browser.

While this may slow down development temporarily, it appears that the server itself is still running just fine. There were some downtime reports earlier this week, but it’s unknown whether that was related.

In addition to the GitHub repository, the official Twitter account was also suspended recently.

TorrentFreak contacted both Blizzard and Light’s Hope earlier this week for a comment on the situation. At the time of publication, we haven’t heard back.

Blizzard’s takedown notice comes just weeks after several organizations and gaming fans asked the US Copyright Office to make a DMCA circumvention exemption for “abandoned” games, including older versions of popular MMORPGs.

While it’s possible that such an exemption will be granted in the future, it’s unlikely to apply to the public at large. The more likely scenario is that it would permit libraries, researchers, and museums to operate servers for these abandoned games.


Progressing from tech to leadership

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2018/02/on-leadership.html

I’ve been a technical person all my life. I started doing vulnerability research in the late 1990s – and even today, when I’m not fiddling with CNC-machined robots or making furniture, I’m probably cobbling together a fuzzer or writing a book about browser protocols and APIs. In other words, I’m a geek at heart.

My career is a different story. Over the past two decades and change, I went from writing CGI scripts and setting up WAN routers for a chain of shopping malls, to doing pentests for institutional customers, to designing a series of network monitoring platforms and handling incident response for a big telco, to building and running the product security org for one of the largest companies in the world. It’s been an interesting ride – and now that I’m on the hook for the well-being of about 100 folks across more than a dozen subteams around the world, I’ve been thinking a bit about the lessons learned along the way.

Of course, I’m a bit hesitant to write such a post: sometimes, your efforts pan out not because of your approach, but despite it – and it’s possible to draw precisely the wrong conclusions from such anecdotes. Still, I’m very proud of the culture we’ve created and the caliber of folks working on our team. It happened through the work of quite a few talented tech leads and managers even before my time, but it did not happen by accident – so I figured that my observations may be useful for some, as long as they are taken with a grain of salt.

But first, let me start on a somewhat somber note: what nobody tells you is that one’s level on the leadership ladder tends to be inversely correlated with several measures of happiness. The reason is fairly simple: as you get more senior, a growing number of people will come to you expecting you to solve increasingly fuzzy and challenging problems – and you will no longer be patted on the back for doing so. This should not scare you away from such opportunities, but it definitely calls for a particular mindset: your motivation must come from within. Look beyond the fight-of-the-day; find satisfaction in seeing how far your teams have come over the years.

With that out of the way, here’s a collection of notes, loosely organized into three major themes.

The curse of a techie leader

Perhaps the most interesting observation I have is that for a person coming from a technical background, building a healthy team is first and foremost about the subtle art of letting go.

There is a natural urge to stay involved in any project you’ve started or helped improve; after all, it’s your baby: you’re familiar with all the nuts and bolts, and nobody else can do this job as well as you. But as your sphere of influence grows, this becomes a choke point: there are only so many things you could be doing at once. Just as importantly, the project-hoarding behavior robs more junior folks of the ability to take on new responsibilities and bring their own ideas to life. In other words, when done properly, delegation is not just about freeing up your plate; it’s also about empowerment and about signalling trust.

Of course, when you hand your project over to somebody else, the new owner will initially be slower and more clumsy than you; but if you pick the new leads wisely, give them the right tools and the right incentives, and don’t make them deathly afraid of messing up, they will soon excel at their new jobs – and be grateful for the opportunity.

A related affliction of many accomplished techies is the conviction that they know the answers to every question even tangentially related to their domain of expertise; that belief is coupled with a burning desire to have the last word in every debate. When practiced in moderation, this behavior is fine among peers – but for a leader, one of the most important skills to learn is knowing when to keep your mouth shut: people learn a lot better by experimenting and making small mistakes than by being schooled by their boss, and they often try to read into your passing remarks. Don’t run an authoritarian camp focused on total risk aversion or perfectly efficient resource management; just set reasonable boundaries and exit conditions for experiments so that they don’t spiral out of control – and be amazed by the results every now and then.

Death by planning

When nothing is on fire, it’s easy to get preoccupied with maintaining the status quo. If your current headcount or budget request lists all the same projects as last year’s, or if you ever find yourself ending an argument by deferring to a policy or a process document, it’s probably a sign that you’re getting complacent. In security, complacency usually ends in tears – and when it doesn’t, it leads to burnout or boredom.

In my experience, your goal should be to develop a cadre of managers or tech leads capable of coming up with clever ideas, prioritizing them among themselves, and seeing them to completion without your day-to-day involvement. In your spare time, make it your mission to challenge them to stay ahead of the curve. Ask your vendor security lead how they’d streamline their work if they had a 40% jump in the number of vendors but no extra headcount; ask your product security folks what’s the second line of defense or containment should your primary defenses fail. Help them get good ideas off the ground; set some mental success and failure criteria to be able to cut your losses if something does not pan out.

Of course, malfunctions happen even in the best-run teams; to spot trouble early on, instead of overzealous project tracking, I found it useful to encourage folks to run a data-driven org. I’d usually ask them to imagine that a brand new VP shows up in our office and, as his first order of business, asks “why do you have so many people here and how do I know they are doing the right things?”. Not everything in security can be quantified, but hard data can validate many of your assumptions – and will alert you to unseen issues early on.

When focusing on data, it’s important not to treat pie charts and spreadsheets as an end in themselves; if you run a security review process for your company, your CSAT scores are going to reach 100% if you just rubber-stamp every launch request within ten minutes of receiving it. Make sure you’re asking the right questions; instead of “how satisfied are you with our process”, try “is your product better as a consequence of talking to us?”

Whenever things are not progressing as expected, it is a natural instinct to fall back to micromanagement, but it seldom truly cures the ill. It’s probable that your team disagrees with your vision or its feasibility – and that you’re either not listening to their feedback, or they don’t think you’d care. It’s good to assume that most of your employees are as smart or smarter than you; barking your orders at them more loudly or more frequently does not lead anyplace good. It’s good to listen to them and either present new facts or work with them on a plan you can all get behind.

In some circumstances, all that’s needed is honesty about the business trade-offs, so that your team feels like your “partner in crime”, not a victim of circumstance. For example, we’d tell our folks that by not falling behind on basic, unglamorous work, we earn the trust of our VPs and SVPs – and that this translates into the independence and the resources we need to pursue more ambitious ideas without being told what to do; it’s how we game the system, so to speak. Oh: leading by example is a pretty powerful tool at your disposal, too.

The human factor

I’ve come to appreciate that hiring decent folks who can get along with others is far more important than trying to recruit conference-circuit superstars. In fact, hiring superstars is a decidedly hit-and-miss affair: while certainly not a rule, there is a proportion of folks who put the maintenance of their celebrity status ahead of job responsibilities or the well-being of their peers.

For teams, one of the most powerful demotivators is a sense of unfairness and disempowerment. This is where tech-originating leaders can shine, because their teams usually feel that their bosses understand and can evaluate the merits of the work. But it also means you need to be decisive and actually solve problems for them, rather than just letting them vent. You will need to make unpopular decisions every now and then; in such cases, I think it’s important to move quickly, rather than prolonging the uncertainty – but it’s also important to sincerely listen to concerns, explain your reasoning, and be frank about the risks and trade-offs.

Whenever you see a clash of personalities on your team, you probably need to respond swiftly and decisively; being right should not justify being a bully. If you don’t react to repeated scuffles, your best people will probably start looking for other opportunities: it’s draining to put up with constant pie fights, no matter if the pies are thrown straight at you or if you just need to duck one every now and then.

More broadly, personality differences seem to be a much better predictor of conflict than any technical aspects underpinning a debate. As a boss, you need to identify such differences early on and come up with creative solutions. Sometimes, all you need is to take some badly-delivered but valid feedback and have a conversation with the other person, asking some questions that can help them reach the same conclusions without feeling that their worldview is under attack. Other times, the only path forward is making sure that some folks simply don’t run into each other for a while.

Finally, dealing with low performers is a notoriously hard but important part of the game. Especially within large companies, there is always the temptation to just let it slide: sideline a struggling person and wait for them to either get over their issues or leave. But this sends an awful message to the rest of the team; for better or worse, fairness is important to most. Simply firing the low performers is seldom the best solution, though; successful recovery cases are what sets great managers apart from the average ones.

Oh, one more thought: people in leadership roles have their allegiance divided between the company and the people who depend on them. The obligation to the company is more formal, but the impact you have on your team is longer-lasting and more intimate. When the obligations to the employer and to your team collide in some way, make sure you can make the right call; it might be one of the most consequential decisions you’ll ever make.

Four days of STEAM at Bett 2018

Post Syndicated from Dan Fisher original https://www.raspberrypi.org/blog/bett-2018/

If you’re an educator from the UK, chances are you’ve heard of Bett. For everyone else: Bett stands for British Education Technology Tradeshow. It’s the El Dorado of edtech, where every street is adorned with interactive whiteboards, VR headsets, and new technologies for the classroom. Every year since 2014, the Raspberry Pi Foundation has been going to the event hosted in the ExCeL London to chat to thousands of lovely educators about our free programmes and resources.

Raspberry Pi Bett 2018

On a mission

Our setup this year consisted of four pods (imagine tables on steroids) in the STEAM village, and the mission of our highly trained team of education agents was to establish a new world record for Highest number of teachers talked to in a four-day period. I’m only half-joking.

Bett 2018 Raspberry Pi

Educators with a mission

Meeting educators

The best thing about being at Bett is meeting the educators who use our free content and training materials. It’s easy to get wrapped up in the everyday tasks of the office without stopping to ask: “Hey, have we asked our users what they want recently?” Events like Bett help us to connect with our audience, creating some lovely moments for both sides. We had plenty of Hello World authors visit us, including Gary Stager, co-author of Invent to Learn, a must-read for any computing educator. More than 700 people signed up for a digital subscription, we had numerous lovely conversations about our content and about ideas for new articles, and we met many new authors expressing an interest in writing for us in the future.

BETT 2018 Hello World Raspberry Pi

We also talked to lots of Raspberry Pi Certified Educators who we’d trained in our free Picademy programme — new dates in Belfast and Dublin now! — and who are now doing exciting and innovative things in their local areas. For example, Chris Snowden came to tell us about the great digital making outreach work he has been doing with the Eureka! museum in Yorkshire.

Bett 2018 Raspberry Pi

Raspberry Pi Certified Educator Chris Snowden

Digital making for kids

The other best thing about being at Bett is running workshops for young learners and seeing the delight on their faces when they accomplish something they believed to be impossible only five minutes ago. On the Saturday, we ran a massive Raspberry Jam/Code Club where over 250 children, parents, and curious onlookers got stuck into some of our computing activities. We were super happy to find out that we’d won the Bett Kids’ Choice Award for Best Hands-on Experience — a fantastic end to a busy four days. With Bett over for another year, our tired and happy ‘rebel alliance’ from across the Foundation still had the energy to pose for a group photo.

Bett 2018 Raspberry Pi

Celebrating our ‘Best Hands-on Experience’ award

More events

You can find out more about starting a Code Club here, and if you’re running a Jam, why not get involved with our global Raspberry Jam Big Birthday Weekend celebrations in March?

Raspberry Pi Big Birthday Weekend 2018. GIF with confetti and bopping JAM balloons

We’ll be at quite a few events in 2018, including the Big Bang Fair in March — do come and say hi.


2018 Picademy dates in the United States

Post Syndicated from Andrew Collins original https://www.raspberrypi.org/blog/new-picademy-2018-dates-in-united-states/

Cue the lights! Cue the music! Picademy is back for another year stateside. We’re excited to bring our free computer science and digital making professional development program for educators to four new cities this summer — you can apply right now.

Picademy USA Denver Raspberry Pi
Picademy USA Seattle Raspberry Pi
Picademy USA Jersey City Raspberry Pi
Raspberry Pi Picademy USA Atlanta

We’re thrilled to kick off our 2018 season! Before we get started, let’s take a look back at our community’s accomplishments in the 2017 Picademy North America season.

Picademy 2017 highlights

Last year, we partnered with four awesome venues to host eight Picademy events in the United States. At every event across the country, we met incredibly talented educators passionate about bringing digital making to their learners. Whether it was at Ann Arbor District Library’s makerspace, UC Irvine’s College of Engineering, or a creative community center in Boise, Idaho, we were truly inspired by all our Picademy attendees and were thrilled to welcome them to the Raspberry Pi Certified Educator community.

JWU Hosts Picademy

JWU Providence’s College of Engineering & Design recently partnered with the Raspberry Pi Foundation to host Picademy, a free training session designed to give educators the tools to teach computer skills with confidence and creativity. | http://www.jwu.edu

The 2017 Picademy cohorts were a diverse bunch with a lot of experience in their field. We welcomed more than 300 educators from 32 U.S. states and 10 countries. They were a mix of high school, middle school, and elementary classroom teachers, librarians, museum staff, university lecturers, and teacher trainers. More than half of our attendees were teaching computer science or technology already, and over 90% were specifically interested in incorporating physical computing into their work.

Picademy has a strong and lasting impact on educators. Over 80% of graduates said they felt confident using Raspberry Pi after attending, and 88% said they were now interested in leading a digital making event in their community. To showcase two wonderful examples of this success: Chantel Mason led a Raspberry Pi workshop for families and educators in her community in St. Louis, Missouri this fall, and Dean Palmer led a digital making station at the Computer Science for Rhode Island Summit in December.

Picademy 2018 dates

This year, we’re partnering with four new venues to host our Picademy season.


We’ll be at mindSpark Learning in Denver the first week in June, at Liberty Science Center in Jersey City later that month, at Georgia Tech in Atlanta in mid-July, and finally at the Living Computer Museum in Seattle the first week in August.


A big thank you to each of these venues for hosting us and supporting our free educator professional development program!

Ready to join us for Picademy 2018? Learn more and apply now: rpf.io/picademy2018.


After Section 702 Reauthorization

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/after_section_7.html

For over a decade, civil libertarians have been fighting government mass surveillance of innocent Americans over the Internet. We’ve just lost an important battle. On January 18, President Trump signed the renewal of Section 702, and domestic mass surveillance became effectively a permanent part of US law.

Section 702 was initially passed in 2008, as an amendment to the Foreign Intelligence Surveillance Act of 1978. As the title of that law says, it was billed as a way for the NSA to spy on non-Americans located outside the United States. It was supposed to be an efficiency and cost-saving measure: the NSA was already permitted to tap communications cables located outside the country, and it was already permitted to tap communications cables from one foreign country to another that passed through the United States. Section 702 allowed it to tap those cables from inside the United States, where it was easier. It also allowed the NSA to request surveillance data directly from Internet companies under a program called PRISM.

The problem is that this authority also gave the NSA the ability to collect foreign communications and data in a way that inherently and intentionally also swept up Americans’ communications as well, without a warrant. Other law enforcement agencies are allowed to ask the NSA to search those communications, give their contents to the FBI and other agencies and then lie about their origins in court.

In 1978, after Watergate had revealed the Nixon administration’s abuses of power, we erected a wall between intelligence and law enforcement that prevented precisely this kind of sharing of surveillance data under any authority less restrictive than the Fourth Amendment. Weakening that wall is incredibly dangerous, and the NSA should never have been given this authority in the first place.

Arguably, it never was. The NSA had been doing this type of surveillance illegally for years, something that was first made public in 2006. Section 702 was secretly used as a way to paper over that illegal collection, but nothing in the text of the later amendment gives the NSA this authority. We didn’t know that the NSA was using this law as the statutory basis for this surveillance until Edward Snowden showed us in 2013.

Civil libertarians have been battling this law in both Congress and the courts ever since it was proposed, and the NSA’s domestic surveillance activities even longer. What this most recent vote tells me is that we’ve lost that fight.

Section 702 was passed under George W. Bush in 2008, reauthorized under Barack Obama in 2012, and now reauthorized again under Trump. In all three cases, congressional support was bipartisan. It has survived multiple lawsuits by the Electronic Frontier Foundation, the ACLU, and others. It has survived the revelations by Snowden that it was being used far more extensively than Congress or the public believed, and numerous public reports of violations of the law. It has even survived Trump’s belief that he was being personally spied on by the intelligence community, as well as any congressional fears that Trump could abuse the authority in the coming years. And though this extension lasts only six years, it’s inconceivable to me that it will ever be repealed at this point.

So what do we do? If we can’t fight this particular statutory authority, where’s the new front on surveillance? There are, it turns out, reasonable modifications that target surveillance more generally, and not in terms of any particular statutory authority. We need to look at US surveillance law more generally.

First, we need to strengthen the minimization procedures to limit incidental collection. Since the Internet was developed, all the world’s communications travel around in a single global network. It’s impossible to collect only foreign communications, because they’re invariably mixed in with domestic communications. This is called “incidental” collection, but that’s a misleading name. It’s collected knowingly, and searched regularly. The intelligence community needs much stronger restrictions on which American communications channels it can access without a court order, and rules that require them to delete the data if they inadvertently collect it. More importantly, “collection” should be defined as the point at which the NSA takes a copy of the communications, and not later, when they search their databases.

Second, we need to limit how other law enforcement agencies can use incidentally collected information. Today, those agencies can query a database of incidental collection on Americans. The NSA can legally pass information to those other agencies. This has to stop. Data collected by the NSA under its foreign surveillance authority should not be used as a vehicle for domestic surveillance.

The most recent reauthorization modified this lightly, forcing the FBI to obtain a court order when querying the 702 data for a criminal investigation. There are still exceptions and loopholes, though.

Third, we need to end what’s called “parallel construction.” Today, when a law enforcement agency uses evidence found in this NSA database to arrest someone, it doesn’t have to disclose that fact in court. It can reconstruct the evidence in some other manner once it knows about it, and then pretend it learned of it that way. This right to lie to the judge and the defense is corrosive to liberty, and it must end.

Pressure to reform the NSA will probably first come from Europe. Already, European Union courts have pointed to warrantless NSA surveillance as a reason to keep Europeans’ data out of US hands. Right now, there is a fragile agreement between the EU and the United States — called “Privacy Shield” — that requires Americans to maintain certain safeguards for international data flows. NSA surveillance goes against that, and it’s only a matter of time before EU courts start ruling this way. That’ll have significant effects on both government and corporate surveillance of Europeans and, by extension, the entire world.

Further pressure will come from the increased surveillance coming from the Internet of Things. When your home, car, and body are awash in sensors, privacy from both governments and corporations will become increasingly important. Sooner or later, society will reach a tipping point where it’s all too much. When that happens, we’re going to see significant pushback against surveillance of all kinds. That’s when we’ll get new laws that revise all government authorities in this area: a clean sweep for a new world, one with new norms and new fears.

It’s possible that a federal court will rule on Section 702. Although there have been many lawsuits challenging the legality of what the NSA is doing and the constitutionality of the 702 program, no court has ever ruled on those questions. The Bush and Obama administrations successfully argued that defendants don’t have legal standing to sue. That is, they have no right to sue because they don’t know they’re being targeted. If any of the lawsuits can get past that, things might change dramatically.

Meanwhile, much of this is the responsibility of the tech sector. This problem exists primarily because Internet companies collect and retain so much personal data and allow it to be sent across the network with minimal security. Since the government has abdicated its responsibility to protect our privacy and security, these companies need to step up: Minimize data collection. Don’t save data longer than absolutely necessary. Encrypt what has to be saved. Well-designed Internet services will safeguard users, regardless of government surveillance authority.

For the rest of us concerned about this, it’s important not to give up hope. Everything we do to keep the issue in the public eye — and not just when the authority comes up for reauthorization again in 2024 — hastens the day when we will reaffirm our rights to privacy in the digital age.

This essay previously appeared in the Washington Post.

Raspberry Crusoe: how a Pi got lost at sea

Post Syndicated from James Robinson original https://www.raspberrypi.org/blog/lost-high-altitude-balloon/

The tale of the little HAB that could and its three-month journey from Portslade Aldridge Community Academy in the UK to the coast of Denmark.

PACA Computing on Twitter

Where did it land ???? #skypaca #skycademy @pacauk #RaspberryPi

High-altitude ballooning

Some of you may be familiar with Raspberry Pi being used as the flight computer, or tracker, of high-altitude balloon (HAB) payloads. For those who aren’t, high-altitude ballooning is a relatively simple activity (at least in principle) where a tracker is attached to a large weather balloon which is then released into the atmosphere. While the HAB ascends, the tracker takes pictures and data readings the whole time. Eventually (around 30km up) the balloon bursts, leaving the payload free to descend and be recovered. For a better explanation, I’m handing over to the students of UTC Oxfordshire:

Pi in the Sky | UTC Oxfordshire

On Tuesday 2nd May, students launched a Raspberry Pi computer 35,000 metres into the stratosphere as part of an Employer-Led project at UTC Oxfordshire, set by the Raspberry Pi Foundation. The project involved engineering, scientific and communication/publicity skills being developed to create the payload and code to interpret experiments set by the science team.
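For readers wondering what the tracker half of such a payload actually does, here is a minimal, illustrative Python sketch. It is not the software used on these flights: the serial device path, baud rate, NMEA parsing, and photo interval are all assumptions, and real trackers also transmit telemetry over radio so the chase team can follow the payload, which is omitted entirely here.

```python
# A minimal, illustrative HAB tracker loop -- NOT the flight software used
# on these launches. Device path, baud rate, and intervals are assumptions.
import csv
import time

import serial               # pyserial, reading a GPS module's UART output
from picamera import PiCamera

GPS_PORT = "/dev/ttyAMA0"   # assumed wiring of the GPS module
PHOTO_INTERVAL = 30         # seconds between photos

def parse_gpgga(sentence):
    """Pull UTC time, latitude, longitude and altitude out of a $GPGGA sentence."""
    fields = sentence.split(",")
    if len(fields) < 10 or not fields[9]:
        return None
    return {"utc": fields[1], "lat": fields[2] + fields[3],
            "lon": fields[4] + fields[5], "alt_m": float(fields[9])}

def main():
    camera = PiCamera()
    gps = serial.Serial(GPS_PORT, 9600, timeout=1)
    last_photo = 0.0
    with open("flight_log.csv", "w", newline="") as log:
        writer = csv.writer(log)
        writer.writerow(["utc", "lat", "lon", "alt_m"])
        while True:
            line = gps.readline().decode("ascii", errors="replace").strip()
            if line.startswith("$GPGGA"):
                fix = parse_gpgga(line)
                if fix:
                    writer.writerow([fix["utc"], fix["lat"], fix["lon"], fix["alt_m"]])
                    log.flush()              # keep the log intact if power is lost
            now = time.time()
            if now - last_photo > PHOTO_INTERVAL:
                camera.capture("photo_%d.jpg" % int(now))
                last_photo = now

if __name__ == "__main__":
    main()
```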

Skycademy

Over the past few years, we’ve seen schools and their students explore the possibilities that high-altitude ballooning offers, and back in 2015 and 2016 we ran Skycademy. The programme was simple enough: get a bunch of educators together in the same space, show them how to launch a balloon flight, and then send them back to their students to try and repeat what they’ve learned. Since the first Skycademy event, a number of participants have carried out launches, and we are extremely proud of each and every one of them.

The case of the vanishing PACA HAB

Not every launch has been a 100% success though. There are many things that can and do go wrong during HAB flights, and watching each launch from the comfort of our office can be a nerve-wracking experience. We had such an experience back in July 2017, during the launch performed by Skycademy graduate and Raspberry Pi Certified Educator Dave Hartley and his students from Portslade Aldridge Community Academy (PACA).

Dave and his team had been working on their payload for some time, and were awaiting suitable weather conditions. Early one Wednesday in July, everything aligned: they had a narrow window of good weather and so set their launch plan in motion. Soon they had assembled the payload in the school grounds and all was ready for the launch.

Dave Hartley on Twitter

Launch day! @pacauk #skycademy #skypaca #raspberrypi

Just before 11:00, they’d completed their final checks and released their payload into the atmosphere. Over the course of 64 minutes, the HAB steadily rose to an altitude of 25,647 m, where it captured some amazing pictures before the balloon burst and a rapid descent began.

Portslade Aldridge Community Academy Skycademy Raspberry Pi

Soon after the payload began to descend, the team noticed something worrying: their predicted descent path took the payload dangerously far south — it was threatening to land in the sea. As the payload continued to lose altitude, their calculated results kept shifting, alternately predicting a landing on the ground or out to sea. Eventually it became clear that the payload would narrowly overshoot the land, and it finally landed about 2 km out to sea.

Portslade Aldridge Community Academy Skycademy Raspberry Pi High Altitude Ballooning

The path of the balloon
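Why do those predictions swing back and forth? The HAB community’s actual landing predictors work with wind data at many altitude layers, but even a crude back-of-the-envelope estimate, sketched below in Python with purely illustrative numbers, shows how sensitive the landing spot is to small changes in the assumed descent rate and wind.

```python
# Toy landing-drift estimate -- purely illustrative. Real HAB predictors use
# wind profiles for many altitude layers; here we assume one average wind
# speed and a constant descent rate, just to show the sensitivity.

BURST_ALTITUDE_M = 25_000   # roughly where this flight's balloon burst

def drift_km(descent_rate_m_s, avg_wind_m_s):
    """Horizontal drift during descent, in kilometres."""
    descent_time_s = BURST_ALTITUDE_M / descent_rate_m_s
    return avg_wind_m_s * descent_time_s / 1000

# A 1 m/s change in descent rate moves the landing spot by several kilometres,
# which is why a live prediction can flip between land and sea.
for rate in (5.0, 6.0, 7.0):
    print(f"descent at {rate} m/s in a 10 m/s wind -> drift ~{drift_km(rate, 10.0):.0f} km")
```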

It’s not uncommon for a HAB payload to get lost. There are many ways this can happen, particularly in a narrow country with a prevailing westerly wind like the UK. Payloads can get lost at sea, land somewhere inaccessible, or simply run out of power before they are located and retrieved. So normally, this would be the end of the story for the PACA students — even if the team had had a speedboat to hand, their payload was surely lost for good.

A message from Denmark

However, this is not the end of our story! A couple of months later, I arrived at work and saw this tweet from a colleague:

Raspberry Pi on Twitter

Anyone lost a Raspberry Pi HAB? Someone found this one on a beach in south western Denmark yesterday #UKHAS https://t.co/7lBzFiemgr

Good Samaritan Henning Hansen had found a Raspberry Pi washed up on a remote beach in Denmark! While walking a stretch of coast to collect plastic debris for an environmental monitoring project, he came across something unusual near the shore at 55°04’53.0″N and 8°38’46.9″E.

This of course piqued my interest, and we began to investigate the image he had shared on Facebook.

Portslade Aldridge Community Academy Skycademy Raspberry Pi High Altitude Ballooning

Inspecting the photo closely, we noticed a small asset label — the kind of label that, over a year earlier, we’d stuck to each and every bit of Skycademy field kit. We excitedly claimed the kit on behalf of Dave and his students, and contacted Henning to arrange the recovery of the payload. He told us it must have been carried ashore with the tide some time between 21 and 27 September, and probably on 21 September, since that day had the highest tide over the period. This meant the payload must have spent over two months at sea!

From the photo we could tell that the Raspberry Pi had suffered significant corrosion, having been exposed to salt water for so long, and so we felt pessimistic about the chances that there would be any recoverable data on it. However, Henning said that he’d been able to read some files from the FAT partition of the SD card, so all hope was not lost.

After a few weeks and a number of complications around dispatch and delivery (thank you, Henning, for your infinite patience!), Helen collected the HAB from a local Post Office.

Portslade Aldridge Community Academy Skycademy Raspberry Pi High Altitude Ballooning

SUCCESS!

We set about trying to read the data from the SD card, and eventually became disheartened: despite several attempts, we were unable to read its contents.

In a last-ditch effort, we gave the SD card to Jonathan, one of our engineers, who initially laughed at the prospect of recovering any data from it. But ten minutes later, he returned with news of success!

Portslade Aldridge Community Academy Skycademy Raspberry Pi

Since then, we’ve been able to reunite the payload with the PACA launch team, and the students sent us the perfect message to end this story:

Portslade Aldridge Community Academy Skycademy Raspberry Pi High Altitude Ballooning
