Tag Archives: wd

Musk's Pravda

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/06/02/pravda/

Elon Musk is sharing on Twitter the idea of creating a new media outlet – a "crowdsourced site" called Pravda – which, according to Musk, would allow the public "to rate the truth of any article and the credibility of every journalist, editor and publication." Reporters have even found a filing to register a company called Pravda Corp.

Many are drawing a parallel between Musk and Trump – both label negative news about themselves as fake and take a dim view of the news media as a whole.

Pravda is a very, very bad idea, says a commentary in The Verge: Musk, like Trump, may not like the way he is portrayed – whether as a victim or as a failure – but you cannot legislate credibility, nor put it to a vote, and expect any result other than dystopia. An uninformed society is a society that can be misled into believing anything – that the Earth is flat, that man never landed on the Moon – and that is a very dangerous thing in the hands of a motivated actor. Combined with the echo-chamber effect of the platforms, it can undermine trust in any source that dares to come into conflict with that actor.

The big issue is not Musk; the big issue is the manipulation of public opinion – and the possible countermeasures.

A public vote on what is true is one step away from a public vote on what is the right thing to do – and that explains the results of some elections and referendums.

Japan’s Directorate for Signals Intelligence

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/japans_director.html

The Intercept has a long article on Japan’s equivalent of the NSA: the Directorate for Signals Intelligence. Interesting, but nothing really surprising.

The directorate has a history that dates back to the 1950s; its role is to eavesdrop on communications. But its operations remain so highly classified that the Japanese government has disclosed little about its work, even the location of its headquarters. Most Japanese officials, except for a select few of the prime minister's inner circle, are kept in the dark about the directorate's activities, which are regulated by a limited legal framework and not subject to any independent oversight.

Now, a new investigation by the Japanese broadcaster NHK — produced in collaboration with The Intercept — reveals for the first time details about the inner workings of Japan’s opaque spy community. Based on classified documents and interviews with current and former officials familiar with the agency’s intelligence work, the investigation shines light on a previously undisclosed internet surveillance program and a spy hub in the south of Japan that is used to monitor phone calls and emails passing across communications satellites.

The article includes some new documents from the Snowden archive.

Brutus 2: the gaming PC case of your dreams

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/brutus-2-gaming-pc-case/

Attention, case modders: take a look at the Brutus 2, an extremely snazzy computer case with a partly transparent, animated side panel that’s powered by a Pi. Daniel Otto and Carsten Lehman have a current crowdfunder for the case; their video is in German, but the looks of the build speak for themselves. There are some truly gorgeous effects here.

der BRUTUS 2 by 3nb Gaming

Pre-orders are open now at https://www.startnext.com/brutus2. More about us at: https://3nb.de https://www.facebook.com/3nb.de https://www.instagram.com/3nb.de About 3nb: a GbR based in Leipzig, founded in 2017; our backgrounds are in electronics and computer science; our first product was the Brutus One, a gaming PC with a transparent display in its side panel. Brutus 2 in brief: a branded computer case for the gaming and case-modding scene; its special feature is an animated side window driven by a Raspberry Pi. Advantages of our case: it is available on its own rather than only as part of a complete PC, it costs no graphics-card performance thanks to the integrated Raspberry Pi, and text and graphics are rendered more clearly thanks to the blurred background.

What’s case modding?

Case modding just means modifying your computer or gaming console’s case, and it’s very popular in the gaming community. Some mods are functional, while others improve the way the case looks. Lots of dedicated gamers don’t only want a powerful computer, they also want it to look amazing — at home, or at LAN parties and games tournaments.

The Brutus 2 case

The Brutus 2 case is made by Daniel and Carsten’s startup, 3nb electronics, and it’s a product that is officially Powered by Raspberry Pi. Its standout feature is the semi-transparent TFT screen, which lets you play any video clip you choose while keeping your gaming hardware on display. It looks incredibly cool. All the graphics for the case’s screen are handled by a Raspberry Pi, so it doesn’t use any of your main PC’s GPU power and your gaming won’t suffer.
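3nb's Pi-side software isn't public, so purely as an illustration, here is a minimal Python sketch of the general idea: a Pi looping a chosen animation clip full-screen on the side-panel display using omxplayer. This is not the team's actual code, and the clip path is made up.

```python
#!/usr/bin/env python3
"""Minimal sketch: loop a video clip on a Pi-driven side-panel display.

Not 3nb's software; it only illustrates a Pi playing an animation
full-screen. Assumes omxplayer is installed (it ships with Raspbian)
and that the panel is the Pi's primary display.
"""
import subprocess

CLIP = "/home/pi/animations/plasma.mp4"  # hypothetical path to the chosen clip


def play_forever(clip: str) -> None:
    # --loop repeats the clip indefinitely; --no-osd hides on-screen messages.
    subprocess.run(["omxplayer", "--no-osd", "--loop", "-o", "hdmi", clip], check=True)


if __name__ == "__main__":
    play_forever(CLIP)
```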

Brutus 2 PC case powered by Raspberry Pi

The software

To use Brutus 2, you just need to run a small desktop application on your PC to choose what you want to display on the case. A number of neat animations are included, and you can upload your own if you want.

So far, the app only runs on Windows, but 3nb electronics are planning to make the code open-source, so you can modify it for other operating systems, or to display other file types. This is true to the spirit of the case modding and Raspberry Pi communities, who love adapting, retrofitting, and overhauling projects and code to fit their needs.

Brutus 2 PC case powered by Raspberry Pi

Daniel and Carsten say that one of their campaign’s stretch goals is to implement more functionality in the Brutus 2 app. So in the future, the case could also show things like CPU temperature, gaming stats, and in-game messages. Of course, there’s nothing stopping you from integrating features like that yourself.

If you have any questions about the case, you can post them directly to Daniel and Carsten here.

The crowdfunding campaign

The Brutus 2 campaign on Startnext is currently halfway to its first funding goal of €10,000, with over three weeks to go until it closes. If you're quick, you may still be able to snatch one of the early-bird offers. And if your whole guild NEEDS this, that's OK — there are discounts for bulk orders.

The post Brutus 2: the gaming PC case of your dreams appeared first on Raspberry Pi.

Supply-Chain Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/supply-chain_se.html

Earlier this month, the Pentagon stopped selling phones made by the Chinese companies ZTE and Huawei on military bases because they might be used to spy on their users.

It’s a legitimate fear, and perhaps a prudent action. But it’s just one instance of the much larger issue of securing our supply chains.

All of our computerized systems are deeply international, and we have no choice but to trust the companies and governments that touch those systems. And while we can ban a few specific products, services or companies, no country can isolate itself from potential foreign interference.

In this specific case, the Pentagon is concerned that the Chinese government demanded that ZTE and Huawei add “backdoors” to their phones that could be surreptitiously turned on by government spies or cause them to fail during some future political conflict. This tampering is possible because the software in these phones is incredibly complex. It’s relatively easy for programmers to hide these capabilities, and correspondingly difficult to detect them.

This isn’t the first time the United States has taken action against foreign software suspected to contain hidden features that can be used against us. Last December, President Trump signed into law a bill banning software from the Russian company Kaspersky from being used within the US government. In 2012, the focus was on Chinese-made Internet routers. Then, the House Intelligence Committee concluded: “Based on available classified and unclassified information, Huawei and ZTE cannot be trusted to be free of foreign state influence and thus pose a security threat to the United States and to our systems.”

Nor is the United States the only country worried about these threats. In 2014, China reportedly banned antivirus products from both Kaspersky and the US company Symantec, based on similar fears. In 2017, the Indian government identified 42 smartphone apps that China subverted. Back in 1997, the Israeli company Check Point was dogged by rumors that its government had added backdoors into its products; other tech companies from that country have been suspected of the same thing. Even al-Qaeda was concerned; ten years ago, a sympathizer released the encryption software Mujahedeen Secrets, claiming it was free of Western influence and backdoors. If a country doesn't trust another country, then it can't trust that country's computer products.

But this trust isn’t limited to the country where the company is based. We have to trust the country where the software is written — and the countries where all the components are manufactured. In 2016, researchers discovered that many different models of cheap Android phones were sending information back to China. The phones might be American-made, but the software was from China. In 2016, researchers demonstrated an even more devious technique, where a backdoor could be added at the computer chip level in the factory that made the chips, without the knowledge of, and undetectable by, the engineers who designed the chips in the first place. Pretty much every US technology company manufactures its hardware in countries such as Malaysia, Indonesia, China and Taiwan.

We also have to trust the programmers. Today’s large software programs are written by teams of hundreds of programmers scattered around the globe. Backdoors, put there by we-have-no-idea-who, have been discovered in Juniper firewalls and D-Link routers, both of which are US companies. In 2003, someone almost slipped a very clever backdoor into Linux. Think of how many countries’ citizens are writing software for Apple or Microsoft or Google.

We can go even farther down the rabbit hole. We have to trust the distribution systems for our hardware and software. Documents disclosed by Edward Snowden showed the National Security Agency installing backdoors into Cisco routers being shipped to the Syrian telephone company. There are fake apps in the Google Play store that eavesdrop on you. Russian hackers subverted the update mechanism of a popular brand of Ukrainian accounting software to spread the NotPetya malware.

In 2017, researchers demonstrated that a smartphone can be subverted by installing a malicious replacement screen.

I could go on. Supply-chain security is an incredibly complex problem. US-only design and manufacturing isn’t an option; the tech world is far too internationally interdependent for that. We can’t trust anyone, yet we have no choice but to trust everyone. Our phones, computers, software and cloud systems are touched by citizens of dozens of different countries, any one of whom could subvert them at the demand of their government. And just as Russia is penetrating the US power grid so they have that capability in the event of hostilities, many countries are almost certainly doing the same thing at the consumer level.

We don’t know whether the risk of Huawei and ZTE equipment is great enough to warrant the ban. We don’t know what classified intelligence the United States has, and what it implies. But we do know that this is just a minor fix for a much larger problem. It’s doubtful that this ban will have any real effect. Members of the military, and everyone else, can still buy the phones. They just can’t buy them on US military bases. And while the US might block the occasional merger or acquisition, or ban the occasional hardware or software product, we’re largely ignoring that larger issue. Solving it borders on somewhere between incredibly expensive and realistically impossible.

Perhaps someday, global norms and international treaties will render this sort of device-level tampering off-limits. But until then, all we can do is hope that this particular arms race doesn’t get too far out of control.

This essay previously appeared in the Washington Post.

Converting a Kodak Box Brownie into a digital camera

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/kodak-brownie-camera/

In this article from The MagPi issue 69, David Crookes explains how Daniel Berrangé took an old Kodak Brownie from the 1950s and turned it into a quirky digital camera. Get your copy of The MagPi magazine in stores now, or download it as a free PDF here.

Daniel Berrangé Kodak Brownie Raspberry Pi Camera

The Kodak Box Brownie

When Kodak unveiled its Box Brownie in 1900, it did so with the slogan ‘You press the button, we do the rest.’ The words referred to the ease-of-use of what was the world’s first mass-produced camera. But it could equally apply to Daniel Berrangé’s philosophy when modifying it for the 21st century. “I wanted to use the Box Brownie’s shutter button to trigger image capture, and make it simple to use,” he tells us.

Daniel Berrangé Kodak Brownie Raspberry Pi Camera

Daniel’s project grew from a previous effort in which he placed a pinhole webcam inside a ladies’ powder compact case. “The Box Brownie project is essentially a repeat of that design but with a normal lens instead of a pinhole, a real camera case, and improved software to enable a shutter button. Ideally, it would look unchanged from when it was shooting film.”

Webcam woes

At first, Daniel looked for a cheap webcam, intending to spend no more than the price of a Pi Zero. This didn’t work out too well. “The low-light performance of the webcam was not sufficient to make a pinhole camera so I just decided to make a ‘normal’ digital camera instead,” he reveals.
To that end, he began removing some internal components from the Box Brownie. “With the original lens removed, the task was to position the webcam’s electronic light sensor (the CCD) and lens as close to the front of the camera as possible,” Daniel explains. “In the end, the CCD was about 15 mm away from the front aperture of the camera, giving a field of view that was approximately the same as the unmodified camera would achieve.”

Daniel Berrangé Kodak Brownie Raspberry Pi Camera
Daniel Berrangé Kodak Brownie Raspberry Pi Camera
Daniel Berrangé Kodak Brownie Raspberry Pi Camera

It was then time for him to insert the Raspberry Pi, upon which was a custom ‘init’ binary that loads a couple of kernel modules to run the webcam, mount the microSD file system, and launch the application binary. Here, Daniel found he was in luck. “I’d noticed that the size of a 620 film spool (63 mm) was effectively the same as the width of a Raspberry Pi Zero (65 mm), so it could be held in place between the film spool grips,” he recalls. “It was almost as if it was designed with this in mind.”

Shutter success

In order to operate the camera, Daniel had to work on the shutter button. “The Box Brownie’s shutter button is entirely mechanical, driven by a handful of levers and springs,” Daniel explains. “First, the Pi Zero needs to know when the shutter button is pressed and second, the physical shutter has to be open while the webcam is capturing the image. Rather than try to synchronise image capture with the fraction of a second that the physical shutter is open, a bit of electrical tape was used on the shutter mechanism to keep it permanently open.”

Daniel Berrangé Kodak Brownie Raspberry Pi Camera

Daniel made use of the Pi Zero’s GPIO pins to detect the pressing of the shutter button, reading whether each pin sits at logic low or logic high. “My thought was that I could set a GPIO pin high to 5 V, and then use the action of the shutter button to short it to ground, and detect this change in level from software.”

This initially involved using a pair of bare wires and some conductive paint, although the paint was later replaced by a piece of tinfoil. But with the button pressed, the GPIO pin level goes to zero and the device constantly captures still images until the button is released. All that’s left to do is smile and take the perfect snap.
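The article doesn't reproduce Daniel's code, so here is a rough sketch of the polling approach he describes; it is not his actual program. The BCM pin number is a guess, capture is delegated to fswebcam, and the pin uses the Pi's internal pull-up (the Pi's GPIO operates at 3.3 V).

```python
#!/usr/bin/env python3
"""Hedged sketch of the shutter-button logic described above.

Not Daniel's code. Assumes: BCM pin 17 is wired to the shutter contact,
the webcam is /dev/video0, and fswebcam is installed.
"""
import subprocess
import time

import RPi.GPIO as GPIO

SHUTTER_PIN = 17  # hypothetical BCM pin wired to the shutter contact

GPIO.setmode(GPIO.BCM)
# Internal pull-up keeps the pin high; pressing the shutter shorts it to ground.
GPIO.setup(SHUTTER_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)


def capture_still(index: int) -> None:
    # Grab one frame from the webcam and save it to the SD card.
    subprocess.run(
        ["fswebcam", "-d", "/dev/video0", "-r", "1280x720",
         f"/home/pi/photos/img_{index:05d}.jpg"],
        check=False,
    )


try:
    shot = 0
    while True:
        if GPIO.input(SHUTTER_PIN) == GPIO.LOW:  # button pressed (shorted to ground)
            capture_still(shot)                  # keep capturing while held
            shot += 1
        else:
            time.sleep(0.05)                     # idle poll
finally:
    GPIO.cleanup()
```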

The post Converting a Kodak Box Brownie into a digital camera appeared first on Raspberry Pi.

Get Started with Blockchain Using the new AWS Blockchain Templates

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/get-started-with-blockchain-using-the-new-aws-blockchain-templates/

Many of today’s discussions around blockchain technology remind me of the classic Shimmer Floor Wax skit. According to Dan Aykroyd, Shimmer is a dessert topping. Gilda Radner claims that it is a floor wax, and Chevy Chase settles the debate and reveals that it actually is both! Some of the people that I talk to see blockchains as the foundation of a new monetary system and a way to facilitate international payments. Others see blockchains as a distributed ledger and immutable data source that can be applied to logistics, supply chain, land registration, crowdfunding, and other use cases. Either way, it is clear that there are a lot of intriguing possibilities and we are working to help our customers use this technology more effectively.

We are launching AWS Blockchain Templates today. These templates will let you launch an Ethereum (either public or private) or Hyperledger Fabric (private) network in a matter of minutes and with just a few clicks. The templates create and configure all of the AWS resources needed to get you going in a robust and scalable fashion.

Launching a Private Ethereum Network
The Ethereum template offers two launch options. The ecs option creates an Amazon ECS cluster within a Virtual Private Cloud (VPC) and launches a set of Docker images in the cluster. The docker-local option also runs within a VPC, and launches the Docker images on EC2 instances. The template supports Ethereum mining, the EthStats and EthExplorer status pages, and a set of nodes that implement and respond to the Ethereum RPC protocol. Both options create and make use of a DynamoDB table for service discovery, along with Application Load Balancers for the status pages.

Here are the AWS Blockchain Templates for Ethereum:

I start by opening the CloudFormation Console in the desired region and clicking Create Stack:

I select Specify an Amazon S3 template URL, enter the URL of the template for the region, and click Next:

I give my stack a name:

Next, I enter the first set of parameters, including the network ID for the genesis block. I’ll stick with the default values for now:

I will also use the default values for the remaining network parameters:

Moving right along, I choose the container orchestration platform (ecs or docker-local, as I explained earlier) and the EC2 instance type for the container nodes:

Next, I choose my VPC and the subnets for the Ethereum network and the Application Load Balancer:

I configure my keypair, EC2 security group, IAM role, and instance profile ARN (full information on the required permissions can be found in the documentation):

The Instance Profile ARN can be found on the summary page for the role:

I confirm that I want to deploy EthStats and EthExplorer, choose the tag and version for the nested CloudFormation templates that are used by this one, and click Next to proceed:

On the next page I specify a tag for the resources that the stack will create, leave the other options as-is, and click Next:

I review all of the parameters and options, acknowledge that the stack might create IAM resources, and click Create to build my network:
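If you would rather script the launch than click through the console, the same stack can in principle be created through the CloudFormation API. Here is a rough boto3 sketch; the template URL and parameter keys below are placeholders, since the real parameter names are defined by the template for your region (check the documentation).

```python
"""Hedged sketch: launching the Ethereum template from code instead of the console.

The TemplateURL and ParameterKey values are placeholders, not the real ones;
this only shows the general CloudFormation call.
"""
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="JeffNet",
    TemplateURL="https://s3.amazonaws.com/EXAMPLE-BUCKET/ethereum-template.yaml",  # placeholder
    Parameters=[
        # Placeholder keys -- use the parameter names declared in the template itself.
        {"ParameterKey": "NetworkID", "ParameterValue": "1234"},
        {"ParameterKey": "ContainerPlatform", "ParameterValue": "ecs"},
        {"ParameterKey": "EC2InstanceType", "ParameterValue": "t2.medium"},
    ],
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],  # the stack may create IAM resources
)
```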

The template makes use of three nested templates:

After all of the stacks have been created (mine took about 5 minutes), I can select JeffNet and click the Outputs tab to discover the links to EthStats and EthExplorer:

Here’s my EthStats:

And my EthExplorer:

If I am writing apps that make use of my private network to store and process smart contracts, I would use the EthJsonRpcUrl.
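For example, a minimal web3.py sketch pointed at that endpoint might look like the following; the URL is a placeholder standing in for the EthJsonRpcUrl value shown on the Outputs tab.

```python
"""Hedged sketch: pointing a client at the stack's EthJsonRpcUrl output.

Assumes the web3.py package; the URL below is a placeholder.
"""
from web3 import Web3

ETH_JSON_RPC_URL = "http://internal-example-alb-123456.us-east-1.elb.amazonaws.com:8545"  # placeholder

w3 = Web3(Web3.HTTPProvider(ETH_JSON_RPC_URL))

# Older web3.py releases use w3.isConnected() and w3.eth.blockNumber instead.
print("connected:", w3.is_connected())      # sanity check against the private network
print("latest block:", w3.eth.block_number) # query the chain through the RPC endpoint
```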

Stay Tuned
My colleagues are eager to get your feedback on these new templates and plan to add new versions of the frameworks as they become available.

Jeff;

 

Artefacts in the classroom with Museum in a Box

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/museum-in-a-box/

Museum in a Box bridges the gap between museums and schools by creating a more hands-on approach to conservation education through 3D printing and digital making.

Artefacts in the classroom with Museum in a Box || Raspberry Pi Stories


Fantastic collections and where to find them

Large, impressive statues are truly a sight to be seen. Take for example the 2.4m Hoa Hakananai’a at the British Museum. Its tall stature looms over you as you read its plaque to learn of the statue’s journey from Easter Island to the UK under the care of Captain Cook in 1774, and you can’t help but wonder at how it made it here in one piece.

Hoa Hakananai’a Captain Cook British Museum
Hoa Hakananai’a Captain Cook British Museum

But unless you live near a big city where museums are plentiful, you’re unlikely to see the likes of Hoa Hakananai’a in person. Instead, you have to content yourself with online photos or videos of world-famous artefacts.

And that only accounts for the objects that are on display: conservators estimate that only approximately 5 to 10% of museums’ overall collections are actually on show across the globe. The rest is boxed up in storage, inaccessible to the public due to risk of damage, or simply due to lack of space.

Museum in a Box

Museum in a Box aims to “put museum collections and expert knowledge into your hand, wherever you are in the world,” through modern maker practices such as 3D printing and digital making. With the help of the ‘Scan the World’ movement, an “ambitious initiative whose mission is to archive objects of cultural significance using 3D scanning technologies”, the Museum in a Box team has been able to print small, handheld replicas of some of the world’s most recognisable statues and sculptures.

Museum in a Box Raspberry Pi

Each 3D print gets NFC tags so it can initiate audio playback from a Raspberry Pi that sits snugly within the laser-cut housing of a ‘brain box’. Thus the print can talk directly to us through the magic of wireless technology, replacing the dense, dry text of a museum plaque with engaging speech.
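The team's code isn't included in this post, but the pattern is straightforward to sketch. The snippet below is only an illustration, not Museum in a Box's software: it assumes a USB NFC reader supported by the nfcpy library and omxplayer for audio playback, and the tag UIDs and file paths are invented.

```python
#!/usr/bin/env python3
"""Hedged sketch of the 'brain box' pattern described above: read an NFC
tag's UID and play the audio clip mapped to it.

Assumes a USB NFC reader supported by nfcpy and omxplayer for playback;
tag UIDs and file paths are made up for illustration.
"""
import subprocess

import nfc  # pip install nfcpy

# Hypothetical mapping from tag UID (hex) to the narration for that 3D print.
AUDIO_FOR_TAG = {
    "04a22bb2c45e80": "/home/pi/audio/hoa_hakananaia.mp3",
    "04f91a6ad25f81": "/home/pi/audio/rosetta_stone.mp3",
}


def on_connect(tag) -> bool:
    uid = tag.identifier.hex()
    clip = AUDIO_FOR_TAG.get(uid)
    if clip:
        # Play through the box's speaker (3.5 mm jack).
        subprocess.run(["omxplayer", "-o", "local", clip], check=False)
    return False  # returning False ends this connect() call so we can wait for the next tag


if __name__ == "__main__":
    clf = nfc.ContactlessFrontend("usb")
    try:
        while True:
            clf.connect(rdwr={"on-connect": on_connect})  # blocks until a tag is presented
    finally:
        clf.close()
```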

Museum in a Box Raspberry Pi

The Museum in a Box team headed by CEO George Oates (featured in the video above) makes use of these 3D-printed figures alongside original artefacts, postcards, and more to bridge the gap between large, crowded, distant museums and local schools. Modeled after the museum handling collections that used to be sent to schools, Museum in a Box is a cheaper, more accessible alternative. Moreover, it not only allows for hands-on learning, but also encourages children to get directly involved by hacking its technology! With NFC technology readily available to the public, students can curate their own collections about their local area, record their own messages, and send their own box-sized museums on to schools in other towns or countries. In this way, Museum in a Box enables students to explore, and expand the reach of, their own histories.

Moving forward

With the technology perfected and interest in the project ever-growing, Museum in a Box has a busy year ahead. Supporting the new ‘Unstacked’ learning initiative, the team will soon be delivering ten boxes to the Smithsonian Libraries. The team has curated two collections specifically for this: an exploration into Asia-Pacific America experiences of migration to the USA throughout the 20th century, and a look into the history of science.

Smithsonian Library Museum in a Box Raspberry Pi

The team will also be making a box for the British Museum to support their Iraq Scheme initiative, and another box will be heading to the V&A to support their See Red programme. While primarily installed in the Lansbury Micro Museum, the box will also take to the road to visit the local Spotlight high school.

Museum in a Box at Raspberry Fields

Lastly, by far the most exciting thing the Museum in a Box team will be doing this year — in our opinion at least — is showcasing at Raspberry Fields! This is our brand-new festival of digital making that’s taking place on 30 June and 1 July 2018 here in Cambridge, UK. Find more information about it and get your ticket here.

The post Artefacts in the classroom with Museum in a Box appeared first on Raspberry Pi.

American Public Television Embraces the Cloud — And the Future

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/american-public-television-embraces-the-cloud-and-the-future/

American Public Television website

American Public Television was like many organizations that have been around for a while. They were entrenched using an older technology — in their case, tape storage and distribution — that once met their needs but was limiting their productivity and preventing them from effectively collaborating with their many media partners. APT’s VP of Technology knew that he needed to move into the future and embrace cloud storage to keep APT ahead of the game.
Since 1961, American Public Television (APT) has been a leading distributor of groundbreaking, high-quality, top-rated programming to the nation’s public television stations. Gerry Field is the Vice President of Technology at APT and is responsible for delivering their extensive program catalog to 350+ public television stations nationwide.

In the time since Gerry  joined APT in 2007, the industry has been in digital overdrive. During that time APT has continued to acquire and distribute the best in public television programming to their technically diverse subscribers.

This created two challenges for Gerry. First, new technology and format proliferation were driving dramatic increases in digital storage. Second, many of APT’s subscribers struggled to keep up with the rapidly changing industry. While some subscribers had state-of-the-art satellite systems to receive programming, others had to wait for the post office to drop off programs recorded on tape weeks earlier. With no slowdown of industry innovation on the horizon, Gerry knew that his storage and distribution systems would reach a crossroads in no time at all.

American Public Television logo

Living the tape paradigm

The digital media industry is only a few years removed from its film, and later videotape, roots. Tape was the input and the output of the industry for many years. As a consequence, the tools and workflows used by the industry were built and designed to work with tape. Over time, the “file” slowly replaced the tape as the object to be captured, edited, stored and distributed. Trouble was, many of the systems and more importantly workflows were based on processing tape, and these have proven to be hard to change.

At APT, Gerry realized the limits of the tape paradigm and began looking for technologies and solutions that enabled workflows based on file and object based storage and distribution.

Thinking file based storage and distribution

For data (digital media) storage, APT, like everyone else, started by installing onsite storage servers. As the amount of digital data grew, more storage was added. In addition, APT was expanding its distribution footprint by creating or partnering with distribution channels such as CreateTV and APT Worldwide. This dramatically increased the number of programming formats and the amount of data that had to be stored. As a consequence, updating, maintaining, and managing the APT storage systems was becoming a major challenge and a major resource hog.

APT Online

Knowing that his in-house storage system was only going to cost more time and money, Gerry decided it was time to look at cloud storage. But that wasn’t the only reason he looked at the cloud. While most people consider cloud storage as just a place to back up and archive files, Gerry was envisioning how the ubiquity of the cloud could help solve his distribution challenges. The trouble was that the price of cloud storage from vendors like Amazon S3 and Microsoft Azure was a non-starter, especially for a non-profit. Then Gerry came across Backblaze, whose B2 Cloud Storage service met all of his performance requirements, and at $0.005/GB/month for storage and $0.01/GB for downloads it was nearly 75% less than S3 or Azure.

Gerry did the math and found that he could economically incorporate B2 Cloud Storage into his IT portfolio, using it for both program submission and for active storage and archiving of the APT programs. In addition, B2 now gives him the foundation necessary to receive and distribute programming content over the Internet. This is especially useful for organizations that can’t conveniently access satellite distribution systems. Not to mention downloading from the cloud is much faster than sending a tape through the mail.

Adding B2 Cloud Storage to their infrastructure has helped American Public Television address two key challenges. First, they now have “unlimited” storage in the cloud without having to add any hardware. In addition, with B2 they only pay for the storage they use, which means they don’t have to buy storage upfront trying to match the maximum amount of storage they’ll ever need. Second, by using B2 as a distribution source for their programming, APT subscribers, especially the smaller and more remote ones, can get content faster and more reliably without having to perform costly upgrades to their infrastructure.

The road ahead

As APT gets used to their file-based infrastructure and workflow, there are a number of cost-saving and income-generating ideas they are pondering. Here are a few:

Program Submissions — New content can be uploaded from anywhere using a web browser, an Internet connection, and a login. For example, a producer in Cambodia can upload their film to B2. From there the film is downloaded to an in-house system where it is processed and transcoded. The finished film is added to the APT catalog and uploaded back to B2. Once there, the program is instantly available for subscribers to order and download. (A rough sketch of this upload step appears after this list.)

“The affordability and performance of Backblaze B2 is what allowed us to make the B2 cloud part of the APT data storage and distribution strategy into the future.” — Gerry Field

Easier Previews — At any time, work in process or finished programs can be made available for download from the B2 cloud. One place this could be useful is where a subscriber needs to review a program to comply with local policies and practices before airing. In the old system, each “one-off” was a time consuming manual process.

Instant Subscriptions — There are many organizations such as schools and businesses that want to use just one episode of a desired show. With an e-commerce based website, current or even archived programming kept in B2 could be available to download or stream for a minimal charge.
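As an illustration of the Program Submissions idea above, here is a sketch of the upload step using Backblaze's official Python SDK (b2sdk). The bucket name, keys and file names are placeholders, not APT's actual configuration.

```python
"""Hedged sketch of the submission step: a producer uploads a finished
program file to a B2 bucket.

Bucket name, key IDs and file paths are placeholders; assumes the
b2sdk package (Backblaze's official Python SDK).
"""
from b2sdk.v2 import B2Api, InMemoryAccountInfo

B2_KEY_ID = "000aaaabbbbccccdddd0000001"   # placeholder application key ID
B2_APP_KEY = "K000EXAMPLEEXAMPLEEXAMPLE"   # placeholder application key
BUCKET_NAME = "apt-program-submissions"    # hypothetical bucket

api = B2Api(InMemoryAccountInfo())
api.authorize_account("production", B2_KEY_ID, B2_APP_KEY)

bucket = api.get_bucket_by_name(BUCKET_NAME)
bucket.upload_local_file(
    local_file="film_cambodia_final.mxf",                   # hypothetical master file
    file_name="submissions/2018/film_cambodia_final.mxf",   # key under which it is stored in B2
)
```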

At APT there were multiple technologies needed to make their file-based infrastructure work, but as Gerry notes, having an affordable, trustworthy, cloud storage service like B2 is one of the critical building blocks needed to make everything work together.

The post American Public Television Embraces the Cloud — And the Future appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Take home Mugsy, the Raspberry Pi coffee robot

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/mugsy/

We love Mugsy, the Raspberry Pi coffee robot that has smashed its crowdfunding goal within days! Our latest YouTube video shows our catch-up with Mugsy and its creator Matthew Oswald at Maker Faire New York last year.

MUGSY THE RASPBERRY PI COFFEE ROBOT #MFNYC

Uploaded by Raspberry Pi on 2018-03-22.

Mugsy

Labelled ‘the world’s first hackable, customisable, dead simple, robotic coffee maker’, Mugsy allows you to take control of every aspect of the coffee-making process: from grind size and water temperature, to brew and bloom time. Feeling lazy instead? Read in your beans’ barcode via an onboard scanner, and it will automatically use the best settings for your brew.

Mugsy Raspberry Pi Coffee Robot

Looking to start your day with your favourite coffee straight out of bed? Send the robot a text, email, or tweet, and it will notify you when your coffee is ready!

Learning through product development

“Initially, I used [Mugsy] as a way to teach myself hardware design,” explained Matthew at his Editor’s Choice–winning Maker Faire stand. “I really wanted to hold something tangible in my hands. By using the Raspberry Pi and just being curious, anytime I wanted to use a new technology, I would try to pull back [and ask] ‘How can I integrate this into Mugsy?’”

Mugsy Raspberry Pi Coffee Robot

By exploring his passions and using Mugsy as his guinea pig, Matthew created a project that not only solves a problem — how to make amazing coffee at home — but also brings him one step closer to ‘making things’ for a living. “I used to dream about this stuff when I was a kid, and I used to say ‘I’m never going to be able to do something like that.’” he admitted. But now, with open-source devices like the Raspberry Pi so readily available, he “can see the end of the road”: making his passion his livelihood.

Back Mugsy

With only a few days left on the Kickstarter campaign, Mugsy has reached its goal and then some. It’s available for backing from $150 if you provide your own Raspberry Pi 3, or from $175 with a Pi included — check it out today!

The post Take home Mugsy, the Raspberry Pi coffee robot appeared first on Raspberry Pi.

Japanese Govt Intervention Fails to Stop Mass Dragon Ball Super Piracy

Post Syndicated from Andy original https://torrentfreak.com/japanese-govt-intervention-fails-to-stop-mass-dragon-ball-super-piracy-180319/

Earlier this month, fans of Dragon Ball Super in Mexico started a movement on social media which suggested that everyone should be able to watch episode 130 (titled “The Greatest Showdown of All Time! The Ultimate Survival Battle!!”) together in public.

Surprisingly, this movement started receiving support from various local governments, many of which agreed to erect large screens in public places, from town and city squares to football stadiums.

Official government Twitter accounts lit up with announcements from the authorities, with posters like the one below issued for many of the events.

While this all sounded wonderful, in practice there was a huge problem. According to Toei Animation, the Japanese company behind the hit anime show, no one had the licensing rights to show Dragon Ball Super in public.

The company issued a statement condemning the plans, branding the proposed performances as “illegal screenings that incite piracy” while urging people to support the creators by only watching on officially licensed platforms.

As Saturday drew near, some regions announced that without permission from Toei, their screenings would not go ahead. Others, however, offered no cooperation whatsoever, effectively informing Toei that it was powerless to do anything to stop what would amount to government-approved mass piracy.

Whether Toei had anything to do with it or not isn’t clear, but on Friday the ambassador of Japan took the highly unusual step of writing to various local governments with a demand for them to cancel the events. El Espanol obtained a copy of the letter, as shown below.

The letter from the Ambassador of Japan

“The Government of Japan is aware that episodes 130 and 131 of the Dragon Ball Super series, whose copyright belongs to the Japanese company Toei Animation, will be shown in public places and venues without the author’s due authorization,” the letter reads.

“In the event the exhibition is illegal, the Government of Japan wishes that it be suspended.”

It seems that as a result of the letter, some of the screenings were canceled, causing much disappointment for the fans of the series. However, in some areas of Mexico the events went ahead anyway, with tens of thousands of massively enthusiastic people in attendance.

But it didn’t stop there. The DBS fever also spread to Chile, Peru, El Salvador and Ecuador, with outdoor events attracting huge cheering crowds.

Whether there will be any diplomatic fallout from these shows of defiance isn’t yet clear but if anyone needed a visualization of what torrent sharing might look like if it took place in the physical realm, there are no better examples than these videos. In Ecuador, where more than ten thousand people gathered in just one location, fun was had by all.



Why Toei didn’t make the most of this opportunity is anyone’s guess but it looks like the company could have made a killing selling official t-shirts alone. Nevermind, maybe next time.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Local Governments in Mexico Might ‘Pirate’ Dragon Ball

Post Syndicated from Andy original https://torrentfreak.com/local-governments-mexico-might-pirate-dragon-ball-180316/

When one thinks of large-scale piracy, sites like The Pirate Bay and perhaps 123Movies spring to mind.

Offering millions of viewers the chance to watch the latest movies and TV shows for free the day they’re released or earlier, they’re very much hated by the entertainment industries.

Tomorrow, however, there’s the very real possibility of a huge copyright infringement controversy hitting large parts of Mexico, all centered around the hugely popular anime series Dragon Ball Super.

This Saturday, episode 130, titled “The Greatest Showdown of All Time! The Ultimate Survival Battle!!”, will air. It’s the penultimate episode of the series and will see the climax of Goku and Jiren’s battle – apparently.

The key point is that fans everywhere are going nuts in anticipation, so much so that various local governments in Mexico have agreed to hold public screenings for free, including in football stadiums and public squares.

“Fans of the series are crazy to see the new episode of Dragon Ball Super and have already organized events around the country as if it were a boxing match,” local media reports.

For example, Remberto Estrada, the municipal president of Benito Juárez, Quintana Roo, confirmed that the episode will be aired at the Cultural Center of the Arts in Cancun. The mayor of Ciudad Juarez says that a viewing will go ahead at the Plaza de la Mexicanidad with giant screens and cosplay contests on the sidelines.

Many local government Twitter accounts sent out official invitations, like the one shown below.

But despite all the preparations, there is a big problem. According to reports, no group or organization has the rights to show Dragon Ball Super in public in Mexico, a fact confirmed by Toei Animation, the company behind the show.

“To the viewers and fans of Dragon Ball. We have become aware of the plans to exhibit episode # 130 of our Dragon Ball Super series in stadiums, plazas, and public places throughout Latin America,” the company said in an official announcement.

“Toei Animation has not authorized these public shows and does not support or sponsor any of these events nor do we or any of our titles endorse any institution exhibiting the unauthorized episode.

“In an effort to support copyright laws, to protect the work of thousands of persons and many labor sectors, we request that you please enjoy our titles at the official platforms and broadcasters and not support illegal screenings that incite piracy.”

Armando Cabada, mayor of Ciudad Juarez, Chihuahua, was one of the first municipal officials to offer support to the episode 130 movement. He believes that since the events are non-profit, they can go ahead but others have indicated their screenings will only go ahead if they can get the necessary permission.

Crunchyroll, the US video-streaming company that holds some Dragon Ball Super rights, is reportedly trying to communicate with the establishments and organizations planning to host the events to ensure that everything remains legal and above board. At this stage, however, there’s no indication that any agreements have been reached or whether they’re simply getting in touch to deliver a warning.

One region that has already confirmed its event won’t go ahead is Mexico City. The head of the local government there told disappointed fans that since they can’t get permission from Toei, the whole thing has been canceled.

What will happen in the other locations Saturday night if licenses haven’t been obtained is anyone’s guess but thousands of disappointed fans in multiple locations raises the potential for the kind of battle the Mexican authorities can well do without, even if Dragon Ball Super thrives on them.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

What John Oliver gets wrong about Bitcoin

Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/03/what-john-oliver-gets-wrong-about.html

John Oliver covered bitcoin/cryptocurrencies last night. I thought I’d describe a bunch of things he gets wrong.

How Bitcoin works

Nowhere does the show describe what Bitcoin is or how it works.
Discussions should always start with Satoshi Nakamoto’s original paper. The thing Satoshi points out is that there is an important cost to normal transactions, namely, the entire legal system designed to protect you against fraud, such as the way you can reverse the transactions on your credit card if it gets stolen. The point of Bitcoin is that there is no way to reverse a charge. A transaction is done via public-key cryptography: to transfer money to me, you sign the transaction with your private key, assigning the coins to my public key, handing ownership over to me with no third party involved that can reverse the transaction, and essentially no overhead.
All the rest of the stuff, like the decentralized blockchain and mining, is all about making that work.
Bitcoin crazies forget about the original genesis of Bitcoin. For example, they talk about adding features to stop fraud, reversing transactions, and having a central authority that manages that. This misses the point, because the existing electronic banking system already does that, and does a better job at it than cryptocurrencies ever can. If you want to mock cryptocurrencies, talk about the “DAO”, which did exactly that — and collapsed in a big fraudulent scheme where insiders made money and outsiders didn’t.
Sticking to Satoshi’s original ideas is a lot better than trying to repeat how the crazy fringe activists define Bitcoin.
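To make the signing step concrete, here is a minimal sketch using the third-party ecdsa package and the secp256k1 curve Bitcoin uses. Real transactions serialize inputs and outputs and double-hash them before signing, and embed the signature in a script; all of that is omitted here.

```python
"""Minimal sketch of the signature step behind a Bitcoin-style transfer.

Uses the third-party `ecdsa` package and the secp256k1 curve. This only
illustrates why a signed transfer has no third party who can reverse it;
real transaction serialization and hashing rules are omitted.
"""
import hashlib

from ecdsa import SECP256k1, SigningKey

# Your key pair: the private key signs, the public key lets anyone verify.
sender_sk = SigningKey.generate(curve=SECP256k1)
sender_vk = sender_sk.get_verifying_key()

# A toy "transaction": assign a previous output to the recipient's public key.
tx = b"previous-output-id -> recipient-pubkey : 1 BTC"
digest = hashlib.sha256(tx).digest()

signature = sender_sk.sign_digest(digest)          # only the private-key holder can produce this
assert sender_vk.verify_digest(signature, digest)  # anyone can check it; nobody can undo it
```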

How does any money have value?

Oliver’s answer is that currencies have value because people agree that they have value, like how they agree a Beanie Baby is worth $15,000.
This is wrong. A better way of framing the question is to ask why the value of money changes. The dollar has been losing roughly 2% of its value each year for decades. This is called “inflation”: as the dollar loses value, it takes more dollars to buy things, which means the price of things (in dollars) goes up, and employers have to pay us more dollars so that we can buy the same amount of things.
The reason the value of the dollar changes is largely because the Federal Reserve manages the supply of dollars, using the same law of Supply and Demand. As you know, if a supply decreases (like oil), then the price goes up, or if the supply of something increases, the price goes down. The Fed manages money the same way: when prices rise (the dollar is worth less), the Fed reduces the supply of dollars, causing it to be worth more. Conversely, if prices fall (or don’t rise fast enough), the Fed increases supply, so that the dollar is worth less.
The reason money follows the law of Supply and Demand is that people use money: they consume it like they do other goods and services, like gasoline, tax preparation, food, dance lessons, and so forth. It’s not like a fine art painting, a stamp collection or a Beanie Baby — money is a product. It’s just that people have a hard time thinking of it as a consumer product since, in their experience, money is what they use to buy consumer products. But it’s a symmetric operation: when you buy gasoline with dollars, you are actually selling dollars in exchange for gasoline. That you call one side of this transaction “money” and the other “goods” is purely arbitrary; you could just as well call gasoline the money and dollars the good that is being bought and sold for gasoline.
The reason dollars are the product we use as money is that trying to use gasoline as money is a pain in the neck: storing it and exchanging it is difficult. Goods like this do become money, as in the famous case of prisons using cigarettes as a medium of exchange, even for non-smokers, but it has to be a good that is fungible, storable, and easily exchanged. Dollars are the most fungible, the most storable, and the most easily exchanged, so they have the most value as “money”. Sure, the mechanic can fix the farmer’s car for three chickens instead, but most of the time, both parties in the transaction would rather exchange the same value using dollars than chickens.
So the value of dollars is not like the value of Beanie Babies, which people might buy for $15,000 and whose price changes purely on the whims of investors. Instead, a dollar is like gasoline, which obeys the law of Supply and Demand.
This brings us back to the question of where Bitcoin gets its value. While Bitcoin is indeed used like dollars to buy things, that’s only a tiny use of the currency, so its value isn’t determined by Supply and Demand. Instead, the value of Bitcoin is a lot like that of Beanie Babies, obeying the laws of investments. So in this respect, Oliver is right about where the value of Bitcoin comes from, but wrong about where the value of dollars comes from.

Why Bitcoin conference didn’t take Bitcoin

John Oliver points out the irony of a Bitcoin conference that stopped accepting payments in Bitcoin for tickets.
The biggest reason for this is because Bitcoin has become so popular that transaction fees have gone up. Instead of being proof of failure, it’s proof of popularity. What John Oliver is saying is the old joke that nobody goes to that popular restaurant anymore because it’s too crowded and you can’t get a reservation.
Moreover, the point of Bitcoin is not to replace everyday currencies for everyday transactions. If you read Satoshi Nakamoto’s whitepaper, its only goal is to replace certain types of transactions, like purely electronic transactions where electronic goods and services are being exchanged. Where real-life goods/services are being exchanged, existing currencies work just fine. It’s only the crazy activists who claim Bitcoin will eventually replace real-world currencies — the saner people see it co-existing with real-world currencies, each with a different value to consumers.

Turning a McNugget back into a chicken

John Oliver uses the metaphor that while you can process a chicken into McNuggets, you can’t reverse the process and turn a McNugget back into a chicken. It’s a funny metaphor.
But it’s not clear what the heck this metaphor is trying to explain. It’s not a metaphor for the blockchain, but a metaphor for a “cryptographic hash”, where each block is a chicken, and the McNugget is the signature for the block (well, the block plus the signature of the last block, forming a chain).
Even then, the metaphor has problems. The McNugget produced from each chicken must be unique to that chicken for the metaphor to accurately describe a cryptographic hash. You can therefore identify the original chicken simply by looking at the McNugget. A slight change in the original chicken, like losing a feather, results in a completely different McNugget. Thus, nuggets can be used to tell if the original chicken has changed.
This then leads to the key property of the blockchain: it is unalterable. You can’t go back and change any of the blocks of data, because the fingerprints, the nuggets, will also change, and break the nugget chain.
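Here is a tiny Python sketch of that chaining property (toy code only: no proof of work, no network).

```python
"""Tiny sketch of the 'nugget chain' idea: each block's hash covers the
previous block's hash, so altering any old block breaks every hash after it.
"""
import hashlib


def block_hash(data: str, prev_hash: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()


# Build a three-block chain.
h0 = block_hash("genesis", "0" * 64)
h1 = block_hash("alice pays bob 5", h0)
h2 = block_hash("bob pays carol 2", h1)

# Tamper with the first transaction and recompute honestly from there.
h1_tampered = block_hash("alice pays bob 500", h0)
h2_recomputed = block_hash("bob pays carol 2", h1_tampered)

print(h2 == h2_recomputed)  # False: the change ripples through every later hash
```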
The point is that John Oliver is laughing at a silly metaphor for the blockchain because he totally misses the point of the metaphor.
Oliver rightly says “don’t worry if you don’t understand it — most people don’t”, but that includes the big companies that John Oliver names. Some companies do get it and are producing reasonable things (like JP Morgan, by all accounts), but some don’t. IBM and other big consultancies are charging companies millions of dollars to consult with them on blockchain products where nobody involved, the customer or the consultancy, actually understands any of it. That doesn’t stop them from happily charging customers on one side and happily spending money on the other.
Thus, rather than Oliver explaining the problem, he’s just being part of the problem. His explanation of blockchain left you dumber than before.

ICO’s

John Oliver mocks the Brave ICO ($35 million in 30 seconds), claiming it’s all driven by YouTube personalities and people who aren’t looking at the fundamentals.
And while it’s true that most ICOs are bunk, the Brave ICO actually had a business model behind it. Brave is a Chrome-like web browser whose distinguishing feature is that it protects your privacy from advertisers. If you don’t use Brave or a browser with an ad-block extension, you have no idea how bad things are for you. However, this presents a problem for websites that fund themselves via advertisements, which is most of them, because visitors no longer see ads. Brave has a fix for this. Most people wouldn’t mind supporting the websites they visit often, like the New York Times. That’s where the Brave ICO “token” comes in: it’s not simply stock in Brave, but a token for micropayments to websites. Users buy tokens, then use them for micropayments to websites like the New York Times. The New York Times then sells the tokens back to the market for dollars. The buying and selling of tokens happens without a centralized middleman.
This is still all speculative, of course, and it remains to be seen how successful Brave will be, but it’s a serious effort. It has well-respected VCs behind the company, a well-respected founder (despite the fact he invented JavaScript), and well-respected employees. It’s not a scam, it’s a legitimate venture.

How do you make money from Bitcoin?

The last part of the show is dedicated to describing all the scams out there, advising people to be careful and to be “responsible”. This is garbage.
It’s like my simple two step process to making lots of money via Bitcoin: (1) buy when the price is low, and (2) sell when the price is high. My advice is correct, of course, but useless. Same as “be careful” and “invest responsibly”.
The truth about investing in cryptocurrencies is “don’t”. The only responsible way to invest is to buy low-overhead market index funds and hold for retirement. No, you won’t get super rich doing this, but anything other than this is irresponsible gambling.
It’s a hard lesson to learn, because everyone is telling you the opposite. The entire CNBC channel is devoted to day traders, who buy and sell stocks at a high rate based on the same principle as a Ponzi scheme, basing their judgment not on the fundamentals (like long-term dividends) but on the animal spirits of whatever stock is hot or cold at the moment. This is the same reason people buy or sell Bitcoin, not because they can describe its fundamental value, but because they believe in a bigger fool down the road who will buy it for even more.
For things like Bitcoin, the trick to making money is to have bought it over 7 years ago, when it was essentially worthless, except to nerds who were into that sort of thing. It’s the same trick to making a lot of money in Magic: The Gathering trading cards, which nerds bought decades ago and which are worth a ton of money now. Or to have bought Apple stock back in 2009 when the iPhone was new, when nerds could understand the potential of real Internet access and apps in a way that Wall Street could not.
That was my strategy: be a nerd, who gets into things. I’ve made a good amount of money on all these things because as a nerd, I was into Magic: The Gathering, Bitcoin, and the iPhone before anybody else was, and bought in at the point where these things were essentially valueless.
At this point with cryptocurrencies, with the non-nerds now flooding the market, there’s little chance of making it rich. The lottery is probably a better bet. Instead, if you want to make money, become a nerd, obsess about a thing, understand it when it’s new, and cash out once the rest of the market figures it out. That might be Brave, for example, but buy into it because you’ve spent the last year studying the browser advertisement ecosystem, the market’s willingness to pay for content, and how their Basic Attention Token delivers value to websites — not because you want in on the ICO craze.

Conclusion

John Oliver spends 25 minutes explaining Bitcoin, cryptocurrencies, and the blockchain to you. Sure, it’s funny, but it leaves you worse off than when it started. The show admits it “simplifies” the explanation, but it simplified things to the point where all the useful information was removed.

Voksi Releases Detailed Denuvo-Cracking Video Tutorial

Post Syndicated from Andy original https://torrentfreak.com/voksi-releases-detailed-denuvo-cracking-video-tutorial-180210/

Earlier this week, version 4.9 of the Denuvo anti-tamper system, which had protected Assassin’s Creed Origins for the past several months, was defeated by Italian cracking group CPY.

While Denuvo would probably paint four months of protection as a success, the company would certainly have preferred for things to have gone on a bit longer, not least following publisher Ubisoft’s decision to use VMProtect technology on top.

But while CPY do their thing in Italy there’s another rival whittling away at whatever the giants at Denuvo (and new owner Irdeto) can come up with. The cracker – known only as Voksi – hails from Bulgaria and this week he took the unusual step of releasing a 90-minute video (embedded below) in which he details how to defeat Denuvo’s V4 anti-tamper technology.

The video is not for the faint-hearted so those with an aversion to issues of a highly technical nature might feel the urge to look away. However, it may surprise readers to learn that not so long ago, Voksi knew absolutely nothing about coding.

“You will find this very funny and unbelievable,” Voksi says, recalling the events of 2012.

“There was one game called Sanctum and on one free [play] weekend [on Steam], I and my best friend played through it and saw how great the cooperative action was. When the free weekend was over, we wanted to keep playing, but we didn’t have any money to buy the game.

“So, I started to look for alternative ways, LAN emulators, anything! Then I decided I need to crack it. That’s how I got into reverse engineering. I started watching some shitty YouTube videos with bad quality and doing some tutorials. Then I found about Steam exploits and that’s how I got into making Steamworks fixes, allowing cracked multiplayer between players.”

Voksi says his entire cracking career began with this one indie game and his desire to play it with his best friend. Prior to that, he had absolutely no experience at all. He says he’s taken no university courses or any course at all for that matter. Everything he knows has come from material he’s found online. But the intrigue doesn’t stop there.

“I don’t even know how to code properly in high-level language like C#, C++, etc. But I understand assembly [language] perfectly fine,” he explains.

For those who code, that’s generally a little bit back to front, with low-level languages usually posing the most difficulties. But Voksi says that with assembly, everything “just clicked.”

Of course, it’s been six years since the 21-year-old was first motivated to crack a game due to lack of funds. In the more than half decade since, have his motivations changed at all? Is it the thrill of solving the puzzle or are there other factors at play?

“I just developed an urge to provide paid stuff for free for people who can’t afford it and specifically, co-op and multiplayer cracks. Of course, i’m not saying don’t support the developers if you have the money and like the game. You should do that,” he says.

“The challenge of cracking also motivates me, especially with an abomination like Denuvo. It is pure cancer for the gaming industry, it doesn’t help and it only causes issues for the paying customers.”

Those who follow Voksi online will know that as well as being known in his own right, he’s part of the REVOLT group, a collective that has Voksi’s core interests and goals as their own.

“REVOLT started as a group with one and only goal – to provide multiplayer support for cracked games. No other group was doing it until that day. It was founded by several members, from which I’m currently the only one active, still releasing cracks.

“Our great achievements are in first place, of course, cracking Denuvo V4, making us one of the four groups/people who were able to break the protection. In second place are our online fixes for several AAA games, allowing you to play on legit servers with legit players. In third place, our ordinary Steamworks fixes allowing you to play multiplayer between cracked users.”

In communities like /r/crackwatch on Reddit and those less accessible, Voksi and others doing similar work are often held up as Internet heroes, cracking games in order to give the masses access to something that might’ve been otherwise inaccessible. But how does this fame sit with him?

“Well, I don’t see myself as a hero, just another ordinary person doing what he loves. I love seeing people happy because of my work, that’s also a big motivation, but nothing more than that,” he says.

Finally, what’s up next for Voksi and what are his hopes for the rest of the year?

“In an ideal world, Denuvo would die. As for me, I don’t know, time will tell,” he concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Build a Multi-Tenant Amazon EMR Cluster with Kerberos, Microsoft Active Directory Integration and EMRFS Authorization

Post Syndicated from Songzhi Liu original https://aws.amazon.com/blogs/big-data/build-a-multi-tenant-amazon-emr-cluster-with-kerberos-microsoft-active-directory-integration-and-emrfs-authorization/

One of the challenges faced by our customers—especially those in highly regulated industries—is balancing the need for security with flexibility. In this post, we cover how to enable multi-tenancy and increase security by using EMRFS (EMR File System) authorization, the Amazon S3 storage-level authorization on Amazon EMR.

Amazon EMR is an easy, fast, and scalable analytics platform enabling large-scale data processing. EMRFS authorization provides Amazon S3 storage-level authorization by configuring EMRFS with multiple IAM roles. With this functionality enabled, different users and groups can share the same cluster and assume their own IAM roles respectively.

Simply put, on Amazon EMR, we can now have an Amazon EC2 role per user assumed at run time instead of one general EC2 role at the cluster level. When the user is trying to access Amazon S3 resources, Amazon EMR evaluates against a predefined mappings list in EMRFS configurations and picks up the right role for the user.

In this post, we will discuss what EMRFS authorization is (Amazon S3 storage-level access control) and show how to configure the role mappings with detailed examples. You will then have the desired permissions in a multi-tenant environment. We also demonstrate Amazon S3 access from the HDFS command line, Apache Hive on Hue, and Apache Spark.

EMRFS authorization for Amazon S3

There are two prerequisites for using this feature:

  1. Users must be authenticated, because EMRFS needs to map the current user/group/prefix to a predefined user/group/prefix. There are several authentication options. In this post, we launch a Kerberos-enabled cluster that manages the Key Distribution Center (KDC) on the master node, and enable a one-way trust from the KDC to a Microsoft Active Directory domain.
  2. The application must support accessing Amazon S3 via EMRFS. Applications that have their own S3FileSystem APIs (for example, Presto) are not supported at this time.

EMRFS supports three types of mapping entries: user, group, and Amazon S3 prefix. Let’s use an example to show how this works.

Assume that you have the following three identities in your organization, and they are defined in the Active Directory:

To enable all these groups and users to share the EMR cluster, you need to define the following IAM roles:

In this case, you create a separate Amazon EC2 role that doesn’t give any permission to Amazon S3. Let’s call the role the base role (the EC2 role attached to the EMR cluster), which in this example is named EMR_EC2_RestrictedRole. Then, you define all the Amazon S3 permissions for each specific user or group in their own roles. The restricted role serves as the fallback when the user doesn’t match any user or group mapping and isn’t trying to access any of the Amazon S3 prefixes defined on the list.

Important: For all other roles, like emrfs_auth_group_role_data_eng, you need to add the base role (EMR_EC2_RestrictedRole) as the trusted entity so that it can assume other roles. See the following example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::511586466501:role/EMR_EC2_RestrictedRole"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

The following is an example policy for the admin user role (emrfs_auth_user_role_admin_user):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}

We are assuming the admin user has access to all buckets in this example.

The following is an example policy for the data science group role (emrfs_auth_group_role_data_sci):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::emrfs-auth-data-science-bucket-demo/*",
                "arn:aws:s3:::emrfs-auth-data-science-bucket-demo"
            ],
            "Action": [
                "s3:*"
            ]
        }
    ]
}

This role grants all Amazon S3 permissions to the emrfs-auth-data-science-bucket-demo bucket and all the objects in it. Similarly, the policy for the role emrfs_auth_group_role_data_eng is shown below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo/*",
                "arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo"
            ],
            "Action": [
                "s3:*"
            ]
        }
    ]
}

Example role mappings configuration

To configure EMRFS authorization, you use an EMR security configuration. Here is the configuration we use in this post:

Consider the following scenario.

First, the admin user admin1 tries to log in and run a command to access Amazon S3 data through EMRFS. The first role emrfs_auth_user_role_admin_user on the mapping list, which is a user role, is mapped and picked up. Then admin1 has access to the Amazon S3 locations that are defined in this role.

Then a user from the data engineer group (grp_data_engineering) tries to access a data bucket to run some jobs. When EMRFS sees that the user is a member of the grp_data_engineering group, the group role emrfs_auth_group_role_data_eng is assumed, and the user has proper access to Amazon S3 that is defined in the emrfs_auth_group_role_data_eng role.

Next, the third user comes, who is not an admin and doesn’t belong to any of the groups. After failing evaluation of the top three entries, EMRFS evaluates whether the user is trying to access a certain Amazon S3 prefix defined in the last mapping entry. This type of mapping entry is called the prefix type. If the user is trying to access s3://emrfs-auth-default-bucket-demo/, then the prefix mapping is in effect, and the prefix role emrfs_auth_prefix_role_default_s3_prefix is assumed.

If the user is not trying to access any of the Amazon S3 paths that are defined on the list—which means it failed the evaluation of all the entries—it only has the permissions defined in the EMR_EC2_RestrictedRole. This role is assumed by the EC2 instances in the cluster.

In this process, all the defined mappings are evaluated in order. The first role that matches is assumed, and the rest of the list is skipped.

Setting up an EMR cluster and mapping Active Directory users and groups

Now that we know how EMRFS authorization role mapping works, the next thing we need to think about is how we can use this feature in an easy and manageable way.

Active Directory setup

Many customers manage their users and groups using Microsoft Active Directory or other tools like OpenLDAP. In this post, we create the Active Directory on an Amazon EC2 instance running Windows Server and create the users and groups we will be using in the example below. After setting up Active Directory, we use the Amazon EMR Kerberos auto-join capability to establish a one-way trust from the KDC running on the EMR master node to the Active Directory domain on the EC2 instance. You can use your own directory service as long as it supports LDAP (Lightweight Directory Access Protocol).

To create and join Active Directory to Amazon EMR, follow the steps in the blog post Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory.

After configuring Active Directory, you can create all the users and groups using the Active Directory tools and add users to the appropriate groups. In this example, we created users such as admin1, dataeng1, and datascientist1, created the groups grp_data_engineering and grp_data_science, and then added the users to the right groups.

Join the EMR cluster to an Active Directory domain

For clusters with Kerberos, Amazon EMR now supports automated Active Directory domain joins. You can use the security configuration to configure the one-way trust from the KDC to the Active Directory domain. You also configure the EMRFS role mappings in the same security configuration.

The following is an example of the EMR security configuration with a trusted Active Directory domain EMRKRB.TEST.COM and the EMRFS role mappings as we discussed earlier:

The EMRFS role mapping configuration is shown in this example:
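The console screenshot is not reproduced here, so the following is a sketch of what the role mappings section of the security configuration JSON looks like for this setup. The field names follow the general EMR security configuration format; the account ID, role names, and identifiers are the ones used in this post’s examples, so substitute your own.

{
  "AuthorizationConfiguration": {
    "EmrFsConfiguration": {
      "RoleMappings": [
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_user_role_admin_user",
          "IdentifierType": "User",
          "Identifiers": [ "admin1" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_group_role_data_eng",
          "IdentifierType": "Group",
          "Identifiers": [ "grp_data_engineering" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_group_role_data_sci",
          "IdentifierType": "Group",
          "Identifiers": [ "grp_data_science" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_prefix_role_default_s3_prefix",
          "IdentifierType": "Prefix",
          "Identifiers": [ "s3://emrfs-auth-default-bucket-demo/" ]
        }
      ]
    }
  }
}

If you script cluster setup, a file like the one above (combined with the Kerberos settings) can be registered with aws emr create-security-configuration --name MyKerberosConfig --security-configuration file://MyKerberosConfig.json, where the file name is just an example.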

We will also provide an example AWS CLI command that you can run.

Launching the EMR cluster and running the tests

Now you have configured Kerberos and EMRFS authorization for Amazon S3.

Additionally, you need to configure Hue with Active Directory using the Amazon EMR configuration API in order to log in using the AD users created before. The following is an example of Hue AD configuration.

[
  {
    "Classification":"hue-ini",
    "Properties":{

    },
    "Configurations":[
      {
        "Classification":"desktop",
        "Properties":{

        },
        "Configurations":[
          {
            "Classification":"ldap",
            "Properties":{

            },
            "Configurations":[
              {
                "Classification":"ldap_servers",
                "Properties":{

                },
                "Configurations":[
                  {
                    "Classification":"AWS",
                    "Properties":{
                      "base_dn":"DC=emrkrb,DC=test,DC=com",
                      "ldap_url":"ldap://emrkrb.test.com",
                      "search_bind_authentication":"false",
                      "bind_dn":"CN=adjoiner,CN=users,DC=emrkrb,DC=test,DC=com",
                      "bind_password":"Abc123456",
                      "create_users_on_login":"true",
                      "nt_domain":"emrkrb.test.com"
                    },
                    "Configurations":[

                    ]
                  }
                ]
              }
            ]
          },
          {
            "Classification":"auth",
            "Properties":{
              "backend":"desktop.auth.backend.LdapBackend"
            },
            "Configurations":[

            ]
          }
        ]
      }
    ]
  }
]

Note: In the preceding configuration JSON file, change the values as required before pasting it into the software settings section in the Amazon EMR console.

Now let’s use this configuration and the security configuration you created before to launch the cluster.

In the Amazon EMR console, choose Create cluster. Then choose Go to advanced options. On the Step 1: Software and Steps page, under Edit software settings (optional), paste the configuration in the box.

The rest of the setup is the same as an ordinary cluster setup, except in the Security Options section. In Step 4: Security, under Permissions, choose Custom, and then choose the EMR_EC2_RestrictedRole that you created earlier.

Choose the appropriate subnets (these should meet the requirements for a successful Active Directory join—see the Amazon EMR Management Guide for more details), and choose the appropriate security groups to make sure the cluster can talk to the Active Directory. Choose a key pair so that you can log in and configure the cluster.

Most importantly, choose the security configuration that you created earlier to enable Kerberos and EMRFS authorization for Amazon S3.

You can use the following AWS CLI command to create a cluster.

aws emr create-cluster --name "TestEMRFSAuthorization" \
--release-label emr-5.10.0 \
--instance-type m3.xlarge \
--instance-count 3 \
--ec2-attributes InstanceProfile=EMR_EC2_DefaultRole,KeyName=MyEC2KeyPair \
--service-role EMR_DefaultRole \
--security-configuration MyKerberosConfig \
--configurations file://hue-config.json \
--applications Name=Hadoop Name=Hive Name=Hue Name=Spark \
--kerberos-attributes Realm=EC2.INTERNAL,KdcAdminPassword=<YourClusterKDCAdminPassword>,ADDomainJoinUser=<YourADUserLogonName>,ADDomainJoinPassword=<YourADUserPassword>,CrossRealmTrustPrincipalPassword=<MatchADTrustPwd>

Note: If you create the cluster using CLI, you need to save the JSON configuration for Hue into a file named hue-config.json and place it on the server where you run the CLI command.

After the cluster gets into the Waiting state, try to connect to the cluster over SSH using an Active Directory user name and password.

ssh -l <ADUserName>@<ADDomain> <EMR IP or DNS name>

Quickly run two commands to show that the Active Directory join is successful:

  1. id [user name] shows the mapped AD users and groups in Linux.
  2. hdfs groups [user name] shows the mapped group in Hadoop.

Both should return the current Active Directory user and group information if the setup is correct.
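As a rough sketch only (the exact numeric IDs and group names depend on your Active Directory and SSSD setup), the checks for a data engineering user would look something like this:

$ id dataeng1
uid=1001(dataeng1) gid=1001(domain users) groups=1001(domain users),1002(grp_data_engineering)

$ hdfs groups dataeng1
dataeng1 : domain users grp_data_engineering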

Now, you can test the user mapping first. Log in with the admin1 user, and run a Hadoop list directory command:

hadoop fs -ls s3://emrfs-auth-data-science-bucket-demo/

Now switch to a user from the data engineer group.

Retry the previous command to list the data science bucket that the admin could access. Because the data engineering group has no permissions on that bucket, it should throw an Amazon S3 Access Denied exception.

When you list the Amazon S3 bucket that the data engineering group does have access to, the group mapping is triggered.

hadoop fs -ls s3://emrfs-auth-data-engineering-bucket-demo/

It successfully returns the listing results. Next we will test Apache Hive and then Apache Spark.

 

To run jobs successfully, you need to create a home directory for every user in HDFS for staging data under /user/<username>. Users can configure a step to create a home directory at cluster launch time for every user who has access to the cluster. In this example, you use Hue since Hue will create the home directory in HDFS for the user at the first login. Here Hue also needs to be integrated with the same Active Directory as explained in the example configuration described earlier.

First, log in to Hue as a data engineer user, and open a Hive Notebook in Hue. Then run a query to create a new table pointing to the data engineer bucket, s3://emrfs-auth-data-engineering-bucket-demo/table1_data_eng/.
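The query from the original screenshot isn’t reproduced here; a minimal statement along the following lines does the same job. The table name and columns are illustrative, and only the LOCATION matters for the authorization test:

-- table name and columns are illustrative; the S3 location is the data engineering bucket
CREATE EXTERNAL TABLE table1_data_eng (
  id INT,
  name STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://emrfs-auth-data-engineering-bucket-demo/table1_data_eng/';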

You can see that the table was created successfully. Now try to create another table pointing to the data science group’s bucket, where the data engineer group doesn’t have access.

It failed and threw an Amazon S3 Access Denied error.

Now insert one line of data into the successfully created table.
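Using the illustrative table from above, that can be as simple as:

-- the row values are arbitrary sample data
INSERT INTO TABLE table1_data_eng VALUES (1, 'sample');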

Next, log out, switch to a data science group user, and create another table, test2_datasci_tb.

The creation is successful.

The last task is to test Spark (it requires the user directory, but Hue created one in the previous step).

Now let’s come back to the command line and run some Spark commands.

Log in to the master node as the datascientist1 user:

Start the SparkSQL interactive shell by typing spark-sql, and run the show tables command. It should list the tables that you created using Hive.

As a data science group user, try a select on both tables. You will find that you can only query the table whose defined location your group has access to.
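A hypothetical spark-sql session for datascientist1 might look like this (the data engineering table name is the illustrative one from the Hive example; only the first select succeeds):

-- run inside the spark-sql shell as datascientist1
SHOW TABLES;
SELECT * FROM test2_datasci_tb LIMIT 5;   -- allowed: the table location is in the data science bucket
SELECT * FROM table1_data_eng LIMIT 5;    -- fails with an Amazon S3 Access Denied error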

Conclusion

EMRFS authorization for Amazon S3 enables you to have multiple roles on the same cluster, providing flexibility to configure a shared cluster for different teams to achieve better efficiency. The Active Directory integration and group mapping make it much easier for you to manage your users and groups, and provides better auditability in a multi-tenant environment.


Additional Reading

If you found this post useful, be sure to check out Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory and Launching and Running an Amazon EMR Cluster inside a VPC.


About the Authors

Songzhi Liu is a Big Data Consultant with AWS Professional Services. He works closely with AWS customers to provide them Big Data & Machine Learning solutions and best practices on the Amazon cloud.

 

 

 

 

Four days of STEAM at Bett 2018

Post Syndicated from Dan Fisher original https://www.raspberrypi.org/blog/bett-2018/

If you’re an educator from the UK, chances are you’ve heard of Bett. For everyone else: Bett stands for British Education Technology Tradeshow. It’s the El Dorado of edtech, where every street is adorned with interactive whiteboards, VR headsets, and new technologies for the classroom. Every year since 2014, the Raspberry Pi Foundation has been going to the event hosted in the ExCeL London to chat to thousands of lovely educators about our free programmes and resources.

Raspberry Pi Bett 2018

On a mission

Our setup this year consisted of four pods (imagine tables on steroids) in the STEAM village, and the mission of our highly trained team of education agents was to establish a new world record for Highest number of teachers talked to in a four-day period. I’m only half-joking.

Bett 2018 Raspberry Pi

Educators with a mission

Meeting educators

The best thing about being at Bett is meeting the educators who use our free content and training materials. It’s easy to get wrapped up in the everyday tasks of the office without stopping to ask: “Hey, have we asked our users what they want recently?” Events like Bett help us to connect with our audience, creating some lovely moments for both sides. We had plenty of Hello World authors visit us, including Gary Stager, co-author of Invent to Learn, a must-read for any computing educator. More than 700 people signed up for a digital subscription, we had numerous lovely conversations about our content and about ideas for new articles, and we met many new authors expressing an interest in writing for us in the future.

BETT 2018 Hello World Raspberry Pi
BETT 2018 Hello World Raspberry Pi
BETT 2018 Hello World Raspberry Pi

We also talked to lots of Raspberry Pi Certified Educators who we’d trained in our free Picademy programme — new dates in Belfast and Dublin now! — and who are now doing exciting and innovative things in their local areas. For example, Chris Snowden came to tell us about the great digital making outreach work he has been doing with the Eureka! museum in Yorkshire.

Bett 2018 Raspberry Pi

Raspberry Pi Certified Educator Chris Snowden

Digital making for kids

The other best thing about being at Bett is running workshops for young learners and seeing the delight on their faces when they accomplish something they believed to be impossible only five minutes ago. On the Saturday, we ran a massive Raspberry Jam/Code Club where over 250 children, parents, and curious onlookers got stuck into some of our computing activities. We were super happy to find out that we’d won the Bett Kids’ Choice Award for Best Hands-on Experience — a fantastic end to a busy four days. With Bett over for another year, our tired and happy ‘rebel alliance’ from across the Foundation still had the energy to pose for a group photo.

Bett 2018 Raspberry Pi

Celebrating our ‘Best Hands-on Experience’ award

More events

You can find out more about starting a Code Club here, and if you’re running a Jam, why not get involved with our global Raspberry Jam Big Birthday Weekend celebrations in March?

Raspberry Pi Big Birthday Weekend 2018. GIF with confetti and bopping JAM balloons

We’ll be at quite a few events in 2018, including the Big Bang Fair in March — do come and say hi.

The post Four days of STEAM at Bett 2018 appeared first on Raspberry Pi.

Backblaze Hard Drive Stats for 2017

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hard-drive-stats-for-2017/

Backblaze Drive Stats 2017 Review

Beginning in April 2013, Backblaze has recorded and saved daily hard drive statistics from the drives in our data centers. Each entry consists of the date, manufacturer, model, serial number, status (operational or failed), and all of the SMART attributes reported by that drive. As of the end of 2017, there are about 88 million entries totaling 23 GB of data. You can download this data from our website if you want to do your own research, but for starters here’s what we found.

Overview

At the end of 2017 we had 93,240 spinning hard drives. Of that number, there were 1,935 boot drives and 91,305 data drives. This post looks at the hard drive statistics of the data drives we monitor. We’ll review the stats for Q4 2017, all of 2017, and the lifetime statistics for all of the drives Backblaze has used in our cloud storage data centers since we started keeping track. Along the way we’ll share observations and insights on the data presented and we look forward to you doing the same in the comments.

Hard Drive Reliability Statistics for Q4 2017

At the end of Q4 2017 Backblaze was monitoring 91,305 hard drives used to store data. For our evaluation we remove from consideration those drives which were used for testing purposes and those drive models for which we did not have at least 45 drives (read why after the chart). This leaves us with 91,243 hard drives. The table below is for the period of Q4 2017.

Hard Drive Annualized Failure Rates for Q4 2017

A few things to remember when viewing this chart:

  • The failure rate listed is for just Q4 2017. If a drive model has a failure rate of 0%, it means there were no drive failures of that model during Q4 2017.
  • There were 62 drives (91,305 minus 91,243) that were not included in the list above because we did not have at least 45 of a given drive model. The most common reason we would have fewer than 45 drives of one model is that we needed to replace a failed drive and we had to purchase a different model as a replacement because the original model was no longer available. We use 45 drives of the same model as the minimum number to qualify for reporting quarterly, yearly, and lifetime drive statistics.
  • Quarterly failure rates can be volatile, especially for models that have a small number of drives and/or a small number of drive days. For example, the Seagate 4 TB drive, model ST4000DM005, has an annualized failure rate of 29.08%, but that is based on only 1,255 drive days and 1 (one) drive failure (see the worked example after this list).
  • AFR stands for Annualized Failure Rate, which is the projected failure rate for a year based on the data from this quarter only.
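To make that annualization concrete, here is one way to reproduce the ST4000DM005 figure mentioned above. This is our own restatement of the usual drive-days arithmetic, not a quote from the original tables:

AFR = (drive failures ÷ drive days) × 365 × 100% = (1 ÷ 1,255) × 365 × 100% ≈ 29.08%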

Bulking Up and Adding On Storage

Looking back over 2017, we not only added new drives, we “bulked up” by swapping out functional and smaller 2, 3, and 4TB drives with larger 8, 10, and 12TB drives. The changes in drive quantity by quarter are shown in the chart below:

Backblaze Drive Population by Drive Size

For 2017 we added 25,746 new drives, and lost 6,442 drives to retirement for a net of 19,304 drives. When you look at storage space, we added 230 petabytes and retired 19 petabytes, netting us an additional 211 petabytes of storage in our data center in 2017.

2017 Hard Drive Failure Stats

Below are the lifetime hard drive failure statistics for the hard drive models that were operational at the end of Q4 2017. As with the quarterly results above, we have removed any non-production drives and any models that had fewer than 45 drives.

Hard Drive Annualized Failure Rates

The chart above gives us the lifetime view of the various drive models in our data center. The Q4 2017 chart at the beginning of the post gives us a snapshot of the most recent quarter of the same models.

Let’s take a look at the same models over time, in our case over the past 3 years (2015 through 2017), by looking at the annual failure rates for each of those years.

Annual Hard Drive Failure Rates by Year

The failure rate for each year is calculated for just that year. In looking at the results the following observations can be made:

  • The failure rates for both of the 6 TB models, Seagate and WDC, have decreased over the years while the number of drives has stayed fairly consistent from year to year.
  • While it looks like the failure rates for the 3 TB WDC drives have also decreased, you’ll notice that we migrated out nearly 1,000 of these WDC drives in 2017. While the remaining 180 WDC 3 TB drives are performing very well, decreasing the data set that dramatically makes trend analysis suspect.
  • The Toshiba 5 TB model and the HGST 8 TB model had zero failures over the last year. That’s impressive, but with only 45 drives in use for each model, not statistically useful.
  • The HGST/Hitachi 4 TB models delivered sub 1.0% failure rates for each of the three years. Amazing.

A Few More Numbers

To save you countless hours of looking, we’ve culled through the data to uncover the following tidbits regarding our ever changing hard drive farm.

  • 116,833 — The number of hard drives for which we have data from April 2013 through the end of December 2017. Currently there are 91,305 drives (data drives) in operation. This means 25,528 drives have either failed or been removed from service for some other reason — typically migration.
  • 29,844 — The number of hard drives that were installed in 2017. This includes new drives, migrations, and failure replacements.
  • 81.76 — The number of hard drives that were installed each day in 2017. This includes new drives, migrations, and failure replacements.
  • 95,638 — The number of drives installed since we started keeping records in April 2013 through the end of December 2017.
  • 55.41 — The average number of hard drives installed per day from April 2013 to the end of December 2017. The installations can be new drives, migration replacements, or failure replacements.
  • 1,508 — The number of hard drives that were replaced as failed in 2017.
  • 4.13 — The average number of hard drives that have failed each day in 2017.
  • 6,795 — The number of hard drives that have failed from April 2013 until the end of December 2017.
  • 3.94 — The average number of hard drives that have failed each day from April 2013 until the end of December 2017.

Can’t Get Enough Hard Drive Stats?

We’ll be presenting the webinar “Backblaze Hard Drive Stats for 2017” on Thursday February 9, 2017 at 10:00 Pacific time. The webinar will dig deeper into the quarterly, yearly, and lifetime hard drive stats and include the annual and lifetime stats by drive size and manufacturer. You will need to subscribe to the Backblaze BrightTALK channel to view the webinar. Sign up today.

As a reminder, the complete data set used to create the information used in this review is available on our Hard Drive Test Data page. You can download and use this data for free for your own purpose. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone — it is free.

Good luck and let us know if you find anything interesting.

The post Backblaze Hard Drive Stats for 2017 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

After Section 702 Reauthorization

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/after_section_7.html

For over a decade, civil libertarians have been fighting government mass surveillance of innocent Americans over the Internet. We’ve just lost an important battle. On January 18, President Trump signed the renewal of Section 702, and domestic mass surveillance effectively became a permanent part of US law.

Section 702 was initially passed in 2008, as an amendment to the Foreign Intelligence Surveillance Act of 1978. As the title of that law says, it was billed as a way for the NSA to spy on non-Americans located outside the United States. It was supposed to be an efficiency and cost-saving measure: the NSA was already permitted to tap communications cables located outside the country, and it was already permitted to tap communications cables from one foreign country to another that passed through the United States. Section 702 allowed it to tap those cables from inside the United States, where it was easier. It also allowed the NSA to request surveillance data directly from Internet companies under a program called PRISM.

The problem is that this authority also gave the NSA the ability to collect foreign communications and data in a way that inherently and intentionally also swept up Americans’ communications as well, without a warrant. Other law enforcement agencies are allowed to ask the NSA to search those communications, give their contents to the FBI and other agencies and then lie about their origins in court.

In 1978, after Watergate had revealed the Nixon administration’s abuses of power, we erected a wall between intelligence and law enforcement that prevented precisely this kind of sharing of surveillance data under any authority less restrictive than the Fourth Amendment. Weakening that wall is incredibly dangerous, and the NSA should never have been given this authority in the first place.

Arguably, it never was. The NSA had been doing this type of surveillance illegally for years, something that was first made public in 2006. Section 702 was secretly used as a way to paper over that illegal collection, but nothing in the text of the later amendment gives the NSA this authority. We didn’t know that the NSA was using this law as the statutory basis for this surveillance until Edward Snowden showed us in 2013.

Civil libertarians have been battling this law in both Congress and the courts ever since it was proposed, and the NSA’s domestic surveillance activities even longer. What this most recent vote tells me is that we’ve lost that fight.

Section 702 was passed under George W. Bush in 2008, reauthorized under Barack Obama in 2012, and now reauthorized again under Trump. In all three cases, congressional support was bipartisan. It has survived multiple lawsuits by the Electronic Frontier Foundation, the ACLU, and others. It has survived the revelations by Snowden that it was being used far more extensively than Congress or the public believed, and numerous public reports of violations of the law. It has even survived Trump’s belief that he was being personally spied on by the intelligence community, as well as any congressional fears that Trump could abuse the authority in the coming years. And though this extension lasts only six years, it’s inconceivable to me that it will ever be repealed at this point.

So what do we do? If we can’t fight this particular statutory authority, where’s the new front on surveillance? There are, it turns out, reasonable modifications that target surveillance more generally, and not in terms of any particular statutory authority. We need to look at US surveillance law more generally.

First, we need to strengthen the minimization procedures to limit incidental collection. Since the Internet was developed, all the world’s communications travel around in a single global network. It’s impossible to collect only foreign communications, because they’re invariably mixed in with domestic communications. This is called “incidental” collection, but that’s a misleading name. It’s collected knowingly, and searched regularly. The intelligence community needs much stronger restrictions on which American communications channels it can access without a court order, and rules that require they delete the data if they inadvertently collect it. More importantly, “collection” is defined as the point the NSA takes a copy of the communications, and not later when they search their databases.

Second, we need to limit how other law enforcement agencies can use incidentally collected information. Today, those agencies can query a database of incidental collection on Americans. The NSA can legally pass information to those other agencies. This has to stop. Data collected by the NSA under its foreign surveillance authority should not be used as a vehicle for domestic surveillance.

The most recent reauthorization modified this lightly, forcing the FBI to obtain a court order when querying the 702 data for a criminal investigation. There are still exceptions and loopholes, though.

Third, we need to end what’s called “parallel construction.” Today, when a law enforcement agency uses evidence found in this NSA database to arrest someone, it doesn’t have to disclose that fact in court. It can reconstruct the evidence in some other manner once it knows about it, and then pretend it learned of it that way. This right to lie to the judge and the defense is corrosive to liberty, and it must end.

Pressure to reform the NSA will probably first come from Europe. Already, European Union courts have pointed to warrantless NSA surveillance as a reason to keep Europeans’ data out of US hands. Right now, there is a fragile agreement between the EU and the United States — called “Privacy Shield” — that requires Americans to maintain certain safeguards for international data flows. NSA surveillance goes against that, and it’s only a matter of time before EU courts start ruling this way. That’ll have significant effects on both government and corporate surveillance of Europeans and, by extension, the entire world.

Further pressure will come from the increased surveillance coming from the Internet of Things. When your home, car, and body are awash in sensors, privacy from both governments and corporations will become increasingly important. Sooner or later, society will reach a tipping point where it’s all too much. When that happens, we’re going to see significant pushback against surveillance of all kinds. That’s when we’ll get new laws that revise all government authorities in this area: a clean sweep for a new world, one with new norms and new fears.

It’s possible that a federal court will rule on Section 702. Although there have been many lawsuits challenging the legality of what the NSA is doing and the constitutionality of the 702 program, no court has ever ruled on those questions. The Bush and Obama administrations successfully argued that defendants don’t have legal standing to sue. That is, they have no right to sue because they don’t know they’re being targeted. If any of the lawsuits can get past that, things might change dramatically.

Meanwhile, much of this is the responsibility of the tech sector. This problem exists primarily because Internet companies collect and retain so much personal data and allow it to be sent across the network with minimal security. Since the government has abdicated its responsibility to protect our privacy and security, these companies need to step up: Minimize data collection. Don’t save data longer than absolutely necessary. Encrypt what has to be saved. Well-designed Internet services will safeguard users, regardless of government surveillance authority.

For the rest of us concerned about this, it’s important not to give up hope. Everything we do to keep the issue in the public eye — and not just when the authority comes up for reauthorization again in 2024 — hastens the day when we will reaffirm our rights to privacy in the digital age.

This essay previously appeared in the Washington Post.

Yaghmour: Ten Days in Shenzhen

Post Syndicated from jake original https://lwn.net/Articles/745708/rss

On his blog, embedded developer Karim Yaghmour has written about his ten-day trip to Shenzhen, China, which is known as the “Silicon Valley of hardware”. His lengthy trip report covers much that would be of use to others who are thinking of making the trip, but also serves as an interesting travelogue even for those who are likely to never go. “The map didn’t disappoint and I was able to find a large number of kiosks selling some of the items I was interested in. Obviously many kiosks also had items that I had seen on Amazon or elsewhere as well. I was mostly focusing on things I hadn’t seen before. After a few hours of walking floors upon floors of shops, I was ready to start focusing on other aspects of my research: hard to source and/or evaluate components, tools and expanding my knowledge of what was available in the hardware space. Hint: TEGES’ [The Essential Guide to Electronics in Shenzhen] advice about having comfortable shoes and comfortable clothing is completely warranted.

Finding tools was relatively easy. TEGES indicates the building and floor to go to, and you’ll find most anything you can think of from rework stations, to pick-and-place machines, and including things like oscilloscopes, stereo microscopes, multimeters, screwdrivers, etc. In the process I saw some tools which I couldn’t immediately figure out the purpose for, but later found out their uses on some other visits. Satisfied with a first glance at the tools, I set out to look for one specific component I was having a hard time with. That proved a lot more difficult than anticipated. Actually I should qualify that. It was trivial to find tons of it, just not something that matched exactly what I needed. I used TEGES to identify one part of the market that seemed most likely to have what I was looking for, but again, I could find lots of it, just not what I needed.”