Tag Archives: Cali

Facebook User Pleads Guilty to Uploading Pirated Copy of Deadpool

Post Syndicated from Ernesto original https://torrentfreak.com/facebook-user-pleads-guilty-to-uploading-pirated-copy-of-deadpool-180522/

Every day, hundreds of millions of people use Facebook to share photos, videos and other information.

While most of the content posted on the site is relatively harmless, some people use it to share things they are not supposed to. A pirated copy of Deadpool, for example.

This is what the now 22-year-old Trevon Franklin from Fresno, California, did in early 2016. Just a week after the first installment of the box-office hit Deadpool premiered in theaters, he shared a pirated copy of the movie on the social network.

To be clear, Franklin wasn’t the person who originally made the copy available. He simply downloaded it from the file-sharing site Putlocker.is and then proceeded to upload it to his Facebook account, using the screen name “Tre-Von M. King.”

This post went viral with more than six million viewers ‘tuning in.’ While many people dream of this kind of attention, in this case, it meant that copyright holder Twentieth Century Fox and the feds were alerted as well.

The FBI launched a full-fledged investigation which eventually led to an indictment and the arrest of Franklin last summer.

After months of relative silence, Franklin has now signed a plea agreement with the Government where he admits to sharing the pirated film on Facebook. In return, the authorities will recommend a sentence reduction.

“Defendant admits that defendant is, in fact, guilty of the offense to which defendant is agreeing to plead guilty,” the plea agreement reads.

The legal paperwork, signed by both sides, states that Franklin downloaded the pirated copy from Putlocker, knowing full well that he didn’t have permission to do so. He then willfully shared it on Facebook where it was accessed by millions of people.

“Between February 20 and 22, 2016, while Deadpool was still in theaters and had not yet been made available for purchase by the public for home viewing, the copy of Deadpool defendant posted to his Facebook page had been viewed over 6,386,456 times,” the paperwork reads.

From the plea agreement

While a federal case over a Facebook upload may sound unlikely, the risk of legal trouble was pointed out to Franklin by others.

According to Facebook comments from 2016, several people warned “Tre-Von M. King” that it wasn’t wise to post copyright-infringing material on the social media platform. However, Franklin said he wasn’t worried.

It’s unclear why the US Government decided to pursue this case. Copyright infringement isn’t exactly rare on Facebook. However, the media attention and the high number of views may have prompted the authorities to set an example.

Under the terms of the plea agreement, Franklin will be sentenced for a Class A misdemeanor. This can lead to a maximum prison sentence of one year, followed by probation or supervised release, as well as a fine of up to $100,000. Meanwhile, he has waived his right to a trial by jury.

A copy of the plea agreement is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Internet Association Blasts MPAA’s ‘Crony Politics’

Post Syndicated from Ernesto original https://torrentfreak.com/internet-association-blasts-mpaas-crony-politics-180516/

Last month, MPAA Chairman and CEO Charles Rivkin used the Facebook privacy debacle to voice his concern about the current state of the Internet.

“The Internet is no longer nascent – and people around the world are growing increasingly uncomfortable with what it’s becoming,” Rivkin wrote in his letter to several Senators, linking Internet-related privacy breaches to regulation, immunities, and safe harbors.

“The moment has come for a national dialogue about restoring accountability on the internet. Whether through regulation, recalibration of safe harbors, or the exercise of greater responsibility by online platforms, something must change.”

While it’s good to see that the head of Hollywood’s main lobbying group is concerned about Facebook users, not everyone is convinced of his good intentions. Some suggest that the MPAA is hijacking the scandal to further its own, unrelated, interests.

This is exactly the position taken by the Internet Association, a US-based organization that represents the country’s leading Internet-based businesses, including Google, Twitter, Amazon, Reddit, Yahoo, and Facebook.

Several of these companies were the target of the MPAA’s criticism, named or not, which prompted the Internet Association to respond.

In an open letter to House Energy and Commerce Committee Chairman Greg Walden, the group’s president and CEO, Michael Beckerman, lashes out against the MPAA and similar lobbying groups. These groups hijack the regulatory debate with anti-internet lobbying efforts, he says.

“Look no further than the gratuitous letter Motion Picture Association of America, Inc. Chairman & CEO Charles Rivkin submitted to the Energy and Commerce Committee during your recent Zuckerberg hearing,” Beckerman writes.

“The hearing had nothing to do with the Motion Picture industry, but Mr. Rivkin demonstrated shameless rent-seeking by calling for regulation on internet companies simply in an effort to protect his clients’ business interest.”

These rent-seeking efforts are part of the “crony politics” used by “pre-internet” companies to protect their old business models, the Internet Association’s CEO adds.

“This blatant display of crony politics is not unique to the big Hollywood studios, but rather emblematic of a broader anti-consumer lobbying campaign. Many other pre-internet industries —telcos, legacy tech firms, hotels, and others — are looking to defend old business models by regulating a rising competitor to the clear detriment of consumers.”

These harsh words show that the rift between Silicon Valley and Hollywood is still wide open.

It’s clear that the MPAA and other copyright industry groups are still hoping for stricter regulation to ensure that Internet companies are held accountable. Privacy is generally not their main focus though.

They mostly want companies such as Google and Facebook to prevent piracy and compensate rightsholders. Whether using the Facebook privacy scandal was a good way to bring this message to the forefront depends on which camp one is in.

While the Internet Association bashes the MPAA’s efforts, they don’t discount the idea that more can be done to prevent and stop abuse.

“As technology and services evolve to better meet user needs, bad actors will find ways to take advantage. Our members are ever vigilant and work hard to stop them. The task is never done, and we pledge to work harder and do even better,” Beckerman notes.

The Internet Association’s full letter, spotted by Variety, is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Sending Inaudible Commands to Voice Assistants

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/sending_inaudib.html

Researchers have demonstrated the ability to send inaudible commands to voice assistants like Alexa, Siri, and Google Assistant.

Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online – simply with music playing over the radio.

A group of students from University of California, Berkeley, and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website.

This month, some of those Berkeley researchers published a research paper that went further, saying they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon’s Echo speaker might hear an instruction to add something to your shopping list.

[$] Autoscaling for Kubernetes workloads

Post Syndicated from corbet original https://lwn.net/Articles/754153/rss

Technologies like containers, clusters, and Kubernetes offer the prospect of rapidly scaling the available computing resources to match variable demands placed on the system. Actually implementing that scaling can be a challenge, though. During KubeCon + CloudNativeCon Europe 2018, Frederic Branczyk from CoreOS (now part of Red Hat) held a packed session to introduce a standard and officially recommended way to scale workloads automatically in Kubernetes clusters.
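
The built-in Horizontal Pod Autoscaler is the standard mechanism that sessions like this build on. As a minimal sketch (assuming a cluster with the metrics-server add-on installed and an existing Deployment named my-app; both are assumptions, not details from the session):

$ kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80
$ kubectl get hpa my-app --watch

The first command creates a HorizontalPodAutoscaler that keeps my-app between 2 and 10 replicas while targeting 80% average CPU utilization; the second watches the autoscaler reconcile observed load against that target.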

Welcome Josh — Data Center Technician

Post Syndicated from Yev original https://www.backblaze.com/blog/welcome-josh-datacenter-technician/

The Backblaze production team is growing, and that means the data center is gaining some new faces. One of the newest to join the team is Josh! Let’s learn a bit more about Josh, shall we?

What is your Backblaze Title?
I’m a Data Center Technician in the Sacramento area.

Where are you originally from?
I lived all over California’s Central Valley growing up.

What attracted you to Backblaze?
Backblaze is the best of a few worlds — cool startup meets professional DIYers meets transparent tech company (a rare thing).

What do you expect to learn while being at Backblaze?
I expect to learn about Data Center operations, and continue to develop the Linux skills that landed me here.

Favorite hobby?
Building and playing with new and useful toys.

Star Trek or Star Wars?
Darmok and Jalad at Tanagra.

Coke or Pepsi?
Good Beer.

Favorite food?
Tacos. No, burgers. No, it’s sushi. No, gyros. I can’t choose.

Why do you like certain things?
I like things that I can take apart and rebuild and turn every knob and adjust every piece. It means there’s a lot to learn, and I definitely like that.

Darmok and Jalad on the ocean! Welcome aboard Josh 😀

The post Welcome Josh — Data Center Technician appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Iconic Megaupload.com Domain Has a New Owner

Post Syndicated from Ernesto original https://torrentfreak.com/iconic-megaupload-com-domain-has-a-new-owner-180509/

Following the 2012 raid on Megaupload and Kim Dotcom, U.S. and New Zealand authorities seized millions of dollars in cash and other property, located around the world.

Claiming the assets were obtained through copyright and money laundering crimes, the U.S. government launched separate civil cases in which it asked the court to forfeit a wide variety of seized possessions of the Megaupload defendants.

One of these cases was lost after the U.S. branded Dotcom and his colleagues as “fugitives”. The defense team appealed the ruling, but lost again, and a subsequent petition at the Supreme Court was denied.

As a result, Dotcom had to leave behind several bank accounts and servers, as well as all hope of getting some of his dearly treasured domain names back. This includes the most valuable domain of all, Megaupload.com.

The forfeiture was made final earlier this year, but since then little was known about the fate of the domain names. This week, however, it became clear that the US Government didn’t plan to hold on to them, as Megaupload.com now has a new owner.

According to the latest Whois information, which was updated late last week, RegistrarAds Inc is now the official Megaupload.com owner. The previous owner was Megaupload Limited, under FBI control.

New owner

RegistrarAds is a company based in Vancouver, Washington, which specializes in buying domain names. While we could not find a corporate website, the web is littered with disputes and other references to the company’s domain name dealings.

Michelin North America, for example, filed a complaint against RegistrarAds over its registration of the michelin-group.com domain, with success. Similarly, the California Milk Processor Board, most famous for its Got Milk? ads, won a WIPO domain dispute over gotpuremilk.com.

How RegistrarAds obtained the Megaupload domain name isn’t entirely clear. It wasn’t dropped by the registry, but it may have been scooped up in an auction. Theoretically, the US Government could have sold it too, but we see no evidence for that.

It’s also unknown what the company’s plans are for Megaupload.com. However, given the company’s track record, it’s unlikely to do anything file-sharing related. The domain’s nameservers haven’t been updated yet, and the site remains unreachable at the time of writing.

TorrentFreak reached out to RegistrarAds, hoping to find out more, but we have yet to hear back.

Megaupload.com is not the only domain that changed owners recently. The same happened to Megaclick.com, which is now registered to Buydomains.com. Several of the other seized Megaupload domain names remain in possession of US authorities, for now.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Cloudflare Fails to Exclude Daily Stormer Evidence From Piracy Trial

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-fails-to-exclude-daily-stormer-evidence-from-piracy-trial-180504/

Last summer Cloudflare CEO Matthew Prince decided to terminate the account of controversial neo-Nazi site Daily Stormer.

“I woke up this morning in a bad mood and decided to kick them off the Internet,” he announced.

The company’s lawyers later explained that the move was meant as an “intellectual exercise” to start a conversation regarding censorship and free speech on the internet. However, this discussion went much further than Prince had planned.

For years, Cloudflare had a policy not to remove any accounts without a court order, so when that policy was set aside, eyebrows were raised. In particular, copyright holders wondered why the company could terminate this account but not those of the most notorious pirate sites.

This is also why The Daily Stormer removal became an issue in the piracy liability case previously filed by adult entertainment publisher ALS Scan. After Cloudflare’s CEO was questioned on the matter, it could be raised before a jury during the trial as well.

Cloudflare didn’t fancy this prospect. In March, the company asked the court to preclude any evidence related to Daily Stormer or other hate groups from the upcoming trial, fearing that it would lead to “guilt by association.”

“The apparent reason that ALS seeks to offer is not for its probative value but rather for its distracting emotional impact,” Cloudflare argued.

“Given the strong feelings such evidence would almost certainly arouse among members of the jury, this evidence creates an unwarranted and impermissible risk of unfair prejudice to Cloudflare.”

However, California District Court Judge George Wu was not receptive to this argument. Following a hearing on the matter last week the Judge denied the motion, which means that ALS is allowed to use the Daily Stormer case at trial.

“[Cloudflare’s motion] to Exclude Evidence Relating to Provision or Termination of Services to Hate Groups is DENIED.”

Motion denied

In hindsight, Cloudflare’s decision to disconnect Daily Stormer the way it did might not have been the best option, but it’s too late now.

According to recent court filings, ALS and Cloudflare have tried to reach a settlement, but thus far that hasn’t happened. This means that the case will move to the scheduled trial, unless both sides can make peace beforehand.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

EC2 Fleet – Manage Thousands of On-Demand and Spot Instances with One Request

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-fleet-manage-thousands-of-on-demand-and-spot-instances-with-one-request/

EC2 Spot Fleets are really cool. You can launch a fleet of Spot Instances that spans EC2 instance types and Availability Zones without having to write custom code to discover capacity or monitor prices. You can set the target capacity (the size of the fleet) in units that are meaningful to your application and have Spot Fleet create and then maintain the fleet on your behalf. Our customers are creating Spot Fleets of all sizes. For example, one financial service customer runs Monte Carlo simulations across 10 different EC2 instance types. They routinely make requests for hundreds of thousands of vCPUs and count on Spot Fleet to give them access to massive amounts of capacity at the best possible price.

EC2 Fleet
Today we are extending and generalizing the set-it-and-forget-it model that we pioneered in Spot Fleet with EC2 Fleet, a new building block that gives you the ability to create fleets that are composed of a combination of EC2 On-Demand, Reserved, and Spot Instances with a single API call. You tell us what you need, capacity and instance-wise, and we’ll handle all the heavy lifting. We will launch, manage, monitor and scale instances as needed, without the need for scaffolding code.

You can specify the capacity of your fleet in terms of instances, vCPUs, or application-oriented units, and also indicate how much of the capacity should be fulfilled by Spot Instances. The application-oriented units allow you to specify the relative power of each EC2 instance type in a way that directly maps to the needs of your application. All three capacity specification options (instances, vCPUs, and application-oriented units) are known as weights.

I think you’ll find a number of ways this feature makes managing a fleet of instances easier, and believe that you will also find the team’s near-term feature roadmap of interest (more on that in a bit).

Using EC2 Fleet
There are a number of ways that you can use this feature, whether you’re running a stateless web service, a big data cluster or a continuous integration pipeline. Today I’m going to describe how you can use EC2 Fleet for genomic processing, but this is similar to workloads like risk analysis, log processing or image rendering. Modern DNA sequencers can produce multiple terabytes of raw data each day; to process that data into meaningful information in a timely fashion, you need lots of processing power. I’ll be showing you how to deploy a “grid” of worker nodes that can quickly crunch through secondary analysis tasks in parallel.

Projects in genomics can use the elasticity EC2 provides to experiment and try out new pipelines on hundreds or even thousands of servers. With EC2 you can access as many cores as you need and only pay for what you use. Prior to today, you would need to use the RunInstances API or an Auto Scaling group for the On-Demand & Reserved Instance portion of your grid. To get the best price performance you’d also create and manage a Spot Fleet or multiple Spot Auto Scaling groups with different instance types if you wanted to add Spot Instances to turbo-boost your secondary analysis. Finally, to automate scaling decisions across multiple APIs and Auto Scaling groups you would need to write Lambda functions that periodically assess your grid’s progress & backlog, as well as current Spot prices – modifying your Auto Scaling Groups and Spot Fleets accordingly.

You can now replace all of this with a single EC2 Fleet, analyzing genomes at scale for as little as $1 per analysis. In my grid, each step in the pipeline requires 1 vCPU and 4 GiB of memory, a perfect match for M4 and M5 instances with 4 GiB of memory per vCPU. I will create a fleet using M4 and M5 instances with weights that correspond to the number of vCPUs on each instance:

  • m4.16xlarge – 64 vCPUs, weight = 64
  • m5.24xlarge – 96 vCPUs, weight = 96

This is expressed in a template that looks like this:

"Overrides": [
{
  "InstanceType": "m4.16xlarge",
  "WeightedCapacity": 64,
},
{
  "InstanceType": "m5.24xlarge",
  "WeightedCapacity": 96,
},
]

By default, EC2 Fleet will select the most cost effective combination of instance types and Availability Zones (both specified in the template) using the current prices for the Spot Instances and public prices for the On-Demand Instances (if you specify instances for which you have matching RIs, your discounts will apply). The default mode takes weights into account to get the instances that have the lowest price per unit. So for my grid, EC2 Fleet will find the instance that offers the lowest price per vCPU.

Now I can request capacity in terms of vCPUs, knowing EC2 Fleet will select the lowest cost option using only the instance types I’ve defined as acceptable. Also, I can specify how many vCPUs I want to launch using On-Demand or Reserved Instance capacity and how many vCPUs should be launched using Spot Instance capacity:

"TargetCapacitySpecification": {
	"TotalTargetCapacity": 2880,
	"OnDemandTargetCapacity": 960,
	"SpotTargetCapacity": 1920,
	"DefaultTargetCapacityType": "Spot"
}

The above means that I want a total of 2880 vCPUs, with 960 vCPUs fulfilled using On-Demand and 1920 using Spot. The On-Demand price per vCPU is lower for m5.24xlarge than the On-Demand price per vCPU for m4.16xlarge, so EC2 Fleet will launch 10 m5.24xlarge instances to fulfill 960 vCPUs. Based on current Spot pricing (again, on a per-vCPU basis), EC2 Fleet will choose to launch 30 m4.16xlarge instances or 20 m5.24xlarges, delivering 1920 vCPUs either way.

Putting it all together, I have a single file (fl1.json) that describes my fleet:

    "LaunchTemplateConfigs": [
        {
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": "lt-0e8c754449b27161c",
                "Version": "1"
            }
        "Overrides": [
        {
          "InstanceType": "m4.16xlarge",
          "WeightedCapacity": 64,
        },
        {
          "InstanceType": "m5.24xlarge",
          "WeightedCapacity": 96,
        },
      ]
        }
    ],
    "TargetCapacitySpecification": {
        "TotalTargetCapacity": 2880,
        "OnDemandTargetCapacity": 960,
        "SpotTargetCapacity": 1920,
        "DefaultTargetCapacityType": "Spot"
    }
}

I can launch my fleet with a single command:

$ aws ec2 create-fleet --cli-input-json file:///home/ec2-user/fl1.json
{
    "FleetId":"fleet-838cf4e5-fded-4f68-acb5-8c47ee1b248a"
}

My entire fleet is created within seconds and was built using 10 m5.24xlarge On-Demand Instances and 30 m4.16xlarge Spot Instances, since the current Spot price was 1.5¢ per vCPU for m4.16xlarge and 1.6¢ per vCPU for m5.24xlarge.

Now let’s imagine my grid has crunched through its backlog and no longer needs the additional Spot Instances. I can then modify the size of my fleet by changing the target capacity in my fleet specification, like this:

{
    "TotalTargetCapacity": 960
}
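
This change is applied with the modify-fleet command, and the result can be checked with describe-fleets. A minimal sketch using the fleet ID returned earlier:

$ aws ec2 modify-fleet --fleet-id fleet-838cf4e5-fded-4f68-acb5-8c47ee1b248a \
  --target-capacity-specification TotalTargetCapacity=960
$ aws ec2 describe-fleets --fleet-ids fleet-838cf4e5-fded-4f68-acb5-8c47ee1b248a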

Since 960 equals the number of On-Demand vCPUs I had requested, when I describe my fleet I will see all of my capacity being delivered using On-Demand capacity:

"TargetCapacitySpecification": {
	"TotalTargetCapacity": 960,
	"OnDemandTargetCapacity": 960,
	"SpotTargetCapacity": 0,
	"DefaultTargetCapacityType": "Spot"
}

When I no longer need my fleet I can delete it and terminate the instances in it like this:

$ aws ec2 delete-fleets --fleet-ids fleet-838cf4e5-fded-4f68-acb5-8c47ee1b248a \
  --terminate-instances
{
    "UnsuccessfulFleetDletetions": [],
    "SuccessfulFleetDeletions": [
        {
            "CurrentFleetState": "deleted_terminating",
            "PreviousFleetState": "active",
            "FleetId": "fleet-838cf4e5-fded-4f68-acb5-8c47ee1b248a"
        }
    ]
}

Earlier I described how RI discounts apply when EC2 Fleet launches instances for which you have matching RIs, so you might be wondering how else RI customers benefit from EC2 Fleet. Let’s say that I own regional RIs for M4 instances. In my EC2 Fleet I would remove m5.24xlarge and specify m4.10xlarge and m4.16xlarge. Then when EC2 Fleet creates the grid, it will quickly find M4 capacity across the sizes and AZs I’ve specified, and my RI discounts apply automatically to this usage.

In the Works
We plan to connect EC2 Fleet and EC2 Auto Scaling groups. This will let you create a single fleet that mixes instance types and Spot, Reserved, and On-Demand Instances, while also taking advantage of EC2 Auto Scaling features such as health checks and lifecycle hooks. This integration will also bring EC2 Fleet functionality to services such as Amazon ECS, Amazon EKS, and AWS Batch that build on and make use of EC2 Auto Scaling for fleet management.

Available Now
You can create and make use of EC2 Fleets today in all public AWS Regions!

Jeff;

Nike Sued for Running Pirated Software

Post Syndicated from Ernesto original https://torrentfreak.com/nike-sued-for-running-pirated-software-100426/

Virtually every piece of software is cracked and made available on the Internet, through a myriad of pirate sources.

These are generally visited by regular people out to save a few bucks, but according to Quest Software, pirated license keys found their way to Nike’s office as well.

The company, known for developing a variety of database software, filed a lawsuit in an Oregon federal court this week, accusing Nike of copyright infringement. Both parties have had a software license agreement in place since 2001, but during an audit last year, Quest noticed that not all products were properly licensed.

“That audit revealed that Nike had deployed Quest Software Products far in excess of the scope allowed by the parties’ SLA,” Quest writes in their complaint, filed at a federal court in Oregon.

Quest keeps a database of all valid keys and found that Nike used “cracked” versions, which are generally circulated on pirate sites. This is something Nike must have been aware of, it adds.

“The audit also revealed that Nike had used pirated keys to bypass the Quest License Key System and made unauthorized copies of certain Quest Software Products by breaking the technological security measures Quest had in place,” Quest writes.

“Upon information and belief, to obtain a pirated key for Quest Software Products, customers must affirmatively seek out and obtain pirated keys on download sites known to traffic in counterfeit or illegally downloaded intellectual property, such as BitTorrent.”

Pirated keys?

When the software company found out, it confronted Nike with the findings. However, according to the complaint, Nike refused to purchase the additional licenses that were required for its setup. This prompted Quest to go to court instead.

At this point, it’s not entirely clear to Quest how many pirated keys were used on Nike computers. That’s something the company would like to find out during the discovery process.

Quest is certain, however, that its customer crossed a line. It accuses Nike of copyright infringement, breach of contract, and violating the DMCA’s circumvention provisions.

The company requests an injunction restraining Nike from any infringing activity and demands compensation for the damages it suffered as a result. The exact amount of these damages will have to be determined at trial.

A copy of the complaint is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Continued: the answers to your questions for Eben Upton

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/eben-q-a-2/

Last week, we shared the first half of our Q&A with Raspberry Pi Trading CEO and Raspberry Pi creator Eben Upton. Today we follow up with all your other questions, including your expectations for a Raspberry Pi 4, Eben’s dream add-ons, and whether we really could go smaller than the Zero.

Live Q&A with Eben Upton, creator of the Raspberry Pi

Get your questions to us now using #AskRaspberryPi on Twitter

With internet security becoming more necessary, will there be automated versions of VPN on an SD card?

There are already third-party tools which turn your Raspberry Pi into a VPN endpoint. Would we do it ourselves? Like the power button, it’s one of those cases where there are a million things we could do and so it’s more efficient to let the community get on with it.

Just to give a counterexample, while we don’t generally invest in optimising for particular use cases, we did invest a bunch of money into optimising Kodi to run well on Raspberry Pi, because we found that very large numbers of people were using it. So, if we find that we get half a million people a year using a Raspberry Pi as a VPN endpoint, then we’ll probably invest money into optimising it and feature it on the website as we’ve done with Kodi. But I don’t think we’re there today.

Have you ever seen any Pis running and doing important jobs in the wild, and if so, how does it feel?

It’s amazing how often you see them driving displays, for example in radio and TV studios. Of course, it feels great. There’s something wonderful about the geographic spread as well. The Raspberry Pi desktop is quite distinctive, both in its previous incarnation with the grey background and logo, and the current one where we have Greg Annandale’s road picture.

The PIXEL desktop on Raspberry Pi

And so it’s funny when you see it in places. Somebody sent me a video of them teaching in a classroom in rural Pakistan and in the background was Greg’s picture.

Raspberry Pi 4!?!

There will be a Raspberry Pi 4, obviously. We get asked about it a lot. I’m sticking to the guidance that I gave people that they shouldn’t expect to see a Raspberry Pi 4 this year. To some extent, the opportunity to do the 3B+ was a surprise: we were surprised that we’ve been able to get 200MHz more clock speed, triple the wireless and wired throughput, and better thermals, and still stick to the $35 price point.

We’re up against the wall from a silicon perspective; we’re at the end of what you can do with the 40nm process. It’s not that you couldn’t clock the processor faster, or put a larger processor which can execute more instructions per clock in there, it’s simply about the energy consumption and the fact that you can’t dissipate the heat. So we’ve got to go to a smaller process node and that’s an order of magnitude more challenging from an engineering perspective. There’s more effort, more risk, more cost, and all of those things are challenging.

With 3B+ out of the way, we’re going to start looking at this now. For the first six months or so we’re going to be figuring out exactly what people want from a Raspberry Pi 4. We’re listening to people’s comments about what they’d like to see in a new Raspberry Pi, and I’m hoping by early autumn we should have an idea of what we want to put in it and a strategy for how we might achieve that.

Could you go smaller than the Zero?

The challenge with Zero is that we’re periphery-limited. If you run your hand around the unit, there is no edge of that board that doesn’t have something there. So the question is: “If you want to go smaller than Zero, what feature are you willing to throw out?”

It’s a single-sided board, so you could certainly halve the PCB area if you fold the circuitry and use both sides, though you’d have to lose something. You could give up some GPIO and go back to 26 pins like the first Raspberry Pi. You could give up the camera connector, you could go to micro HDMI from mini HDMI. You could remove the SD card and just do USB boot. I’m inventing a product live on air! But really, you could get down to two thirds and lose a bunch of GPIO – it’s hard to imagine you could get to half the size.

What’s the one feature that you wish you could outfit on the Raspberry Pi that isn’t cost effective at this time? Your dream feature.

Well, more memory. There are obviously technical reasons why we don’t have more memory on there, but there are also market reasons. People ask “why doesn’t the Raspberry Pi have more memory?”, and my response is typically “go and Google ‘DRAM price’”. We’re used to the price of memory going down. And currently, we’re going through a phase where this has turned around and memory is getting more expensive again.

Machine learning would be interesting. There are machine learning accelerators which would be interesting to put on a piece of hardware. But again, they are not going to be used by everyone, so according to our method of pricing what we might add to a board, machine learning gets treated like a $50 chip. But that would be lovely to do.

Which citizen science projects using the Pi have most caught your attention?

I like the wildlife camera projects. We live out in the countryside in a little village, and we’re conscious of being surrounded by nature but we don’t see a lot of it on a day-to-day basis. So I like the nature cam projects, though, to my everlasting shame, I haven’t set one up yet. There’s a range of them, from very professional products to people taking a Raspberry Pi and a camera and putting them in a plastic box. So those are good fun.

The Raspberry Shake seismometer

And there’s Meteor Pi from the Cambridge Science Centre, that’s a lot of fun. And the seismometer Raspberry Shake – that sort of thing is really nice. We missed the recent South Wales earthquake; perhaps we should set one up at our Californian office.

How does it feel to go to bed every day knowing you’ve changed the world for the better in such a massive way?

What feels really good is that when we started this in 2006 nobody else was talking about it, but now we’re part of a very broad movement.

We were in a really bad way: we’d seen a collapse in the number of applicants applying to study Computer Science at Cambridge and elsewhere. In our view, this reflected a move away from seeing technology as ‘a thing you do’ to seeing it as a ‘thing that you have done to you’. It is problematic from the point of view of the economy, industry, and academia, but most importantly it damages the life prospects of individual children, particularly those from disadvantaged backgrounds. The great thing about STEM subjects is that you can’t fake being good at them. There are a lot of industries where your Dad can get you a job based on who he knows and then you can kind of muddle along. But if your dad gets you a job building bridges and you suck at it, after the first or second bridge falls down, then you probably aren’t going to be building bridges anymore. So access to STEM education can be a great driver of social mobility.

By the time we were launching the Raspberry Pi in 2012, there was this wonderful movement going on. Code Club, for example, and CoderDojo came along. Lots of different ways of trying to solve the same problem. What feels really, really good is that we’ve been able to do this as part of an enormous community. And some parts of that community became part of the Raspberry Pi Foundation – we merged with Code Club, we merged with CoderDojo, and we continue to work alongside a lot of these other organisations. So in the two seconds it takes me to fall asleep after my face hits the pillow, that’s what I think about.

We’re currently advertising a Programme Manager role in New Delhi, India. Did you ever think that Raspberry Pi would be advertising a role like this when you were bringing together the Foundation?

No, I didn’t.

But if you told me we were going to be hiring somewhere, India probably would have been top of my list because there’s a massive IT industry in India. When we think about our interaction with emerging markets, India, in a lot of ways, is the poster child for how we would like it to work. There have already been some wonderful deployments of Raspberry Pi, for example in Kerala, without our direct involvement. And we think we’ve got something that’s useful for the Indian market. We have a product, we have clubs, we have teacher training. And we have a body of experience in how to teach people, so we have a physical commercial product as well as a charitable offering that we think are a good fit.

It’s going to be massive.

What is your favourite BBC type-in listing?

There was a game called Codename: Druid. There is a famous game called Codename: Droid which was the sequel to Stryker’s Run, which was an awesome, awesome game. And there was a type-in game called Codename: Druid, which was at the bottom end of what you would consider a commercial game.

And I remember typing that in. And what was really cool about it was that the next month, the guy who wrote it did another article that talks about the memory map and which operating system functions used which bits of memory. So if you weren’t going to do disc access, which bits of memory could you trample on and know the operating system would survive.

See the full listing for Babbage versus Bugs in the Raspberry Pi 2018 Annual

I still like type-in listings. The Raspberry Pi 2018 Annual has a type-in listing that I wrote for a Babbage versus Bugs game. I will say that’s not the last type-in listing you will see from me in the next twelve months. And if you download the PDF, you could probably copy and paste it into your favourite text editor to save yourself some time.

The post Continued: the answers to your questions for Eben Upton appeared first on Raspberry Pi.

Netflix, Amazon and Hollywood Sue “SET TV” Over IPTV Piracy

Post Syndicated from Ernesto original https://torrentfreak.com/netflix-amazon-and-hollywood-sue-set-tv-over-iptv-piracy-180422/

In recent years, piracy streaming tools and services have become a prime target for copyright enforcers.

This is particularly true for the Alliance for Creativity and Entertainment (ACE), an anti-piracy partnership forged between Hollywood studios, Netflix, Amazon, and more than two dozen other companies.

After taking action against Kodi-powered devices Tickbox and Dragonbox, key ACE members have now filed a similar lawsuit against the Florida-based company Set Broadcast, LLC, which sells the popular IPTV service SET TV.

The complaint, filed at a California federal court on Friday, further lists company owner Jason Labbosiere and employee Nelson Johnson among the defendants.

According to the movie companies, the Set TV software is little more than a pirate tool, allowing buyers to stream copyright infringing content.

“Defendants market and sell subscriptions to ‘Setvnow,’ a software application that Defendants urge their customers to use as a tool for the mass infringement of Plaintiffs’ copyrighted motion pictures and television shows,” the complaint reads.

In addition to the software, the company also offers a preloaded box. Both allow users to connect to live streams of TV channels and ‘on demand’ content. The latter includes movies that are still in theaters, which SET TV allegedly streams through third-party sources.

“For its on-demand options, Setvnow relies on third-party sources that illicitly reproduce copyrighted works and then provide streams of popular content such as movies still exclusively in theaters and television shows.”

From the complaint

The intended use of SET TV is clear, according to the movie companies. They frame it as a pirate service and believe that this is the main draw for consumers.

“Defendants promote the use of Setvnow for overwhelmingly, if not exclusively, infringing purposes, and that is how their customers use Setvnow,” the complaint reads.

Interestingly, the complaint also states that SET TV pays for sponsored reviews to reach a broader audience. The videos, posted by popular YouTubers such as Solo Man, who is quoted in the complaint, advertise the IPTV service.

“[The] sponsored reviewer promotes Setvnow as a quick and easy way to access on demand movies: ‘You have new releases right there and you simply click on the movie … you click it and click on play again and here you have the movie just like that in 1 2 3 in beautiful HD quality’.”

The lawsuit aims to bring an end to this. The movie companies ask the California District Court for an injunction to shut down the infringing service and to impound all pre-loaded devices. In addition, they’re requesting statutory damages, which could run to several million dollars.

At the time of writing, the SET TV website is still online, selling subscriptions. The company itself has yet to comment on the allegations.

A copy of the complaint is available here (pdf), courtesy of GeekWire.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Securing Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/securing_electi_1.html

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

Recently, there have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware on to the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers’ conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It’s important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn’t be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, are that hackable. They’re computers — often ancient computers running operating systems no longer supported by the manufacturers — and they don’t have any magical security technology that the rest of the industry isn’t privy to. If anything, they’re less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.

We’re not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can’t use the security systems available to banking and other high-value applications.

We can securely bank online, but can’t securely vote online. If we could do away with anonymity — if everyone could check that their vote was counted correctly — then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can’t, so we need to accept that our voting systems are insecure. We need an election system that is resilient to the threats. And for many parts of the system, that means paper.

Let’s start with the voter rolls. We know they’ve already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That’s just one possibility. A well-executed attack that deletes, for example, one in five voters at random — or changes their addresses — would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment.

Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-once media like a DVD. Copies of that DVD, or — even better — a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as a final true record in a recount in case of problems. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with. Additionally, we need better coordination and communications when incidents occur.

It’s vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it’s easy to agree on strong security. But after the vote, someone is the presumptive winner — and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it’s too late to agree on anything.

The politicians running in the election shouldn’t have to argue their challenges in court. Getting elections right is in the interest of all citizens. Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don’t do that in the US.

Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because a few districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don’t go nearly far enough. The constitution delegates elections to the states but allows Congress to “make or alter such Regulations”. In 1845, Congress set a nationwide election day. Today, we need it to set uniform and strict election standards.

This essay originally appeared in the Guardian.

Announcing Coolest Projects North America

Post Syndicated from Courtney Lentz original https://www.raspberrypi.org/blog/coolest-projects-north-america/

The Raspberry Pi Foundation loves to celebrate people who use technology to solve problems and express themselves creatively, so we’re proud to expand the incredibly successful event Coolest Projects to North America. This free event will be held on Sunday 23 September 2018 at the Discovery Cube Orange County in Santa Ana, California.

What is Coolest Projects?

Coolest Projects is a world-leading showcase that empowers and inspires the next generation of digital creators, innovators, changemakers, and entrepreneurs. The event is both a competition and an exhibition to give young digital makers aged 7 to 17 a platform to celebrate their successes, creativity, and ingenuity.

In 2012, Coolest Projects was conceived as an opportunity for CoderDojo Ninjas to showcase their work and for supporters to acknowledge these achievements. Week after week, Ninjas would meet up to work diligently on their projects, hacks, and code; however, it can be difficult for them to see their long-term progress on a project when they’re concentrating on its details on a weekly basis. Coolest Projects became a dedicated time each year for Ninjas and supporters to reflect, celebrate, and share both the achievements and challenges of the maker’s journey.

Coolest Projects North America

Not only is Coolest Projects expanding to North America, it’s also expanding its participant pool! Members of our team have met so many amazing young people creating in all areas of the world that it simply made sense to widen our outreach to include Code Clubs, students of Raspberry Pi Certified Educators, and members of the Raspberry Jam community at large, as well as CoderDojo attendees.

Exhibit and attend Coolest Projects

Coolest Projects is a free, family- and educator-friendly event. Young people can apply to exhibit their projects, and the general public can register to attend this one-day event. Be sure to register today, because you make Coolest Projects what it is: the coolest.

The post Announcing Coolest Projects North America appeared first on Raspberry Pi.

Backblaze at NAB 2018 in Las Vegas

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backblaze-at-nab-2018-in-las-vegas/

Backblaze just returned from exhibiting at NAB in Las Vegas, April 9-12, where the response to our recent announcements was tremendous. In case you missed the news, Backblaze B2 Cloud Storage continues to extend its lead as the most affordable, high-performance cloud on the planet.

Backblaze’s News at NAB

The Backblaze booth just before opening

What We Were Asked at NAB

Our booth was busy from start to finish with attendees interested in learning more about Backblaze and B2 Cloud Storage. Here are the questions we were asked most often in the booth.

Q. How long has Backblaze been in business?
A. The company was founded in 2007. Today, we have over 500 petabytes of data from customers in over 150 countries.

Q. Where is your data stored?
A. We have data centers in California and Arizona and expect to expand to Europe by the end of the year.

Q. How can your services be so inexpensive?
A. Backblaze’s goal from the beginning was to offer cloud backup and storage that was easy to use and affordable. All the existing options were simply too expensive to be viable, so we created our own infrastructure. Our purpose-built storage system — the Backblaze Storage Pod — is recognized as one of the most cost-efficient storage platforms available.

Q. Tell me about your hardware.
A. Backblaze’s Storage Pods hold 60 HDDs each, containing as much as 720TB of data per pod, stored using Reed-Solomon error correction. Storage Pods are arranged in Tomes, with twenty Storage Pods making up a Vault.

Q. Where do you fit in the data workflow?
A. People typically use B2 for archiving completed projects. All data is readily available for download from B2, making it more convenient than off-line storage. In addition, DAM and MAM systems such as CatDV, axle ai, Cantemo, and others have integrated with B2 to store raw images behind the proxies.

Q. Who uses B2 in the M&E business?
A. KLRU-TV, the PBS station in Austin, Texas, uses B2 to archive its entire 43-year catalog of Austin City Limits episodes and related materials. WunderVu, the production house for Pixvana, uses B2 to back up and archive the local storage systems on which they build virtual reality experiences for their customers.

Q. You’re the company that publishes the hard drive stats, right?
A. Yes, we are!

Were You at NAB?

If you were, we hope you stopped by the Backblaze booth to say hello. We’d like to hear what you saw at the show that was interesting or exciting. Please tell us in the comments.

The post Backblaze at NAB 2018 in Las Vegas appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Now You Can Create Encrypted Amazon EBS Volumes by Using Your Custom Encryption Keys When You Launch an Amazon EC2 Instance

Post Syndicated from Nishit Nagar original https://aws.amazon.com/blogs/security/create-encrypted-amazon-ebs-volumes-custom-encryption-keys-launch-amazon-ec2-instance-2/

Amazon Elastic Block Store (EBS) offers an encryption solution for your Amazon EBS volumes so you don’t have to build, maintain, and secure your own infrastructure for managing encryption keys for block storage. Amazon EBS encryption uses AWS Key Management Service (AWS KMS) customer master keys (CMKs) when creating encrypted Amazon EBS volumes, providing you all the benefits associated with using AWS KMS. You can specify either an AWS managed CMK or a customer-managed CMK to encrypt your Amazon EBS volume. If you use a customer-managed CMK, you retain granular control over your encryption keys, such as having AWS KMS rotate your CMK every year. To learn more about creating CMKs, see Creating Keys.
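
If you don’t yet have a customer-managed CMK, you can also create one from the AWS CLI. A minimal sketch (the description and alias name below are illustrative, not prescribed):

$ aws kms create-key --description "CMK for encrypting EBS volumes"
$ aws kms create-alias --alias-name alias/my-ebs-cmk \
  --target-key-id <KeyId returned by create-key>

The create-key call returns the key’s ID and ARN, either of which can be used in the examples that follow.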

In this post, we demonstrate how to create an encrypted Amazon EBS volume using a customer-managed CMK when you launch an EC2 instance from the EC2 console, AWS CLI, and AWS SDK.

Creating an encrypted Amazon EBS volume from the EC2 console

Follow these steps to launch an EC2 instance from the EC2 console with Amazon EBS volumes that are encrypted by customer-managed CMKs:

  1. Sign in to the AWS Management Console and open the EC2 console.
  2. Select Launch instance, and then, in Step 1 of the wizard, select an Amazon Machine Image (AMI).
  3. In Step 2 of the wizard, select an instance type, and then provide additional configuration details in Step 3. For details about configuring your instances, see Launching an Instance.
  4. In Step 4 of the wizard, specify additional EBS volumes that you want to attach to your instances.
  5. To create an encrypted Amazon EBS volume, first add a new volume by selecting Add new volume. Leave the Snapshot column blank.
  6. In the Encrypted column, select your CMK from the drop-down menu. You can also paste the full Amazon Resource Name (ARN) of your custom CMK key ID in this box. To learn more about finding the ARN of a CMK, see Working with Keys.
  7. Select Review and Launch. Your instance will launch with an additional Amazon EBS volume with the key that you selected. To learn more about the launch wizard, see Launching an Instance with Launch Wizard.

Creating Amazon EBS encrypted volumes from the AWS CLI or SDK

You can also use RunInstances to launch an instance with additional encrypted Amazon EBS volumes by setting Encrypted to true and supplying your key’s ID or ARN in the KmsKeyId field of the BlockDeviceMapping object, as shown in the following command:

$> aws ec2 run-instances --image-id ami-b42209de --count 1 --instance-type m4.large --region us-east-1 --block-device-mappings file://mapping.json

In this example, mapping.json contains a list describing the properties of the EBS volume that you want to create:


[
  {
    "DeviceName": "/dev/sda1",
    "Ebs": {
      "DeleteOnTermination": true,
      "VolumeSize": 100,
      "VolumeType": "gp2",
      "Encrypted": true,
      "KmsKeyId": "arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef"
    }
  }
]
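After the instance launches, you can confirm that the new volume is encrypted under the intended CMK by describing its volumes. A quick check from the CLI, assuming a hypothetical instance ID of i-0123456789abcdef0:

$> aws ec2 describe-volumes --region us-east-1 --filters Name=attachment.instance-id,Values=i-0123456789abcdef0 --query "Volumes[*].[VolumeId,Encrypted,KmsKeyId]"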

You can also launch instances with additional encrypted EBS data volumes via Auto Scaling or Spot Fleet by creating a launch template that includes the above block device mapping. Note that, unlike run-instances, create-launch-template takes its instance configuration in a single launch template data object rather than as individual flags. For example:

$> aws ec2 create-launch-template --launch-template-name MyLTName --region us-east-1 --launch-template-data file://launch-template-data.json
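Here, launch-template-data.json is a sketch of the launch template data (the file name is illustrative); it wraps the image ID, instance type, and the same block device mapping shown above:

{
  "ImageId": "ami-b42209de",
  "InstanceType": "m4.large",
  "BlockDeviceMappings": [
    {
      "DeviceName": "/dev/sda1",
      "Ebs": {
        "DeleteOnTermination": true,
        "VolumeSize": 100,
        "VolumeType": "gp2",
        "Encrypted": true,
        "KmsKeyId": "arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef"
      }
    }
  ]
}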

To learn more about launching an instance with the AWS CLI or SDK, see the AWS CLI Command Reference.

In this blog post, we’ve demonstrated a single-step process for creating Amazon EBS volumes that are encrypted under your CMK when you launch your EC2 instance, streamlining your instance launch workflow. To start using this functionality, navigate to the EC2 console.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon EC2 forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

AWS Online Tech Talks – April & Early May 2018

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-april-early-may-2018/

We have several upcoming tech talks in April and early May. Come join us to learn about AWS services and solution offerings. We’ll have AWS experts online to help answer your questions in real time. Sign up now to learn more; we look forward to seeing you.

Note – All sessions are free and in Pacific Time.

April & early May — 2018 Schedule

Compute

April 30, 2018 | 01:00 PM – 01:45 PM PT – Best Practices for Running Amazon EC2 Spot Instances with Amazon EMR (300) – Learn best practices for scaling big data workloads, as well as how to process, store, and analyze big data securely and cost-effectively with Amazon EMR and Amazon EC2 Spot Instances.

May 1, 2018 | 01:00 PM – 01:45 PM PT – How to Bring Microsoft Apps to AWS (300) – Learn how to save significant money by bringing your Microsoft workloads to AWS.

May 2, 2018 | 01:00 PM – 01:45 PM PT – Deep Dive on Amazon EC2 Accelerated Computing (300) – Get a technical deep dive on how AWS’ GPU and FPGA-based compute services can help you optimize and accelerate your ML/DL and HPC workloads in the cloud.

Containers

April 23, 2018 | 11:00 AM – 11:45 AM PT – New Features for Building Powerful Containerized Microservices on AWS (300) – Learn how these new features work and how you can start using them to build and run modern, containerized applications on AWS.

Databases

April 23, 2018 | 01:00 PM – 01:45 PM PT – ElastiCache: Deep Dive Best Practices and Usage Patterns (200) – Learn about Amazon ElastiCache, a Redis-compatible in-memory data store and cache.

April 25, 2018 | 01:00 PM – 01:45 PM PT – Intro to Open Source Databases on AWS (200) – Learn how to tap the benefits of open source databases on AWS without the administrative hassle.

DevOps

April 25, 2018 | 09:00 AM – 09:45 AM PT – Debug your Container and Serverless Applications with AWS X-Ray in 5 Minutes (300) – Learn how AWS X-Ray makes debugging your container and serverless applications fun.

Enterprise & Hybrid

April 23, 2018 | 09:00 AM – 09:45 AM PT – An Overview of Best Practices of Large-Scale Migrations (300) – Learn about the tools and best practices for migrating to AWS at scale.

April 24, 2018 | 11:00 AM – 11:45 AM PT – Deploy your Desktops and Apps on AWS (300) – Learn how to deploy your desktops and apps on AWS with Amazon WorkSpaces and Amazon AppStream 2.0.

IoT

May 2, 2018 | 11:00 AM – 11:45 AM PT – How to Easily and Securely Connect Devices to AWS IoT (200) – Learn how to easily and securely connect devices to the cloud and reliably scale to billions of devices and trillions of messages with AWS IoT.

Machine Learning

April 24, 2018 | 09:00 AM – 09:45 AM PT – Automate for Efficiency with Amazon Transcribe and Amazon Translate (200) – Learn how you can increase the efficiency and reach of your operations with Amazon Translate and Amazon Transcribe.

April 26, 2018 | 09:00 AM – 09:45 AM PT – Perform Machine Learning at the IoT Edge using AWS Greengrass and Amazon SageMaker (200) – Learn more about developing machine learning applications for the IoT edge.

Mobile

April 30, 2018 | 11:00 AM – 11:45 AM PT – Offline GraphQL Apps with AWS AppSync (300) – Come learn how to enable real-time and offline data in your applications with GraphQL using AWS AppSync.

Networking

May 2, 2018 | 09:00 AM – 09:45 AM PT – Taking Serverless to the Edge (300) – Learn how to run your code closer to your end users in a serverless fashion. Also, David Von Lehman from Aerobatic will discuss how they used Lambda@Edge to reduce latency and cloud costs for their customers’ websites.

Security, Identity & Compliance

April 30, 2018 | 09:00 AM – 09:45 AM PT – Amazon GuardDuty – Let’s Attack My Account! (300) – Take an Amazon GuardDuty test drive and learn practical steps for generating test findings.

May 3, 2018 | 09:00 AM – 09:45 AM PT – Protect Your Game Servers from DDoS Attacks (200) – Learn how to use the new AWS Shield Advanced for EC2 to protect your internet-facing game servers against network layer DDoS attacks and application layer attacks of all kinds.

Serverless

April 24, 2018 | 01:00 PM – 01:45 PM PT – Tips and Tricks for Building and Deploying Serverless Apps In Minutes (200) – Learn how to build and deploy apps in minutes.

Storage

May 1, 2018 | 11:00 AM – 11:45 AM PT – Building Data Lakes That Cost Less and Deliver Results Faster (300) – Learn how Amazon S3 Select and Amazon Glacier Select increase application performance by up to 400% and reduce total cost of ownership by extending your data lake into cost-effective archive storage.

May 3, 2018 | 11:00 AM – 11:45 AM PT – Integrating On-Premises Vendors with AWS for Backup (300) – Learn how to work with AWS and technology partners to build backup & restore solutions for your on-premises, hybrid, and cloud native environments.

Welcome Daren – Datacenter Technician!

Post Syndicated from Yev original https://www.backblaze.com/blog/welcome-daren-datacenter-technician/

The datacenter team continues to expand, and the latest person to join it is Daren! He’s well versed in our infrastructure and is a welcome addition to the caregivers of our ever-growing fleet!

What is your Backblaze Title?
Datacenter Technician.

Where are you originally from?
Fair Oaks, CA.

What attracted you to Backblaze?
The Pods! I’ve always thought Backblaze had a great business concept and I wanted to be a part of the team that helps build it and make it a huge success.

What do you expect to learn while being at Backblaze?
Everything about Backblaze and what makes it tick.

Where else have you worked?
Sungard Availability Services, ASC Profiles, and Reids Family Martial Arts.

Where did you go to school?
American River College and Techskills of California.

What’s your dream job?
I always had interest in Architecture. I’m not sure how good I would be at it but building design is something that I would have liked to try.

Favorite place you’ve traveled?
My favorite place to travel is the Philippines. I have a lot of family there and I mostly like to visit the smaller villages far from the busy city life. White sandy beaches, family, and Lumpia!

Favorite hobby?
Martial Arts – it’s challenging, great exercise, and a lot of fun!

Star Trek or Star Wars?
Whatever my boss likes.

Coke or Pepsi?
Coke.

Favorite food?
One of my favorite foods is Lumpia. It’s the cousin of the egg roll but much more amazing: a thin pastry wrapper with a mixture of fillings, such as chopped vegetables, ground beef or pork, and potatoes.

Why do you like certain things?
I like certain things that take me to places I have never been before.

Anything else you’d like to tell us?
I am excited to be a part of the Backblaze team.

Welcome aboard Daren! We’d love to try some of that lumpia sometime!

The post Welcome Daren – Datacenter Technician! appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

PUBG Files Copyright Lawsuit to Shut Down Competition

Post Syndicated from Ernesto original https://torrentfreak.com/pubg-files-copyright-lawsuit-to-shut-down-competition-180405/

When PlayerUnknown’s Battlegrounds (PUBG) was first released little over a year ago, it became an instant hit.

Within a month, a million copies of the first public beta version had been sold, and that number has since grown to over 28 million copies on the PC alone.

This success earned the company hundreds of millions of dollars in revenue, but according to PUBG, this could have been much more if others hadn’t copied their work.

This week PUBG filed a lawsuit against NetEase, the company behind the mobile games “Rules of Survival” and “Knives Out“, accusing it of copyright infringement, unfair competition and trade dress infringement.

In a complaint filed in a federal court in California, PUBG alleges that the two mobile apps were released before PUBG’s own mobile application to gain market share. In doing so, the company copied several crucial elements without permission, PUBG adds.

The 155-page complaint contains a long list of elements that PUBG believes infringe on its copyrighted works. These include buildings, landscapes, vehicles, weapons, clothing, the pre-play area, and the shrinking gameplay area.

“On information and belief, Defendants copied PUBG’s expressive depictions of the pre-play area where other depictions could have been used for the purpose of evoking the same gameplay experience depicted in BATTLEGROUNDS,” one example reads.

The games also feature PUBG’s iconic “Winner Winner Chicken Dinner” salute, which is displayed to the winner of the game. In addition, both games use references to this phrase in their advertising efforts.

Chicken dinner

These and other similarities are used to confuse the public into believing that the NetEase games are developed by PUBG, the company notes, repeating the same arguments for Rules of Survival (ROS) and Knives Out (KO).

“Defendants intended to create consumer confusion as to the source of ROS and intended to cause consumers to believe, incorrectly, that ROS had been developed by PUBG.”

The company highlights this point by noting that both games are regularly referred to as “PUBG” mobile in the marketplace, suggesting that there indeed is confusion.

PUBG mobile?

In January, PUBG reached out to Apple, asking the company to take action against the allegedly infringing applications listed in its iOS store, but NetEase denied the allegations.

As a result, the company saw no option other than to file this lawsuit. In addition to monetary damages, PUBG wants both mobile games to be taken offline permanently, to shield the company from further harm.

“PUBG has suffered irreparable harm as a result of Defendants’ infringing activities and will continue to suffer irreparable harm in the future unless Defendants are enjoined from their infringing conduct,” the suit reads.

Specifically, PUBG asks the court to order NetEase “to remove each and every version of the games Rules of Survival, Knives Out, and similarly infringing games, from distribution and to cease developing and supporting those games.”

While it appears obvious that Rules of Survival and Knives Out are inspired by PUBG, it’s up to the court to determine whether the copyright infringement and unfair competition claims hold.

A copy of PUBG’s 155-page complaint, obtained by TorrentFreak, is available here (pdf). NetEase has yet to respond to the allegations.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

New – Encryption of Data in Transit for Amazon EFS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-encryption-of-data-in-transit-for-amazon-efs/

Amazon Elastic File System was designed to be the file system of choice for cloud-native applications that require shared access to file-based storage. We launched EFS in mid-2016 and have added several important features since then including on-premises access via Direct Connect and encryption of data at rest. We have also made EFS available in additional AWS Regions, most recently US West (Northern California). As was the case with EFS itself, these enhancements were made in response to customer feedback, and reflect our desire to serve an ever-widening customer base.

Encryption in Transit
Today we are making EFS even more useful by adding support for encryption of data in transit. When you use it in conjunction with the existing support for encryption of data at rest, you can protect your stored files with a defense-in-depth security strategy.

In order to make it easy for you to implement encryption in transit, we are also releasing an EFS mount helper. The helper (available in source code and RPM form) takes care of setting up a TLS tunnel to EFS, and also allows you to mount file systems by ID. The two features are independent; you can use the helper to mount file systems by ID even if you don’t make use of encryption in transit. The helper also supplies a recommended set of default options to the actual mount command.

Setting up Encryption
I start by installing the EFS mount helper on my Amazon Linux instance:

$ sudo yum install -y amazon-efs-utils

Next, I visit the EFS Console and capture the file system ID.

Then I specify the ID (and the TLS option) to mount the file system:

$ sudo mount -t efs fs-92758f7b -o tls /mnt/efs

And that’s it! The encryption is transparent and has an almost negligible impact on data transfer speed.
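If you want the TLS mount to persist across reboots, the mount helper also works from /etc/fstab. A minimal sketch, assuming the same file system ID and mount point as above:

fs-92758f7b:/ /mnt/efs efs _netdev,tls 0 0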

Available Now
You can start using encryption in transit today in all AWS Regions where EFS is available.

The mount helper is available for Amazon Linux. If you are running another distribution of Linux, you will need to clone the GitHub repo and build your own RPM, as described in the README.

Jeff;