Tag Archives: lima

Man Leaks New ‘Power’ Episodes Online, Records His Own Face

Post Syndicated from Andy original https://torrentfreak.com/man-leaks-new-power-episodes-online-records-his-own-face-170809/

With the whole world going crazy for Game of Thrones, another TV series has been turning some serious numbers. Produced by Curtis “50 Cent” Jackson, crime drama ‘Power’ has been pulling in around eight million viewers per episode.

After premiering in June 2014, Power is now seven episodes into season four, which is set to reach its climax on August 27. But somewhat typically for the Internet these days, fans won’t necessarily have to wait another three weeks to find out what happens. During the past few hours, the final three episodes of ‘Power’ leaked online.

While that’s something in itself, this leak is possibly the most bizarre to take place in the history of piracy. Having been tipped off that screener episodes were available online, TF went looking for evidence. We found it, but it wasn’t what we expected.

The leaks consist of the three episodes (one complete, the other two missing a few minutes) being played back on an iPhone. A white one. With a broken screen.

Power leaks: Broken iPhone edition

The off-center nature of the image above isn’t typical though and most of the time the main picture is both central and well-defined, with surprisingly clear audio. It’s certainly not going to win any prizes for quality but for the extremely impatient it offers some kind of relief.

The big question, of course, is how these episodes happened to find their way onto that battered iPhone in the first place. Incredibly, the videos themselves provide the answers, with the thoughtful ‘cammer’ explaining in several voice-overs how he gained access to one of STARZ’s hottest properties.

“This is like the special, this is only for the people that work at STARZ that watch this shit. My man sent me the whole log-in shit. I had to pay that n******r though,” he said.

The log-in referenced by the leaker appears to unlock press access to unreleased content on mediaroom.starz.com. That page has since been taken down, quite possibly due to the leak. Thanks to the video though, we can see how the portal looked on the leaker’s phone.

Unreleased ‘Power’ episodes on the STARZ portal

“That’s the whole series bitch, but I can’t log out though, so I can’t send it to you. The man says don’t log out. So I’m gonna watch these last two episodes and then spoil it for y’all,” the ‘cammer’ said over one of the episodes.

The original claim that these were screener copies holds up. Throughout all three episodes, an occasional message appears across the bottom of the screen, declaring that the episodes are “for screening purposes only.”

Screener copies, for your eyes only

If the whole situation isn’t bizarre enough so far, the episodes contain quite a bit of complaining from the ‘cammer’, mainly due to his arm aching from holding up the recording phone for such a long time.

Why he didn’t simply place it down on the table isn’t clear. He managed it with the playback phone, which is seen leaning against a large water container throughout, something the ‘cammer’ believes is pretty badass.

“You see, I got my shit propped up like a G,” he said, placing the phone against the water bottle. “Next episode, definitely not holdin’ this shit, so you n*****s gotta relax.”

If this whole scenario isn’t crazy enough, the ‘cammer’ polishes off his virtuoso performance by turning the ‘cam’ phone around and recording his own face for several seconds. To save his embarrassment we won’t publish an image here but needless to say, he is extremely easy to identify, as is his Facebook page, where the content seems to have first appeared.

While there’s clearly no criminal mastermind behind these leaks, dumping unreleased TV shows online can result in a hefty jail sentence, no matter how poorly it’s done. The gentleman involved should hope that STARZ and the FBI are prepared to see the funny side. Fingers crossed….

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Making Sense of the GDPR: Is Your Company Ready?

Post Syndicated from Йовко Ламбрев original https://yovko.net/gdpr-event/

You may have heard, or perhaps not yet, but the General Data Protection Regulation (GDPR) is now a fact. After four years of discussion, lobbying, and the gathering of examples and good practices, the European Union has produced a piece of legislation that changes how we think about personal data. Understanding the need for the GDPR starts with realizing that the fourth industrial revolution is happening now. More and more often, the competitive advantage of new business models over established ones lies in the access to, analysis of, and management of data.

Political campaigns are won and companies climb to the top of the rankings through profiling, the analysis of large data sets, and personalized marketing.

What does the GDPR change, and is your company ready for the new rules (or, more precisely, when should it have been ready)? What will you actually have to change: your organization, your technical infrastructure, or the mindset of your staff and management? Can the French data protection authority act on a violation committed by a company in Pazardzhik? What is a data protection impact assessment? Can you sell your business or sign a strategic contract without one?

On August 30 in Plovdiv, experts from the law firm Tocheva and Mandazhieva, together with Trakia Tech, will try to answer these questions. We promise it will be interesting and useful. We are eager to hear your questions, which will make us think even more deeply in this direction. We are organizing the event together with our partners from Capital and ETNHost.

The working language is Bulgarian, and the event is a small-format one with limited seats, so registration at capital.bg/gdpr is mandatory.

We look forward to seeing you at the Limacon Event Center, at 154A Maritsa Blvd, Plovdiv (next to the big Billa parking lot).

ESET Tries to Scare People Away From Using Torrents

Post Syndicated from Andy original https://torrentfreak.com/eset-tries-to-scare-people-away-from-using-torrents-170805/

Any company in the security game can be expected to play up threats among its customer base in order to get sales.

Sellers of CCTV equipment, for example, would have us believe that criminals don’t want to be photographed and will often go elsewhere in the face of that. Car alarm companies warn us that since X thousand cars are stolen every minute, an expensive immobilizer is an anti-theft must.

Of course, they’re absolutely right to point these things out. People want to know about these offline risks since they affect our quality of life. The same can be said of those that occur in the online world too.

We ARE all at risk of horrible malware that will trash our computers and steal our banking information, so we should all be running adequate protection. That being said, how many times do our anti-virus programs actually trap a piece of nasty-ware in a year? Once? Twice? Ten times? Almost never?

The truth is we all need to be informed but it should be done in a measured way. That’s why an article just published by security firm ESET on the subject of torrents strikes a couple of bad chords, particularly with people who like torrents. It’s titled “Why you should view torrents as a threat” and predictably proceeds to outline why.

“Despite their popularity among users, torrents are very risky ‘business’,” it begins.

“Apart from the obvious legal trouble you could face for violating the copyright of musicians, filmmakers or software developers, there are security issues linked to downloading them that could put you or your computer in the crosshairs of the black hats.”

Aside from the use of the phrase “very risky” (‘some risk’ is a better description), there’s probably very little to complain about in this opening shot. However, things soon go downhill.

“Merely downloading the newest version of BitTorrent clients – software necessary for any user who wants to download or seed files from this ‘ecosystem’ – could infect your machine and irreversibly damage your files,” ESET writes.

Following that scary statement, some readers will have already vowed never to use a torrent again and moved on without reading any more, but the details are really important.

To support its claim, ESET points to two incidents in 2016 (which, to its great credit, the company actually discovered) involving the Transmission torrent client. Both involved deliberate third-party infection; in the latter, hackers attacked Transmission’s servers and embedded malware in its OS X client before it was distributed to the public.

No doubt these were both miserable incidents (to which the Transmission team quickly responded) but to characterize this as a torrent client problem seems somewhat unfair.

People intent on spreading viruses and malware do not discriminate and will happily infect ANY piece of computer software they can. Sadly, many non-technical people reading the ESET post won’t read beyond the claim that installing torrent clients can “infect your machine and irreversibly damage your files.”

That’s a huge disservice to the hundreds of millions of torrent client installations that have taken place over a decade and a half and were absolutely trouble free. On a similar basis, we could argue that installing Windows is the main initial problem for people getting viruses from the Internet. It’s true but it’s also not the full picture.

Finally, the piece goes on to detail other incidents over the years where torrents have been found to contain malware. The cases highlighted by ESET are real and pretty unpleasant for victims, but the important thing to note here is that torrent users are no different from any other online users, no matter how they use the Internet.

People who download files from the Internet, from ALL untrusted sources, are putting themselves at risk of getting a virus or other malware. Whether that content is obtained from a website or a P2P network, the risks are ever-present and only a foolish person would do so without decent security software (such as ESET’s) protecting them.

The take-home point here is to be aware of security risks and put them into perspective. It’s hard to put a percentage on these things, but of the hundreds of millions of torrent and torrent client downloads that have taken place since their inception 15 years ago, the overwhelming majority have been absolutely fine.

Security problems do arise and we need to be aware of them, but presenting things in a way that spreads undue concern about one particular sector isn’t necessary to sell products.

The AV-TEST Institute registers around 390,000 new malicious programs every day, the overwhelming majority of which don’t involve torrents; that’s plenty for any anti-virus firm to deal with.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Chrome’s Default ‘Ad-Blocker’ is Bad News for Torrent Sites

Post Syndicated from Ernesto original https://torrentfreak.com/chromes-default-ad-blocker-is-bad-news-for-torrent-sites-170705/

Online advertising can be quite a nuisance. Flashy and noisy banners, or intrusive pop-ups, are a thorn in the side of many Internet users.

These types of ads are particularly popular on pirate sites, so it’s no surprise that their users are more likely to have an ad-blocker installed.

The increasing popularity of these ad-blocking tools hasn’t done the income of site owners any good and the trouble on this front is about to increase.

A few weeks ago Google announced that its Chrome browser will start blocking ‘annoying’ ads in the near future, by default. This applies to all ads that don’t fall within the “better ads standards,” including popups and sticky ads.

Since Chrome is the leading browser on many pirate sites, this is expected to have a serious effect on torrent sites and other pirate platforms. TorrentFreak spoke to the operator of one of the largest torrent sites, who’s sounding the alarm bell.

The owner, who prefers not to have his site mentioned, says that it’s already hard to earn enough money to pay for hardware and hosting to keep the site afloat. This, despite millions of regular visitors.

“The torrent site economy is in a bad state. Profits are very low. Profits are f*cked up compared to previous years,” the torrent site owner says.

At the moment, 40% of the site’s users already have an ad-blocker installed, but when Chrome joins in with its default filter, it’s going to get much worse. A third of all visitors to the torrent site in question use the Chrome browser, either through mobile or desktop.

“Chrome’s ad-blocker will kill torrent sites. If they don’t at least cover their costs, no one is going to use money out of his pocket to keep them alive. I won’t be able to do so at least,” the site owner says.

It’s too early to assess how broad Chrome’s ad filtering will be, but torrent site owners may have to look for cleaner ads. That’s easier said than done though, as it’s usually the lower tier advertisers that are willing to work with these sites and they often serve more annoying ads.

The torrent site owner we spoke with isn’t very optimistic about the future. While he’s tested alternative revenue sources, he sees advertising as the only viable option. And with Chrome lining up to target part of their advertising inventory, revenue may soon dwindle.

“I’ve tested all types of ads and affiliates that are safe to work with, and advertising is the only way to cover costs. Also, most services that you can make good money promoting don’t work with torrent sites,” the torrent site owner notes.

Just a few months ago popular torrent site TorrentHound decided to shut down, citing a lack of revenue as one of the main reasons. This is by no means an isolated incident. TorrentFreak spoke to other site owners who confirm that it’s becoming harder and harder to pay the bills through advertisements.

The operator of Torlock, for example, confirms that those who are in the business to make a profit are having a hard time.

“All in all it’s a tough time for torrent sites but those that do it for the money will have a far more difficult time in the current climate than those who do this as a hobby and as a passion. We do it for the love of it so it doesn’t really affect us as much,” Torlock’s operator says.

Still, there is plenty of interest from advertisers, some of whom are trying their best to circumvent ad-blockers.

“Every day we receive emails from willing advertisers wanting to work with us so the market is definitely still there and most of them have the technology in place to circumvent adblockers, including Chrome’s default one,” he adds.

Google’s decision to ship Chrome with a default ad-blocker appears to be self-serving in part. If users see fewer annoying ads, they are less likely to install a third-party ad-blocker which blocks more of Google’s own advertisements.

Inadvertently, however, they may have also announced their most effective anti-piracy strategy to date.

If pirate sites are unable to generate enough revenue through advertisements, there are few options left. In theory, they could start charging visitors money, but most pirates go to these sites to avoid paying.

Asking for voluntary donations is an option, but that’s unlikely to cover all the costs.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Bicrophonic Research Institute and the Sonic Bike

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/sonic-bike/

The Bicrophonic Sonic Bike, created by British sound artist Kaffe Matthews, utilises a Raspberry Pi and GPS signals to map location data and play music and sound in response to the places you take it on your cycling adventures.

What is Bicrophonics?

Bicrophonics is about the mobility of sound, experienced and shared within a moving space, free of headphones and free of the internet. Music made by the journey you take, played with the space that you move through. The Bicrophonic Research Institute (BRI) http://sonicbikes.net

Cycling and music

I’m sure I wasn’t the only teen to go for bike rides with a group of friends and a radio. Spurred on by our favourite movie, the mid-nineties classic Now and Then, we’d hook up a pair of cheap portable speakers to our handlebars, crank up the volume, and sing our hearts out as we cycled aimlessly down country lanes in the cool light evenings of the British summer.

While Sonic Bikes don’t belt out the same classics that my precariously attached speakers provided, they do give you the same sense of connection to your travelling companions via sound. Linked to GPS locations on the same preset map of zones, each bike can produce the same music, creating a cloud of sound as you cycle.

Sonic Bikes

The Sonic Bike uses five physical components: a Raspberry Pi, power source, USB GPS receiver, rechargeable speakers, and subwoofer. On the Raspberry Pi, the build utilises mapping software to divide a map into zones and link each zone to a specific music track.

Sonic Bikes Raspberry Pi

Custom software enables the Raspberry Pi to locate itself among the zones using the USB GPS receiver. Then it plays back the appropriate track until it registers a new zone.
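The BRI’s own code is linked under “Build your own Sonic Bike” below; purely as an illustration of the loop just described, here is a minimal Python sketch. The zone rectangles, track names, and the choice of the gpsd-py3 library and the mpg123 player are my assumptions, not the project’s actual implementation.

```python
# Hypothetical sketch: poll the GPS, find the zone containing the fix,
# and switch tracks when the zone changes. Zones and files are invented.
import subprocess
import time

import gpsd  # gpsd-py3; assumes the gpsd daemon reads the USB receiver

# Each zone is a bounding box (min_lat, min_lon, max_lat, max_lon) -> track.
ZONES = {
    (51.53, -0.08, 51.54, -0.06): "tracks/canal.mp3",
    (51.52, -0.10, 51.53, -0.08): "tracks/market.mp3",
}

def zone_track(lat, lon):
    """Return the track for the zone containing this fix, or None."""
    for (lat0, lon0, lat1, lon1), track in ZONES.items():
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return track
    return None

gpsd.connect()
current, player = None, None
while True:
    fix = gpsd.get_current()
    if fix.mode >= 2:  # at least a 2D fix
        track = zone_track(fix.lat, fix.lon)
        if track != current:  # entered a new zone (or left all zones)
            if player:
                player.terminate()
            player = subprocess.Popen(["mpg123", track]) if track else None
            current = track
    time.sleep(1)
```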

Bicrophonic Research Institute

The Bicrophonic Research Institute is a collective of artists and coders with the shared goal of creating sound directed by people and places via Sonic Bikes. In their own words:

Bicrophonics is about the mobility of sound, experienced and shared within a moving space, free of headphones and free of the internet. Music made by the journey you take, played with the space that you move through.

Their technology has potential beyond the aims of the BRI. The Sonic Bike software could be useful for navigation, logging data and playing beats to indicate when to alter speed or direction. You could even use it to create a guided cycle tour, including automatically reproduced information about specific places on the route.

For the creators of Sonic Bike, the project is ever-evolving, and “continues to be researched and developed to expand the compositional potentials and unique listening experiences it creates.”

Sensory Bike

A good example of this evolution is the Sensory Bike. This offshoot of the Sonic Bike idea plays sounds guided by the cyclist’s own movements – it acts like a two-wheeled musical instrument!

lean to go up, slow to go loud,

A work for Sensory Bikes, the Berlin Wall, and an audience to ride it. ‘lean to go up, slow to go loud’ explores freedom and celebrates escape. Celebrating human energy to find solutions, hot air balloons take off, train lines sing, people cheer and nature continues to grow.

Sensors on the wheels, handlebars, and brakes, together with a Sense HAT at the rear, register the unique way in which the rider navigates their location. The bike produces output based on these variables. Its creators at BRI say:

The Sensory Bike becomes a performative instrument – with riders choosing to go slow, go fast, to hop, zigzag, or circle, creating their own unique sound piece that speeds, reverses, and changes pitch while they dance on their bicycle.
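As a purely illustrative sketch (not BRI’s code), this is how a Sense HAT could feed that kind of mapping in Python. The real build also reads the wheel, handlebar, and brake sensors; the mapping below is invented.

```python
# Invented mapping for illustration: lean (roll) raises pitch,
# and slowing down ("slow to go loud") raises volume.
import time

from sense_hat import SenseHat

sense = SenseHat()

def sound_parameters():
    """Turn rider motion into (pitch, volume) values for a synth."""
    roll = sense.get_orientation()["roll"]        # lean angle in degrees
    accel = sense.get_accelerometer_raw()         # g-forces on x, y, z
    movement = abs(accel["x"]) + abs(accel["y"])  # crude activity measure
    pitch = (roll % 360) / 360.0                  # "lean to go up"
    volume = max(0.1, 1.0 - min(movement, 1.0))   # "slow to go loud"
    return pitch, volume

while True:
    pitch, volume = sound_parameters()
    print(f"pitch={pitch:.2f} volume={volume:.2f}")  # hand off to a player
    time.sleep(0.1)
```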

Build your own Sonic Bike

As for many wonderful Raspberry Pi-based builds, the project’s code is available on GitHub, enabling makers to recreate it. All the BRI team ask is that you contact them so they can learn more of your plans and help in any way possible. They even provide code to create your own Sonic Kayak using GPS zones, temperature sensors, and an underwater microphone!

Sonic Kayaks explained

Sonic Kayaks are musical instruments for expanding our senses and scientific instruments for gathering marine micro-climate data. Made by foAm_Kernow with the Bicrophonic Research Institute (BRI), two were first launched at the British Science Festival in Swansea Bay on September 6th, 2016, and used by the public for two days.

The post Bicrophonic Research Institute and the Sonic Bike appeared first on Raspberry Pi.

A kindly lesson for you non-techies about encryption

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/a-kindly-lesson-for-you-non-techies.html

The following tweets need to be debunked:

The answer to John Schindler’s question is:

every expert in cryptography doesn’t know this

Oh, sure, you can find a fringe wacko who also knows crypto and agrees with you, but the sane members of the security community will not.

Telegram is not trustworthy because it’s partially closed-source. We can’t see how it works. We don’t know if they’ve made accidental mistakes that can be hacked. We don’t know if they’ve been bribed by the NSA or Russia to put backdoors in their program. In contrast, PGP and Signal are open-source. We can read exactly what the software does. Indeed, thousands of people have been reviewing their software looking for mistakes and backdoors. Being open-source doesn’t automatically make software better, but it does make hiding secret backdoors much harder.

Telegram is not trustworthy because we aren’t certain the crypto is done properly. Signal, and especially PGP, are done properly.

The thing about encryption is that when done properly, it works. Neither the NSA nor the Russians can break properly encrypted content. There’s no such thing as “military grade” encryption that is better than consumer grade. There’s only encryption that nobody can hack vs. encryption that your neighbor’s teenage kid can easily hack. Those scenes in TV/movies about breaking encryption are as realistic as sound in space: good for dramatic presentation, but not how things work in the real world.

In particular, end-to-end encryption works. Sure, in the past, such apps only encrypted as far as the server, so whoever ran the server could read your messages. Modern chat apps, though, are end-to-end: the servers have absolutely no ability to decrypt what’s on them, unless they can get the decryption keys from the phones. But some tasks, like encrypted messages to a group of people, can be hard to do properly.
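To make “the servers have absolutely no ability to decrypt” concrete, here is a toy Python sketch using PyNaCl. This is not the Signal protocol, which adds key ratcheting, forward secrecy, and group handling; it only demonstrates the basic end-to-end property that a relay seeing ciphertext and public keys cannot read the message.

```python
# Toy end-to-end encryption with PyNaCl (pip install pynacl).
from nacl.public import Box, PrivateKey

alice_key = PrivateKey.generate()  # stays on Alice's phone
bob_key = PrivateKey.generate()    # stays on Bob's phone

# Each side combines its own private key with the peer's *public* key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# A server relaying `ciphertext` holds no private key, so it's opaque.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```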

Thus, in contrast to what John Schindler says, while we techies have doubts about Telegram, we have no such doubts about Signal and PGP: we are confident that Russian authorities cannot read those messages.

Snowden hatred has become the anti-vax of crypto. Sure, there’s no particular reason to trust Snowden — people should really stop treating him as some sort of privacy-Jesus. But there’s no particular reason to distrust him, either. His bland statements on crypto are indistinguishable from any other crypto-enthusiast statements. If he’s a Russian pawn, then so too is the bulk of the crypto community.

With all this said, using Signal doesn’t make you perfectly safe. The person you are chatting with could be a secret agent — especially in group chat. There could be cameras/microphones in the room where you are using the app. The Russians can also hack into your phone, and likewise eavesdrop on everything you do with the phone, regardless of which app you use. And they probably have hacked specific people’s phones. On the other hand, if the NSA or Russians were widely hacking phones, we’d detect that this was happening. We haven’t.

Signal is therefore not a guarantee of safety, because nothing is, and if your life depends on it, you can’t trust any simple advice like “use Signal”. But, for the bulk of us, it’s pretty damn secure, and I trust neither the Russians nor the NSA are reading my Signal or PGP messages.

At first blush, this @20committee tweet appears to be non-experts opining on things outside their expertise. But in reality, it’s just obtuse partisanship, where truth and expertise don’t matter. Nothing you or I say can change some people’s minds on this matter, no matter how much our expertise gives weight to our words. This post is instead for bystanders, who don’t know enough to judge whether these crazy statements have merit.


Bonus:

So let’s talk about “every crypto expert”. It’s, of course, impossible to speak for every crypto expert. It’s like citing the consensus among climate scientists that mankind is warming the globe while ignoring the widespread disagreement over how much warming that is.

The same is true here. You’ll get a wide range of responses from experts about the above tweet. Some, for example, will stress my point at the bottom that hacking the endpoint (the phone) breaks all the apps, and thus justify the above tweet from that point of view. Others will point out that all software has bugs, and it’s quite possible that Signal has some unknown bug that the Russians are exploiting.

So I’m not attempting to speak for what all experts might say in the general case, or for whatever long lecture they might deliver. I am, though, pointing out the basics that virtually everyone agrees on: the consensus that open-source, properly implemented crypto works.

The Pirate Bay Isn’t Affected By Adverse Court Rulings – Everyone Else Is

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-isnt-affected-by-adverse-court-rulings-everyone-else-is-170618/

For more than a decade The Pirate Bay has been the world’s most controversial site. Delivering huge quantities of copyrighted content to the masses, the platform is revered and reviled across the copyright spectrum.

Its reputation is one of a defiant Internet swashbuckler, but due to changes in how the site has been run in more recent times, its current philosophy is more difficult to gauge. What has never been in doubt, however, is the site’s original intent to be as provocative as possible.

Through endless publicity stunts, some real, some just for the ‘lulz’, The Pirate Bay managed to attract a massive audience, all while incurring the wrath of every major copyright holder in the world.

Make no mistake, they all queued up to strike back, but every subsequent rightsholder action was met by a Pirate Bay middle finger, two fingers, or chin flick, depending on the mood of the day. This only served to further delight the masses, who happily spread the word while keeping their torrents flowing.

This vicious circle of being targeted by the entertainment industries, mocking them, and then reaping the traffic benefits, developed into the cheapest long-term marketing campaign the Internet had ever seen. But nothing is ever truly for free and there have been consequences.

After taunting Hollywood and the music industry with its refusals to capitulate, endless legal action that the site would have ordinarily been forced to participate in largely took place without The Pirate Bay being present. It doesn’t take a law degree to work out what happened in each and every one of those cases, whatever complex route they took through the legal system. No defense, no win.

For example, the web-blocking phenomenon across the UK, Europe, Asia and Australia was driven by the site’s absolute resilience, and although there would clearly have been other scapegoats had The Pirate Bay disappeared, the site was the ideal bogeyman the copyright lobby required to move forward.

Filing blocking lawsuits while bringing hosts, advertisers, and ISPs on board for anti-piracy initiatives was also made easier with the ‘evil’ Pirate Bay still online. Immune from every anti-piracy technique under the sun, the existence of the platform in the face of all onslaughts only strengthened the cases of those arguing for even more drastic measures.

Over a decade, this has meant a significant tightening of the sharing and streaming climate. Without any big legislative changes but plenty of case law against The Pirate Bay, web-blocking is now a walk in the park, ad hoc domain seizures are a fairly regular occurrence, and few companies want to host sharing sites. Advertisers and brands are also hesitant over where they place their ads. It’s a very different world to the one of 10 years ago.

While it would be wrong to attribute every tightening of the noose to the actions of The Pirate Bay, there’s little doubt that the site and its chaotic image played a huge role in where copyright enforcement is today. The platform set out to provoke and succeeded in every way possible, gaining supporters in their millions. It could also be argued it kicked a hole in a hornets’ nest, releasing the hell inside.

But perhaps the site’s most amazing achievement is the way it has managed to stay online, despite all the turmoil.

This week yet another ruling, this time from the powerful European Court of Justice, found that by offering links in the manner it does, The Pirate Bay and other sites are liable for communicating copyright works to the public. Of course, this prompted the usual swathe of articles claiming that this could be the final nail in the site’s coffin.

Wrong.

In common with every ruling, legal defeat, and legislative restriction put in place due to the site’s activities, this week’s decision from the ECJ will have zero effect on the Pirate Bay’s availability. For right or wrong, the site was breaking the law long before this ruling and will continue to do so until it decides otherwise.

What we have instead is a further tightened legal landscape that will have a lasting effect on everything BUT the site, including weaker torrent sites, Internet users, and user-uploaded content sites such as YouTube.

With The Pirate Bay carrying on regardless, that is nothing short of remarkable.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway

Post Syndicated from Ed Lima original https://aws.amazon.com/blogs/compute/secure-api-access-with-amazon-cognito-federated-identities-amazon-cognito-user-pools-and-amazon-api-gateway/

Ed Lima, Solutions Architect

 

Our identities are what define us as human beings. Philosophical discussions aside, the concept also applies to our day-to-day lives. For instance, I need my work badge to get access to my office building or my passport to travel overseas. My identity in this case is attached to my work badge or passport. As part of the system that checks my access, these documents or objects help define whether I have access to get into the office building or travel internationally.

This exact same concept can also be applied to cloud applications and APIs. To provide secure access to your application users, you define who can access the application resources and what kind of access can be granted. Access is based on identity controls that can confirm authentication (AuthN) and authorization (AuthZ), which are different concepts. According to Wikipedia:

 

The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that “you are who you say you are,” authorization is the process of verifying that “you are permitted to do what you are trying to do.” This does not mean authorization presupposes authentication; an anonymous agent could be authorized to a limited action set.
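As a toy sketch of why the two checks are distinct (all names invented, nothing AWS-specific here):

```python
# AuthN answers "who are you?"; AuthZ answers "may you do this?".
SESSIONS = {"token-123": "ed"}      # presented credential -> verified identity
POLICY = {"ed": {"GET /pets"}}      # identity -> actions it may perform

def authenticate(token):
    """AuthN: map a presented credential to a verified identity."""
    identity = SESSIONS.get(token)
    if identity is None:
        raise PermissionError("unauthenticated")
    return identity

def authorize(identity, action):
    """AuthZ: a separate check on what that identity may do."""
    return action in POLICY.get(identity, set())

who = authenticate("token-123")            # AuthN succeeds -> "ed"
assert authorize(who, "GET /pets")         # AuthZ permits this action
assert not authorize(who, "DELETE /pets")  # authenticated != authorized
```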

Amazon Cognito allows building, securing, and scaling a solution to handle user management and authentication, and to sync across platforms and devices. In this post, I discuss the different ways that you can use Amazon Cognito to authenticate API calls to Amazon API Gateway and secure access to your own API resources.

 

Amazon Cognito Concepts

 

It’s important to understand that Amazon Cognito provides three different services:

  • Amazon Cognito Federated Identities (identity pools)
  • Amazon Cognito User Pools
  • Amazon Cognito Sync

Today, I discuss the use of the first two. One service doesn’t need the other to work; however, they can be configured to work together.
 

Amazon Cognito Federated Identities

 
To use Amazon Cognito Federated Identities in your application, create an identity pool. An identity pool is a store of user data specific to your account. It can be configured to require an identity provider (IdP) for user authentication, after you enter details such as app IDs or keys related to that specific provider.

After the user is validated, the provider sends an identity token to Amazon Cognito Federated Identities. In turn, Amazon Cognito Federated Identities contacts the AWS Security Token Service (AWS STS) to retrieve temporary AWS credentials based on a configured, authenticated IAM role linked to the identity pool. The role has appropriate IAM policies attached to it and uses these policies to provide access to other AWS services.
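For illustration, this is roughly what that exchange looks like with boto3; the pool ID, provider name, and token below are placeholders, and error handling is omitted.

```python
# Sketch of the federated-identity flow described above (boto3).
import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")
LOGINS = {"graph.facebook.com": "<token returned by the identity provider>"}

# 1. Exchange the IdP's token for a Cognito identity ID.
identity = cognito.get_id(
    IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000",
    Logins=LOGINS,
)

# 2. Cognito contacts AWS STS and returns temporary credentials scoped
#    by the authenticated IAM role linked to the identity pool.
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins=LOGINS,
)["Credentials"]

# These temporary keys can now sign requests to other AWS services.
print(creds["AccessKeyId"], creds["Expiration"])
```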

Amazon Cognito Federated Identities currently supports a number of identity providers (listed in a graphic in the original post).

Continue reading Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway

Kodi Addon Navi-X Bites The Dust After 10 Years

Post Syndicated from Andy original https://torrentfreak.com/kodi-addon-navi-x-bites-the-dust-after-10-years-170513/

One of the main questions asked by new users of the Kodi media player is what addons should be installed to get the best experience right from the start.

Over the years, hit add-ons such as Exodus, Phoenix, SALTS and SportsDevil have all been top of the list but due to its wide range of content, one in particular has enjoyed broad appeal.

Navi-X began life ten years ago in 2007. Developed by Netherlands-based coder ‘Rodejo’, it debuted on XBMC (Kodi’s previous name) on the original Xbox.

“Navi-X originally only played back media items of video and audio content and was eventually expanded to include many other media types like text, RSS, live streams and podcasts,” the team at TV Addons explain.

Over the years, however, things changed dramatically. Due to the way Navi-X works, the addon can import playlists from any number of sources, and they have invariably been dominated by copyrighted content, from movies and TV shows through to live sports.

This earned the addon a massive following, estimated by TV Addons – the site that maintained the software – as numbering in the hundreds of thousands. Soon, however, Navi-X will be no more.

“Every good thing must come to an end. After ten years of successful operation, Navi-X is sadly being discontinued. Navi-X was first released in April 2007, and is the oldest Kodi addon of its kind,” TV Addons explain.

“There are a few reasons why we made the decision to close Navi-X, and hope that the hundreds of thousands of people who still used Navi-X daily will understand why it was best to discontinue Navi-X while it was still on top.”

The team says that the main reason for discontinuing the addon and its underlying service is the current legal climate. Hosting Navi-X playlists is something that TV Addons no longer feels comfortable with “due to the potential liability that comes with it.”

Also, the team says that Navi-X was slowly being overrun by people trying to make a profit from the service. Playlists were being filled with spam, often advertising premium illegal IPTV services, which TV Addons strongly opposes.

Mislabeling of adult content was also causing issues, and despite TV Addons’ best efforts to get rid of the offending content, they were fighting a losing battle.

“We tried to moderate the database, but there was just too much content, no one had the time to watch thousands of videos to remove ads and distasteful content,” the team explains.

Unlike other addons that have come under legal pressure, the shutdown of Navi-X is entirely voluntary. TV Addons extends thanks to developers rodejo16 and turner3d, plus Blazetamer and crzen from more recent times.

The repository also thanks those who took the time to create the playlists upon which Navi-X relied. It is this that shines a light at the end of the tunnel for those wondering how to fill the void left by the addon.

“We’d also like to recognize all the dedicated playlisters, who we invite to get in touch with us if they are interested in releasing their own addons sometime in the near future,” TV Addons concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Friday Squid Blogging: Chilean Squid Producer Diversifies

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/friday_squid_bl_572.html

In another symptom of climate change, Chile’s largest squid producer “plans to diversify its offering in the future, selling sea urchin, cod and octopus, to compensate for the volatility of giant squid catches….”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Coming in 2018 – New AWS Region in Sweden

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/coming-in-2018-new-aws-region-in-sweden/

Last year we launched new AWS Regions in Canada, India, Korea, the UK (London), and the United States (Ohio), and announced that new regions are coming to France (Paris) and China (Ningxia).

Today, I am happy to be able to tell you that we are planning to open up an AWS Region in Stockholm, Sweden in 2018. This region will give AWS partners and customers in Denmark, Finland, Iceland, Norway, and Sweden low-latency connectivity and the ability to run their workloads and store their data close to home.

The Nordics are well known for their vibrant startup community and highly innovative business climate. With successful global enterprises like ASSA ABLOY, IKEA, and Scania, along with fast-growing startups like Bambora, Supercell, Tink, and Trustpilot, it comes as no surprise that Forbes ranks Sweden as the best country for business, with all the other Nordic countries in the top 10. Even better, the European Commission ranks Sweden as the most innovative country in the EU.

This will be the fifth AWS Region in Europe, joining four others — EU (Ireland), EU (London), EU (Frankfurt), and an additional Region in France expected to launch in the coming months. Together, these Regions will provide our customers with a total of 13 Availability Zones (AZs) and allow them to architect highly fault-tolerant applications while storing their data in the EU.

Today, our infrastructure comprises 42 Availability Zones across 16 geographic regions worldwide, with another three AWS Regions (and eight Availability Zones) in France, China, and Sweden coming online throughout 2017 and 2018 (see the AWS Global Infrastructure page for more info).

We are looking forward to serving new and existing Nordic customers and working with partners across Europe. Of course, the new region will also be open to existing AWS customers who would like to process and store data in Sweden. Public sector organizations (government agencies, educational institutions, and nonprofits) in Sweden will be able to use this region to store sensitive data in-country (the AWS in the Public Sector page has plenty of success stories drawn from our worldwide customer base).

If you are a customer or a partner and have specific questions about this Region, you can contact our Nordic team.

Help Wanted
As part of our launch, we are hiring individual contributors and managers for IT support, electrical, logistics, and physical security positions. If you are interested in learning more, please contact [email protected].

Jeff;

 

Congress Removes FCC Privacy Protections on Your Internet Usage

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/congress_remove.html

Think about all of the websites you visit every day. Now imagine if the likes of Time Warner, AT&T, and Verizon collected all of your browsing history and sold it on to the highest bidder. That’s what will probably happen if Congress has its way.

This week, lawmakers voted to allow Internet service providers to violate your privacy for their own profit. Not only have they voted to repeal a rule that protects your privacy, they are also trying to make it illegal for the Federal Communications Commission to enact other rules to protect your privacy online.

That this is not provoking greater outcry illustrates how much we’ve ceded any willingness to shape our technological future to for-profit companies and are allowing them to do it for us.

There are a lot of reasons to be worried about this. Because your Internet service provider controls your connection to the Internet, it is in a position to see everything you do on the Internet. Unlike a search engine or social networking platform or news site, you can’t easily switch to a competitor. And there’s not a lot of competition in the market, either. If you have a choice between two high-speed providers in the US, consider yourself lucky.

What can telecom companies do with this newly granted power to spy on everything you’re doing? Of course they can sell your data to marketers — and the inevitable criminals and foreign governments who also line up to buy it. But they can do more creepy things as well.

They can snoop through your traffic and insert their own ads. They can deploy systems that remove encryption so they can better eavesdrop. They can redirect your searches to other sites. They can install surveillance software on your computers and phones. None of these are hypothetical.

They’re all things Internet service providers have done before, and they are some of the reasons the FCC tried to protect your privacy in the first place. And now they’ll be able to do all of these things in secret, without your knowledge or consent. And, of course, governments worldwide will have access to these powers. And all of that data will be at risk of hacking, either by criminals or by other governments.

Telecom companies have argued that other Internet players already have these creepy powers — although they didn’t use the word “creepy” — so why should they not have them as well? It’s a valid point.

Surveillance is already the business model of the Internet, and literally hundreds of companies spy on your Internet activity against your interests and for their own profit.

Your e-mail provider already knows everything you write to your family, friends, and colleagues. Google already knows our hopes, fears, and interests, because that’s what we search for.

Your cellular provider already tracks your physical location at all times: it knows where you live, where you work, when you go to sleep at night, when you wake up in the morning, and — because everyone has a smartphone — who you spend time with and who you sleep with.

And some of the things these companies do with that power are no less creepy. Facebook has run experiments in manipulating your mood by changing what you see on your news feed. Uber used its ride data to identify one-night stands. Even Sony once installed spyware on customers’ computers to try to detect if they copied music files.

Aside from spying for profit, companies can spy for other purposes. Uber has already considered using data it collects to intimidate a journalist. Imagine what an Internet service provider can do with the data it collects: against politicians, against the media, against rivals.

Of course the telecom companies want a piece of the surveillance capitalism pie. Despite dwindling revenues, increasing use of ad blockers, and increases in clickfraud, violating our privacy is still a profitable business — especially if it’s done in secret.

The bigger question is: why do we allow for-profit corporations to create our technological future in ways that are optimized for their profits and anathema to our own interests?

When markets work well, different companies compete on price and features, and society collectively rewards better products by purchasing them. This mechanism fails if there is no competition, or if rival companies choose not to compete on a particular feature. It fails when customers are unable to switch to competitors. And it fails when what companies do remains secret.

Unlike service providers like Google and Facebook, telecom companies are infrastructure that requires government involvement and regulation. The practical impossibility of consumers learning the extent of surveillance by their Internet service providers, combined with the difficulty of switching them, means that the decision about whether to be spied on should be with the consumer and not a telecom giant. That this new bill reverses that is both wrong and harmful.

Today, technology is changing the fabric of our society faster than at any other time in history. We have big questions that we need to tackle: not just privacy, but questions of freedom, fairness, and liberty. Algorithms are making decisions about policing and healthcare.

Driverless vehicles are making decisions about traffic and safety. Warfare is increasingly being fought remotely and autonomously. Censorship is on the rise globally. Propaganda is being promulgated more efficiently than ever. These problems won’t go away. If anything, the Internet of things and the computerization of every aspect of our lives will make it worse.

In today’s political climate, it seems impossible that Congress would legislate these things to our benefit. Right now, regulatory agencies such as the FTC and FCC are our best hope to protect our privacy and security against rampant corporate power. That Congress has decided to reduce that power leaves us at enormous risk.

It’s too late to do anything about this bill — Trump will certainly sign it — but we need to be alert to future bills that reduce our privacy and security.

This post previously appeared on the Guardian.

EDITED TO ADD: Former FCC Commissioner Tom Wheeler wrote a good op-ed on the subject. And here’s an essay laying out what this all means to the average Internet user.

Utopia

Post Syndicated from Eevee original https://eev.ee/blog/2017/03/08/utopia/

It’s been a while, but someone’s back on the Patreon blog topic tier! IndustrialRobot asks:

What does your personal utopia look like? Do you think we (as mankind) can achieve it? Why/why not?

Hm.

I spent the month up to my eyeballs in a jam game, but this question was in the back of my mind a lot. I could use it as a springboard to opine about anything, especially in the current climate: politics, religion, nationalism, war, economics, etc., etc. But all of that has been done to death by people who actually know what they’re talking about.

The question does say “personal”. So in a less abstract sense… what do I want the world to look like?

Mostly, I want everyone to have the freedom to make things.

I’ve been having a surprisingly hard time writing the rest of this without veering directly into the ravines of “basic income is good” and “maybe capitalism is suboptimal”. Those are true, but not really the tone I want here, and anyway they’ve been done to death by better writers than I. I’ve talked this out with Mel a few times, and it sounds much better aloud, so I’m going to try to drop my Blog Voice and just… talk.

*ahem*

Art versus business

So, art. Art is good.

I’m construing “art” very broadly here. More broadly than “media”, too. I’m including shitty robots, weird Twitter almost-bots, weird Twitter non-bots, even a great deal of open source software. Anything that even remotely resembles creative work — driven perhaps by curiosity, perhaps by practicality, but always by a soul bursting with ideas and a palpable need to get them out.

Western culture thrives on art. Most culture thrives on art. I’m not remotely qualified to defend this, but I suspect you could define culture in terms of art. It’s pretty important.

You’d think this would be reflected in how we discuss art, but often… it’s not. Tell me how often you’ve heard some of these gems.

  • “I could do that.”
  • “My eight-year-old kid could do that.”
  • Jokes about the worthlessness of liberal arts degrees.
  • Jokes about people trying to write novels in their spare time, the subtext being that only dreamy losers try to write novels, or something.
  • The caricature of a hippie working on a screenplay at Starbucks.

Oh, and then there was the guy who made a bot to scrape tons of art from artists who were using Patreon as a paywall — and a primary source of income. The justification was that artists shouldn’t expect to make a living off of, er, doing art, and should instead get “real jobs”.

I do wonder. How many of the people repeating these sentiments listen to music, or go to movies, or bought an iPhone because it’s prettier? Are those things not art that took real work to create? Is creating those things not a “real job”?

Perhaps a “real job” has to be one that’s not enjoyable, not a passion? And yet I can’t recall ever hearing anyone say that Taylor Swift should get a “real job”. Or that, say, pro football players should get “real jobs”. What do pro football players even do? They play a game a few times a year, and somehow this drives the flow of unimaginable amounts of money. We dress it up in the more serious-sounding “sport”, but it’s a game in the same general genre as hopscotch. There’s nothing wrong with that, but somehow it gets virtually none of the scorn that art does.

Another possible explanation is America’s partly-Christian, partly-capitalist attitude that you deserve exactly whatever you happen to have at the moment. (Whereas I deserve much more and will be getting it any day now.) Rich people are rich because they earned it, and we don’t question that further. Poor people are poor because they failed to earn it, and we don’t question that further, either. To do so would suggest that the system is somehow unfair, and hard work does not perfectly correlate with any particular measure of success.

I’m sure that factors in, but it’s not quite satisfying: I’ve also seen a good deal of spite aimed at people who are making a fairly decent chunk through Patreon or similar. Something is missing.

I thought, at first, that the key might be the American worship of work. Work is an inherent virtue. Politicians run entire campaigns based on how many jobs they’re going to create. Notably, no one seems too bothered about whether the work is useful, as long as someone decided to pay you for it.

Finally I stumbled upon the key. America doesn’t actually worship work. America worships business. Business means a company is deciding to pay you. Business means legitimacy. Business is what separates a hobby from a career.

And this presents a problem for art.

If you want to provide a service or sell a product, that’ll be hard, but America will at least try to look like it supports you. People are impressed that you’re an entrepreneur, a small business owner. Politicians will brag about policies made in your favor, whether or not they’re stabbing you in the back.

Small businesses have a particular structure they can develop into. You can divide work up. You can have someone in sales, someone in accounting. You can provide specifications and pay a factory to make your product. You can defer all of the non-creative work to someone else, whether that means experts in a particular field or unskilled labor.

But if your work is inherently creative, you can’t do that. The very thing you’re making is your idea in your style, driven by your experience. This is not work that’s readily parallelizable. Even if you sell physical merchandise and register as an LLC and have a dedicated workspace and do various other formal business-y things, the basic structure will still look the same: a single person doing the thing they enjoy. A hobbyist.

Consider the bulleted list from above. Those are all individual painters or artists or authors or screenwriters. The kinds of artists who earn respect without question are generally those managed by a business, those with branding: musical artists signed to labels, actors working for a studio. Even football players are part of a tangle of business.

(This doesn’t mean that business automatically confers respect, of course; tech in particular is full of anecdotes about nerds’ disdain for people whose jobs are design or UI or documentation or whathaveyou. But a businessy look seems to be a significant advantage.)

It seems that although art is a large part of what informs culture, we have a culture that defines “serious” endeavors in such a way that independent art cannot possibly be “serious”.

Art versus money

Which wouldn’t really matter at all, except that we also have a culture that expects you to pay for food and whatnot.

The reasoning isn’t too outlandish. Food is produced from a combination of work and resources. In exchange for getting the food, you should give back some of your own work and resources.

Obviously this is riddled with subtle flaws, but let’s roll with it for now and look at a case study. Like, uh, me!

Mel and I built and released two games together in the six weeks between mid-January and the end of February. Together, those games have made $1,000 in sales. The sales trail off fairly quickly within a few days of release, so we’ll call that the total gross for our effort.

I, dumb, having never actually sold anything before, thought this was phenomenal. Then I had the misfortune of doing some math.

Itch takes at least 10%, so we’re down to $900 net. Divided over six weeks, that’s $150 per week, before taxes — or $3.75 per hour if we’d been working full time.

Ah, but wait! There are two of us. And we hadn’t been working full time — we’d been working nearly every waking hour, which is at least twice “full time” hours. So we really made less than a dollar an hour. Even less than that, if you assume overtime pay.

From the perspective of capitalism, what is our incentive to do this? Between us, we easily have over thirty years of experience doing the things we do, and we spent weeks in crunch mode working on something, all to earn a small fraction of minimum wage. Did we not contribute back our own work and resources? Was our work worth so much less than waiting tables?

Waiting tables is a perfectly respectable way to earn a living, mind you. Ah, but wait! I’ve accidentally done something clever here. It is generally expected that you tip your waiter, because waiters are underpaid by the business, because the business assumes they’ll be tipped. Not tipping is actually, almost impressively, one of the rudest things you can do. And yet it’s not expected that you tip an artist whose work you enjoy, even though many such artists aren’t being paid at all.

Now, to be perfectly fair, both games were released for free. Even a dollar an hour is infinitely more than the zero dollars I was expecting — and I’m amazed and thankful we got as much as we did! Thank you so much. I bring it up not as a complaint, but as an armchair analysis of our systems of incentives.

People can take art for granted and whatever, yes, but there are several other factors at play here that hamper the ability for art to make money.

For one, I don’t want to sell my work. I suspect a great deal of independent artists and writers and open source developers (!) feel the same way. I create things because I want to, because I have to, because I feel so compelled to create that having a non-creative full-time job was making me miserable. I create things for the sake of expressing an idea. Attaching a price tag to something reduces the number of people who’ll experience it. In other words, selling my work would make it less valuable in my eyes, in much the same way that adding banner ads to my writing would make it less valuable.

And yet, I’m forced to sell something in some way, or else I’ll have to find someone who wants me to do bland mechanical work on their ideas in exchange for money… at the cost of producing sharply less work of my own. Thank goodness for Patreon, at least.

There’s also the reverse problem, in that people often don’t want to buy creative work. Everyone does sometimes, but only sometimes. It’s kind of a weird situation, and the internet has exacerbated it considerably.

Consider that if I write a book and print it on paper, that costs something. I have to pay for the paper and the ink and the use of someone else’s printer. If I want one more book, I have to pay a little more. I can cut those costs pretty considerably by printing a lot of books at once, but each copy still has a price, a marginal cost. If I then gave those books away, I would be actively losing money. So I can pretty well justify charging for a book.

Along comes the internet. Suddenly, copying costs nothing. Not only does it cost nothing, but it’s the fundamental operation. When you download a file or receive an email or visit a web site, you’re really getting a copy! Even the process which ultimately shows it on your screen involves a number of copies. This is so natural that we don’t even call it copying, don’t even think of it as copying.

True, bandwidth does cost something, but the rate is virtually nothing until you start looking at very big numbers indeed. I pay $60/mo for hosting this blog and a half dozen other sites — even that’s way more than I need, honestly, but downgrading would be a hassle — and I get 6TB of bandwidth. Even the longest of my posts haven’t exceeded 100KB. A post could be read by 64 million people before I’d start having a problem. If that were the population of a country, it’d be the 23rd largest in the world, between Italy and the UK.
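For the curious, that figure checks out if you run the numbers with binary units, where 1TB is 2^40 bytes:

// 6TB of monthly bandwidth divided by 100KB per page load
var reads = (6 * Math.pow(2, 40)) / (100 * Math.pow(2, 10));
console.log(Math.round(reads)); // ~64.4 million page loads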

How, then, do I justify charging for my writing? (Yes, I realize the irony in using my blog as an example in a post I’m being paid $88 to write.)

Well, I do pour effort and expertise and a fraction of my finite lifetime into it. But it doesn’t cost me anything tangible — I already had this hosting for something else! — and it’s easier all around to just put it online.

The same idea applies to a vast bulk of what’s online, and now suddenly we have a bit of a problem. Not only are we used to getting everything for free online, but we never bothered to build any sensible payment infrastructure. You still have to pay for everything by typing in a cryptic sequence of numbers from a little physical plastic card, which will then give you a small loan and charge the seller 30¢ plus 2.9% for the “convenience”.

If a website could say “pay 5¢ to read this” and you clicked a button in your browser and that was that, we might be onto something. But with our current setup, it costs far more than 5¢ to transfer 5¢, even though it’s just a number in a computer somewhere. The only people with the power and resources to fix this don’t want to fix it — they’d rather be the ones charging you the 30¢ plus 2.9%.
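Concretely, at 30¢ plus 2.9%, the fee on a 5¢ payment is about six times the payment itself:

// Card-network fee on a 5-cent micropayment
var payment = 0.05;
var fee = 0.30 + 0.029 * payment;        // 30¢ flat plus 2.9%
console.log(fee.toFixed(3));             // 0.301: about 30.1¢ in fees
console.log((payment - fee).toFixed(3)); // -0.251: the seller loses money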

That leads to another factor of platforms and publishers, which are more than happy to eat a chunk of your earnings even when you do sell stuff. Google Play, the App Store, Steam, and anecdotally many other big-name comparable platforms all take 30% of your sales. A third! And that’s good! It seems common among book publishers to take 85% to 90%. For ebook sales — i.e., ones that don’t actually cost anything — they may generously lower that to a mere 75% to 85%.

Bless Patreon for only taking 5%. Itch.io is even better: it defaults to 10%, but gives you a slider, which you can set to anything from 0% to 100%.

I’ve mentioned all this before, so here’s a more novel thought: finite disposable income. Your audience only has so much money to spend on media right now. You can try to be more compelling to encourage them to spend more of it, rather than saving it, but ultimately everyone has a limit before they just plain run out of money.

Now, popularity is heavily influenced by social and network effects, so it tends to create a power law distribution: a few things are ridiculously hyperpopular, and then there’s a steep drop to a long tail of more modestly popular things.

If a new hyperpopular thing comes out, everyone is likely to want to buy it… but then that eats away a significant chunk of that finite pool of money that could’ve gone to less popular things.

This isn’t bad, and buying a popular thing doesn’t make you a bad person; it’s just what happens. I don’t think there’s any satisfying alternative that doesn’t involve radically changing the way we think about our economy.

Taylor Swift, who I’m only picking on because her infosec account follows me on Twitter, has sold tens of millions of albums and is worth something like a quarter of a billion dollars. Does she need more? If not, should she make all her albums free from now on?

Maybe she does, and maybe she shouldn’t. The alternative is for someone to somehow prevent her from making more money, which doesn’t sit well. Yet it feels almost heretical to even ask if someone “needs” more money, because we take for granted that she’s earned it — in part by being invested in by a record label and heavily advertised. The virtue is work, right? Don’t a lot of people work just as hard? (“But you have to be talented too!” Then please explain how wildly incompetent CEOs still make millions, and leave burning businesses only to be immediately hired by new ones? Anyway, are we really willing to bet there is no one equally talented but not as popular by sheer happenstance?)

It’s kind of a moot question anyway, since she’s probably under contract with billionaires and it’s not up to her.

Where the hell was I going with this.


Right, so. Money. Everyone needs some. But making it off art can be tricky, unless you’re one of the lucky handful who strike gold.

And I’m still pretty goddamn lucky to be able to even try this! I doubt I would’ve even gotten into game development by now if I were still working for an SF tech company — it just drained so much of my creative energy, and it’s enough of an uphill battle for me to get stuff done in the first place.

How many people do I know who are bursting with ideas, but have to work a tedious job to keep the lights on, and are too tired at the end of the day to get those ideas out? Make no mistake, making stuff takes work — a lot of it. And that’s if you’re already pretty good at the artform. If you want to learn to draw or paint or write or code, you have to do just as much work first, with much more frustration, and not as much to show for it.

Utopia

So there’s my utopia. I want to see a world where people have the breathing room to create the things they dream about and share them with the rest of us.

Can it happen? Maybe. I think the cultural issues are a fairly big blocker; we’d be much better off if we treated independent art with the same reverence as, say, people who play with a ball for twelve hours a year. Or if we treated liberal arts degrees as just as good as computer science degrees. (“But STEM can change the world!” Okay. How many people with computer science degrees would you estimate are changing the world, and how many are making a website 1% faster or keeping a lumbering COBOL beast running or trying to trick 1% more people into clicking on ads?)

I don’t really mean stuff like piracy, either. Piracy is a thing, but it’s… complicated. In my experience it’s not even artists who care the most about piracy; it’s massive publishers, the sort who see artists as a sponge to squeeze money out of. You know, the same people who make everything difficult to actually buy, infest it with DRM so it doesn’t work on half the stuff you own, and don’t even sell it in half the world.

I mean treating art as a free-floating commodity, detached from anyone who created it. I mean neo-Nazis adopting a comic book character as their mascot, against the creator’s wishes. I mean politicians and even media conglomerates using someone else’s music in well-funded videos and ads without even asking. I mean assuming Google Image Search, wonder that it is, is some kind of magical free art machine. I mean the snotty Reddit post I found while looking up Patreon’s fee structure, where some doofus was insisting that Patreon couldn’t possibly pay for a full-time YouTuber’s time, because not having a job meant they had lots of time to spare.

Maybe I should go one step further: everyone should create at least once or twice. Everyone should know what it’s like to have crafted something out of nothing, to be a fucking god within the microcosm of a computer screen or a sewing machine or a pottery table. Everyone should know that spark of inspiration that we don’t seem to know how to teach in math or science classes, even though it’s the entire basis of those as well. Everyone should know that there’s a good goddamn reason I listed open source software as a kind of art at the beginning of this post.

Basic income and more arts funding for public schools. If Uber can get billions of dollars for putting little car icons on top of Google Maps and not actually doing any of their own goddamn service themselves, I think we can afford to pump more cash into webcomics and indie games and, yes, even underwater basket weaving.

Weekly roundup: Strawberry jam END

Post Syndicated from Eevee original https://eev.ee/dev/2017/03/05/weekly-roundup-strawberry-jam-end/

Hi! It’s been a while. Per tradition, I didn’t write roundups while in the middle of frantically working on a video game.

  • fox flux: Mostly, I made a video game for a month-long game jam. It’s an hourish-long puzzle platformer and is somewhat NSFW but you can snag it from itch.io if you like. (If you install it via the itch app, it’ll patch automatically and more quickly in the future. Though I’m not sure how itch will handle the Linux “build”, which is just a LÖVE file.)

    My ever-evolving physics code got some improvements again, but the biggest time constraint by far was the art — I am not very fast, and I had to learn a lot of things as I went. I’ll be writing about some of that in the near future. Still, it’s pretty cool that (with Mel’s indispensable advice) I managed to produce enough art for a competent-looking game.

    I did all the sounds as well, though Mel composed the music, which made the whole thing much better.

    Post-release, I fixed a few critical bugs, and I’ve been working on some little vignettes to make the ending a bit less, er, anticlimactic.

  • bolthaven: I also worked on Mel’s game for the same jam, but the script turned out to be much longer than expected, so it wasn’t finished in time.

  • blog: I’ve started writing both a Patreon post for February and a spontaneous post on some art insights from the past month.

Lots of work, but a short summary. I’m exhausted and moving a bit pokily these last couple days, but I’m more inspired than ever. I’ve got a lot of stuff to catch up on and plenty more gamedev to do; hope you enjoy some of it!

Security and the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/security_and_th.html

Last year, on October 21, your digital video recorder — or at least a DVR like yours — knocked Twitter off the internet. Someone used your DVR, along with millions of insecure webcams, routers, and other connected devices, to launch an attack that started a chain reaction, resulting in Twitter, Reddit, Netflix, and many sites going off the internet. You probably didn’t realize that your DVR had that kind of power. But it does.

All computers are hackable. This has as much to do with the computer market as it does with the technologies. We prefer our software full of features and inexpensive, at the expense of security and reliability. That your computer can affect the security of Twitter is a market failure. The industry is filled with market failures that, until now, have been largely ignorable. As computers continue to permeate our homes, cars, businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to “do something” in the face of disaster.

In this article I want to outline the problems, both technical and political, and point to some regulatory solutions. Regulation might be a dirty word in today’s political climate, but security is the exception to our small-government bias. And as the threats posed by computers become greater and more catastrophic, regulation will be inevitable. So now’s the time to start thinking about it.

We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized.

If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much.

**********

We no longer have things with computers embedded in them. We have computers with things attached to them.

Your modern refrigerator is a computer that keeps things cold. Your oven, similarly, is a computer that makes things hot. An ATM is a computer with money inside. Your car is no longer a mechanical device with some computers inside; it’s a computer with four wheels and an engine. Actually, it’s a distributed system of over 100 computers with four wheels and an engine. And, of course, your phones became full-power general-purpose computers in 2007, when the iPhone was introduced.

We wear computers: fitness trackers and computer-enabled medical devices — and, of course, we carry our smartphones everywhere. Our homes have smart thermostats, smart appliances, smart door locks, even smart light bulbs. At work, many of those same smart devices are networked together with CCTV cameras, sensors that detect customer movements, and everything else. Cities are starting to embed smart sensors in roads, streetlights, and sidewalk squares, as well as smart energy grids and smart transportation networks. A nuclear power plant is really just a computer that produces electricity, and — like everything else we’ve just listed — it’s on the internet.

The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.

Broadly speaking, the Internet of Things has three parts. There are the sensors that collect data about us and our environment: smart thermostats, street and highway sensors, and those ubiquitous smartphones with their motion sensors and GPS location receivers. Then there are the “smarts” that figure out what the data means and what to do about it. This includes all the computer processors on these devices and — increasingly — in the cloud, as well as the memory that stores all of this information. And finally, there are the actuators that affect our environment. The point of a smart thermostat isn’t to record the temperature; it’s to control the furnace and the air conditioner. Driverless cars collect data about the road and the environment to steer themselves safely to their destinations.

You can think of the sensors as the eyes and ears of the internet. You can think of the actuators as the hands and feet of the internet. And you can think of the stuff in the middle as the brain. We are building an internet that senses, thinks, and acts.

This is the classic definition of a robot. We’re building a world-size robot, and we don’t even realize it.

To be sure, it’s not a robot in the classical sense. We think of robots as discrete autonomous entities, with sensors, brain, and actuators all together in a metal shell. The world-size robot is distributed. It doesn’t have a singular body, and parts of it are controlled in different ways by different people. It doesn’t have a central brain, and it has nothing even remotely resembling a consciousness. It doesn’t have a single goal or focus. It’s not even something we deliberately designed. It’s something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world.

This world-size robot is actually more than the Internet of Things. It’s a combination of several decades-old computing trends: mobile computing, cloud computing, always-on computing, huge databases of personal information, the Internet of Things — or, more precisely, cyber-physical systems — autonomy, and artificial intelligence. And while it’s still not very smart, it’ll get smarter. It’ll get more powerful and more capable through all the interconnections we’re building.

It’ll also get much more dangerous.

**********

Computer security has been around for almost as long as computers have been. And while it’s true that security wasn’t part of the design of the original internet, it’s something we have been trying to achieve since its beginning.

I have been working in computer security for over 30 years: first in cryptography, then more generally in computer and network security, and now in general security technology. I have watched computers become ubiquitous, and have seen firsthand the problems — and solutions — of securing these complex machines and systems. I’m telling you all this because what used to be a specialized area of expertise now affects everything. Computer security is now everything security. There’s one critical difference, though: The threats have become greater.

Traditionally, computer security is divided into three categories: confidentiality, integrity, and availability. For the most part, our security concerns have largely centered around confidentiality. We’re concerned about our data and who has access to it — the world of privacy and surveillance, of data theft and misuse.

But threats come in many forms. Availability threats: computer viruses that delete our data, or ransomware that encrypts our data and demands payment for the unlock key. Integrity threats: hackers who can manipulate data entries can do things ranging from changing grades in a class to changing the amount of money in bank accounts. Some of these threats are pretty bad. Hospitals have paid tens of thousands of dollars to criminals whose ransomware encrypted critical medical files. JPMorgan Chase spends half a billion dollars a year on cybersecurity.

Today, the integrity and availability threats are much worse than the confidentiality threats. Once computers start affecting the world in a direct and physical manner, there are real risks to life and property. There is a fundamental difference between crashing your computer and losing your spreadsheet data, and crashing your pacemaker and losing your life. This isn’t hyperbole; recently researchers found serious security vulnerabilities in St. Jude Medical’s implantable heart devices. Give the internet hands and feet, and it will have the ability to punch and kick.

Take a concrete example: modern cars, those computers on wheels. The steering wheel no longer turns the axles, nor does the accelerator pedal change the speed. Every move you make in a car is processed by a computer, which does the actual controlling. A central computer controls the dashboard. There’s another in the radio. The engine has 20 or so computers. These are all networked, and increasingly autonomous.

Now, let’s start listing the security threats. We don’t want car navigation systems to be used for mass surveillance, or the microphone for mass eavesdropping. We might want it to be used to determine a car’s location in the event of a 911 call, and possibly to collect information about highway congestion. We don’t want people to hack their own cars to bypass emissions-control limitations. We don’t want manufacturers or dealers to be able to do that, either, as Volkswagen did for years. We can imagine wanting to give police the ability to remotely and safely disable a moving car; that would make high-speed chases a thing of the past. But we definitely don’t want hackers to be able to do that. We definitely don’t want them disabling the brakes in every car without warning, at speed. As we make the transition from driver-controlled cars to cars with various driver-assist capabilities to fully driverless cars, we don’t want any of those critical components subverted. We don’t want someone to be able to accidentally crash your car, let alone do it on purpose. And equally, we don’t want them to be able to manipulate the navigation software to change your route, or the door-lock controls to prevent you from opening the door. I could go on.

That’s a lot of different security requirements, and the effects of getting them wrong range from illegal surveillance to extortion by ransomware to mass death.

**********

Our computers and smartphones are as secure as they are because companies like Microsoft, Apple, and Google spend a lot of time testing their code before it’s released, and quickly patch vulnerabilities when they’re discovered. Those companies can support large, dedicated teams because those companies make a huge amount of money, either directly or indirectly, from their software — and, in part, compete on its security. Unfortunately, this isn’t true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don’t have the expertise to make them secure.

At a recent hacker conference, a security researcher analyzed 30 home routers and was able to break into half of them, including some of the most popular and common brands. The denial-of-service attacks that forced popular websites like Reddit and Twitter off the internet last October were enabled by vulnerabilities in devices like webcams and digital video recorders. In August, two security researchers demonstrated a ransomware attack on a smart thermostat.

Even worse, most of these devices don’t have any way to be patched. Companies like Microsoft and Apple continuously deliver security patches to your computers. Some home routers are technically patchable, but in a complicated way that only an expert would attempt. And the only way for you to update the firmware in your hackable DVR is to throw it away and buy a new one.

The market can’t fix this because neither the buyer nor the seller cares. The owners of the webcams and DVRs used in the denial-of-service attacks don’t care. Their devices were cheap to buy, they still work, and they don’t know any of the victims of the attacks. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution, because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

**********

Security is an arms race between attacker and defender. Technology perturbs that arms race by changing the balance between attacker and defender. Understanding how this arms race has unfolded on the internet is essential to understanding why the world-size robot we’re building is so insecure, and how we might secure it. To that end, I have five truisms, born from what we’ve already learned about computer and internet security. They will soon affect the security arms race everywhere.

Truism No. 1: On the internet, attack is easier than defense.

There are many reasons for this, but the most important is the complexity of these systems. More complexity means more people involved, more parts, more interactions, more mistakes in the design and development process, more of everything where hidden insecurities can be found. Computer-security experts like to speak about the attack surface of a system: all the possible points an attacker might target and that must be secured. A complex system means a large attack surface. The defender has to secure the entire attack surface. The attacker just has to find one vulnerability — one unsecured avenue for attack — and gets to choose how and when to attack. It’s simply not a fair battle.

There are other, more general, reasons why attack is easier than defense. Attackers have a natural agility that defenders often lack. They don’t have to worry about laws, and often not about morals or ethics. They don’t have a bureaucracy to contend with, and can more quickly make use of technical innovations. Attackers also have a first-mover advantage. As a society, we’re generally terrible at proactive security; we rarely take preventive security measures until an attack actually happens. So more advantages go to the attacker.

Truism No. 2: Most software is poorly written and insecure.

If complexity isn’t enough, we compound the problem by producing lousy software. Well-written software, like the kind found in airplane avionics, is both expensive and time-consuming to produce. We don’t want that. For the most part, poorly written software has been good enough. We’d all rather live with buggy software than pay the prices good software would require. We don’t mind if our games crash regularly, or our business applications act weird once in a while. Because software has been largely benign, it hasn’t mattered. This has permeated the industry at all levels. At universities, we don’t teach how to code well. Companies don’t reward quality code in the same way they reward fast and cheap. And we consumers don’t demand it.

But poorly written software is riddled with bugs, sometimes as many as one per 1,000 lines of code. Some of them are inherent in the complexity of the software, but most are programming mistakes. Not all bugs are vulnerabilities, but some are.

Truism No. 3: Connecting everything to each other via the internet will expose new vulnerabilities.

The more we network things together, the more vulnerabilities on one thing will affect other things. On October 21, vulnerabilities in a wide variety of embedded devices were all harnessed together to create what hackers call a botnet. This botnet was used to launch a distributed denial-of-service attack against a company called Dyn. Dyn provided a critical internet function for many major internet sites. So when Dyn went down, so did all those popular websites.

These chains of vulnerabilities are everywhere. In 2012, journalist Mat Honan suffered a massive personal hack because of one of them. A vulnerability in his Amazon account allowed hackers to get into his Apple account, which allowed them to get into his Gmail account. And in 2013, the Target Corporation was hacked by someone stealing credentials from its HVAC contractor.

Vulnerabilities like these are particularly hard to fix, because no one system might actually be at fault. It might be the insecure interaction of two individually secure systems.

Truism No. 4: Everybody has to stop the best attackers in the world.

One of the most powerful properties of the internet is that it allows things to scale. This is true for our ability to access data or control systems or do any of the cool things we use the internet for, but it’s also true for attacks. In general, fewer attackers can do more damage because of better technology. It’s not just that these modern attackers are more efficient, it’s that the internet allows attacks to scale to a degree impossible without computers and networks.

This is fundamentally different from what we’re used to. When securing my home against burglars, I am only worried about the burglars who live close enough to my home to consider robbing me. The internet is different. When I think about the security of my network, I have to be concerned about the best attacker possible, because he’s the one who’s going to create the attack tool that everyone else will use. The attacker that discovered the vulnerability used to attack Dyn released the code to the world, and within a week there were a dozen attack tools using it.

Truism No. 5: Laws inhibit security research.

The Digital Millennium Copyright Act is a terrible law that fails at its purpose of preventing widespread piracy of movies and music. To make matters worse, it contains a provision that has critical side effects. According to the law, it is a crime to bypass security mechanisms that protect copyrighted work, even if that bypassing would otherwise be legal. Since all software can be copyrighted, it is arguably illegal to do security research on these devices and to publish the result.

Although the exact contours of the law are arguable, many companies are using this provision of the DMCA to threaten researchers who expose vulnerabilities in their embedded systems. This instills fear in researchers, and has a chilling effect on research, which means two things: (1) Vendors of these devices are more likely to leave them insecure, because no one will notice and they won’t be penalized in the market, and (2) security engineers don’t learn how to do security better.

Unfortunately, companies generally like the DMCA. The provisions against reverse-engineering spare them the embarrassment of having their shoddy security exposed. It also allows them to build proprietary systems that lock out competition. (This is an important one. Right now, your toaster cannot force you to only buy a particular brand of bread. But because of this law and an embedded computer, your Keurig coffee maker can force you to buy a particular brand of coffee.)

**********

In general, there are two basic paradigms of security. We can either try to secure something well the first time, or we can make our security agile. The first paradigm comes from the world of dangerous things: from planes, medical devices, buildings. It’s the paradigm that gives us secure design and secure engineering, security testing and certifications, professional licensing, detailed preplanning and complex government approvals, and long times-to-market. It’s security for a world where getting it right is paramount because getting it wrong means people dying.

The second paradigm comes from the fast-moving and heretofore largely benign world of software. In this paradigm, we have rapid prototyping, on-the-fly updates, and continual improvement. In this paradigm, new vulnerabilities are discovered all the time and security disasters regularly happen. Here, we stress survivability, recoverability, mitigation, adaptability, and muddling through. This is security for a world where getting it wrong is okay, as long as you can respond fast enough.

These two worlds are colliding. They’re colliding in our cars — literally — in our medical devices, our building control systems, our traffic control systems, and our voting machines. And although these paradigms are wildly different and largely incompatible, we need to figure out how to make them work together.

So far, we haven’t done very well. We still largely rely on the first paradigm for the dangerous computers in cars, airplanes, and medical devices. As a result, there are medical systems that can’t have security patches installed because that would invalidate their government approval. In 2015, Chrysler recalled 1.4 million cars to fix a software vulnerability. In September 2016, Tesla remotely sent a security patch to all of its Model S cars overnight. Tesla sure sounds like it’s doing things right, but what vulnerabilities does this remote patch feature open up?

**********

Until now we’ve largely left computer security to the market. Because the computer and network products we buy and use are so lousy, an enormous after-market industry in computer security has emerged. Governments, companies, and people buy the security they think they need to secure themselves. We’ve muddled through well enough, but the market failures inherent in trying to secure this world-size robot will soon become too big to ignore.

Markets alone can’t solve our security problems. Markets are motivated by profit and short-term goals at the expense of society. They can’t solve collective-action problems. They won’t be able to deal with economic externalities, like the vulnerabilities in DVRs that resulted in Twitter going offline. And we need a counterbalancing force to corporate power.

This all points to policy. While the details of any computer-security system are technical, getting the technologies broadly deployed is a problem that spans law, economics, psychology, and sociology. And getting the policy right is just as important as getting the technology right because, for internet security to work, law and technology have to work together. This is probably the most important lesson of Edward Snowden’s NSA disclosures. We already knew that technology can subvert law. Snowden demonstrated that law can also subvert technology. Both fail unless each works. It’s not enough to just let technology do its thing.

Any policy changes to secure this world-size robot will mean significant government regulation. I know it’s a sullied concept in today’s world, but I don’t see any other possible solution. It’s going to be especially difficult on the internet, where its permissionless nature is one of the best things about it and the underpinning of its most world-changing innovations. But I don’t see how that can continue when the internet can affect the world in a direct and physical manner.

**********

I have a proposal: a new government regulatory agency. Before dismissing it out of hand, please hear me out.

We have a practical problem when it comes to internet regulation. There’s no government structure to tackle this at a systemic level. Instead, there’s a fundamental mismatch between the way government works and the way this technology works that makes dealing with this problem impossible at the moment.

Government operates in silos. In the U.S., the FAA regulates aircraft. The NHTSA regulates cars. The FDA regulates medical devices. The FCC regulates communications devices. The FTC protects consumers in the face of “unfair” or “deceptive” trade practices. Even worse, who regulates data can depend on how it is used. If data is used to influence a voter, it’s the Federal Election Commission’s jurisdiction. If that same data is used to influence a consumer, it’s the FTC’s. Use those same technologies in a school, and the Department of Education is now in charge. Robotics will have its own set of problems, and no one is sure how that is going to be regulated. Each agency has a different approach and different rules. They have no expertise in these new issues, and they are not quick to expand their authority for all sorts of reasons.

Compare that with the internet. The internet is a freewheeling system of integrated objects and networks. It grows horizontally, demolishing old technological barriers so that people and systems that never previously communicated now can. Already, apps on a smartphone can log health information, control your energy use, and communicate with your car. That’s a set of functions that crosses jurisdictions of at least four different government agencies, and it’s only going to get worse.

Our world-size robot needs to be viewed as a single entity with millions of components interacting with each other. Any solutions here need to be holistic. They need to work everywhere, for everything. Whether we’re talking about cars, drones, or phones, they’re all computers.

This has lots of precedent. Many new technologies have led to the formation of new government regulatory agencies. Trains did, cars did, airplanes did. Radio led to the formation of the Federal Radio Commission, which became the FCC. Nuclear power led to the formation of the Atomic Energy Commission, which eventually became the Department of Energy. The reasons were the same in every case. New technologies need new expertise because they bring with them new challenges. Governments need a single agency to house that new expertise, because its applications cut across several preexisting agencies. It’s less that the new agency needs to regulate — although that’s often a big part of it — and more that governments recognize the importance of the new technologies.

The internet has famously eschewed formal regulation, instead adopting a multi-stakeholder model of academics, businesses, governments, and other interested parties. My hope is that we can keep the best of this approach in any regulatory agency, looking to models like the new U.S. Digital Service or the 18F office inside the General Services Administration. Both of those organizations are dedicated to providing digital government services, both have collected significant expertise by bringing people in from outside of government, and both have learned how to work closely with existing agencies. Any internet regulatory agency will similarly need to engage in a high level of collaborative regulation — both a challenge and an opportunity.

I don’t think any of us can predict the totality of the regulations we need to ensure the safety of this world, but here are a few. We need government to ensure companies follow good security practices: testing, patching, secure defaults — and we need to be able to hold companies liable when they fail to do these things. We need government to mandate strong personal data protections, and limitations on data collection and use. We need to ensure that responsible security research is legal and well-funded. We need to enforce transparency in design, some sort of code escrow in case a company goes out of business, and interoperability between devices of different manufacturers, to counterbalance the monopolistic effects of interconnected technologies. Individuals need the right to take their data with them. And internet-enabled devices should retain some minimal functionality if disconnected from the internet.

I’m not the only one talking about this. I’ve seen proposals for a National Institutes of Health analog for cybersecurity. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission. I think it needs to be broader: maybe a Department of Technology Policy.

Of course there will be problems. There’s a lack of expertise in these issues inside government. There’s a lack of willingness in government to do the hard regulatory work. Industry is worried about any new bureaucracy: both that it will stifle innovation by regulating too much and that it will be captured by industry and regulate too little. A domestic regulatory agency will have to deal with the fundamentally international nature of the problem.

But government is the entity we use to solve problems like this. It has the scope, scale, and balance of interests to address them. It’s the institution we’ve built to adjudicate competing social interests and internalize market externalities. Left to its own devices, the market simply can’t. That we’re currently in the middle of an era of low government trust, where many of us can’t imagine government doing anything positive in an area like this, is to our detriment.

Here’s the thing: Governments will get involved, regardless. The risks are too great, and the stakes are too high. Government already regulates dangerous physical systems like cars and medical devices. And nothing motivates the U.S. government like fear. Remember 2001? A nominally small-government Republican president created the Office of Homeland Security 11 days after the terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for over a decade. A fatal disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important, and complex; and they’re coming. We can’t afford to ignore these issues until it’s too late.

We also need to start disconnecting systems. If we cannot secure complex systems to the level required by their real-world capabilities, then we must not build a world where everything is computerized and interconnected.

There are other models. We can enable local communications only. We can set limits on collected and stored data. We can deliberately design systems that don’t interoperate with each other. We can deliberately fetter devices, reversing the current trend of turning everything into a general-purpose computer. And, most important, we can move toward less centralization and more distributed systems, which is how the internet was first envisioned.

This might be a heresy in today’s race to network everything, but large, centralized systems are not inevitable. The technical elites are pushing us in that direction, but they really don’t have any good supporting arguments other than the profits of their ever-growing multinational corporations.

But this will change. It will change not only because of security concerns, it will also change because of political concerns. We’re starting to chafe under the worldview of everything producing data about us and what we do, and that data being available to both governments and corporations. Surveillance capitalism won’t be the business model of the internet forever. We need to change the fabric of the internet so that evil governments don’t have the tools to create a horrific totalitarian state. And while good laws and regulations in Western democracies are a great second line of defense, they can’t be our only line of defense.

My guess is that we will soon reach a high-water mark of computerization and connectivity, and that afterward we will make conscious decisions about what and how we decide to interconnect. But we’re still in the honeymoon phase of connectivity. Governments and corporations are punch-drunk on our data, and the rush to connect everything is driven by an even greater desire for power and market share. One of the presentations released by Edward Snowden contained the NSA mantra: “Collect it all.” A similar mantra for the internet today might be: “Connect it all.”

The inevitable backlash will not be driven by the market. It will be deliberate policy decisions that put the safety and welfare of society above individual corporations and industries. It will be deliberate policy decisions that prioritize the security of our systems over the demands of the FBI to weaken them in order to make their law-enforcement jobs easier. It’ll be hard policy for many to swallow, but our safety will depend on it.

**********

The scenarios I’ve outlined, both the technological and economic trends that are causing them and the political changes we need to make to start to fix them, come from my years of working in internet-security technology and policy. All of this is informed by an understanding of both technology and policy. That turns out to be critical, and there aren’t enough people who understand both.

This brings me to my final plea: We need more public-interest technologists.

Over the past couple of decades, we’ve seen examples of getting internet-security policy badly wrong. I’m thinking of the FBI’s “going dark” debate about its insistence that computer devices be designed to facilitate government access, the “vulnerability equities process” about when the government should disclose and fix a vulnerability versus when it should use it to attack other systems, the debacle over paperless touch-screen voting machines, and the DMCA that I discussed above. If you watched any of these policy debates unfold, you saw policy-makers and technologists talking past each other.

Our world-size robot will exacerbate these problems. The historical divide between Washington and Silicon Valley — the mistrust of governments by tech companies and the mistrust of tech companies by governments — is dangerous.

We have to fix this. Getting IoT security right depends on the two sides working together and, even more important, having people who are experts in each working on both. We need technologists to get involved in policy, and we need policy-makers to get involved in technology. We need people who are experts in making both technology and technological policy. We need technologists on congressional staffs, inside federal agencies, working for NGOs, and as part of the press. We need to create a viable career path for public-interest technologists, much as there already is one for public-interest attorneys. We need courses, and degree programs in colleges, for people interested in careers in public-interest technology. We need fellowships in organizations that need these people. We need technology companies to offer sabbaticals for technologists wanting to go down this path. We need an entire ecosystem that supports people bridging the gap between technology and law. We need a viable career path that ensures that even though people in this field won’t make as much as they would in a high-tech start-up, they will have viable careers. The security of our computerized and networked future — meaning the security of ourselves, families, homes, businesses, and communities — depends on it.

This plea is bigger than security, actually. Pretty much all of the major policy debates of this century will have a major technological component. Whether it’s weapons of mass destruction, robots drastically affecting employment, climate change, food safety, or the increasing ubiquity of ever-shrinking drones, understanding the policy means understanding the technology. Our society desperately needs technologists working on the policy. The alternative is bad policy.

**********

The world-size robot is less designed than created. It’s coming without any forethought or architecting or planning; most of us are completely unaware of what we’re building. In fact, I am not convinced we can actually design any of this. When we try to design complex sociotechnical systems like this, we are regularly surprised by their emergent properties. The best we can do is observe and channel these properties as best we can.

Market thinking sometimes makes us lose sight of the human choices and autonomy at stake. Before we get controlled — or killed — by the world-size robot, we need to rebuild confidence in our collective governance institutions. Law and policy may not seem as cool as digital tech, but they’re also places of critical innovation. They’re where we collectively bring about the world we want to live in.

While I might sound like a Cassandra, I’m actually optimistic about our future. Our society has tackled bigger problems than this one. It takes work and it’s not easy, but we eventually find our way clear to make the hard choices necessary to solve our real problems.

The world-size robot we’re building can only be managed responsibly if we start making real choices about the interconnected world we live in. Yes, we need security systems as robust as the threat landscape. But we also need laws that effectively regulate these dangerous technologies. And, more generally, we need to make moral, ethical, and political decisions on how those systems should work. Until now, we’ve largely left the internet alone. We gave programmers a special right to code cyberspace as they saw fit. This was okay because cyberspace was separate and relatively unimportant: That is, it didn’t matter. Now that that’s changed, we can no longer give programmers and the companies they work for this power. Those moral, ethical, and political decisions need, somehow, to be made by everybody. We need to link people with the same zeal that we are currently linking machines. “Connect it all” must be countered with “connect us all.”

This essay previously appeared in New York Magazine.

Authorizing Access Through a Proxy Resource to Amazon API Gateway and AWS Lambda Using Amazon Cognito User Pools

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/authorizing-access-through-a-proxy-resource-to-amazon-api-gateway-and-aws-lambda-using-amazon-cognito-user-pools/


Ed Lima, Solutions Architect

Want to create your own user directory that can scale to hundreds of millions of users? Amazon Cognito user pools are fully managed so that you don’t have to worry about the heavy lifting associated with building, securing, and scaling authentication to your apps.

The AWS Mobile blog post Integrating Amazon Cognito User Pools with API Gateway back in May explained how to integrate user pools with Amazon API Gateway using an AWS Lambda custom authorizer. Since then, we’ve released a new feature where you can directly configure a Cognito user pool authorizer to authenticate your API calls; more recently, we released a new proxy resource feature. In this post, I show how to use these great new features together to secure access to an API backed by a Lambda proxy resource.

Walkthrough

In this post, I assume that you have some basic knowledge about the services involved. If not, feel free to review our documentation and tutorials on Amazon Cognito, Amazon API Gateway, and AWS Lambda.

Start by creating a user pool called “myApiUsers”, and enable verifications with optional MFA access for extra security:

cognitouserpoolsauth_1.png

Be mindful that if you are using a similar solution for production workloads, you will need to request an SMS spending threshold limit increase from Amazon SNS in order to send SMS messages to users for phone number verification or for MFA. For the purposes of this article, since we are only testing our API authentication with a single user, the default limit will suffice.

Now, create an app in your user pool, making sure to clear Generate client secret:

cognitouserpoolsauth_2.png

Using the client ID of your newly created app, add a user, “jdoe”, with the AWS CLI. The user needs a valid email address and phone number to receive MFA codes:

aws cognito-idp sign-up \
--client-id 12ioh8c17q3stmndpXXXXXXXX \
--username jdoe \
--password [email protected] \
--region us-east-1 \
--user-attributes '[{"Name":"given_name","Value":"John"},{"Name":"family_name","Value":"Doe"},{"Name":"email","Value":"[email protected]"},{"Name":"gender","Value":"Male"},{"Name":"phone_number","Value":"+61XXXXXXXXXX"}]'  

In the Cognito User Pools console, under Users, select the new user and choose Confirm User and Enable MFA:

cognitouserpoolsauth_3.png

Your Cognito user is now ready and available to connect.
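If you prefer to script the confirmation steps instead of using the console, a rough sketch with the AWS SDK for JavaScript might look like the following; it requires administrator credentials, and the user pool ID below is a placeholder:

// Hypothetical scripted alternative to the console steps above
var AWS = require('aws-sdk');
var cognito = new AWS.CognitoIdentityServiceProvider({ region: 'us-east-1' });

var params = { UserPoolId: 'us-east-1_XXXXXXXXX', Username: 'jdoe' };

// Confirm the user, then turn on SMS-based MFA for them
cognito.adminConfirmSignUp(params, function(err) {
  if (err) return console.error(err);
  cognito.adminSetUserSettings({
    UserPoolId: params.UserPoolId,
    Username: params.Username,
    MFAOptions: [{ DeliveryMedium: 'SMS', AttributeName: 'phone_number' }]
  }, function(err) {
    if (err) return console.error(err);
    console.log('jdoe confirmed with SMS MFA enabled');
  });
});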

Next, create a Node.js Lambda function called LambdaForSimpleProxy with a basic execution role. Here’s the code:

'use strict';
console.log('Loading CUP2APIGW2Lambda Function');

exports.handler = function(event, context) {
    var responseCode = 200;
    console.log("request: " + JSON.stringify(event));
    
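    // Claims from the caller's Cognito ID token are passed through by the
    // user pool authorizer at event.requestContext.authorizer.claims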
    var responseBody = {
        message: "Hello, " + event.requestContext.authorizer.claims.given_name + " " + event.requestContext.authorizer.claims.family_name +"!" + " You are authenticated to your API using Cognito user pools!",
        method: "This is an authorized "+ event.httpMethod + " to Lambda from your API using a proxy resource.",
        body: event.body
    };

    //Response including CORS required header
    var response = {
        statusCode: responseCode,
        headers: {
            "Access-Control-Allow-Origin" : "*"
        },
        body: JSON.stringify(responseBody)
    };

    console.log("response: " + JSON.stringify(response))
    context.succeed(response);
};

For the last piece of the back-end puzzle, create a new API called CUP2Lambda from the Amazon API Gateway console. Under Authorizers, choose Create, Cognito User Pool Authorizer with the following settings:

cognitouserpoolsauth_4.png

Create an ANY method under the root of the API as follows:

cognitouserpoolsauth_5.png

After that, choose Save, then OK to give API Gateway permission to invoke the Lambda function. It’s time to configure the authorization settings for your ANY method. Under Method Request, select the Cognito user pool as the authorization for your API:

cognitouserpoolsauth_6.png

Finally, choose Actions, Enable CORS. This creates an OPTIONS method in your API:

cognitouserpoolsauth_7.png

Now it’s time to deploy the API to a stage (such as prod) and generate a JavaScript SDK from the SDK Generation tab. You can use other methods to connect to your API; however, in this article I’ll show how to use the API Gateway SDK. Since we are using an ANY method, the generated SDK has no calls for specific methods other than the OPTIONS method created by Enable CORS, so you have to add a couple of extra functions to the apigClient.js file so that your SDK can perform GET and POST operations to your API:


    apigClient.rootGet = function (params, body, additionalParams) {
        if(additionalParams === undefined) { additionalParams = {}; }
        
        apiGateway.core.utils.assertParametersDefined(params, [], ['body']);       

        var rootGetRequest = {
            verb: 'get'.toUpperCase(),
            path: pathComponent + uritemplate('/').expand(apiGateway.core.utils.parseParametersToObject(params, [])),
            headers: apiGateway.core.utils.parseParametersToObject(params, []),
            queryParams: apiGateway.core.utils.parseParametersToObject(params, []),
            body: body
        };
        

        return apiGatewayClient.makeRequest(rootGetRequest, authType, additionalParams, config.apiKey);
    };

    apigClient.rootPost = function (params, body, additionalParams) {
        if(additionalParams === undefined) { additionalParams = {}; }
     
        apiGateway.core.utils.assertParametersDefined(params, ['body'], ['body']);
       
        var rootPostRequest = {
            verb: 'post'.toUpperCase(),
            path: pathComponent + uritemplate('/').expand(apiGateway.core.utils.parseParametersToObject(params, [])),
            headers: apiGateway.core.utils.parseParametersToObject(params, []),
            queryParams: apiGateway.core.utils.parseParametersToObject(params, []),
            body: body
        };
        
        return apiGatewayClient.makeRequest(rootPostRequest, authType, additionalParams, config.apiKey);

    };

You can now use a little front-end web page to authenticate users and test authorized calls to your API. In order for it to work, you need to add some external libraries and dependencies, including the API Gateway SDK you just generated. You can find more details in our Cognito and API Gateway SDK documentation guides.

With the dependencies in place, you can use the following JavaScript code to authenticate your Cognito user pool user and connect to your API in order to perform authorized calls (replace the user pool ID and client ID details with your own):

<script type="text/javascript">
 //Configure the AWS client with the Cognito role and a blank identity pool to get initial credentials

  AWS.config.update({
    region: 'us-east-1',
    credentials: new AWS.CognitoIdentityCredentials({
      IdentityPoolId: ''
    })
  });

  AWSCognito.config.region = 'us-east-1';
  AWSCognito.config.update({accessKeyId: 'null', secretAccessKey: 'null'});
  var token = "";
 
  //Authenticate user with MFA

  document.getElementById("buttonAuth").addEventListener("click", function(){  
    var authenticationData = {
      Username : document.getElementById('username').value,
      Password : document.getElementById('password').value,
      };

    var showGetPut = document.getElementById('afterLogin');
    var hideLogin = document.getElementById('login');

    var authenticationDetails = new AWSCognito.CognitoIdentityServiceProvider.AuthenticationDetails(authenticationData);

   // Replace with your user pool details

    var poolData = { 
        UserPoolId : 'us-east-1_XXXXXXXXX', 
        ClientId : '12ioh8c17q3stmndpXXXXXXXX', 
        Paranoia : 7
    };

    var userPool = new AWSCognito.CognitoIdentityServiceProvider.CognitoUserPool(poolData);

    var userData = {
        Username : document.getElementById('user').value,
        Pool : userPool
    };

    var cognitoUser = new AWSCognito.CognitoIdentityServiceProvider.CognitoUser(userData);
    cognitoUser.authenticateUser(authenticationDetails, {
      onSuccess: function (result) {
        token = result.getIdToken().getJwtToken(); // CUP Authorizer = ID Token
        console.log('ID Token: ' + result.getIdToken().getJwtToken()); // Show ID Token in the console
        var cognitoGetUser = userPool.getCurrentUser();
        if (cognitoGetUser != null) {
          cognitoGetUser.getSession(function(err, result) {
            if (result) {
              console.log ("User Successfuly Authenticated!");  
            }
          });
        }

        //Hide Login form after successful authentication
        showGetPut.style.display = 'block';
        hideLogin.style.display = 'none';
      },
    onFailure: function(err) {
        alert(err);
    },
    mfaRequired: function(codeDeliveryDetails) {
            var verificationCode = prompt('Please input a verification code.' ,'');
            cognitoUser.sendMFACode(verificationCode, this);
        }
    });
  });

//Send a GET request to the API

document.getElementById("buttonGet").addEventListener("click", function(){
  var apigClient = apigClientFactory.newClient();
  var additionalParams = {
      headers: {
        Authorization: token
      }
    };

  apigClient.rootGet({},{},additionalParams)
      .then(function(response) {
        console.log(JSON.stringify(response));
        document.getElementById("output").innerHTML = ('<pre align="left"><code>Response: '+JSON.stringify(response.data, null, 2)+'</code></pre>');
      }).catch(function (response) {
        document.getElementById('output').innerHTML = ('<pre align="left"><code>Error: '+JSON.stringify(response, null, 2)+'</code></pre>');
        console.log(response);
    });
});

//Send a POST request to the API

document.getElementById("buttonPost").addEventListener("click", function(){
  var apigClient = apigClientFactory.newClient();
  var additionalParams = {
      headers: {
        Authorization: token
      }
    };
    
 var body = {
        "message": "Sample POST payload"
  };

  apigClient.rootPost({},body,additionalParams)
      .then(function(response) {
        console.log(JSON.stringify(response));
        document.getElementById("output").innerHTML = ('<pre align="left"><code>Response: '+JSON.stringify(response.data, null, 2)+'</code></pre>');
      }).catch(function (response) {
        document.getElementById('output').innerHTML = ('<pre align="left"><code>Error: '+JSON.stringify(response, null, 2)+'</code></pre>');
        console.log(response);
    });
});
</script>

For the front end itself, you can test with some simple HTML, such as the following snippet:

<body>
<div id="container" class="container">
    <br/>
    <img src="http://awsmedia.s3.amazonaws.com/AWS_Logo_PoweredBy_127px.png">
    <h1>Cognito User Pools and API Gateway</h1>
    <form name="myform">
        <ul>
          <li class="fields">
            <div id="login">
            <label>User Name: </label>
            <input id="username" size="60" class="req" type="text"/>
            <label>Password: </label>
            <input id="password" size="60" class="req" type="password"/>
            <button class="btn" type="button" id='buttonAuth' title="Log in with your username and password">Log In</button>
            <br />
            </div>
            <div id="afterLogin" style="display:none;"> 
            <br />
            <button class="btn" type="button" id='buttonPost'>POST</button>
            <button class="btn" type="button" id='buttonGet' >GET</button>
            <br />
          </li>
        </ul>
      </form>
  <br/>
    <div id="output"></div>
  <br/>         
  </div>        
  <br/>
  </div>
</body>

After adding some extra CSS styling of your choice (for example, "list-style: none" to remove the list bullet points), the front end is ready. You can test it by using a local web server on your computer or a static website on Amazon S3.

Enter the user name and password details for John Doe and choose Log In:

[Screenshot: cognitouserpoolsauth_8.png]

An MFA code is then sent to the user and can be validated:

[Screenshot: cognitouserpoolsauth_9.png]

After authentication, you can see the ID token generated by Cognito for further access testing:

[Screenshot: cognitouserpoolsauth_10.png]

If you go back to the API Gateway console and test your Cognito user pool authorizer with the same token, you see the authenticated user's claims:

[Screenshot: cognitouserpoolsauth_11.png]

In your front end, you can now perform authenticated GET calls to your API by choosing GET.

[Screenshot: cognitouserpoolsauth_12.png]

Or you can perform authenticated POST calls to your API by choosing POST.

[Screenshot: cognitouserpoolsauth_13.png]

The calls reach your Lambda proxy and return a valid response. You can also test from the command line using cURL, by sending the user pool ID token that you retrieved from the developer console earlier in the "Authorization" header:
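A sketch of such a request (the invoke URL is a placeholder for your own API's stage URL):

curl -X GET https://your-api-id.execute-api.us-east-1.amazonaws.com/prod/ \
  -H "Authorization: <ID token from the developer console>"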

[Screenshot: cognitouserpoolsauth_14.png]

It’s possible to improve this solution by integrating an Amazon DynamoDB table, for instance. You could inspect event.httpMethod in the Lambda function and issue a GetItem call to a table for a GET request, or a PutItem call for a POST request, as sketched below. There are many other possibilities for this kind of proxy resource integration.
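As a minimal Node.js sketch of that idea (the table name, key, and response shape are illustrative assumptions, not part of this walkthrough), the proxy handler could look like this:

//Sketch: route proxy requests to DynamoDB based on the HTTP method (table and key names are assumptions)
var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB.DocumentClient();

exports.handler = function(event, context, callback) {
    //Build the response shape that the Lambda proxy integration expects
    var respond = function(statusCode, body) {
        callback(null, {
            statusCode: statusCode,
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(body)
        });
    };

    if (event.httpMethod === 'GET') {
        //GET: read an item from the (hypothetical) 'messages' table
        dynamodb.get({ TableName: 'messages', Key: { id: 'sample' } }, function(err, data) {
            if (err) { return respond(500, { error: err.message }); }
            respond(200, data.Item || {});
        });
    } else if (event.httpMethod === 'POST') {
        //POST: store the payload sent by the front end
        var payload = JSON.parse(event.body || '{}');
        dynamodb.put({ TableName: 'messages', Item: { id: 'sample', message: payload.message } }, function(err) {
            if (err) { return respond(500, { error: err.message }); }
            respond(200, { stored: payload.message });
        });
    } else {
        respond(405, { error: 'Method not allowed' });
    }
};

Note that the function's execution role would also need dynamodb:GetItem and dynamodb:PutItem permissions on the table.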

Summary

The Cognito user pools integration with API Gateway provides a new way to secure your API workloads, and the new proxy resource for Lambda allows you to perform any business logic or transformations to your API calls from Lambda itself instead of using body mapping templates. These new features provide very powerful options to secure and handle your API logic.

I hope this post helps with your API workloads. If you have questions or suggestions, please comment below.

2016: The Year In Tech, And A Sneak Peek Of What’s To Come

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/2016-year-tech-sneak-peek-whats-come/

2016 is safely in our rear-view mirrors. It’s time to take a look back at the year that was and see what technology had the biggest impact on consumers and businesses alike. We also have an eye to 2017 to see what the future holds.

AI and machine learning in the cloud

Truly sentient computers and robots are still the stuff of science fiction (and the premise of one of 2016’s most promising new SF TV series, HBO’s Westworld). Neural networks are nothing new, but 2016 saw huge strides in artificial intelligence and machine learning, especially in the cloud.

Google, Amazon, Apple, IBM, Microsoft and others are developing cloud computing infrastructures designed especially for AI work. It’s this technology that’s underpinning advances in image recognition technology, pattern recognition in cybersecurity, speech recognition, natural language interpretation and other advances.

Microsoft’s newly-formed AI and Research Group is finding ways to get artificial intelligence into Microsoft products like its Bing search engine and Cortana natural language assistant. Some of these efforts, while well-meaning, still need refinement: Early in 2016 Microsoft launched Tay, an AI chatbot designed to mimic the natural language characteristics of a teenage girl and learn from interacting with Twitter users. Microsoft had to shut Tay down after Twitter users exploited vulnerabilities that caused Tay to begin spewing really inappropriate responses. But it paves the way for future efforts that blur the line between man and machine.

Finance, energy, climatology – anywhere you find big data sets you’re going to find uses for machine learning. On the consumer end it can help your grocery app guess what you might want or need based on your spending habits. Financial firms use machine learning to help predict customer credit scores by analyzing profile information. One of the most intriguing uses of machine learning is in security: Pattern recognition helps systems predict malicious intent and figure out where exploits will come from.

Meanwhile we’re still waiting for Rosie the Robot from the Jetsons. And flying cars. So if Elon Musk has any spare time in 2017, maybe he can get on that.

AR Games

Augmented Reality (AR) games have been around for a good long time – ever since smartphone makers put cameras on them, game makers have been toying with the mix of real life and games.

AR games took a giant step forward with a game released in 2016 that you couldn’t get away from, at least for a little while. We’re talking about Pokémon GO, of course. Niantic, makers of another AR game called Ingress, used the framework they built for that game to power Pokémon GO. Kids, parents, young, old, it seemed like everyone with an iPhone that could run the game caught wild Pokémon, hatched eggs by walking, and battled each other in Pokémon gyms.

For a few weeks, anyway.

Technical glitches, problems with scale and limited gameplay value ultimately hurt Pokémon GO’s longevity. Today the game only garners a fraction of the public interest it did at peak. It continues to be successful, albeit not at the stratospheric pace it first set.

Niantic was able to tie together several factors to bring such an explosive and – if you’ll pardon the overused euphemism – disruptive game to bear. One was its previous work on Ingress, which relies on geomap data. In fact, Pokémon GO uses the same geomap data as Ingress, so Niantic had already done a huge amount of the legwork needed to get Pokémon GO up and running. Niantic cleverly used Google Maps data to form the basis of both games, relying on already-identified public landmarks and other locations tagged by Ingress players (Ingress has been around since 2011).

Then, of course, there’s the Pokémon connection – an intensely meaningful gaming property that’s been popular with generations of video game players and cartoon watchers since the 1990s. The dearth of Pokémon-branded games on smartphones meant an instant explosion of popularity upon Pokémon GO’s release.

2016 also saw the introduction of several new virtual reality (VR) headsets designed for home and mobile use. Samsung Gear VR and Google Daydream View made a splash. As these products continue to make consumer inroads, we’ll see more games push the envelope of what you can achieve with VR and AR.

Hybrid Cloud

Hybrid Cloud services combine public cloud storage (like B2 Cloud Storage) or public compute (like Amazon Web Services) with a private cloud platform. Specialized content and file management software glues it all together, making the experience seamless for the user.

Businesses get the instant access and speed they need to get work done, with the ability to fall back on on-demand cloud-based resources when scale is needed. B2’s hybrid cloud integrations include OpenIO, which helps businesses maintain data storage on-premise until it’s designated for archive and stored in the B2 cloud.

The cost of entry and usage of Hybrid Cloud services have continued to fall. For example, small and medium-sized organizations in the post production industry are finding Hybrid Cloud storage is now a viable strategy in managing the large amounts of information they use on a daily basis. This strategy is enabled by the low cost of B2 Cloud Storage that provides ready access to cloud-stored data.

There are practical deployment and scale issues that have kept Hybrid Cloud services from being widely used in the largest enterprise environments. Small to medium businesses and vertical markets like Media & Entertainment have found promising, economical opportunities to use them, which bodes well for the future.

Inexpensive 3D printers

3D printing, once a rarified technology, has become increasingly commoditized over the past several years. That’s been in part thanks to the “Maker Movement”: thousands of folks all around the world who love to tinker and build. XYZprinting is out in front of makers and others with its line of inexpensive desktop da Vinci printers.

The da Vinci Mini is a tabletop model aimed at home users that starts at under $300. You can download and tweak thousands of 3D models to build toys, games, art projects, and educational items. They’re printed using spools of biodegradable, non-toxic plastic derived from corn starch, which feed into the printer much like the bobbin on a sewing machine. The da Vinci Mini works with Macs and PCs and can connect via USB or Wi-Fi.

DIY Drones

Quadcopter drones have been fun tech toys for a while now, but the new trend we saw in 2016 was “do it yourself” models. The standout was Flybrix, which combines lightweight drone motors with LEGO building toys. Flybrix was so successful that it blew through its inventory for the 2016 holiday season and is backlogged with orders into the new year.

Each Flybrix kit comes with the motors, LEGO building blocks, cables and gear you need to build your own quad, hex or octocopter drone (as well as a cheerful-looking LEGO pilot to command the new vessel). A downloadable app for iOS or Android lets you control your creation. A deluxe kit includes a handheld controller so you don’t have to tie up your phone.

If you already own a 3D printer like the da Vinci Mini, you’ll find plenty of model files available for download and modification so you can print your own parts, though you’ll probably need help from one of the many maker sites to work out what else you’ll need for aerial flight and control.

5D Glass Storage

Research at the University of Southampton may yield the next big leap in optical storage technology meant for long-term archiving. The boffins at the Optoelectronics Research Centre have developed a new data storage technique that embeds information in glass “nanostructures” on a storage disc the size of a U.S. quarter.

A Blu-ray Disc can hold 50 GB, but one of the new 5D glass storage discs can hold 360 TB – 7,200 times more. It’s like a super-stable, supercharged version of a CD. Not only is the data inscribed on much smaller structures within the glass, it’s also reflected at multiple angles, hence “5D.”

An upside to this is an absence of bit rot: The glass medium is extremely stable, with a shelf life predicted in billions of years. The downside is that this is still a write-once medium, so it’s intended for long term storage.

This tech is still years away from practical use, but it took a big step forward in 2016 when the University announced the development of a practical information encoding scheme to use with it.

Smart Home Tech

Are you ready to talk to your house to tell it to do things? If you’re not already, you probably will be soon. Google Home is a $129 voice-activated speaker powered by the Google Assistant. You can use it for everything from streaming music and sending video to a nearby TV, to reading your calendar or to-do list. You can also tell it to operate other supported devices like the Nest smart thermostat and Philips Hue lights.

Amazon has its own similar wireless speaker product called the Echo, powered by Amazon’s Alexa information assistant. Amazon has differentiated its Echo offerings by making the Dot – a hockey puck-sized device that connects to a speaker you already own. So Amazon customers can begin to outfit their connected homes for less than $50.

Apple’s HomeKit isn’t a speaker like the Amazon Echo or Google Home; it’s a software framework. You use the Home app on your iOS 10-equipped iPhone or iPad to connect and configure supported devices, and you control them with Siri, Apple’s own intelligent assistant, on any supported Apple device. HomeKit turns on lights, turns up the thermostat, operates switches, and more.

Smart home tech has been coming in fits and starts for a while – the Nest smart thermostat is already in its third generation, for example. But 2016 was the year we finally saw the “Internet of things” coalescing into a smart home that we can control through voice and gestures in a … well, smart way.

Welcome To The Future

It’s 2017, welcome to our brave new world. While it’s anyone’s guess what the future holds, there are at least a few tech trends that are pretty safe to bet on. They include:

  • Internet of Things: More smart-connected devices are coming online in the home and at work every day, and this trend will accelerate in 2017, with more and more devices requiring some form of Internet connectivity to work. Expect to see a lot more appliances, devices, and accessories that make use of the APIs promoted by Google, Amazon, and Apple to let you control everything in your life using just your voice and a smart speaker setup.
  • Blockchain security: Blockchain is the digital ledger security technology that makes Bitcoin work. Its distribution methodology and validation system help you make certain that no one has tampered with the records, which makes it well-suited for applications beyond cryptocurrency, like making sure your smart thermostat (see above) hasn’t been hacked. Expect 2017 to be the year we see more mainstream acceptance, use, and development of blockchain technology from financial institutions, the creation of new private blockchain networks, and improved usability aimed at making blockchain easier for regular consumers to use. Blockchain-based voting is here too. It also wouldn’t surprise us, given all this movement, to see government regulators take a much deeper interest in blockchain.
  • 5G: Verizon is field-testing 5G on its wireless network, which it says delivers speeds 30-50 times faster than 4G LTE. We’ll be hearing a lot more about 5G from Verizon and other wireless players in 2017, although in fairness, wide-scale 5G deployment is still a few years away.

Your Predictions?

Enough of our bloviation. Let’s open the floor to you. What do you think were the biggest technology trends in 2016? What’s coming in 2017 that has you the most excited? Let us know in the comments!

The post 2016: The Year In Tech, And A Sneak Peek Of What’s To Come appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

US Government Publishes New Plan to Target Pirate Sites

Post Syndicated from Andy original https://torrentfreak.com/us-government-publishes-new-plan-to-target-pirate-sites-161213/

The Office of the Intellectual Property Enforcement Coordinator (IPEC) has just released its Joint Strategic Plan (JSP) for Intellectual Property Enforcement, titled Supporting Innovation, Creativity & Enterprise: Charting a Path Ahead.

“The Plan – which incorporates views from a variety of individual stakeholders across government, industry, educational institutions, trade organizations and public interest groups — offers a blueprint for the work to be carried out over the next three years by the Federal Government in support of a healthy and robust intellectual property enforcement policy environment,” a White House statement reads.

The plan has four stated goals:

– Enhance National understanding of the economic and social impacts flowing from misappropriation of trade secrets and the infringement of intellectual property rights

– Promote a safe and secure Internet by minimizing counterfeiting and IP-infringing activity online

– Secure and facilitate lawful trade

– Enhance domestic strategies and global collaboration in support of effective IP enforcement.

The 163-page report leaves few stones unturned, with Section 2 homing in on Internet piracy.

Follow The Money

While shutting down websites is often seen as the ultimate anti-piracy tool, more commonly authorities are targeting what they believe fuels online piracy – money. The report says that while original content is expensive to create, copies cost almost nothing, leading to large profits for pirates.

“An effective enforcement strategy against commercial-scale piracy and counterfeiting therefore, must target and dry up the illicit revenue flow of the actors engaged in commercial piracy online. That requires an examination of the revenue sources for commercial-scale pirates,” the report says.

“The operators of direct illicit download and streaming sites enjoy revenue through membership subscriptions serviced by way of credit card and similar payment-based transactions, as is the case with the sale and purchase of counterfeit goods, while the operators of torrent sites may rely more heavily on advertising revenue as the primary source of income.”

To cut-off this revenue, the government foresees voluntary collaboration between payment processor networks, online advertisers, and the banking sector.

Payment processors

“All legitimate payment processors prohibit the use of their services and platforms for unlawful conduct, including IP-infringing activities. They do so by way of policy and contract through terms of use and other agreements applicable to their users,” the JSP says.

“Yet, notwithstanding these prohibitions, payment processor platforms continue to be exploited by illicit merchants of counterfeit products and infringing content.”

The government says that pirates and counterfeiters use a number of techniques to exploit payment processors and have deployed systems that can thwart “test” transactions conducted by rightsholders and other investigators. Furthermore, the fact that some credit card companies do not have direct contractual relationships with merchants enables websites to continue doing business after payment processing rights have been terminated.

The JSP calls for more coordination between companies in the ecosystem, increased transparency, greater geographic scope, and bi-lateral engagements with other governments.

“IPEC and USPTO, with private sector input, will facilitate benchmarking studies of current voluntary initiatives designed to combat revenue flow to rogue sites to determine whether existing voluntary initiatives are functioning effectively, and thereby promote a robust, data-driven voluntary initiative environment,” the report adds.

Advertising

The JSP begins with the comment that “Ad revenue is the oxygen that allows content theft to breathe,” and it’s clear that the government wants to asphyxiate pirate sites. It believes that up to 86% of download and streaming platforms rely on advertising for revenue, and that the sector needs to be cleaned up.

In common with payment processors, the report notes that legitimate ad networks also have policies in place to stop their services appearing on pirate sites. However, “sophisticated entities” dedicated to infringement can exploit loopholes, with some doing so to display “high-risk” ads that include malware, pop-unders and pixel stuffing.

Collaboration is already underway among industry players but the government wants to see more integration and cooperation, to stay ahead of the tactics allegedly employed by sites such as the defunct KickassTorrents, which is highlighted in the report.


“IPEC and the IPR Center (with its constituent law enforcement partners), along with other relevant Federal agencies, will convene the advertising industry to hear further about their voluntary efforts. The U.S. Interagency Strategic Planning Committees on IP Enforcement will assess opportunities to support efforts to combat the flow of ad revenue to criminals,” the JSP reads.

“As part of best practices and initiatives, advertising networks are encouraged to make appropriately generalized and anonymized data publicly available to permit study and analysis of illicit activity intercepted on their platforms and networks. Such data will allow study by public and private actors alike to identify patterns of behavior or tactics associated with illicit actors who seek to profit from ad revenue from content theft websites.”

Domain hopping

When pirate sites come under pressure from copyright holders, their domain names are often at risk of suspension or even seizure. This triggers a phenomenon known as domain hopping, a tactic most visibly employed by The Pirate Bay when it skipped all around the world with domains registered in several different countries.


“To evade law enforcement, bad actors will register the same or different domain name with different registrars. They then attempt to evade law enforcement by moving from one registrar to another, thus prolonging the so-called ‘whack-a-mole’ pursuit. The result of this behavior is to drive up costs of time and resources spent on protecting intellectual property rights,” the JSP notes.

The report adds that pirate sites are more likely to use ccTLDs (country code Top-Level Domains) than gTLDs (Generic Top-Level Domains) due to the way the former are administrated.

“The relationship between any given ccTLD administrator and its government will differ from case to case and may depend on complex and sensitive arrangements particular to the local political climate. Different ccTLD policies will reflect different approaches with respect to process for the suspension, transfer, or cancellation of a domain name registration,” it reads.

“Based on the most recent Notorious Markets lists available prior to issuance of this plan, ccTLDs comprise roughly half of all named ‘notorious’ top-level domains. Considering that ccTLDs are outnumbered by gTLDs in the domain name base by more than a 2-to-1 ratio, the frequency of bad faith ccTLD sites appears to be disproportionate in nature and worthy of further research and analysis.”

Once again, the US government calls for more cooperation, alongside an investigation to assess the scope of “abusive domain name registration tactics and trends.”

Policies to improve DMCA takedown processes

As widely documented, rightsholders are generally very unhappy with the current DMCA regime, as they are forced to send millions of notices every week to contain the flow of pirate content. Equally, service providers are placed under significant stress by processing those same notices.

In its report, the government acknowledges the problems faced by both sides but indicates that the right discussions are already underway to address the issues.

“The continued development of private sector best practices, led through a multistakeholder process, may ease the burdens involved with the DMCA process for rights holders, Internet intermediaries, and users while decreasing infringing activity,” the report says.

“These best practices may focus on enhanced methods for identifying actionable infringement, preventing abuse of the system, establishing efficient takedown procedures, preventing the reappearance of previously removed infringing content, and providing opportunity for creators to assert their fair use rights.”

In summary, the government champions the Copyright Office’s current evaluation of Section 512 of the DMCA while calling for cooperation between stakeholders.

Social Media

The Joint Strategic Plan highlights the growing part social media has to play in the dissemination of infringing content, from driving traffic to websites selling illegal products, to the unlawful exploitation of third-party content, to suspect product reviews. Again, the solution can be found in collaboration, including with the public.

“[The government will] encourage the development of industry standards and best practices, through a multistakeholder process, to curb abuses of social media channels for illicit purposes, while protecting the rights of users to use those channels for non-infringing and other lawful activities,” it notes.

“One underutilized resource may be the users themselves, who may be in a position to report suspicious product offerings or other illicit activity, if provided a streamlined opportunity to do so, as some social media companies are beginning to explore.”

And finally – education

The government believes that greater knowledge among the public of where it can obtain content legally will assist in reducing instances of online piracy.

“The U.S. Interagency Strategic Planning Committees on IP Enforcement, and other relevant Federal agencies, as appropriate, will assess opportunities to support public-private collaborative efforts aimed at increasing awareness of legal sources of copyrighted material online and educating users about the harmful impacts of digital piracy,” it concludes.

The full report is available here (163 pages, PDF)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS Snowmobile – Move Exabytes of Data to the Cloud in Weeks

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-snowmobile-move-exabytes-of-data-to-the-cloud-in-weeks/

Moving large amounts of on-premises data to the cloud as part of a migration effort is still more challenging than it should be! Even with high-end connections, moving petabytes or exabytes of film vaults, financial records, satellite imagery, or scientific data across the Internet can take years or decades.  On the business side, adding new networking or better connectivity to data centers that are scheduled to be decommissioned after a migration is expensive and hard to justify.

Last year we announced the AWS Snowball (see AWS Snowball – Transfer 1 Petabyte Per Week Using Amazon-Owned Storage Appliances for more information) as a step toward addressing large-scale data migrations. With 80 TB of storage, these appliances address the needs of many of our customers, and are in widespread use today.

However, customers with exabyte-scale on-premises storage look at the 80 TB, do the math, and realize that an all-out data migration would still require lots of devices and some headache-inducing logistics.

Introducing AWS Snowmobile
In order to meet the needs of these customers, we are launching Snowmobile today. This secure data truck stores up to 100 PB of data and can help you to move exabytes to AWS in a matter of weeks (you can get more than one if necessary). Designed to meet the needs of our customers in the financial services, media & entertainment, scientific, and other industries, Snowmobile attaches to your network and appears as a local, NFS-mounted volume. You can use your existing backup and archiving tools to fill it up with data destined for Amazon Simple Storage Service (S3) or Amazon Glacier.

Physically, Snowmobile is a ruggedized, tamper-resistant shipping container 45 feet long, 9.6 feet high, and 8 feet wide. It is water-proof, climate-controlled, and can be parked in a covered or uncovered area adjacent to your existing data center. Each Snowmobile consumes about 350 KW of AC power; if you don’t have sufficient capacity on site we can arrange for a generator.

On the security side, Snowmobile incorporates multiple layers of logical and physical protection including chain-of-custody tracking and video surveillance. Your data is encrypted with your AWS Key Management Service (KMS) keys before it is written. Each container includes GPS tracking, with cellular or satellite connectivity back to AWS. We will arrange for a security vehicle escort when the Snowmobile is in transit; we can also arrange for dedicated security guards while your Snowmobile is on-premises.

Each Snowmobile includes a network cable connected to a high-speed switch capable of supporting 1 Tb/second of data transfer spread across multiple 40 Gb/second connections. Assuming that your existing network can transfer data at that rate, you can fill a Snowmobile in about 10 days.
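As a back-of-the-envelope check on that figure: 100 PB is about 8 × 10^17 bits, and at 10^12 bits per second the transfer takes roughly 8 × 10^5 seconds, or a little over nine days of sustained throughput.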

Snowmobile in Action
I don’t happen to have an exabyte-scale data center and I certainly don’t have room next to my house for a 45-foot-long container. In order to illustrate the process of arranging for and using a Snowmobile, I sat down at my LEGO table and (in the finest Doc Brown tradition) built a scale model. I hope that you enjoy this brick-based storytelling!

Let’s start in your data center. It was built a while ago and is definitely showing its age. The racks are full of disk and tape drives of multiple vintages, each storing precious, mission-critical data. You and your colleagues spend too much time inside of the raised floor, tracking cables and trying to squeeze out just a bit more performance:

Your manager is getting frustrated and does not know what to do next:

Fortunately, one of your colleagues reads this blog every day and she knows just what to do:

A quick phone call to AWS and a meeting is set up:

Everyone gets together at a convenient AWS office to learn more about Snowmobile and to plan the migration:

Everyone gathers around to look at the scale model of the Snowmobile. Even the dog is intrigued, and your manager takes a picture:

A Snowmobile shows up at your data center:

AWS Professional Services helps you to get it connected and you initiate the data transfer:

The Snowmobile heads back to AWS and your data is imported as you specified!

Snowmobile at DigitalGlobe
Our friends at DigitalGlobe are using a Snowmobile to move 100 PB of satellite imagery to AWS. Here’s what Jay Littlepage (former Amazonian and now VP of Infrastructure & Operations at DigitalGlobe) has to say about this effort:

Like many large enterprises, we are in the process of migrating IT operations from our data centers to AWS. Our geospatial big data platform, GBDX, has been based in AWS since inception. But our unmatchable 16-year archive of high-resolution satellite imagery, visualizing 6 billion square kilometers of the Earth’s surface, has been stored within our facilities. We have slowly been migrating our archive to AWS but that process has been slow and inefficient. Our constellation of satellites generate more earth imagery each year (10 PB) than we have been able to migrate by these methods.

We needed a solution that could move our 100 PB archive but could not find one until now with AWS Snowmobile. DigitalGlobe is currently migrating our entire raw imagery archive with one Snowmobile transfer directly into an Amazon Glacier Vault. AWS Snowmobile operators are providing an amazing customized service where they manage the configuration, monitoring, and logistics. Using Snowmobile’s data transfer abilities will get our time-lapse imagery archive to the cloud more quickly, allowing our customers and partners to have access to uniquely massive data sets. By using AWS’ elastic computing platform within GBDX, we will run distributed image analysis, revealing the pace and pattern of world-wide change on an extraordinary scale, with unprecedented speed, in a more cost-effective manner – prioritizing insights over infrastructure. Without Snowmobile, we would not have been able to transfer our extremely large volume of data in such a short time or create new business opportunities for our customers. Snowmobile is truly a game changer!

Things to Know
Here are a couple of final things you should know about Snowmobile:

Data Export – The initial launch is aimed at data import (on-premises to AWS). We do know that some of our customers are interested in data export, with a particular focus on disaster recovery (DR) use cases.

Availability – Snowmobile is available in all AWS Regions. As you can see from reading the previous section, this is not a self-serve product. My AWS Sales colleagues are ready to discuss your data import needs with you.

Pricing – I don’t have pricing info to share. However, we intend to make sure that Snowmobile is both faster and less expensive than using a network-based data transfer model.

Jeff;

PS – Check out my Snowmobile Photo Album for some high-res pictures of my creation. Special thanks to Matt Gutierrez (Symbionix) for the final staging and the photo shoot.

PPS – I will build and personally deliver (in exchange for a photo op and a bloggable story) Snowmobile models to the first 5 customers.