Tag Archives: identification

SecureLogin For Java Web Applications

Post Syndicated from Bozho original https://techblog.bozho.net/securelogin-java-web-applications/

No, there is not a missing whitespace in the title. It’s not about any secure login, it’s about the SecureLogin protocol developed by Egor Homakov, a security consultant, who became famous for committing to master in the Rails project without having permissions.

The SecureLogin protocol is very interesting, as it does not rely on any central party (e.g. OAuth providers like Facebook and Twitter), thus avoiding all the pitfalls of OAuth (which Homakov has often criticized). It is not a password manager either. It is just a client-side software that performs a bit of crypto in order to prove to the server that it is indeed the right user. For that to work, two parts are key:

  • Using a master password to generate a private key. It uses a key-derivation function, which guarantees that the produced private key has sufficient entropy. That way, using the same master password and the same email, you will get the same private key every time you use the password, and therefore the same public key. And you are the only one who can prove this public key is yours, by signing a message with your private key (a rough sketch of this derivation follows the list).
  • Service providers (websites) identify you by your public key by storing it in the database when you register and then looking it up on each subsequent login.
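
To make the first point concrete, here is a minimal sketch of deterministic key derivation in Node.js, assuming scrypt as the KDF and an Ed25519 key pair via the tweetnacl library. The actual SecureLogin protocol defines its own derivation parameters and message format, so treat this only as an illustration of the idea:

const crypto = require('crypto');
const nacl = require('tweetnacl');

// Same email (salt) + same master password => same seed => same key pair.
// The parameters here are illustrative, not the ones SecureLogin specifies.
function deriveKeyPair(email, masterPassword) {
  const seed = crypto.scryptSync(masterPassword, email, 32);
  return nacl.sign.keyPair.fromSeed(new Uint8Array(seed));
}

const keys = deriveKeyPair('user@example.com', 'correct horse battery staple');
// keys.publicKey is what the website stores on registration;
// keys.secretKey signs the login message that proves ownership.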

The client-side part is performed ideally by a native client – a browser plugin (one is available for Chrome) or an OS-specific application (including mobile ones). That may sound tedious, but it’s actually quick and easy and a one-time event (and is easier than password managers).

I have to admit – I like it, because I’ve been having a similar idea for a while. In my “biometric identification” presentation (where I discuss the pitfalls of using biometrics-only identification schemes), I proposed (slide 23) an identification scheme that uses biometrics (e.g. scanned with your phone) + a password to produce a private key (using a key-derivation function). And the biometric can easily be added to SecureLogin in the future.

It’s not all roses, of course, as one issue isn’t fully resolved yet – revocation. In case someone steals your master password (or you suspect it might be stolen), you may want to change it and notify all service providers of that change so that they can replace your old public key with a new one. That has two implications – first, you may not have a full list of sites that you registered on, and since you may have changed devices, or used multiple devices, there may be websites that never get to know about your password change. There are proposed solutions (points 3 and 4), but they are not intrinsic to the protocol and rely on centralized services. The second issue is – what if the attacker changes your password first? To prevent that, service providers should probably rely on email verification, which is neither part of the protocol, nor is encouraged by it. But you may have to do it anyway, as a safeguard.

Homakov has not only defined a protocol, but also provided implementations of the native clients, so that anyone can start using it. So I decided to add it to a project I’m currently working on (the login page is here). For that I needed a Java implementation of the server-side verification, and since no such implementation existed (only Ruby and Node.js ones are provided for now), I implemented it myself. So if you are going to use SecureLogin with a Java web application, you can use that instead of rolling your own. While implementing it, I hit a few minor issues that may lead to protocol changes, so I guess backward compatibility should also somehow be included in the protocol (through versioning).

So, what does the code look like? On the client side you have a button and a little JavaScript:

<!-- get the latest sdk.js from the GitHub repo of securelogin
     or include it from https://securelogin.pw/sdk.js -->
<script src="js/securelogin/sdk.js"></script>
....
<p class="slbutton" id="securelogin">&#9889; SecureLogin</p>

$("#securelogin").click(function() {
  SecureLogin(function(sltoken) {
    // TODO: consider adding csrf protection as in the demo applications
    // Note - pass as request body, not as param, as the token relies
    // on url-encoding which some frameworks mess with
    $.post('/app/user/securelogin', sltoken, function(result) {
      if (result == 'ok') {
        window.location = "/app/";
      } else {
        $.notify("Login failed, try again later", "error");
      }
    });
  });
  return false;
});

A single button can be used for both login and signup, or you can have a separate signup form, if it has to include additional details rather than just an email. Since I added SecureLogin in addition to my password-based login, I kept the two forms.

On the server, you simply do the following:

@RequestMapping(value = "/securelogin/register", method = RequestMethod.POST)
@ResponseBody
public String secureloginRegister(@RequestBody String token, HttpServletResponse response) {
    try {
        SecureLogin login = SecureLogin.verify(token, Options.create(websiteRootUrl));
        UserDetails details = userService.getUserDetailsByEmail(login.getEmail());
        if (details == null || !login.getRawPublicKey().equals(details.getSecureLoginPublicKey())) {
            return "failure";
        }
        // sets the proper cookies to the response
        TokenAuthenticationService.addAuthentication(response, login.getEmail(), secure);
        return "ok";
    } catch (SecureLoginVerificationException e) {
        return "failure";
    }
}

This is spring-mvc, but it can be any web framework. You can also incorporate that into a spring-security flow somehow. I’ve never liked spring-security’s complexity, so I did it manually. Also, instead of strings, you can return proper status codes. Note that I’m doing a lookup by email and only then checking the public key (as if it were a password). You can do it the other way around if you have the proper index on the public key column.

I wouldn’t suggest having a SecureLogin-only system, as the project is still in an early stage and users may not be comfortable with it. But certainly adding it as an option is a good idea.

The post SecureLogin For Java Web Applications appeared first on Bozho's tech blog.

Apple’s FaceID

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/apples_faceid.html

This is a good interview with Apple’s SVP of Software Engineering about FaceID.

Honestly, I don’t know what to think. I am confident that Apple is not collecting a photo database, but not optimistic that it can’t be hacked with fake faces. I dislike the fact that the police can point the phone at someone and have it automatically unlock. So this is important:

I also quizzed Federighi about the exact way you “quick disabled” Face ID in tricky scenarios — like being stopped by police, or being asked by a thief to hand over your device.

“On older phones the sequence was to click 5 times [on the power button], but on newer phones like iPhone 8 and iPhone X, if you grip the side buttons on either side and hold them a little while — we’ll take you to the power down [screen]. But that also has the effect of disabling Face ID,” says Federighi. “So, if you were in a case where the thief was asking to hand over your phone — you can just reach into your pocket, squeeze it, and it will disable Face ID. It will do the same thing on iPhone 8 to disable Touch ID.”

That squeeze can be either volume button plus the power button. This, in my opinion, is an even better solution than the “5 clicks” because it’s less obtrusive. When you do this, it defaults back to your passcode.

More:

It’s worth noting a few additional details here:

  • If you haven’t used Face ID in 48 hours, or if you’ve just rebooted, it will ask for a passcode.
  • If there are 5 failed attempts to Face ID, it will default back to passcode. (Federighi has confirmed that this is what happened in the demo onstage when he was asked for a passcode — it tried to read the people setting the phones up on the podium.)

  • Developers do not have access to raw sensor data from the Face ID array. Instead, they’re given a depth map they can use for applications like the Snap face filters shown onstage. This can also be used in ARKit applications.

  • You’ll also get a passcode request if you haven’t unlocked the phone using a passcode or at all in 6.5 days and if Face ID hasn’t unlocked it in 4 hours.

Also be prepared for your phone to immediately lock every time your sleep/wake button is pressed or it goes to sleep on its own. This is just like Touch ID.

Federighi also noted on our call that Apple would be releasing a security white paper on Face ID closer to the release of the iPhone X. So if you’re a researcher or security wonk looking for more, he says it will have “extreme levels of detail” about the security of the system.

Here’s more about fooling it with fake faces:

Facial recognition has long been notoriously easy to defeat. In 2009, for instance, security researchers showed that they could fool face-based login systems for a variety of laptops with nothing more than a printed photo of the laptop’s owner held in front of its camera. In 2015, Popular Science writer Dan Moren beat an Alibaba facial recognition system just by using a video that included himself blinking.

Hacking FaceID, though, won’t be nearly that simple. The new iPhone uses an infrared system Apple calls TrueDepth to project a grid of 30,000 invisible light dots onto the user’s face. An infrared camera then captures the distortion of that grid as the user rotates his or her head to map the face’s 3-D shape — a trick similar to the kind now used to capture actors’ faces to morph them into animated and digitally enhanced characters.

It’ll be harder, but I have no doubt that it will be done.

More speculation.

I am not planning on enabling it just yet.

Security Flaw in Estonian National ID Card

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/security_flaw_i.html

We have no idea how bad this really is:

On 30 August, an international team of researchers informed the Estonian Information System Authority (RIA) of a vulnerability potentially affecting the digital use of Estonian ID cards. The possible vulnerability affects a total of almost 750,000 ID-cards issued starting from October 2014, including cards issued to e-residents. The ID-cards issued before 16 October 2014 use a different chip and are not affected. Mobile-IDs are also not impacted.

My guess is that it’s worse than the politicians are saying:

According to Peterkop, the current data shows this risk to be theoretical and there is no evidence of anyone’s digital identity being misused. “All ID-card operations are still valid and we will take appropriate actions to secure the functioning of our national digital-ID infrastructure. For example, we have restricted the access to Estonian ID-card public key database to prevent illegal use.”

And because this system is so important in local politics, the effects are significant:

In the light of current events, some Estonian politicians called to postpone the upcoming local elections, due to take place on 16 October. In Estonia, approximately 35% of the voters use digital identity to vote online.

But the Estonian prime minister, Jüri Ratas, said at a press conference on 5 September that “this incident will not affect the course of the Estonian e-state.” Ratas also recommended to use Mobile-IDs where possible. The prime minister said that the State Electoral Office will decide whether it will allow the usage of ID cards at the upcoming local elections.

The Estonian Police and Border Guard estimates it will take approximately two months to fix the issue with faulty cards. The authority will involve as many Estonian experts as possible in the process.

This is exactly the sort of thing I worry about as ID systems become more prevalent and more centralized. Anyone want to place bets on whether a foreign country is going to try to hack the next Estonian election?

Another article.

Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway

Post Syndicated from Ed Lima original https://aws.amazon.com/blogs/compute/secure-api-access-with-amazon-cognito-federated-identities-amazon-cognito-user-pools-and-amazon-api-gateway/

Ed Lima, Solutions Architect

 

Our identities are what define us as human beings. Philosophical discussions aside, it also applies to our day-to-day lives. For instance, I need my work badge to get access to my office building or my passport to travel overseas. My identity in this case is attached to my work badge or passport. As part of the system that checks my access, these documents or objects help define whether I have access to get into the office building or travel internationally.

This exact same concept can also be applied to cloud applications and APIs. To provide secure access to your application users, you define who can access the application resources and what kind of access can be granted. Access is based on identity controls that can confirm authentication (AuthN) and authorization (AuthZ), which are different concepts. According to Wikipedia:

 

The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that “you are who you say you are,” authorization is the process of verifying that “you are permitted to do what you are trying to do.” This does not mean authorization presupposes authentication; an anonymous agent could be authorized to a limited action set.

Amazon Cognito allows building, securing, and scaling a solution to handle user management and authentication, and to sync across platforms and devices. In this post, I discuss the different ways that you can use Amazon Cognito to authenticate API calls to Amazon API Gateway and secure access to your own API resources.

 

Amazon Cognito Concepts

 

It’s important to understand that Amazon Cognito provides three different services:

  • Amazon Cognito Federated Identities (identity pools)
  • Amazon Cognito User Pools
  • Amazon Cognito Sync

Today, I discuss the use of the first two. One service doesn’t need the other to work; however, they can be configured to work together.
 

Amazon Cognito Federated Identities

 
To use Amazon Cognito Federated Identities in your application, create an identity pool. An identity pool is a store of user data specific to your account. It can be configured to require an identity provider (IdP) for user authentication, after you enter details such as app IDs or keys related to that specific provider.

After the user is validated, the provider sends an identity token to Amazon Cognito Federated Identities. In turn, Amazon Cognito Federated Identities contacts the AWS Security Token Service (AWS STS) to retrieve temporary AWS credentials based on a configured, authenticated IAM role linked to the identity pool. The role has appropriate IAM policies attached to it and uses these policies to provide access to other AWS services.
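
As a rough illustration of that exchange (not code from this post’s sample), here is how a browser application using the AWS SDK for JavaScript might trade an IdP token for temporary credentials; the identity pool ID and the Facebook token variable are placeholders:

// Assumes the AWS SDK for JavaScript is loaded and the user has already
// authenticated with the IdP (Facebook in this example).
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx', // placeholder pool ID
  Logins: {
    'graph.facebook.com': facebookAccessToken // token returned by the IdP
  }
});

// Behind the scenes, Cognito Federated Identities calls AWS STS and
// populates temporary, role-scoped credentials on the SDK configuration.
AWS.config.credentials.get(function(err) {
  if (!err) {
    console.log('Obtained credentials for identity', AWS.config.credentials.identityId);
  }
});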

Amazon Cognito Federated Identities currently supports the IdPs listed in the following graphic.

 



Continue reading Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway

The RIAA is Now Copyright Troll Rightscorp’s Biggest Customer

Post Syndicated from Andy original https://torrentfreak.com/the-riaa-is-now-copyright-troll-rightscorps-biggest-customer-170424/

Nurturing what appears to be a failing business model, anti-piracy outfit Rightscorp has been on life-support for a number of years, never making a cent while losing millions of dollars.

As a result, every annual report filed by the company is expected to reveal yet more miserable numbers. This year’s, filed two weeks late a few days ago, doesn’t break the trend. It is, however, a particularly interesting read.

For those out of the loop, Rightscorp generates revenue from monitoring BitTorrent networks, logging infringements, and sending warning notices to ISPs. It hopes those ISPs will forward notices to customers who are asked to pay $20 or $30 per offense. Once paid, Rightscorp splits this revenue with its copyright holder customers.

The company’s headline sales figures for 2016 are somewhat similar to those of the previous year. In 2015 the company generated $832,215 in revenue but in 2016 that had dropped to $778,215. While yet another reduction in revenue won’t be welcome, the company excelled in trimming its costs.

In 2015, Rightscorp’s total operating costs were almost $5.47m, something which led the company to file an eye-watering $4.63 million operational loss.

In 2016, the company somehow managed to reduce its costs to ‘just’ $2.73m, a vast improvement over the previous year. But, despite the effort, Rightscorp still couldn’t make money in 2016. In its latest accounts, the company reveals an operational loss of $1.95m and little salvation on the bottom line.

“During the year ended December 31, 2016, the Company incurred a net loss of $1,355,747 and used cash in operations of $807,530, and at December 31, 2016, the Company had a stockholders’ deficit of $2,092,060,” the company reveals.

While a nose-diving Rightscorp has been a familiar story in recent years, there are some nuggets of information in 2016’s report that makes it stand out.

According to Rightscorp, in 2014 BMG Rights Management accounted for 76% of the company’s sales, with Warner Bros. Entertainment making up a token 13%. In 2015 it was a similar story, but during 2016, big developments took place with a brand new and extremely important customer.

“For the year ended December 31, 2016, our contract with Recording Industry Association of America accounted for approximately 44% of our sales, and our contract with BMG Rights Management accounted for 23% of our sales,” the company’s report reveals.

The fact that the RIAA is now Rightscorp’s biggest customer to the tune of $342,000 in business during 2016 is a pretty big reveal, not only for the future of the anti-piracy company but also the interests of millions of BitTorrent users around the United States.

While it’s certainly possible that the RIAA plans to start sending settlement demands to torrent users (Warner has already done so), there are very clear signs that the RIAA sees value in Rightscorp elsewhere. As shown in the table below, between 2015 and 2016 there has been a notable shift in how Rightscorp reports its revenue.

In 2015, all of Rightscorp’s revenue came from copyright settlements. In 2016, roughly 50% of its revenue (a little over the amount accounted for by the RIAA’s business) is listed as ‘consulting revenue’. It seems more than likely that the lion’s share of this revenue came from the RIAA, but why?

On Friday the RIAA filed a big lawsuit against Texas-based ISP Grande Communications. Detailed here, the multi-million suit accuses the ISP of failing to disconnect subscribers accused of infringement multiple times.

The data being used to prosecute that case was obtained by the RIAA from Rightscorp, who in turn collected that data from BitTorrent networks. The company obtained a patent under its previous Digital Rights Corp. guise which specifically covers repeat infringer identification. It has been used successfully in the ongoing case against another ISP, Cox Communications.

In short, the RIAA seems to be planning to do to Grande Communications what BMG and Rightscorp have already done to Cox. They will be seeking to show that Grande knew that its subscribers were multiple infringers yet failed to disconnect them from the Internet. This inaction, they will argue, means that Grande loses its protection from liability under the safe harbor provisions of the DMCA.

Winning the case against Grande Communications is extremely important for the RIAA and, for reasons best understood by the parties involved, it clearly places value on the data held by Rightscorp. Whether the RIAA will pay another few hundred thousand dollars to the anti-piracy outfit in 2017 remains to be seen, but Rightscorp will be hoping so, as it’s desperate for the cash.

The company’s year-end filing raises “substantial doubt about the Company’s ability to continue as a going concern” while noting that its management believes that the company will need at least another $500,000 to $1,000,000 to fund operations in 2017.

This new relationship between the RIAA and Rightscorp is an interesting one and one that’s likely to prove controversial. Grande Communications is being sued today, but the big question is which other ISPs will follow in the months and years to come.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Is it on AWS? Domain Identification Using AWS Lambda

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/is-it-on-aws-domain-identification-using-aws-lambda/

In the guest post below, my colleague Tim Bray explains how he built IsItOnAWS.com. Powered by the list of AWS IP address ranges and using a pair of AWS Lambda functions that Tim wrote, the site aims to tell you if your favorite website is running on AWS.

Jeff;


Is it on AWS?
I did some recreational programming over Christmas and ended up with a little Lambda function that amused me and maybe it’ll amuse you too. It tells you whether or not a given domain name (or IP address) (even IPv6!) is in the published list of AWS IP address ranges. You can try it out over at IsItOnAWS.com. Part of the construction involves one Lambda function creating another.

That list of ranges, given as IPv4 and IPv6 CIDRs wrapped in JSON, is here; the how-to documentation is here and there’s a Jeff Barr blog post. Here are a few lines of the “IP-Ranges” JSON:

{
  "syncToken": "1486776130",
  "createDate": "2017-02-11-01-22-10",
  "prefixes": [
    {
      "ip_prefix": "13.32.0.0/15",
      "region": "GLOBAL",
      "service": "AMAZON"
    },
    ...
  "ipv6_prefixes": [
    {
      "ipv6_prefix": "2400:6500:0:7000::/56",
      "region": "ap-southeast-1",
      "service": "AMAZON"
    },

As soon as I saw it, I thought “I wonder if IsItOnAWS.com is available?” It was, and so I had to build this thing. I wanted it to be:

  1. Serverless (because that’s what the cool kids are doing),
  2. simple (because it’s a simple problem, look up a number in a range of numbers), and
  3. fast. Because well of course.

Database or Not?
The construction seemed pretty obvious: Simplify the IP-Ranges into a table, then look up addresses in it. So, where to put the table? I thought about Amazon DynamoDB, but it’s not obvious how best to search on what in effect is a numeric range. I thought about SQL databases, where it is obvious, but note #2 above. I thought about Redis or some such, but then you have to provision instances, see #1 above. I actually ended up stuck for a few days scratching my head over this one.

Then a question occurred to me: How big is that list of ranges? It turns out to have less than a thousand entries. So who needs a database anyhow? Let’s just sort that JSON into an array and binary-search it. OK then, where does the array go? Amazon S3 would be easy, but hey, look at #3 above; S3’s fast, but why would I want it in the loop for every request? So I decided to just generate a little file containing the ranges as an array literal, and include it right into the IsItOnAWS Lambda function. Which meant I’d have to rebuild and upload the function every time the IP addresses change.
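
As a sketch of that lookup (not Tim’s actual code), the sorted ranges can be stored as numeric [start, end] pairs and searched like this; preparing the range array and converting the address to a number are assumed to happen elsewhere:

// ranges: array of [start, end] numbers, sorted by start, non-overlapping.
// addr: the IP address to test, already converted to a number.
function isInRanges(ranges, addr) {
  let lo = 0;
  let hi = ranges.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (addr < ranges[mid][0]) {
      hi = mid - 1;          // look in the lower half
    } else if (addr > ranges[mid][1]) {
      lo = mid + 1;          // look in the upper half
    } else {
      return true;           // addr falls inside this range
    }
  }
  return false;
}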

It turns out that if you care about those addresses, you can subscribe to an Amazon Simple Notification Service (SNS) topic that will notify you whenever it changes (in my recent experience, once or twice a week). And you can hook your subscription up to a Lambda function. With that, I felt I’d found all the pieces anyone could need. There are two Lambda functions: the first, newranges.js, gets the change notifications, generates the JavaScript form of the IP-Ranges data, and uploads a second Lambda function, isitonaws.js, which includes that JavaScript. Vigilant readers will have deduced this is all with the Node runtime.

The new-ranges function, your typical async/waterfall thing, is a little more complex than I’d expected going in.

Postmodern IP Addresses
Its first task is to fetch the IP-Ranges, a straightforward HTTP GET. Then you take that JSON and smooth it out to make it more searchable. Unsurprisingly, there are both IPv4 and IPv6 ranges, and to make things easy I wanted to mash ’em all together into a single array that I could search with simple string or numeric matching. And since IPv6 addresses are way too big for JavaScript numbers to hold, they needed to be strings.

It turns out the way the IPv4 space embeds into IPv6’s ("::ffff:0:0/96") is a little surprising. I’d always assumed it’d be like the BMP mapping into the low bits of Unicode. I idly wonder why it’s this way, but not enough to research it.

The code for crushing all those CIDRs together into a nice searchable array ended up being kind of brutish, but it gets the job done.
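
For the IPv4 part, the conversion from a CIDR to a searchable numeric range can look roughly like the sketch below; handling the IPv6 prefixes (and the ::ffff: embedding mentioned above) needs wider arithmetic, such as padded hex strings or BigInts, and is left out here:

// e.g. cidrToRange('13.32.0.0/15') -> a [start, end] pair covering 2^17 addresses
function ipv4ToNumber(ip) {
  return ip.split('.').reduce(function(acc, octet) {
    return acc * 256 + Number(octet);
  }, 0);
}

function cidrToRange(cidr) {
  const parts = cidr.split('/');
  const start = ipv4ToNumber(parts[0]);
  const size = Math.pow(2, 32 - Number(parts[1]));
  return [start, start + size - 1];
}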

Building Lambda in Lambda
Next, we need to construct the lambda that’s going to actually handle the IsItOnAWS request. This has to be a Zipfile, and NPM has tools to make those. Then it was a matter of jamming the zipped bytes into S3 and uploading them to make the new Lambda function.

The sharp-eyed will note that once I’d created the zip, I could have just uploaded it to Lambda directly. I used the S3 interim step because I wanted to be able to download the generated “ranges” data structure and actually look at it; at some point I may purify the flow.
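
One plausible way for the newranges function to push the rebuilt code to the second function, sketched with the AWS SDK for JavaScript (the function name and bucket are placeholders, not necessarily what Tim used):

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

// Assumes the freshly built zip has already been written to S3.
lambda.updateFunctionCode({
  FunctionName: 'isitonaws',        // placeholder function name
  S3Bucket: 'my-deployment-bucket', // placeholder bucket
  S3Key: 'isitonaws.zip'
}, function(err, data) {
  if (err) {
    console.error('Deploy failed:', err);
  } else {
    console.log('Updated function version', data.Version);
  }
});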

The actual IsItOnAWS runtime is laughably simple, aside from a bit of work around hitting DNS to look up addresses for names, then mashing them into the same format we used for the ranges array. I didn’t do any HTML templating, just read it out of a file in the zip and replaced an invisible <div> with the results if there were any. Plus, I got to code up a binary search method, which only happens once a decade or so but makes me happy.

Putting the Pieces Together
Once I had all this code working, I wanted to connect it to the world, which meant using Amazon API Gateway. I’ve found this complex in the past, but this time around I plowed through Create an API with Lambda Proxy Integration through a Proxy Resource, and found it reasonably linear and surprise-free.

However, it’s mostly focused on constructing APIs (i.e. JSON in/out) as opposed to human experiences. It doesn’t actually say how to send HTML for a human to consume in a browser, but it’s not hard to figure out. Here’s how (from Node):

context.succeed({
  "statusCode": 200,
  "headers": { "Content-type": "text/html" },
  "body": "<html>Your HTML Here</html>"
});

Once I had everything hooked up to API Gateway, the last step was pointing isitonaws.com at it. And that’s why I wrote this code in December-January, but am blogging at you now. Back then, Amazon Certificate Manager (ACM) certs couldn’t be used with API Gateway, and in 2017, life is just too short to go through the old-school ceremony for getting a cert approved and hooked up. ACM makes the cert process a real no-brainer. What with ACM and Let’s Encrypt loose in the wild, there’s really no excuse any more for having a non-HTTPS site. Both are excellent, but if you’re using AWS services like API Gateway and CloudFront like I am here, ACM is a smoother fit. Also it auto-renews, which you have to like.

So as of now, hooking up a domain name via HTTPS and CloudFront to your API Gateway API is dead easy; see Use Custom Domain Name as API Gateway API Host Name. Worked for me, first time, but something to watch out for (in March 2017, anyhow): When you get to the last step of connecting your ACM cert to your API, you get a little spinner that wiggles at you for several minutes while it hooks things up; this is apparently normal. Fortunately I got distracted and didn’t give up and refresh or cancel or anything, which might have screwed things up.

By the way, as a side-effect of using API Gateway, this is all running through CloudFront. So what with that, and not having a database, you’d expect it to be fast. And yep, it sure is, from here in Vancouver anyhow. Fast enough to not bother measuring.

I also subscribed my email to the “IP-Ranges changed” SNS topic, so every now and then I get an email telling me it’s changed, and I smile because I know that my Lambda wrote a new Lambda, all automatic, hands-off, clean, and fast.

Tim Bray, Senior Principal Engineer

 

A Day in the Life of a Data Center

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/day-life-datacenter-part/

Editor’s note: We’ve reposted this very popular 2016 blog entry because we often get questions about how Backblaze stores data, and we wanted to give you a look inside!

A data center is part of the “cloud”; as in cloud backup, cloud storage, cloud computing, and so on. It is often where your data goes or goes through, once it leaves your home, office, mobile phone, tablet, etc. While many of you have never been inside a data center, chances are you’ve seen one. Cleverly disguised to fit in, data centers are often nondescript buildings with few if any windows and little if any signage. They can be easy to miss. There are exceptions of course, but most data centers are happy to go completely unnoticed.

We’re going to take a look at a typical day in the life of a data center.

Getting Inside A Data Center

As you approach a data center, you’ll notice there isn’t much to notice. There’s no “here’s my datacenter” signage, and the parking lot is nearly empty. You might wonder, “is this the right place?” While larger, more prominent, data centers will have armed guards and gates, most data centers have a call-box outside of a locked door. In either case, data centers don’t like drop-in visitors, so unless you’ve already made prior arrangements, you’re going to be turned away. In short, regardless of whether it is a call-box or an armed guard, a primary line of defense is to know everyone whom you let in the door.

Once inside the building, you’re still a long way from being in the real data center. You’ll start by presenting the proper identification to the guard and fill out some paperwork. Depending on the facility and your level of access, you will have to provide a fingerprint for biometric access/exit confirmation. Eventually, you get a badge or other form of visual identification that shows your level of access. For example, you could have free range of the place (highly doubtful), or be allowed in certain defined areas (doubtful), or need an escort wherever you go (likely). For this post, we’ll give you access to the Backblaze areas in the data center, accompanied of course.

We’re ready to go inside, so attach your badge with your picture on it, get your finger ready to be scanned, and remember to smile for the cameras as you pass through the “box.” While not the only method, the “box” is a widely used security technique that allows one person at a time to pass through a room where they are recorded on video and visually approved before they can leave. Speaking of being on camera, by the time you get to this point, you will have passed dozens of cameras – hidden, visible, behind one-way glass, and so on.

Once past the “box,” you’re in the data center, right? Probably not. Data centers can be divided into areas or blocks each with different access codes and doors. Once out of the box, you still might only be able to access the snack room and the bathrooms. These “rooms” are always located outside of the data center floor. Let’s step inside, “badge in please.”

Inside the Data Center

While every data center is different, there are three things that most people find common in their experience; how clean it is, the noise level and the temperature.

Data Centers are Clean

From the moment you walk into a typical data center, you’ll notice that it is clean. While most data centers are not cleanrooms by definition, they do ensure the environment is suitable for the equipment housed there.

Data center Entry Mats

Cleanliness starts at the door. Mats like this one capture the dirt from the bottom of your shoes. These mats get replaced regularly. As you look around, you might notice that there are no trashcans on the data center floor. As a consequence, the data center staff follows the “whatever you bring in, you bring out” philosophy, sort of like hiking in the woods. Most data centers won’t allow food or drink on the data center floor, either. Instead, one has to leave the datacenter floor to have a snack or use the restroom.

Besides being visually clean, the air in a data center is also amazingly clean: Filtration systems filter particulates to the sub-micron level. Data center filters have a 99.97% (or higher) efficiency rating in removing 0.3-micron particles. In comparison, your typical home filter provides a 70% sub-micron efficiency level. That might explain the dust bunnies behind your gaming tower.

Data Centers Are Noisy

Data center Noise Levels

The decibel level in a given data center can vary considerably. As you can see, the Backblaze datacenter is between 76 and 78 decibels. This is the level when you are near the racks of Storage Pods. How loud is 78dB? Normal conversation is 60dB, a barking dog is 70dB, and a screaming child is only 80dB. In the US, OSHA has established 85dB as the lower threshold for potential noise damage. Still, 78dB is loud enough that we insist our data center staff wear ear protection on the floor. Their favorite earphones are Bose’s noise reduction models. They are a bit costly but well worth it.

The noise comes from a combination of the systems needed to operate the data center: air filtration, heating, cooling, electrical, and other systems. Add to that the 6,000 spinning 3-inch fans in the Storage Pods, and you get a lot of noise.

Data Centers Are Hot and Cold

As noted, part of the noise comes from heating and air-conditioning systems, mostly air-conditioning. As you walk through the racks and racks of equipment in many data centers, you’ll alternate between warm aisles and cold aisles. In a typical raised floor data center, cold air rises from vents in the floor in front of each rack. Fans inside the servers in the racks, in our case Storage Pods, pull the air in from the cold aisle and through the server. By the time the air reaches the other side, the warm row, it is warmer and is sucked away by vents in the ceiling or above the racks.

There was a time when data centers were like meat lockers with some kept as cold as 55°F (12.8°C). Warmer heads prevailed, and over the years the average temperature has risen to over 80°F (26.7°C) with some companies pushing that even higher. That works for us, but in our case, we are more interested in the temperature inside our Storage Pods and more precisely the hard drives within. Previously we looked at the correlation between hard disk temperature and failure rate. The conclusion: As long as you run drives well within their allowed range of operating temperatures, there is no problem operating a data center at 80°F (26.7°C) or even higher. As for the employees, if they get hot they can always work in the cold aisle for a while and vice-versa.

Getting Out of a Data Center

When you’re finished visiting the data center, remember to leave yourself a few extra minutes to get out. The first challenge is to find your way back to the entrance. If an escort accompanies you, there’s no issue, but if you’re on your own, I hope you paid attention to the way inside. It’s amazing how all the walls and doors look alike as you’re wandering around looking for the exit, and with data centers getting larger and larger the task won’t get any easier. For example, the Switch SUPERNAP datacenter complex in Reno Nevada will be over 6.4 million square feet, roughly the size of the Pentagon. Having worked in the Pentagon, I can say that finding your way around a facility that large can be daunting. Of course, a friendly security guard is likely to show up to help if you get lost or curious.

On your way back out you’ll pass through the “box” once again for your exit cameo. Also, if you are trying to leave with more than you came in with you will need a fair bit of paperwork before you can turn in your credentials and exit the building. Don’t forget to wave at the cameras in the parking lot.

The post A Day in the Life of a Data Center appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Security Vulnerabilities in Mobile MAC Randomization

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/security_vulner_8.html

Interesting research: “A Study of MAC Address Randomization in Mobile Devices When it Fails“:

Abstract: Media Access Control (MAC) address randomization is a privacy technique whereby mobile devices rotate through random hardware addresses in order to prevent observers from singling out their traffic or physical location from other nearby devices. Adoption of this technology, however, has been sporadic and varied across device manufacturers. In this paper, we present the first wide-scale study of MAC address randomization in the wild, including a detailed breakdown of different randomization techniques by operating system, manufacturer, and model of device. We then identify multiple flaws in these implementations which can be exploited to defeat randomization as performed by existing devices. First, we show that devices commonly make improper use of randomization by sending wireless frames with the true, global address when they should be using a randomized address. We move on to extend the passive identification techniques of Vanhoef et al. to effectively defeat randomization in 96% of Android phones. Finally, we show a method that can be used to track 100% of devices using randomization, regardless of manufacturer, by exploiting a previously unknown flaw in the way existing wireless chipsets handle low-level control frames.

Basically, iOS and Android phones are not very good at randomizing their MAC addresses. And tricks with level-2 control frames can exploit weaknesses in their chipsets.

Slashdot post.

SAML for Your Serverless JavaScript Application: Part II

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/saml-for-your-serverless-javascript-application-part-ii/

Contributors: Richard Threlkeld, Gene Ting, Stefano Buliani

The full code for both scenarios—including SAM templates—can be found at the samljs-serverless-sample GitHub repository. We highly recommend you use the SAM templates in the GitHub repository to create the resources; optionally, you can create them manually.


This is the second part of a two part series for using SAML providers in your application and receiving short-term credentials to access AWS Services. These credentials can be limited with IAM roles so the users of the applications can perform actions like fetching data from databases or uploading files based on their level of authorization. For example, you may want to build a JavaScript application that allows a user to authenticate against Active Directory Federation Services (ADFS). The user can be granted scoped AWS credentials to invoke an API to display information in the application or write to an Amazon DynamoDB table.

Part I of this series walked through a client-side flow of retrieving SAML claims and passing them to Amazon Cognito to retrieve credentials. This blog post will take you through a more advanced scenario where logic can be moved to the backend for a more comprehensive and flexible solution.

Prerequisites

As in Part I of this series, you need ADFS running in your environment. The following configurations are used for reference:

  1. ADFS federated with the AWS console. For a walkthrough with an AWS CloudFormation template, see Enabling Federation to AWS Using Windows Active Directory, ADFS, and SAML 2.0.
  2. Verify that you can authenticate with user example\bob for both the ADFS-Dev and ADFS-Production groups via the sign-in page.
  3. Create an Amazon Cognito identity pool.

Scenario Overview

The scenario in the last blog post may be sufficient for many organizations but, due to size restrictions, some browsers may drop part or all of a query string when sending a large number of claims in the SAMLResponse. Additionally, for auditing and logging reasons, you may wish to relay SAML assertions via POST only and perform parsing in the backend before sending credentials to the client. This scenario allows you to perform custom business logic and validation as well as putting tracking controls in place.

In this post, we want to show you how these requirements can be achieved in a Serverless application. We also show how different challenges (like XML parsing and JWT exchange) can be done in a Serverless application design. Feel free to mix and match, or swap pieces around to suit your needs.

This scenario uses the following services and features:

  • Cognito for unique ID generation and default role mapping
  • S3 for static website hosting
  • API Gateway for receiving the SAMLResponse POST from ADFS
  • Lambda for processing the SAML assertion using a native XML parser
  • DynamoDB conditional writes for session tracking exceptions
  • STS for credentials via Lambda
  • KMS for signing JWT tokens
  • API Gateway custom authorizers for controlling per-session access to credentials, using JWT tokens that were signed with KMS keys
  • JavaScript-generated SDK from API Gateway using a service proxy to DynamoDB
  • RelayState in the SAMLRequest to ADFS to transmit the CognitoID and a short code from the client to your AWS backend

At a high level, this solution is similar to that of Scenario 1; however, most of the work is done in the infrastructure rather than on the client.

  • ADFS still uses a POST binding to redirect the SAMLResponse to API Gateway; however, the Lambda function does not immediately redirect.
  • The Lambda function decodes and uses an XML parser to read the properties of the SAML assertion.
  • If the user’s assertion shows that they belong to a certain group matching a specified string (“Prod” in the sample), then you assign a role that they can assume (“ADFS-Production”).
  • Lambda then gets the credentials on behalf of the user and stores them in DynamoDB as well as logging an audit record in a separate table.
  • Lambda then returns a short-lived, signed JSON Web Token (JWT) to the JavaScript application.
  • The application uses the JWT to get their stored credentials from DynamoDB through an API Gateway custom authorizer.

The architecture you build in this tutorial is outlined in the following diagram.

lambdasamltwo_1.png

First, a user visits your static website hosted on S3. They generate an ephemeral random code that is transmitted during redirection to ADFS, where they are prompted for their Active Directory credentials.

Upon successful authentication, the ADFS server redirects the SAMLResponse assertion, along with the code (as the RelayState) via POST to API Gateway.

The Lambda function parses the SAMLResponse. If the user is part of the appropriate Active Directory group (AWS-Production in this tutorial), it retrieves credentials from STS on behalf of the user.

The credentials are stored in a DynamoDB table called SAMLSessions, along with the short code. The user login is stored in a tracking table called SAMLUsers.

The Lambda function generates a JWT token, with a 30-second expiration time signed with KMS, then redirects the client back to the static website along with this token.

The client then makes a call to an API Gateway resource acting as a DynamoDB service proxy that retrieves the credentials via a DeleteItem call. To make this call, the client passes the JWT in the authorization header.

A custom authorizer runs to validate the token using the KMS key again as well as the original random code.

Now that the client has credentials, it can use these to access AWS resources.
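
As a rough sketch of that last exchange from the browser (the endpoint path and variable names are placeholders; the working version is in the samljs-serverless-sample repository), the client sends the JWT and its Cognito ID to the API Gateway GET method and receives the stored credentials back:

// jwtFromRedirect: the short-lived token returned after the ADFS POST
// cognitoId: the identity generated at the start of the login flow
$.ajax({
  url: apiGatewayUrl + '/saml',   // placeholder resource path
  type: 'GET',
  headers: {
    'Authorization': jwtFromRedirect,
    'COGNITO_ID': cognitoId
  },
  success: function(data) {
    // The DeleteItem service proxy returns the (now removed) session item,
    // which contains the temporary AWS credentials.
    console.log('Received scoped credentials', data);
  },
  error: function() {
    console.log('Token rejected or session expired');
  }
});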

Tutorial: Backend processing and audit tracking

Before you walk through this tutorial you will need the source code from the samljs-serverless-sample GitHub repository. You should use the SAM template provided in order to streamline the process, but we’ll also outline how you would manually create the resources. There is a readme in the repository with instructions for using the SAM template. Either way, you will still perform the manual steps of KMS key configuration, ADFS enablement of RelayState, and Amazon Cognito identity pool creation. The template automates the creation of the S3 website, Lambda functions, API Gateway resources, and DynamoDB tables.

We walk through the details of all the steps and configuration below for illustrative purposes, calling out the sections that can be omitted if you used the SAM template.

KMS key configuration

To sign JWT tokens, you need an encrypted plaintext key, to be stored in KMS. You will need to complete this step even if you use the SAM template.

  1. In the IAM console, choose Encryption Keys, Create Key.
  2. For Alias, type sessionMaster.
  3. For Advanced Options, choose KMS, Next Step.
  4. For Key Administrative Permissions, select your administrative role or user account.
  5. For Key Usage Permissions, you can leave this blank as the IAM Role (next section) will have individual key actions configured. This allows you to perform administrative actions on the set of keys while the Lambda functions have rights to just create data keys for encryption/decryption and use them to sign JWTs.
  6. Take note of the Key ID, which is needed for the Lambda functions.

IAM role configuration

You will need an IAM role for executing your Lambda functions. If you are using the SAM template this can be skipped. The sample code in the GitHub repository under Scenario2 creates separate roles for each function, with limited permissions on individual resources when you use the SAM template. We recommend separate roles scoped to individual resources for production deployments. Your Lambda functions need the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1432927122000",
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:GetItem",
                "dynamodb:DeleteItem",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "kms:GenerateDataKey*",
                "kms:Decrypt"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Lambda function configuration

If you are not using the SAM template, create the following three Lambda functions from the GitHub repository in /Scenario2/lambda using the following names and environment variables. The Lambda functions are written in Node.js.

  • GenerateKey_awslabs_samldemo
  • ProcessSAML_awslabs_samldemo
  • SAMLCustomAuth_awslabs_samldemo

The functions above are built, packaged, and uploaded to Lambda. For two of the functions, this can be done from your workstation (the sample commands for each function assume OSX or Linux). The third will need to be built on an AWS EC2 instance running the current Lambda AMI.

GenerateKey_awslabs_samldemo

This function is only used one time to create keys in KMS for signing JWT tokens. The function calls GenerateDataKey and stores the encrypted CipherText blob as Base64 in DynamoDB. This is used by the other two functions for getting the PlainTextKey for signing with a Decrypt operation.
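
A minimal sketch of that flow in a Node.js handler is shown below; the DynamoDB attribute names and the shape of the encryption context are assumptions for illustration, not necessarily what the sample repository uses:

const AWS = require('aws-sdk');
const kms = new AWS.KMS();
const ddb = new AWS.DynamoDB.DocumentClient();

exports.handler = function(event, context, callback) {
  kms.generateDataKey({
    KeyId: process.env.KMS_KEY_ID,
    KeySpec: 'AES_256',
    EncryptionContext: { context: process.env.ENC_CONTEXT } // assumed context key name
  }, function(err, data) {
    if (err) return callback(err);
    // Persist only the encrypted key; the plaintext key is never stored.
    ddb.put({
      TableName: process.env.SESSION_DDB_TABLE,
      Item: {
        identityhash: process.env.RAND_HASH,
        cipherTextKey: data.CiphertextBlob.toString('base64') // assumed attribute name
      }
    }, callback);
  });
};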

This function only requires a single file. It has the following environment variables:

  • KMS_KEY_ID: Unique identifier from KMS for your sessionMaster Key
  • SESSION_DDB_TABLE: SAMLSessions
  • ENC_CONTEXT: ADFS (or something unique to your organization)
  • RAND_HASH: us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

Navigate into /Scenario2/lambda/GenerateKey and run the following commands:

zip -r generateKey.zip .

aws lambda create-function --function-name GenerateKey_awslabs_samldemo --runtime nodejs4.3 --role LAMBDA_ROLE_ARN --handler index.handler --timeout 10 --memory-size 512 --zip-file fileb://generateKey.zip --environment Variables={SESSION_DDB_TABLE=SAMLSessions,ENC_CONTEXT=ADFS,RAND_HASH=us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX,KMS_KEY_ID=<your KMS key ID>}

SAMLCustomAuth_awslabs_samldemo

This is an API Gateway custom authorizer called after the client has been redirected to the website as part of the login workflow. This function calls a GET against the service proxy to DynamoDB, retrieving credentials. The function uses the KMS key signing validation of the JWT created in the ProcessSAML_awslabs_samldemo function and also validates the random code that was generated at the beginning of the login workflow.
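
The verification step could look roughly like the following sketch; the attribute name and encryption-context shape are assumptions, and building the IAM policy document that API Gateway expects from an authorizer is omitted (the real implementation lives in /Scenario2/lambda/CustomAuth):

const AWS = require('aws-sdk');
const nJwt = require('njwt');
const kms = new AWS.KMS();

// cipherTextKey: the Base64 CiphertextBlob previously stored in DynamoDB
function verifyToken(token, cipherTextKey, callback) {
  kms.decrypt({
    CiphertextBlob: Buffer.from(cipherTextKey, 'base64'),
    EncryptionContext: { context: process.env.ENC_CONTEXT } // must match GenerateKey
  }, function(err, data) {
    if (err) return callback('Unauthorized');
    try {
      // Throws if the signature is invalid or the 30-second token has expired.
      const verified = nJwt.verify(token, data.Plaintext);
      // The random code issued at the start of the login flow would also be checked here.
      callback(null, verified.body);
    } catch (e) {
      callback('Unauthorized');
    }
  });
}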

You must install the dependencies before zipping this function up. It has the following environment variables:

  • SESSION_DDB_TABLE: SAMLSessions
  • ENC_CONTEXT: ADFS (or whatever was used in GenerateKey_awslabs_samldemo)
  • ID_HASH: us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

Navigate into /Scenario2/lambda/CustomAuth and run:

npm install

zip -r custom_auth.zip .

aws lambda create-function --function-name SAMLCustomAuth_awslabs_samldemo --runtime nodejs4.3 --role LAMBDA_ROLE_ARN --handler CustomAuth.handler --timeout 10 --memory-size 512 --zip-file fileb://custom_auth.zip --environment Variables={SESSION_DDB_TABLE=SAMLSessions,ENC_CONTEXT=ADFS,ID_HASH=us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}

ProcessSAML_awslabs_samldemo

This function is called when ADFS sends the SAMLResponse to API Gateway. The function parses the SAML assertion to select a role (based on a simple string search) and extract user information. It then uses this data to get short-term credentials from STS via AssumeRoleWithSAML and stores this information in a SAMLSessions table and tracks the user login via a SAMLUsers table. Both of these are DynamoDB tables but you could also store the user information in another AWS database type, as this is for auditing purposes. Finally, this function creates a JWT (signed with the KMS key) which is only valid for 30 seconds and is returned to the client as part of a 302 redirect from API Gateway.
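
The two central calls, getting credentials via AssumeRoleWithSAML and minting the 30-second JWT with nJwt, could be sketched like this; the variable names are placeholders and the snippet assumes it runs inside the Lambda handler (see /Scenario2/lambda/ProcessSAML for the real code):

const AWS = require('aws-sdk');
const nJwt = require('njwt');
const sts = new AWS.STS();

// samlResponseBase64: the raw SAMLResponse POSTed by ADFS
// selectedRole: the role ARN chosen by the "Prod" string match
// plaintextKey: the signing key decrypted from KMS
// identityHash: the session identifier stored in SAMLSessions
sts.assumeRoleWithSAML({
  PrincipalArn: process.env.PRINCIPAL_ARN,
  RoleArn: selectedRole,
  SAMLAssertion: samlResponseBase64
}, function(err, data) {
  if (err) return callback(err);
  // data.Credentials holds AccessKeyId, SecretAccessKey, and SessionToken,
  // which get written to the SAMLSessions table keyed by identityHash.
  const jwt = nJwt.create({ sub: identityHash }, plaintextKey);
  jwt.setExpiration(new Date().getTime() + 30 * 1000); // valid for 30 seconds
  const token = jwt.compact();
  // The function then 302-redirects the client back to the site with this token.
});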

This function needs to be built on an EC2 server running Amazon Linux. It leverages two main external libraries:

  • nJwt: Used for secure JWT creation for individual client sessions to get access to their records
  • libxmljs: Used for XML XPath queries of the decoded SAMLResponse from AD FS

Libxmljs uses native build tools and you should run this on EC2 running the same AMI as Lambda and with Node.js v4.3.2; otherwise, you might see errors. For more information about current Lambda AMI information, see Lambda Execution Environment and Available Libraries.

After you have the correct AMI launched in EC2 and have SSH open to that host, install Node.js. Ensure that the Node.js version on EC2 is 4.3.2, to match Lambda. If your version is off, you can roll back with NVM.

After you have set up Node.js, run the following command:

yum install -y make gcc*

Now, create a /saml folder on your EC2 server and copy up ProcessSAML.js and package.json from /Scenario2/lambda/ProcessSAML to the EC2 server. Here is a sample SCP command:

cd ProcessSAML/

ls

package.json    ProcessSAML.js

scp -i ~/path/yourpemfile.pem ./* ec2-user@<your-ec2-host>:/home/ec2-user/saml/

Then you can SSH to your server, cd into the /saml directory, and run:

npm install

A successful build should look similar to the following:

lambdasamltwo_2.png

Finally, zip up the package and create the function using the following AWS CLI command and these environment variables. Configure the CLI with your credentials as needed.

  • SESSION_DDB_TABLE: SAMLSessions
  • ENC_CONTEXT: ADFS (or whatever was used in GenerateKeyawslabssamldemo)
  • PRINCIPAL_ARN: Full ARN of the AD FS IdP created in the IAM console
  • USER_DDB_TABLE: SAMLUsers
  • REDIRECT_URL: Endpoint URL of your static S3 website (or CloudFront distribution domain name if you did that optional step)
  • ID_HASH: us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

zip -r saml.zip .

aws lambda create-function --function-name ProcessSAML_awslabs_samldemo --runtime nodejs4.3 --role LAMBDA_ROLE_ARN --handler ProcessSAML.handler --timeout 10 --memory-size 512 --zip-file fileb://saml.zip --environment Variables={USER_DDB_TABLE=SAMLUsers,SESSION_DDB_TABLE=SAMLSessions,REDIRECT_URL=<your S3 bucket and test page path>,ID_HASH=us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX,ENC_CONTEXT=ADFS,PRINCIPAL_ARN=<your ADFS IdP ARN>}

If you built the first two functions on your workstation and created the ProcessSAML_awslabs_samldemo function separately in the Lambda console before building on EC2, you can update the code after building on EC2 with the following command:

aws lambda update-function-code --function-name ProcessSAML_awslabs_samldemo --zip-file fileb://saml.zip

Role trust policy configuration

This scenario uses STS directly to assume a role. You will need to complete this step even if you use the SAM template. Modify the trust policy, as you did before when Amazon Cognito was assuming the role. In the GitHub repository sample code, ProcessSAML.js is preconfigured to filter and select a role with “Prod” in the name via the selectedRole variable.

This is an example of business logic you can alter in your organization later, such as a callout to an external mapping database for other rules matching. In this tutorial, it corresponds to the ADFS-Production role that was created.

  1. In the IAM console, choose Roles and open the ADFS-Production Role.
  2. Edit the Trust Permissions field and replace the content with the following:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": [
              "arn:aws:iam::ACCOUNTNUMBER:saml-provider/ADFS"
            ]
          },
          "Action": "sts:AssumeRoleWithSAML"
        }
      ]
    }

If you end up using another role (or add more complex filtering/selection logic), ensure that those roles have similar trust policy configurations. Also note that the sample policy above purposely uses an array for the federated provider matching the IdP ARN that you added. If your environment has multiple SAML providers, you could list them here and modify the code in ProcessSAML.js to process requests from different IdPs and grant or revoke credentials accordingly.

DynamoDB table creation

If you are not using the SAM template, create two DynamoDB tables:

  • SAMLSessions: Temporarily stores credentials from STS. Credentials are removed by an API Gateway Service Proxy to the DynamoDB DeleteItem call that simultaneously returns the credentials to the client.
  • SAMLUsers: This table is for tracking user information and the last time they authenticated in the system via ADFS.

The following AWS CLI commands create the tables (each indexed only with a primary key hash, called identityhash and CognitoID respectively):

aws dynamodb create-table \
    --table-name SAMLSessions \
    --attribute-definitions \
        AttributeName=identityhash,AttributeType=S \
    --key-schema AttributeName=identityhash,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
aws dynamodb create-table \
    --table-name SAMLUsers \
    --attribute-definitions \
        AttributeName=CognitoID,AttributeType=S \
    --key-schema AttributeName=CognitoID,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5

After the tables are created, you should be able to run the GenerateKey_awslabs_samldemo Lambda function and see a CipherText key stored in SAMLSessions. This is only for convenience of this post, to demonstrate that you should persist CipherText keys in a data store and never persist plaintext keys that have been decrypted. You should also never log plaintext keys in your code.

API Gateway configuration

If you are not using the SAM template, you will need to create API Gateway resources. If you have created resources for Scenario 1 in Part I, then the naming of these resources may be similar. If that is the case, then simply create an API with a different name (SAMLAuth2 or similar) and follow these steps accordingly.

  1. In the API Gateway console for your API, choose Authorizers, Custom Authorizer.
  2. Select your region and enter SAMLCustomAuth_awslabs_samldemo for the Lambda function. Choose a friendly name like JWTParser and ensure that Identity token source is method.request.header.Authorization. This tells the custom authorizer to look for the JWT in the Authorization header of the HTTP request, which is specified in the JavaScript code on your S3 webpage. Save the changes.

    lambdasamltwo_3.png
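
If you are scripting this step instead of using the console, creating an equivalent TOKEN authorizer from the CLI looks roughly like the following sketch (REGION, ACCOUNT, and the REST API ID are placeholders; you must also grant API Gateway permission to invoke the Lambda function, something the console normally does for you):

aws apigateway create-authorizer \
    --rest-api-id YOUR_REST_API_ID \
    --name JWTParser \
    --type TOKEN \
    --authorizer-uri arn:aws:apigateway:REGION:lambda:path/2015-03-31/functions/arn:aws:lambda:REGION:ACCOUNT:function:SAMLCustomAuth_awslabs_samldemo/invocations \
    --identity-source method.request.header.Authorization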

Now it’s time to wire up the Lambda functions to API Gateway.

  1. In the API Gateway console, choose Resources, select your API, and then create a Child Resource called SAML. This includes a POST and a GET method. The POST method uses the ProcessSAML_awslabs_samldemo Lambda function and a 302 redirect, while the GET method uses the JWTParser custom authorizer with a service proxy to DynamoDB to retrieve credentials upon successful authorization.
  2. lambdasamltwo_4.png

  3. Create a POST method. For Integration Type, choose Lambda and add the ProcessSAML_awslabs_samldemo Lambda function. For Method Request, add headers called RelayState and SAMLResponse.

    lambdasamltwo_5.png

  4. Delete the Method Response code for 200 and add a 302. Create a response header called Location. In the Response Models section, for Content-Type, choose application/json and for Models, choose Empty.

    lambdasamltwo_6.png

  5. Delete the Integration Response section for 200 and add one for 302 that has a Method response status of 302. Edit the response header for Location to add a Mapping value of integration.response.body.location.

    lambdasamltwo_7.png

  6. Finally, in order for Lambda to capture the SAMLResponse and RelayState values, choose Integration Request.

  7. In the Body Mapping Template section, for Content-Type, enter application/x-www-form-urlencoded and add the following template:

    {
      "SAMLResponse": "$input.params('SAMLResponse')",
      "RelayState": "$input.params('RelayState')",
      "formparams": $input.json('$')
    }

  8. Create a GET method with an Integration Type of Service Proxy. Select the region and DynamoDB as the AWS Service. Use POST for the HTTP method and DeleteItem for the Action. This matters because you leverage a DynamoDB feature that returns the current record when you delete it: credentials are never stored long term in this system, yet the client can still retrieve them. For Execution role, use the Lambda role from earlier, or a new role whose IAM permissions are scoped to DeleteItem on the SAMLSessions table.

    lambdasamltwo_8.png

  9. Save this and open Method Request.

  10. For Authorization, select your custom authorizer JWTParser. Add in a header called COGNITO_ID and save the changes.

    lambdasamltwo_9.png

  11. In the Integration Request, add a header named Content-Type with a Mapped value of 'application/x-amzn-json-1.0' (the single quotes surrounding the value are required).

  12. Next, in the Body Mapping Template section, for Content-Type, enter application/json and add the following template:

    {
        "TableName": "SAMLSessions",
        "Key": {
            "identityhash": {
                "S": "$input.params('COGNITO_ID')"
            }
        },
        "ReturnValues": "ALL_OLD"
    }

Inspect this closely for a moment. When your client passes the JWT in an Authorization header to this GET method, the JWTParser custom authorizer grants or denies execution of a DeleteItem call on the SAMLSessions table.

If access is granted, there must be an item in the table to delete, referenced by its primary key. The client JavaScript (shown in a moment) passes its CognitoID through as a header called COGNITO_ID, which is mapped above. DeleteItem then removes the credentials that were placed there via a call to STS by the ProcessSAML_awslabs_samldemo Lambda function. Because the action above specifies ALL_OLD for ReturnValues, DynamoDB returns those credentials at the same time.
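
For intuition, the same delete-and-return behavior can be exercised directly from the CLI (the key value below is a made-up placeholder; in the real flow the service proxy issues the equivalent call on the client's behalf):

aws dynamodb delete-item \
    --table-name SAMLSessions \
    --key '{"identityhash": {"S": "EXAMPLE_IDENTITY_HASH"}}' \
    --return-values ALL_OLD
# With ReturnValues=ALL_OLD, the response includes the deleted item's attributes --
# in this system, the temporary credentials.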

lambdasamltwo_10.png

  1. Save the changes and open your /saml resource root.
  2. Choose Actions, Enable CORS.
  3. In the Access-Control-Allow-Headers section, add COGNITO_ID into the end (inside the quotes and separated from other headers by a comma), then choose Enable CORS and replace existing CORS headers.
  4. When completed, choose Actions, Deploy API. Use the Prod stage or another stage.
  5. In the Stage Editor, choose SDK Generation. For Platform, choose JavaScript and then choose Generate SDK. Save the folder somewhere handy. Take note of the Invoke URL value at the top, as you need it for the ADFS configuration later (a CLI equivalent of this step is sketched below).
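
If you would rather script the deployment and SDK download from step 5, a rough CLI equivalent (the REST API ID is a placeholder) is:

aws apigateway create-deployment \
    --rest-api-id YOUR_REST_API_ID \
    --stage-name Prod
aws apigateway get-sdk \
    --rest-api-id YOUR_REST_API_ID \
    --stage-name Prod \
    --sdk-type javascript \
    apig-sdk.zip
# Unzip apig-sdk.zip to obtain apigClient.js and the lib folder used later.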

Website configuration

If you are not using the SAM template, create an S3 bucket and configure it as a static website in the same way that you did for Part I.

If you are using the SAM template, the bucket is created for you automatically; however, the steps below still need to be completed:

In the source code repository, edit /Scenario2/website/configs.js.

  1. Ensure that the identityPool value matches your Amazon Cognito Pool ID and the region is correct.
  2. Leave adfsUrl the same if you’re testing on your lab server; otherwise, update with the AD FS DNS entries as appropriate.
  3. Update the relayingPartyId value as well if you used something different from the prerequisite blog post.

Next, download the minified version of the AWS SDK for JavaScript in the Browser (aws-sdk.min.js) and place it along with the other files in /Scenario2/website into the S3 bucket.

Copy the files from the API Gateway generated SDK in the last section to this bucket so that apigClient.js and the lib folder are in the root directory. The imports for these scripts (which do things like sign API requests and configure headers for the JWT in the Authorization header) are already included in the index.html file. Consult the latest API Gateway documentation if the SDK generation process changes in the future.
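
If you are setting up the bucket by hand, a minimal CLI sketch of the website configuration and upload (the bucket name is a placeholder; depending on your account settings you may need a bucket policy instead of object ACLs for public reads):

aws s3 mb s3://YOUR-SAML-DEMO-BUCKET
aws s3 website s3://YOUR-SAML-DEMO-BUCKET \
    --index-document index.html
aws s3 sync ./Scenario2/website s3://YOUR-SAML-DEMO-BUCKET \
    --acl public-read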

ADFS configuration

Now that the AWS setup is complete, modify your ADFS setup to capture RelayState information about the client and to send the POST response to API Gateway for processing. You will need to complete this step even if you use the SAM template.

If you’re using Windows Server 2008 with ADFS 2.0, ensure that Update Rollup 2 is installed before enabling RelayState. Please see official Microsoft documentation for specific download information.

  1. After Update Rollup 2 is installed, modify %systemroot%\inetpub\adfs\ls\web.config. If you’re on a newer version of Windows Server running AD FS 3.0, modify %systemroot%\ADFS\Microsoft.IdentityServer.Servicehost.exe.config.
  2. Find the section in the XML marked <Microsoft.identityServer.web> and add an entry for <useRelayStateForIdpInitiatedSignOn enabled="true">. If you have the proper ADFS rollup or version installed, this should allow the RelayState parameter to be accepted by the service provider.
  3. In the ADFS console, open Relying Party Trusts for Amazon Web Services and choose Endpoints.
  4. For Binding, choose POST and for Invoke URL, enter the API Gateway Invoke URL from the stage that you noted earlier.

At this point, you are ready to test out your webpage. Navigate to the S3 static website Endpoint URL and it should redirect you to the ADFS login screen. If the user login has been recent enough to have a valid SAML cookie, then you should see the login pass-through; otherwise, a login prompt appears. After the authentication has taken place, you should quickly end up back at your original webpage. Using the browser debugging tools, you see “Successful DDB call” followed by the results of a call to STS that were stored in DynamoDB.

lambdasamltwo_11.png

As in Scenario 1, the sample code under /scenario2/website/index.html has a button that allows you to “ping” an endpoint to test whether the federated credentials are working. If you used the SAM template, this should already be wired up and you can test it out (it will fail at first; keep reading to find out how to set the IAM permissions!). If not, go to API Gateway and create a new resource called /users at the same level as /saml in your API, with a GET method.

lambdasamltwo_12.png

For Integration type, choose Mock.

lambdasamltwo_13.png

In the Method Request, for Authorization, choose AWS_IAM. In the Integration Response, in the Body Mapping Template section, for Content-Type, choose application/json and add the following JSON:

{
    "status": "Success",
    "agent": "${context.identity.userAgent}"
}

lambdasamltwo_14.png
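
To sanity-check the mock integration before deploying, API Gateway can invoke the method server-side without client authentication; an illustrative call (the API and resource IDs are placeholders you can look up with aws apigateway get-resources):

aws apigateway test-invoke-method \
    --rest-api-id YOUR_REST_API_ID \
    --resource-id USERS_RESOURCE_ID \
    --http-method GET
# The response body should echo the mock template above.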

Before using this new Mock API as a test, configure CORS and re-generate the JavaScript SDK so that the browser knows about the new methods.

  1. On the /saml resource root, choose Actions, Enable CORS.
  2. In the Access-Control-Allow-Headers section, add COGNITO_ID at the end (inside the quotes and separated from other headers by a comma), then choose Enable CORS and replace existing CORS headers.
  3. Choose Actions, Deploy API. Use the stage that you configured earlier.
  4. In the Stage Editor, choose SDK Generation and select JavaScript as your platform. Choose Generate SDK.
  5. Upload the new apigClient.js and lib directory to the S3 bucket of your static website.

One last thing must be completed before the federated credentials can invoke this mock endpoint with AWS_IAM authorization (you will need to do this even if you used the SAM template): the ADFS-Production role needs execute-api:Invoke permissions for this API Gateway resource.

  1. In the IAM console, choose Roles, and open the ADFS-Production Role.

  2. For testing, you can attach the AmazonAPIGatewayInvokeFullAccess policy; however, for production, you should scope this down to the resource, as documented in Control Access to API Gateway with IAM Permissions (a scoped-down example is sketched at the end of this section).

  3. After you have attached a policy with invocation rights and authenticated with AD FS to finish the redirect process, choose PING.

If everything has been set up successfully, you should see an alert with information about the user agent.
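
A minimal sketch of the scoped-down alternative from step 2, applied as an inline policy from the CLI (the policy name is arbitrary; REGION, ACCOUNTNUMBER, and the API ID are placeholders, and the resource path assumes the GET /users method created above):

aws iam put-role-policy \
    --role-name ADFS-Production \
    --policy-name InvokeUsersPing \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": "execute-api:Invoke",
        "Resource": "arn:aws:execute-api:REGION:ACCOUNTNUMBER:YOUR_REST_API_ID/*/GET/users"
      }]
    }'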

Final Thoughts

We hope these scenarios and sample code help you to not only begin to build comprehensive enterprise applications on AWS but also to enhance your understanding of different AuthN and AuthZ mechanisms. Consider some ways that you might be able to evolve this solution to meet the needs of your own customers and innovate in this space. For example:

  • Completing the CloudFront configuration and leveraging SSL termination for site identification. See if this can be incorporated into the Lambda processing pipeline.
  • Attaching a scope-down IAM policy if the business rules are matched. For example, the default role could be more permissive for a group, but if the user is a contractor (username with -C appended), they get extra restrictions applied when assumeRoleWithSaml is called in the ProcessSAML_awslabs_samldemo Lambda function.
  • Changing the time duration before credentials expire on a per-role basis. Perhaps if the SAMLResponse parsing determines the user is an Administrator, they get a longer duration (see the sketch after this list).
  • Passing through additional user claims in SAMLResponse for further logical decisions or auditing by adding more claim rules in the ADFS console. This could also be a mechanism to synchronize some Active Directory schema attributes with AWS services.
  • Granting different sets of credentials if a user has accounts with multiple SAML providers. While this tutorial was made with ADFS, you could also leverage it with other solutions such as Shibboleth and modify the ProcessSAML_awslabs_samldemo Lambda function to be aware of the different IdP ARN values. Perhaps your solution grants different IAM roles for the same user depending on if they initiated a login from Shibboleth rather than ADFS?
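
As a point of reference for the duration idea above, the relevant knob sits on the underlying STS call; an illustrative CLI equivalent of what the Lambda function does (the assertion file name and the 7200-second value are assumptions, and durations above the default require the role's maximum session duration to allow them):

aws sts assume-role-with-saml \
    --role-arn arn:aws:iam::ACCOUNTNUMBER:role/ADFS-Production \
    --principal-arn arn:aws:iam::ACCOUNTNUMBER:saml-provider/ADFS \
    --saml-assertion file://samlresponse.b64 \
    --duration-seconds 7200
# samlresponse.b64 holds the base64-encoded SAMLResponse captured from the IdP POST.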

The Lambda functions can be altered to take advantage of these options, which you can read more about here. For more information about ADFS claim rule language manipulation, see The Role of the Claim Rule Language on Microsoft TechNet.

We would love to hear feedback from our customers on these designs and see different secure application designs that you’re implementing on the AWS platform.

Community Profile: Alex Eames

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-alex-eames/

This column is from The MagPi issue 52. You can download a PDF of the full issue for free, or subscribe to receive the print edition in your mailbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve its charitable goals.

Alex purchased his first Raspberry Pi in May 2012, after a BBC article caught his eye. Already teaching ICT at his son’s school, he was drawn to the idea of a $35 computer to aid the education of his ten-year-old students.

Alex Eames

Alex is truly a member of the Raspberry Pi community, providing support and resources to those new to, and experienced in, the world of the Pi

Less than a month later, Alex started his website, RasPi.TV. The website allowed him to document his progress with the Raspberry Pi, and to curate an easy-to-use reference library for others.

“I found that when I wanted to learn something new, generally the ‘instructions’ on other Linux sites were either out of date or incomplete. I wanted a place where I could record procedures that I could use again, but that would also be available to others.”

Alex was determined to provide tutorials that worked first time, understanding the frustration for newcomers when their hard work didn’t always pay off. “It’s off-putting for people to follow a list of instructions, get it all right, and then find the process fails,” he says. RasPi.TV was all about “instructions that work first time – even if you’ve never done it before.”

Alex Eames Community Profile

The RasPi.TV website is packed full of tutorials, reviews, and videos, all of which have the aim of helping newcomers and seasoned Raspberry Pi users to expand their skill set and interests. Alex’s YouTube channel boasts more than 8,000 subscribers, with viewing figures of well over 1.5 million across his 121 videos.

In 2012, Alex began to build his own RasPiO boards, with the first releases making an appearance in March 2014. The GPIO labeller, Breakout, and Breakout Pro were successful across the community, earning an honourable mention on the official Raspberry Pi blog. The Pro has since been upgraded to the Pro HAT, while the labeller has been replaced with a newer 40-pin version. The RasPiO collection has now increased to ten different units, each available for direct purchase from the website. A few originally found their feet via successful crowdfunding campaigns.

Alex Eames Community Profile

The RasPiO family is a series of add-on boards, port labellers, GPIO rulers, and tools to aid makers in building with the Raspberry Pi. The ruler, for example, offers GPIO pin reference for easy identification, along with a code reference for using the GPIO Zero library.

Even if you’ve yet to visit either RasPi.TV or Alex’s YouTube channel, the chances are that you’ve seen one aspect of his online contribution to the Raspberry Pi Community. Alex maintains a Raspberry Pi ‘family photo’ on his website, showcasing every model built across the years. It’s a picture that often does the rounds of blogs, news articles, and social media.

Raspberry Pi Family Photo 2017

Updated 28th Feb 2017 to include the newly released Raspberry Pi Zero W

Outside of his life of Pi, Alex has a background in analytical chemistry, a profession that certainly explains his desire for the clean, precise, and well-tested tutorials that brought about the creation of RasPi.TV. From working as a translator to writing his own e-books, Alex is definitely well suited to the maker life, moving on from his past life of pharmaceutical development.

Duinocam designed by Alex Eames

The Duinocam is set up in Alex’s home in Poland. During daylight hours, it emails him photos and temperature data while also responding to tweeted commands such as video capture and upload. Using a Pi Model B, a RasPiO Duino, a Camera Module, and two servos, the unit can pan and tilt to survey the area.

His tutorial and review videos on YouTube reach viewing figures in the thousands, with his popular Raspberry Pi DSI Display Launch video garnering close to 300,000 views at the time of writing this article. While Alex has updated us on his newest unreleased projects and plans, we’ll keep them quiet for now. You’ll have to watch the RasPi.TV website for details.

Note – Since writing this article, Alex has continued his work, producing new content to support the Raspberry Pi Zero W, while also releasing his newest crowdfunding campaign, RasPiO InsPiRing.

The post Community Profile: Alex Eames appeared first on Raspberry Pi.

Music Industry Wants Piracy Filters, No Takedown Whack-a-Mole

Post Syndicated from Ernesto original https://torrentfreak.com/music-industry-wants-piracy-filters-no-takedown-whack-a-mole-170222/

Signed into law nearly twenty years ago, the DMCA is one of the best known pieces of Internet related legislation.

The law provides a safe harbor for Internet services, shielding them from copyright infringement liability as long as they process takedown notices and deal with repeat infringers.

In recent years, however, various parties have complained about shortcomings and abuse of the system. On the one hand, rightsholders believe that the law doesn’t do enough to protect creators, while the opposing side warns of increased censorship and abuse.

To address these concerns, the U.S. Copyright Office is currently running an extended public consultation.

This week a new round of comments was submitted, including a detailed response from a coalition of music industry groups such as the RIAA, National Music Publishers’ Association, and SoundExchange. When it comes to their views of the DMCA, the music groups are very clear: it’s failing.

The music groups note that they are currently required to police the entire Internet in search of infringing links and files, which they then have to take down one at a time. This doesn’t work, they argue.

They say that the present situation forces rightsholders to participate in a never-ending whack-a-mole game which doesn’t fix the underlying problem. Instead, it results in a “frustrating, burdensome and ultimately ineffective takedown process.”

“…as numerous copyright owners point out in their comments, the notice and takedown system as currently configured results in an endless game of whack-a-mole, with infringing content that is removed from a site one moment reposted to the same site and other sites moments later, to be repeated ad infinitem.”

Instead of leaving all the work up to copyright holders, the music groups want Internet services to filter out and block infringing content proactively. With the use of automated hash filtering tools, for example.

“One possible solution to this problem would be to require that, once a service provider receives a takedown notice with respect to a given work, the service provider use automated content identification technology to prevent the same work from being uploaded in the future,” the groups write.

“Automated content identification technologies are one important type of standard technical measure that should be adopted across the industry, and at a minimum by service providers who give the public access to large amounts of works uploaded by users.”

These anti-piracy filters are already in use by some companies and are relatively cheap to implement, even for relatively smaller services, the music groups note.

The whack-a-mole problem doesn’t only apply to hosting providers but also to search engines, the music groups complain.

While companies such as Google remove links to infringing material upon request, these links often reappear under a different URL. At the same time, pirate sites often appear before legitimate services in search results. A fix for this problem would be to stop indexing known pirate sites completely.

“One possible solution would be to require search engines to de-index structurally infringing sites that are the subject of a large number of takedown notices,” the groups recommend.

Ideally, they want copyright holders and Internet services to reach a voluntary agreement on how to filter pirated content. This could be similar to YouTube’s Content-ID system, or the hash filtering mechanisms Dropbox and Google Drive employ, for example.

If service providers are not interested in helping out, however, the music industry says new legislation might be needed to give them a push.

“The Music Community stands ready to work with service providers and other copyright owners on the development and implementation of standard technical measures and voluntary measures. However, to the extent such measures are not forthcoming, legislative solutions will be necessary to restore the balance Congress intended,” the recommendation reads.

Interestingly, this collaborative stance doesn’t appear to apply to all parties. File-hosting service 4Shared previously informed TorrentFreak that several prominent music groups have shown little interest in their voluntary piracy fingerprint tool.

The notion of piracy filters isn’t new. A few months ago the European Commission released its proposal to modernize the EU’s copyright law, under which online services will also be required to install mandatory piracy filters.

Whether the U.S. Government will follow suit has yet to be seen. In any case, rightsholders are likely to keep lobbying for change until they see significant improvements.

The full submission of the music groups is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Raspberry Pi Zero PiE-Ink Name Badge

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/raspberry-pi-zero-pie-ink-name-badge/

Gone, it would seem, are the days of ‘Hello, My name is…’ stickers and Sharpies. Who wants a simple sticker on their chest, so flat and dull, when they can wear an entire computer, displaying their name and face in pixelated perfection?

PiE-Ink Name Badge

With this PiE-Ink Name Badge, maker Josh King has taken this simple means of identification and upgraded it. And in his Instructables tutorial, he explains exactly how. But here’s the TL;DR for those wanting to get the basic gist of the build.

Josh King e-ink name badge Raspberry Pi

For the badge, Josh uses a Raspberry Pi Zero, a PaPiRus 2″ e-ink HAT, an Adafruit Powerboost 1000c, and a LiPo battery. He also uses various other components, such as magnets and adhesive putty.

Josh prepped the Zero, soldering the header pins in place, and then attached the Powerboost, allowing the LiPo battery to power the unit and be charged at the same time.

Josh King e-ink name badge Raspberry Pi

From there, he attaches the PaPiRus HAT and secures the whole thing with the putty, to ensure a snug fit. He also attaches a mini slide switch to allow an on/off function.

Josh King e-ink name badge Raspberry Pi

Having pre-installed Raspbian on the SD card, Josh follows the setup for the PaPiRus, ensuring all library information is in place and that the Pi recognises the 2″ screen. The code for the badge can then be downloaded directly from Josh’s GitHub account.  You’ll need to scale your image down to 200×96 in order for it to fit on the e-ink screen.
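
As a rough illustration of that resizing step and of pushing the result to the panel (papirus-draw is assumed here to be one of the helper scripts installed with the PaPiRus software; the file names are placeholders):

convert face.jpg -resize '200x96!' badge.png    # ImageMagick: force the image to the 2-inch panel's 200x96 resolution
papirus-draw badge.png                          # draw the resized image on the PaPiRus display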

Josh King e-ink name badge Raspberry Pi

And there you have it. One Raspberry Pi Zero e-ink name badge, ready for you to show off at the next work function, conference, or when you visit Grandma and she still can’t get your name right.

The post Raspberry Pi Zero PiE-Ink Name Badge appeared first on Raspberry Pi.

hashID – Identify Different Types of Hashes

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/5LdJ3ibqZSc/

hashID is a tool to help you identify different types of hashes used to encrypt data, especially passwords. It’s written in Python 3 and supports the identification of over 220 unique hash types using regular expressions. hashID is able to identify a single hash, parse a file or read multiple files in a directory and […]
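
hashID is driven from the command line; a quick illustrative session (the hash and file name are placeholders, and the tool is installed here from PyPI):

pip3 install hashid
hashid '5f4dcc3b5aa765d61d8327deb882cf99'    # identify a single hash
hashid -m hashes.txt                         # parse a file of hashes, including hashcat mode numbers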

The post hashID…

Read the full post at darknet.org.uk

De-Anonymizing Browser History Using Social-Network Data

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/de-anonymizing_1.html

Interesting research: “De-anonymizing Web Browsing Data with Social Networks“:

Abstract: Can online trackers and network adversaries de-anonymize web browsing data readily available to them? We show — theoretically, via simulation, and through experiments on real user data — that de-identified web browsing histories can be linked to social media profiles using only publicly available data. Our approach is based on a simple observation: each person has a distinctive social network, and thus the set of links appearing in one’s feed is unique. Assuming users visit links in their feed with higher probability than a random user, browsing histories contain tell-tale marks of identity. We formalize this intuition by specifying a model of web browsing behavior and then deriving the maximum likelihood estimate of a user’s social profile. We evaluate this strategy on simulated browsing histories, and show that given a history with 30 links originating from Twitter, we can deduce the corresponding Twitter profile more than 50% of the time. To gauge the real-world effectiveness of this approach, we recruited nearly 400 people to donate their web browsing histories, and we were able to correctly identify more than 70% of them. We further show that several online trackers are embedded on sufficiently many websites to carry out this attack with high accuracy. Our theoretical contribution applies to any type of transactional data and is robust to noisy observations, generalizing a wide range of previous de-anonymization attacks. Finally, since our attack attempts to find the correct Twitter profile out of over 300 million candidates, it is — to our knowledge — the largest scale demonstrated de-anonymization to date.