Tag Archives: keys

The 10 Most Viewed Security-Related AWS Knowledge Center Articles and Videos for November 2017

Post Syndicated from Maggie Burke original https://aws.amazon.com/blogs/security/the-10-most-viewed-security-related-aws-knowledge-center-articles-and-videos-for-november-2017/

AWS Knowledge Center image

The AWS Knowledge Center helps answer the questions most frequently asked by AWS Support customers. The following 10 Knowledge Center security articles and videos have been the most viewed this month. It’s likely you’ve wondered about a few of these topics yourself, so here’s a chance to learn the answers!

  1. How do I create an AWS Identity and Access Management (IAM) policy to restrict access for an IAM user, group, or role to a particular Amazon Virtual Private Cloud (VPC)?
    Learn how to apply a custom IAM policy to restrict IAM user, group, or role permissions for creating and managing Amazon EC2 instances in a specified VPC.
  2. How do I use an MFA token to authenticate access to my AWS resources through the AWS CLI?
    One IAM best practice is to protect your account and its resources by using a multi-factor authentication (MFA) device. If you plan to use the AWS Command Line Interface (AWS CLI) while using an MFA device, you must create a temporary session token.
  3. Can I restrict an IAM user’s EC2 access to specific resources?
    This article demonstrates how to link multiple AWS accounts through AWS Organizations and isolate IAM user groups in their own accounts.
  4. I didn’t receive a validation email for the SSL certificate I requested through AWS Certificate Manager (ACM)—where is it?
    Can’t find your ACM validation emails? Be sure to check the email address to which you requested that ACM send validation emails.
  5. How do I create an IAM policy that has a source IP restriction but still allows users to switch roles in the AWS Management Console?
    Learn how to write an IAM policy that not only includes a source IP restriction but also lets your users switch roles in the console.
  6. How do I allow users from another account to access resources in my account through IAM?
    If you have the 12-digit account number and permissions to create and edit IAM roles and users for both accounts, you can permit specific IAM users to access resources in your account.
  7. What are the differences between a service control policy (SCP) and an IAM policy?
    Learn how to distinguish an SCP from an IAM policy.
  8. How do I share my customer master keys (CMKs) across multiple AWS accounts?
    To grant another account access to your CMKs, create an IAM policy on the secondary account that grants access to use your CMKs.
  9. How do I set up AWS Trusted Advisor notifications?
    Learn how to receive free weekly email notifications from Trusted Advisor.
  10. How do I use AWS Key Management Service (AWS KMS) encryption context to protect the integrity of encrypted data?
    Encryption context name-value pairs used with AWS KMS encryption and decryption operations provide a method for checking ciphertext authenticity. Learn how to use encryption context to help protect your encrypted data (a brief sketch follows this list).
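
As a quick illustration of item 10, the following boto3 sketch shows the idea: the same encryption context supplied on Encrypt must be supplied again on Decrypt, or the call fails. The key alias and context values are made-up placeholders, not anything defined in the article.

import boto3

kms = boto3.client('kms')

# Placeholder CMK alias and context values -- substitute your own.
key_id = 'alias/my-example-key'
context = {'department': 'finance', 'purpose': 'payroll'}

# Encrypt with an encryption context...
encrypted = kms.encrypt(
    KeyId=key_id,
    Plaintext=b'sensitive data',
    EncryptionContext=context,
)

# ...and supply the identical context on decrypt; a mismatched context raises an error.
decrypted = kms.decrypt(
    CiphertextBlob=encrypted['CiphertextBlob'],
    EncryptionContext=context,
)
print(decrypted['Plaintext'])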

The AWS Security Blog will publish an updated version of this list regularly going forward. You also can subscribe to the AWS Knowledge Center Videos playlist on YouTube.

– Maggie

Your Holiday Cybersecurity Guide

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/11/your-holiday-cybersecurity-guide.html

Many of us are visiting parents/relatives this Thanksgiving/Christmas, and will have an opportunity to help them with cybersecurity issues. I thought I’d write up a quick guide of the most important things.

1. Stop them from reusing passwords

By far the biggest threat to average people is that they re-use the same password across many websites, so that when one website gets hacked, all their accounts get hacked.
To demonstrate the problem, go to haveibeenpwned.com and enter your relatives’ email addresses. This will show them a number of sites where their passwords have already been stolen, like LinkedIn, Adobe, etc. That should convince them of the severity of the problem.

They don’t need a separate password for every site. For the majority of websites, you don’t care whether the account gets hacked. Use a common password for all the meaningless sites. You only need unique passwords for important accounts, like email, Facebook, and Twitter.

Write down passwords and store them in a safe place. Sure, it’s a common joke that people in offices write passwords on Post-It notes stuck on their monitors or under their keyboards. This is a common security mistake, but that’s only because the office environment is widely accessible. Your home isn’t, and there’s plenty of places to store written passwords securely, such as in a home safe. Even if it’s just a desk drawer, such passwords are safe from hackers, because they aren’t on a computer.

Write them down, with pen and paper. Don’t put them in a MyPasswords.doc, because when a hacker breaks in, they’ll easily find that document and easily hack your accounts.

You might help them out with getting a password manager, or two-factor authentication (2FA). Good 2FA like YubiKey will stop a lot of phishing threats. But this is difficult technology to learn, and of course, you’ll be on the hook for support issues, such as when they lose the device. Thus, while 2FA is best, I’m only recommending pen-and-paper to store passwords. (AccessNow has a guide, though I think YubiKey/U2F keys for Facebook and GMail are the best).

2. Lock their phone (passcode, fingerprint, faceprint)
You’ll lose your phone at some point. It holds the keys to all your accounts, like email and so on. With your email, phone thieves can then reset the passwords on all your other accounts. Thus, it’s incredibly important to lock the phone.

Apple has made this especially easy with fingerprints (and now faceprints), so there’s little excuse not to lock the phone.

Note that Apple iPhones are the most secure. I give my mother my old iPhones so that they will have something secure.

My mom demonstrates a problem you’ll have with the older generation: she doesn’t reliably have her phone with her, or keep it charged. She’s the opposite of my dad, who was slavishly attached to his phone. Even a small added hassle like locking her phone makes it even more likely she won’t have it with her when you need to call her.

3. WiFi (WPA)
Make sure their home WiFi is WPA encrypted. It probably already is, but it’s worthwhile checking.

The password should be written down on the same piece of paper as all the other passwords. This is important. My parents just moved, Comcast installed a WiFi access point for them, and they promptly lost the piece of paper. When I wanted to debug something on their network today, they didn’t know the password and couldn’t find the paper. Get that password written down in a place it won’t get lost!

Discourage them from extra security features like “SSID hiding” and/or “MAC address filtering”. They provide no security benefit, and actually make security worse. SSID hiding means a phone has to advertise the SSID when away from home, and MAC filtering makes MAC address randomization harder, both of which allow you to be tracked.

If they have a really old home router, you should probably replace it, or at least update the firmware. A lot of old routers have known vulnerabilities that allow hackers (like me, masscanning the Internet) to easily break in.

4. Ad blockers or Brave

Most of the online tricks that will confuse your older parents will come via advertising, such as popups claiming “You are infected with a virus, click here to clean it”. Installing an ad blocker in the browser, such as uBlock Origin, stops almost all of this nonsense.

For example, here’s a screenshot of going to the “Speedtest” website to test the speed of my connection (I took this on the plane on the way home for Thanksgiving). Ignore the error (the plane’s firewall blocked Speedtest) — instead, look at the advertising banner across the top of the page insisting you need to download a browser extension. This is tricking you into installing malware — the ad appears as if it’s a message from Speedtest, but it’s not. Speedtest is just selling advertising and has no clue what the banner says. This sort of thing needs to be blocked — it fools even the technologically competent.

uBlock Origin for Chrome is the one I use. Another option is to replace their browser with Brave, a browser that blocks ads, but at the same time, allows micropayments to support websites you want to support. I use Brave on my iPhone.
A side benefit of ad blockers or Brave is that web surfing becomes much faster, since you aren’t downloading all this advertising. The smallest NYtimes story is 15 megabytes in size due to all the advertisements, for example.

5. Cloud Backups
Do backups, in the cloud. It’s a good idea in general, especially with the threat of ransomware these days.

In particular, consider your photos. Over time, they will be lost, because people make no effort to keep track of them. All hard drives will eventually crash, deleting your photos. Sure, a few key ones are backed up on Facebook for life, but the rest aren’t.
There are so many excellent online backup services out there, like DropBox and Backblaze. Or, you can use the iCloud feature that Apple provides. My favorite is Microsoft’s: I already pay $99 a year for an Office 365 subscription, and it comes with 1 terabyte of online storage.

6. Separate email accounts
You should have three email accounts: work, personal, and financial.

First, you really need to separate your work account from your personal one. The IT department already sees misdirected emails with your spouse/lover that they don’t want to see. Any conflict with your work, such as getting fired, gives your private correspondence to their lawyers.

Second, you need a wholly separate account for financial stuff, like Amazon.com, your bank, PayPal, and so on. That prevents confusion with phishing attacks.

Consider this warning today:

If you had split accounts, you could safely ignore this. The USPS would only have your financial email address, which gets no phishing attacks because it’s not widely known. When you receive the phishing attack on your personal email, you ignore it, because you know the USPS doesn’t have your personal email address.

Phishing emails are so sophisticated that even experts can’t tell the difference. Splitting financial from personal emails makes it so you don’t have to tell the difference — anything financial sent to personal email can safely be ignored.

7. Deauth those apps!

Twitter user @tompcoleman comments that we also need to deauth apps.
Social media sites like Facebook, Twitter, and Google encourage you to enable “apps” that work with their platforms, often demanding privileges to generate messages on your behalf. The typical scenario is that you use them only once or twice and forget about them.
A lot of them are hostile. For example, my niece’s twitter account would occasionally send out advertisements, and she didn’t know why. It’s because a long time ago, she enabled an app with the permission to send tweets for her. I had to sit down and get rid of most of her apps.
Now would be a good time to go through your relatives’ Facebook, Twitter, and Google/GMail accounts and disable those apps. Don’t be afraid to be ruthless — they probably weren’t using them anyway. Some will still be necessary. For example, Twitter for iPhone shows up in the list of Twitter apps. The URL for editing these apps for Twitter is https://twitter.com/settings/applications. Google’s link is here (thanks @spextr). I don’t know of simple URLs for Facebook, but you should find it somewhere under privacy/security settings.
Update: Here’s a more complete guide covering even more social media services.
https://www.permissions.review/

8. Up-to-date software? maybe

I put this last because it can be so much work.

You should install the latest OS (Windows 10, macOS High Sierra), and also turn on automatic patching.

But remember it may not be worth the huge effort involved. I want my parents to be secure — but not so secure that I have to deal with issues.

For example, when my parents updated their HP Print software, the icon on the desktop that my mom usually uses to scan things in from the printer disappeared, and I needed to spend 15 minutes with her helping her find the new way to access the software.
However, I did get my mom a new netbook to travel with instead of the old WinXP one. I want to get her a Chromebook, but she doesn’t want one.
For iOS, you can probably make sure their phones have the latest version without having these usability problems.

Conclusion

You can’t solve every problem for your relatives, but these are the more critical ones.

New – Interactive AWS Cost Explorer API

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-interactive-aws-cost-explorer-api/

We launched the AWS Cost Explorer a couple of years ago in order to allow you to track, allocate, and manage your AWS costs. The response to that launch, and to additions that we have made since then, has been very positive. However our customers are, as Jeff Bezos has said, “beautifully, wonderfully, dissatisfied.”

I see this first-hand every day. We launch something and that launch inspires our customers to ask for even more. For example, with many customers going all-in and moving large parts of their IT infrastructure to the AWS Cloud, we’ve had many requests for the raw data that feeds into the Cost Explorer. These customers want to programmatically explore their AWS costs, update ledgers and accounting systems with per-application and per-department costs, and to build high-level dashboards that summarize spending. Some of these customers have been going to the trouble of extracting the data from the charts and reports provided by Cost Explorer!

New Cost Explorer API
Today we are making the underlying data that feeds into Cost Explorer available programmatically. The new Cost Explorer API gives you a set of functions that allow you to do everything that I described above. You can retrieve cost and usage data that is filtered and grouped across multiple dimensions (Service, Linked Account, tag, Availability Zone, and so forth), aggregated by day or by month. This gives you the power to start simple (total monthly costs) and to refine your requests to any desired level of detail (writes to DynamoDB tables that have been tagged as production) while getting responses in seconds.

Here are the operations:

GetCostAndUsage – Retrieve cost and usage metrics for a single account or all accounts (master accounts in an organization have access to all member accounts) with filtering and grouping.

GetDimensionValues – Retrieve available filter values for a specified filter over a specified period of time.

GetTags – Retrieve available tag keys and tag values over a specified period of time.

GetReservationUtilization – Retrieve EC2 Reserved Instance utilization over a specified period of time, with daily or monthly granularity plus filtering and grouping.
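
For example, here is a minimal boto3 sketch of a GetCostAndUsage call that groups monthly costs by service. The time period and metric are placeholders; adjust them to your account and reporting needs.

import boto3

ce = boto3.client('ce')

response = ce.get_cost_and_usage(
    TimePeriod={'Start': '2017-11-01', 'End': '2017-12-01'},
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
    GroupBy=[{'Type': 'DIMENSION', 'Key': 'SERVICE'}],
)

# Print the cost per service for the requested month.
for period in response['ResultsByTime']:
    for group in period['Groups']:
        print(group['Keys'][0], group['Metrics']['UnblendedCost']['Amount'])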

I believe that these functions, and the data that they return, will give you the ability to do some really interesting things and gain better insights into your business. For example, you could tag the resources used to support individual marketing campaigns or development projects and then deep-dive into the costs to measure business value. You now have the potential to know, down to the penny, how much you spend on infrastructure for important events like Cyber Monday or Black Friday.

Things to Know
Here are a couple of things to keep in mind as you start to think about ways to make use of the API:

Grouping – The Cost Explorer web application provides you with one level of grouping; the APIs give you two. For example you could group costs or RI utilization by Service and then by Region.

Pagination – The functions can return very large amounts of data and follow the AWS-wide model for pagination by including a nextPageToken if additional data is available. You simply call the same function again, supplying the token, to move forward.
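
In boto3, the token surfaces as NextPageToken; a simple pagination loop looks something like the following sketch (the dates are placeholders):

import boto3

ce = boto3.client('ce')

params = {
    'TimePeriod': {'Start': '2017-11-01', 'End': '2017-12-01'},
    'Granularity': 'DAILY',
    'Metrics': ['UnblendedCost'],
}

results = []
while True:
    response = ce.get_cost_and_usage(**params)
    results.extend(response['ResultsByTime'])
    token = response.get('NextPageToken')
    if not token:
        break
    # Pass the token back to retrieve the next page.
    params['NextPageToken'] = token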

Regions – The service endpoint is in the US East (Northern Virginia) Region and returns usage data for all public AWS Regions.

Pricing – Each API call costs $0.01. To put this into perspective, let’s say you use this API to build a dashboard and it gets 1000 hits per month from your users. Your operating cost for the dashboard should be $10 or so; this is far less expensive than setting up your own systems to extract & ingest the data and respond to interactive queries.

The Cost Explorer API is available now and you can start using it today. To learn more, read about the Cost Explorer API.

Jeff;

Vulnerability in Amazon Key

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/vulnerability_i.html

Amazon Key is an IoT door lock that can enable one-time access codes for delivery people. To further secure that system, Amazon sells Cloud Cam, a camera that watches the door to ensure that delivery people don’t abuse their one-time access privilege.

Cloud Cam has been hacked:

But now security researchers have demonstrated that with a simple program run from any computer in Wi-Fi range, that camera can be not only disabled but frozen. A viewer watching its live or recorded stream sees only a closed door, even as their actual door is opened and someone slips inside. That attack would potentially enable rogue delivery people to stealthily steal from Amazon customers, or otherwise invade their inner sanctum.

And while the threat of a camera-hacking courier seems an unlikely way for your house to be burgled, the researchers argue it potentially strips away a key safeguard in Amazon’s security system.

Amazon is patching the system.

How to Recover From Ransomware

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/complete-guide-ransomware/

Here’s the scenario. You’re working on your computer and you notice that it seems slower. Or perhaps you can’t access document or media files that were previously available.

You might be getting error messages from Windows telling you that a file is of an “Unknown file type” or “Windows can’t open this file.”

Windows error message

If you’re on a Mac, you might see the message “No associated application,” or “There is no application set to open the document.”

MacOS error message

Another possibility is that you’re completely locked out of your system. If you’re in an office, you might be looking around and seeing that other people are experiencing the same problem. Some are already locked out, and others are just now wondering what’s going on, just as you are.

Then you see a message confirming your fears.

wana decrypt0r ransomware message

You’ve been infected with ransomware.

You’ll have lots of company this year. The number of ransomware attacks on businesses tripled in the past year, jumping from one attack every two minutes in Q1 to one every 40 seconds by Q3. There were over four times more new ransomware variants in the first quarter of 2017 than in the first quarter of 2016, and damages from ransomware are expected to exceed $5 billion this year.

Growth in Ransomware Variants Since December 2015

Source: Proofpoint Q1 2017 Quarterly Threat Report

This past summer, our local PBS and NPR station in San Francisco, KQED, was debilitated for weeks by a ransomware attack that forced them to go back to working the way they used to prior to computers. Five months have passed since the attack and they’re still recovering and trying to figure out how to prevent it from happening again.

How Does Ransomware Work?

Ransomware typically spreads via spam or phishing emails, but also through websites or drive-by downloads, to infect an endpoint and penetrate the network. Once in place, the ransomware then locks all files it can access using strong encryption. Finally, the malware demands a ransom (typically payable in bitcoins) to decrypt the files and restore full operations to the affected IT systems.

Encrypting ransomware or “cryptoware” is by far the most common recent variety of ransomware. Other types that might be encountered are:

  • Non-encrypting ransomware or lock screens (restricts access to files and data, but does not encrypt them)
  • Ransomware that encrypts the Master Boot Record (MBR) of a drive or Microsoft’s NTFS, which prevents victims’ computers from being booted up in a live OS environment
  • Leakware or extortionware (exfiltrates data that the attackers threaten to release if ransom is not paid)
  • Mobile Device Ransomware (infects cell-phones through “drive-by downloads” or fake apps)

The typical steps in a ransomware attack are:

1. Infection
After it has been delivered to the system via email attachment, phishing email, infected application or other method, the ransomware installs itself on the endpoint and any network devices it can access.

2. Secure Key Exchange
The ransomware contacts the command and control server operated by the cybercriminals behind the attack to generate the cryptographic keys to be used on the local system.

3. Encryption
The ransomware starts encrypting any files it can find on local machines and the network.

4. Extortion
With the encryption work done, the ransomware displays instructions for extortion and ransom payment, threatening destruction of data if payment is not made.

5. Unlocking
Organizations can either pay the ransom and hope for the cybercriminals to actually decrypt the affected files (which in many cases does not happen), or they can attempt recovery by removing infected files and systems from the network and restoring data from clean backups.

Who Gets Attacked?

Ransomware attacks target firms of all sizes — 5% or more of businesses in the top 10 industry sectors have been attacked — and no size of business, from SMBs to enterprises, is immune. Attacks are on the rise in every sector and in every size of business.

Recent attacks, such as WannaCry earlier this year, mainly affected systems outside of the United States. Hundreds of thousands of computers were infected from Taiwan to the United Kingdom, where it crippled the National Health Service.

The US has not been so lucky in other attacks, though. The US ranks the highest in the number of ransomware attacks, followed by Germany and then France. Windows computers are the main targets, but ransomware strains exist for Macintosh and Linux, as well.

The unfortunate truth is that ransomware has become so widespread that for most companies it is a certainty that they will be exposed to some degree to a ransomware or malware attack. The best they can do is to be prepared and understand the best ways to minimize the impact of ransomware.

“Ransomware is more about manipulating vulnerabilities in human psychology than the adversary’s technological sophistication.” — James Scott, expert in Artificial Intelligence

Phishing emails, malicious email attachments, and visiting compromised websites have been common vehicles of infection (we wrote about protecting against phishing recently), but other methods have become more common in past months. Weaknesses in Microsoft’s Server Message Block (SMB) and Remote Desktop Protocol (RDP) have allowed cryptoworms to spread. Desktop applications — in one case an accounting package — and even Microsoft Office (Microsoft’s Dynamic Data Exchange — DDE) have been the agents of infection.

Recent ransomware strains such as Petya, CryptoLocker, and WannaCry have incorporated worms to spread themselves across networks, earning the nickname, “cryptoworms.”

How to Defeat Ransomware

1. Isolate the Infection
Prevent the infection from spreading by separating all infected computers from each other, shared storage, and the network.

2. Identify the Infection
From messages, evidence on the computer, and identification tools, determine which malware strain you are dealing with.

3. Report
Report to the authorities to support and coordinate measures to counter attacks.

4. Determine Your Options
You have a number of ways to deal with the infection. Determine which approach is best for you.

5. Restore and Refresh
Use safe backups and program and software sources to restore your computer or outfit a new platform.

6. Plan to Prevent Recurrence
Make an assessment of how the infection occurred and what you can do to put measures into place that will prevent it from happening again.

1 — Isolate the Infection

The rate and speed of ransomware detection is critical in combating fast moving attacks before they succeed in spreading across networks and encrypting vital data.

The first thing to do when a computer is suspected of being infected is to isolate it from other computers and storage devices. Disconnect it from the network (both wired and Wi-Fi) and from any external storage devices. Cryptoworms actively seek out connections and other computers, so you want to prevent that happening. You also don’t want the ransomware communicating across the network with its command and control center.

Be aware that there may be more than just one patient zero, meaning that the ransomware may have entered your organization or home through multiple computers, or may be dormant and not yet shown itself on some systems. Treat all connected and networked computers with suspicion and apply measures to ensure that all systems are not infected.

This Week in Tech (TWiT.tv) did a videocast showing what happens when WannaCry is released on an isolated system and encrypts files and tries to spread itself to other computers. It’s a great lesson on how these types of cryptoworms operate.

2 — Identify the Infection

Most often the ransomware will identify itself when it asks for ransom. There are numerous sites that help you identify the ransomware, including ID Ransomware. The No More Ransom! Project provides the Crypto Sheriff to help identify ransomware.

Identifying the ransomware will help you understand what type of ransomware you have, how it propagates, what types of files it encrypts, and maybe what your options are for removal and disinfection. It also will enable you to report the attack to the authorities, which is recommended.

wanna decryptor 2.0 ransomware message

WannaCry Ransomware Extortion Dialog

3 — Report to the Authorities

You’ll be doing everyone a favor by reporting all ransomware attacks to the authorities. The FBI urges ransomware victims to report ransomware incidents regardless of the outcome. Victim reporting provides law enforcement with a greater understanding of the threat, provides justification for ransomware investigations, and contributes relevant information to ongoing ransomware cases. Knowing more about victims and their experiences with ransomware will help the FBI to determine who is behind the attacks and how they are identifying or targeting victims.

You can file a report with the FBI at the Internet Crime Complaint Center.

There are other ways to report ransomware, as well.

4 — Determine Your Options

Your options when infected with ransomware are:

  1. Pay the ransom
  2. Try to remove the malware
  3. Wipe the system(s) and reinstall from scratch

It’s generally considered a bad idea to pay the ransom. Paying the ransom encourages more ransomware, and in most cases the unlocking of the encrypted files is not successful.

In a recent survey, more than three-quarters of respondents said their organization is not at all likely to pay the ransom in order to recover their data (77%). Only a small minority said they were willing to pay some ransom (3% of companies have already set up a Bitcoin account in preparation).

Even if you decide to pay, it’s very possible you won’t get back your data.

5 — Restore or Start Fresh

You have the choice of trying to remove the malware from your systems or wiping your systems and reinstalling from safe backups and clean OS and application sources.

Get Rid of the Infection

There are internet sites and software packages that claim to be able to remove ransomware from systems. The No More Ransom! Project is one. Other options can be found, as well.

Whether you can successfully and completely remove an infection is up for debate. A working decryptor doesn’t exist for every known ransomware, and unfortunately it’s true that the newer the ransomware, the more sophisticated it’s likely to be and the less likely it is that a decryptor has yet been created.

It’s Best to Wipe All Systems Completely

The surest way of being certain that malware or ransomware has been removed from a system is to do a complete wipe of all storage devices and reinstall everything from scratch. If you’ve been following a sound backup strategy, you should have copies of all your documents, media, and important files right up to the time of the infection.

Be sure to determine as well as you can from file dates and other information what was the date of infection. Consider that an infection might have been dormant in your system for a while before it activated and made significant changes to your system. Identifying and learning about the particular malware that attacked your systems will enable you to understand how that malware operates and what your best strategy should be for restoring your systems.

Backblaze Backup enables you to go back in time and specify the date prior to which you wish to restore files. That date should precede the date your system was infected.

Choose files to restore from earlier date in Backblaze Backup

If you’ve been following a good backup policy with both local and off-site backups, you should be able to use backup copies that you are sure were not connected to your network after the time of attack and hence protected from infection. Backup drives that were completely disconnected should be safe, as are files stored in the cloud, as with Backblaze Backup.

System Restores Are Not the Best Strategy for Dealing with Ransomware and Malware

You might be tempted to use a System Restore point to get your system back up and running. System Restore is not a good solution for removing viruses or other malware. Since malicious software is typically buried within all kinds of places on a system, you can’t rely on System Restore being able to root out all parts of the malware. Instead, you should rely on a quality virus scanner that you keep up to date. Also, System Restore does not save old copies of your personal files as part of its snapshot. It also will not delete or replace any of your personal files when you perform a restoration, so don’t count on System Restore as working like a backup. You should always have a good backup procedure in place for all your personal files.

Local backups can be encrypted by ransomware. If your backup solution is local and connected to a computer that gets hit with ransomware, the chances are good your backups will be encrypted along with the rest of your data.

With a good backup solution that is isolated from your local computers, such as Backblaze Backup, you can easily obtain the files you need to get your system working again. You have the flexibility to determine which files to restore, from which date you want to restore, and how to obtain the files you need to restore your system.

Choose how to obtain your backup files

You’ll need to reinstall your OS and software applications from the source media or the internet. If you’ve been managing your account and software credentials in a sound manner, you should be able to reactivate accounts for applications that require it.

If you use a password manager, such as 1Password or LastPass, to store your account numbers, usernames, passwords, and other essential information, you can access that information through their web interface or mobile applications. You just need to be sure that you still know your master username and password to obtain access to these programs.

6 — How to Prevent a Ransomware Attack

“Ransomware is at an unprecedented level and requires international investigation.” — European police agency EuroPol

A ransomware attack can be devastating for a home or a business. Valuable and irreplaceable files can be lost and tens or even hundreds of hours of effort can be required to get rid of the infection and get systems working again.

Security experts suggest several precautionary measures for preventing a ransomware attack.

  1. Use anti-virus and anti-malware software or other security policies to block known payloads from launching.
  2. Make frequent, comprehensive backups of all important files and isolate them from local and open networks. In a recent survey, 74% of cybersecurity professionals viewed data backup and recovery as by far the most effective solution for responding to a successful ransomware attack.
  3. Keep offline backups of data stored in locations inaccessible from any potentially infected computer, such as external storage drives or the cloud, which prevents them from being accessed by the ransomware.
  4. Install the latest security updates issued by software vendors of your OS and applications. Remember to Patch Early and Patch Often to close known vulnerabilities in operating systems, browsers, and web plugins.
  5. Consider deploying security software to protect endpoints, email servers, and network systems from infection.
  6. Exercise cyber hygiene, such as using caution when opening email attachments and links.
  7. Segment your networks to keep critical computers isolated and to prevent the spread of malware in case of attack. Turn off unneeded network shares.
  8. Turn off admin rights for users who don’t require them. Give users the lowest system permissions they need to do their work.
  9. Restrict write permissions on file servers as much as possible.
  10. Educate yourself, your employees, and your family in best practices to keep malware out of your systems. Update everyone on the latest email phishing scams and social engineering aimed at turning victims into abettors.

It’s clear that the best way to respond to a ransomware attack is to avoid having one in the first place. Other than that, making sure your valuable data is backed up and unreachable by ransomware infection will ensure that your downtime and data loss will be minimal or avoided completely.

Have you endured a ransomware attack or have a strategy to avoid becoming a victim? Please let us know in the comments.

The post How to Recover From Ransomware appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Microsoft Sued Over ‘Baseless’ Piracy Threats

Post Syndicated from Ernesto original https://torrentfreak.com/microsoft-sued-over-baseless-piracy-threats-171113/

For many years, Microsoft and the Business Software Alliance (BSA) have carried out piracy investigations into organizations large and small.

Companies accused of using Microsoft software without permission usually get a letter asking them to pay up, or face legal consequences.

Rhode Island-based company Hanna Instruments is one of the most recent targets. The company stands accused of using Microsoft Office products without a proper license.

However, instead of Microsoft going after Hanna in court for copyright infringement, Hanna has filed a lawsuit against BSA and Microsoft asking for a declaratory judgment that it did nothing wrong.

The lawsuit is the result of a long back-and-forth that started in June. At the time, BSA’s lawyers sent Hanna a letter accusing it of using Microsoft products without a proper license, while requesting an audit.

Hanna’s management wasn’t aware of any pirated products but after repeated requests, the company decided to go ahead and conduct a thorough investigation. The results, combined in a detailed spreadsheet, showed that it purchased 126 copies of Microsoft Office software, while only 120 were in use.

Perfectly fine, they assumed, but the BSA was not convinced.

Since Hanna only had Microsoft generated key cards for the most recent purchases, the company used purchase orders, requisitions, and price quotes to prove that it properly licensed earlier copies of Microsoft Office. Not good enough, according to the BSA, which wanted to see money instead.

The BSA’s lawyers informed Hanna that the company would face up to $4,950,000 in damages if the case went to court. Instead, however, they offered to settle the matter for $72,074.

From the complaint

Hanna wasn’t planning to pay and pointed out that they sent in as much proof as they could find, documenting legal purchases of Microsoft Office licenses for a period covering more than ten years. While the BSA appreciated the effort, it didn’t accept this as hard evidence.

“…the provision of purchase orders, price quotes, purchase requisitions are not acceptable as valid proof of purchase to our client. Reason being, the aforesaid documents do not demonstrate that a purchase has taken place, they merely establish intent to make a purchase of software,” the BSA wrote in yet another email.

Interestingly, the BSA itself still failed to provide any solid proof that Hanna was using unlicensed software. The Rhode Island company repeatedly requested this, but the BSA simply replied that it’s neither appropriate nor efficient to request evidence from their clients in every case.

The BSA then went a step further and suggested that Microsoft did the company a favor by approaching it directly. The alternative would have been to call in the U.S. Marshals and raid the company’s headquarters.

“The rights holders had the alternative option of simply commencing litigation and seeking a court order permitting a raid by U.S. Marshals,” the BSA’s lawyers wrote in one of their letters.

This ‘threat’ wasn’t completely in vain. In the past, the BSA and Microsoft’s accusations have developed into fully-fledged raids, with armed law enforcement officials assisting the software vendor, taking away computers for further inspection.

Still, Hanna maintained that it didn’t do anything wrong. At this point, they’d spent $25,000 on disproving the BSA’s “baseless” claims, and saw no other option than to take the matter to court.

Late last week the company submitted a complaint against Microsoft and the BSA in a Rhode Island federal court, asking for a declaratory judgment and monetary compensation.

“To date, the Defendants have not provided any documentation supporting the baseless allegation that Hanna illegally copied Microsoft Office, in spite of repeated requests by Plaintiff’s counsel that BSA produce such information,” the complaint reads.

“By this Complaint, Hanna seeks a declaration by the Court that it has not infringed any Microsoft copyrights, that Hanna has been harmed by BSA’s relentless and unsupported charges, and that Defendants pay Hanna’s costs and expenses for this action, together with reasonable attorney fees, and any additional monetary award this Court deems appropriate.”

It’s now up to the court to decide who’s right and who’s wrong, but the case already provides a rare and intriguing insight into the anti-piracy practices of Microsoft and the BSA.

This isn’t the first time that one of these cases has gone to court. In Belgium, the BSA and Microsoft lost a similar case. Here, a local company was ordered to pay a settlement on the spot or lose its computers. With law enforcement at the ready, the owner decided to pay, despite owning valid licenses.

The full complaint is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Visualize AWS CloudTrail Logs using AWS Glue and Amazon QuickSight

Post Syndicated from Luis Caro Perez original https://aws.amazon.com/blogs/big-data/streamline-aws-cloudtrail-log-visualization-using-aws-glue-and-amazon-quicksight/

Being able to easily visualize AWS CloudTrail logs gives you a better understanding of how your AWS infrastructure is being used. It can also help you audit and review AWS API calls and detect security anomalies inside your AWS account. To do this, you must be able to perform analytics based on your CloudTrail logs.

In this post, I walk through using AWS Glue and AWS Lambda to convert AWS CloudTrail logs from JSON to a query-optimized format dataset in Amazon S3. I then use Amazon Athena and Amazon QuickSight to query and visualize the data.

Solution overview

To process CloudTrail logs, you must implement the following architecture:

CloudTrail delivers log files in an Amazon S3 bucket folder. To correctly crawl these logs, you modify the file contents and folder structure using an Amazon S3-triggered Lambda function that stores the transformed files in a single folder of an S3 bucket. When the files are in a single folder, AWS Glue scans the data, converts it into Apache Parquet format, and catalogs it to allow for querying and visualization using Amazon Athena and Amazon QuickSight.

Walkthrough

Let’s look at the steps that are required to build the solution.

Set up CloudTrail logs

First, you need to set up a trail that delivers log files to an S3 bucket. To create a trail in CloudTrail, follow the instructions in Creating a Trail.

When you finish, the trail settings page should look like the following screenshot:

In this example, I set up log files to be delivered to the cloudtraillfcaro bucket.

Consolidate CloudTrail reports into a single folder using Lambda

AWS CloudTrail delivers log files using the following folder structure inside the configured Amazon S3 bucket:

AWSLogs/ACCOUNTID/CloudTrail/REGION/YEAR/MONTH/DAY/filename.json.gz

Additionally, log files have the following structure:

{
    "Records": [{
        "eventVersion": "1.01",
        "userIdentity": {
            "type": "IAMUser",
            "principalId": "AIDAJDPLRKLG7UEXAMPLE",
            "arn": "arn:aws:iam::123456789012:user/Alice",
            "accountId": "123456789012",
            "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
            "userName": "Alice",
            "sessionContext": {
                "attributes": {
                    "mfaAuthenticated": "false",
                    "creationDate": "2014-03-18T14:29:23Z"
                }
            }
        },
        "eventTime": "2014-03-18T14:30:07Z",
        "eventSource": "cloudtrail.amazonaws.com",
        "eventName": "StartLogging",
        "awsRegion": "us-west-2",
        "sourceIPAddress": "72.21.198.64",
        "userAgent": "signin.amazonaws.com",
        "requestParameters": {
            "name": "Default"
        },
        "responseElements": null,
        "requestID": "cdc73f9d-aea9-11e3-9d5a-835b769c0d9c",
        "eventID": "3074414d-c626-42aa-984b-68ff152d6ab7"
    },
    ... additional entries ...
    ]
}

If AWS Glue crawlers are used to catalog these files as they are written, the following obstacles arise:

  1. AWS Glue identifies different tables per different folders because they don’t follow a traditional partition format.
  2. Based on the structure of the file content, AWS Glue identifies the tables as having a single column of type array.
  3. CloudTrail logs have JSON attributes that use uppercase letters. According to the Best Practices When Using Athena with AWS Glue, it is recommended that you convert these to lowercase.

To have AWS Glue catalog all log files in a single table with all the columns describing each event, implement the following Lambda function:

from __future__ import print_function
import json
import urllib
import boto3
import gzip

s3 = boto3.resource('s3')
client = boto3.client('s3')

def convertColumntoLowwerCaps(obj):
    for key in obj.keys():
        new_key = key.lower()
        if new_key != key:
            obj[new_key] = obj[key]
            del obj[key]
    return obj


def lambda_handler(event, context):

    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'].encode('utf8'))
    print(bucket)
    print(key)
    try:
        newKey = 'flatfiles/' + key.replace("/", "")
        client.download_file(bucket, key, '/tmp/file.json.gz')
        with gzip.open('/tmp/out.json.gz', 'w') as output, gzip.open('/tmp/file.json.gz', 'rb') as file:
            i = 0
            for line in file:
                for record in json.loads(line, object_hook=convertColumntoLowwerCaps)['records']:
                    if i != 0:
                        output.write("\n")
                    output.write(json.dumps(record))
                    i += 1
        client.upload_file('/tmp/out.json.gz', bucket,newKey)
        return "success"
    except Exception as e:
        print(e)
        print('Error processing object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e

The function goes over each element of the records array, changes uppercase letters to lowercase in column names, and writes each element of the array as a single line of a new file. The new file is saved in a flatfiles folder that the function creates at the top level of the S3 bucket, without any subfolders.

The function should have a role containing a policy with at least the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::cloudtraillfcaro/*",
                "arn:aws:s3:::cloudtraillfcaro"
            ],
            "Effect": "Allow"
        }
    ]
}

In this example, CloudTrail delivers logs to the cloudtraillfcaro bucket. Make sure that you replace this name with your bucket name in the policy. For more information about how to work with inline policies, see Working with Inline Policies.

After the Lambda function is created, you can set up the following trigger using the Triggers tab on the AWS Lambda console.

Choose Add trigger, and choose S3 as a source of the trigger.

After choosing the source, configure the following settings:

In the trigger, any file that is written to the path for the log files—which in this case is AWSLogs/119582755581/CloudTrail/—is processed. Make sure that the Enable trigger check box is selected and that the bucket and prefix parameters match your use case.
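
If you prefer to script the trigger rather than use the console, a rough boto3 equivalent is sketched below. The function ARN, account ID, and Region are placeholders; the bucket and prefix follow this walkthrough.

import boto3

bucket = 'cloudtraillfcaro'
function_arn = 'arn:aws:lambda:us-east-1:123456789012:function:cloudtrail-flatten'  # placeholder ARN

# Allow S3 to invoke the Lambda function...
boto3.client('lambda').add_permission(
    FunctionName=function_arn,
    StatementId='cloudtrail-s3-invoke',
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn='arn:aws:s3:::' + bucket,
)

# ...then notify the function whenever CloudTrail writes a new log object.
boto3.client('s3').put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': function_arn,
            'Events': ['s3:ObjectCreated:*'],
            'Filter': {'Key': {'FilterRules': [
                {'Name': 'prefix', 'Value': 'AWSLogs/'},
                {'Name': 'suffix', 'Value': '.json.gz'},
            ]}},
        }]
    },
)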

After you set up the function and receive log files, the bucket (in this case cloudtraillfcaro) should contain the processed files inside the flatfiles folder.

Catalog source data

Once the files are processed by the Lambda function, set up a crawler named cloudtrail to catalog them.

The crawler must point to the flatfiles folder.

All the crawlers and AWS Glue jobs created for this solution must have a role with the AWSGlueServiceRole managed policy and an inline policy with permissions to modify the S3 buckets used on the Lambda function. For more information, see Working with Managed Policies.

The role should look like the following:

In this example, the inline policy named s3perms contains the permissions to modify the S3 buckets.

After you choose the role, you can schedule the crawler to run on demand.

A new database is created, and the crawler is set to use it. In this case, the cloudtrail database is used for all the tables.
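
If you would rather define the crawler programmatically, a minimal boto3 sketch looks like the following; the role name is a placeholder, and the bucket matches this example.

import boto3

glue = boto3.client('glue')

glue.create_crawler(
    Name='cloudtrail',
    Role='AWSGlueServiceRole-cloudtrail',  # placeholder role; it needs the permissions described above
    DatabaseName='cloudtrail',
    Targets={'S3Targets': [{'Path': 's3://cloudtraillfcaro/flatfiles/'}]},
)

# Run the crawler on demand.
glue.start_crawler(Name='cloudtrail')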

After the crawler runs, a single table should be created in the catalog with the following structure:

The table should contain the following columns:

Create and run the AWS Glue job

To convert all the CloudTrail logs to a columnar store in Parquet, set up an AWS Glue job by following these steps.

Upload the following script into a bucket in Amazon S3:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
import boto3
import time

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "cloudtrail", table_name = "flatfiles", transformation_ctx = "datasource0")
resolvechoice1 = ResolveChoice.apply(frame = datasource0, choice = "make_struct", transformation_ctx = "resolvechoice1")
relationalized1 = resolvechoice1.relationalize("trail", args["TempDir"]).select("trail")
datasink = glueContext.write_dynamic_frame.from_options(frame = relationalized1, connection_type = "s3", connection_options = {"path": "s3://cloudtraillfcaro/parquettrails"}, format = "parquet", transformation_ctx = "datasink4")
job.commit()

In the example, you load the script as a file named cloudtrailtoparquet.py. Make sure that you modify the script and update the path value ("s3://cloudtraillfcaro/parquettrails") to the destination in which you want to store your results.

After uploading the script, add a new AWS Glue job. Choose a name and role for the job, and choose the option of running the job from An existing script that you provide.

To avoid processing the same data twice, enable the Job bookmark setting in the Advanced properties section of the job properties.

Choose Next twice, and then choose Finish.

If logs are already in the flatfiles folder, you can run the job on demand to generate the first set of results.

Once the job starts running, wait for it to complete.
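
One way to start the run and wait for it programmatically is the following boto3 sketch; the job name cloudtrailtoparquet matches the script name used earlier and is otherwise an assumption.

import time
import boto3

glue = boto3.client('glue')

run = glue.start_job_run(JobName='cloudtrailtoparquet')

# Poll until the run reaches a terminal state.
while True:
    job_run = glue.get_job_run(JobName='cloudtrailtoparquet', RunId=run['JobRunId'])
    state = job_run['JobRun']['JobRunState']
    if state in ('SUCCEEDED', 'FAILED', 'STOPPED'):
        print(state)
        break
    time.sleep(30)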

When the job is finished, its Run status should be Succeeded. After that, you can verify that the Parquet files are written to the Amazon S3 location.

Catalog results

To be able to process results from Athena, you can use an AWS Glue crawler to catalog the results of the AWS Glue job.

In this example, the crawler is set to use the same database as the source named cloudtrail.

You can run the crawler using the console. When the crawler finishes running and has processed the Parquet results, a new table should be created in the AWS Glue Data Catalog. In this example, it’s named parquettrails.

The table should have the classification set to parquet.

It should have the same columns as the flatfiles table, with the exception of the struct type columns, which should be relationalized into several columns:

In this example, notice how the requestparameters column, which was a struct in the original table (flatfiles), was transformed to several columns—one for each key value inside it. This is done using a transformation native to AWS Glue called relationalize.

Query results with Athena

After crawling the results, you can query them using Athena. For example, to query what events took place in the time frame between 2017-10-23T12:00:00 and 2017-10-23T13:00:00, use the following select statement:

select *
from cloudtrail.parquettrails
where eventtime > '2017-10-23T12:00:00Z' AND eventtime < '2017-10-23T13:00:00Z'
order by eventtime asc;

Be sure to replace cloudtrail.parquettrails with the names of your database and table that references the Parquet results. Replace the datetimes with an hour when your account had activity and was processed by the AWS Glue job.
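
The same query can also be issued programmatically; here is a hedged boto3 sketch. The OutputLocation is a placeholder bucket path where Athena stores query output.

import time
import boto3

athena = boto3.client('athena')

query = """
select *
from cloudtrail.parquettrails
where eventtime > '2017-10-23T12:00:00Z' AND eventtime < '2017-10-23T13:00:00Z'
order by eventtime asc;
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={'Database': 'cloudtrail'},
    ResultConfiguration={'OutputLocation': 's3://cloudtraillfcaro/athena-results/'},  # placeholder
)

# Wait for the query to finish, then read the result rows.
query_id = execution['QueryExecutionId']
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)['QueryExecution']['Status']['State']
    if state not in ('QUEUED', 'RUNNING'):
        break
    time.sleep(2)

rows = athena.get_query_results(QueryExecutionId=query_id)['ResultSet']['Rows']
print(len(rows), 'rows returned')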

Visualize results using Amazon QuickSight

Once you can query the data using Athena, you can visualize it using Amazon QuickSight. Before connecting Amazon QuickSight to Athena, be sure to grant QuickSight access to Athena and the associated S3 buckets in your account. For more information, see Managing Amazon QuickSight Permissions to AWS Resources. You can then create a new data set in Amazon QuickSight based on the Athena table that you created.

After setting up permissions, you can create a new analysis in Amazon QuickSight by choosing New analysis.

Then add a new data set.

Choose Athena as the source.

Give the data source a name (in this case, I named it cloudtrail).

Choose the name of the database and the table referencing the Parquet results.

Then choose Visualize.

After that, you should see the following screen:

Now you can create some visualizations. First, search for the sourceipaddress column, and drag it to the AutoGraph section.

You can see a list of the IP addresses that you have used to interact with AWS. To review whether these IP addresses have been used from IAM users, internal AWS services, or roles, use the type value that is inside the useridentity field of the original log files. Thanks to the relationalize transformation, this value is available as the useridentity.type column. After the column is added into the Group/Color box, the visualization should look like the following:

You can now see and distinguish the most used IPs and whether they are used from roles, AWS services, or IAM users.

After following all these steps, you can use Amazon QuickSight to add different columns from CloudTrail and perform different types of visualizations. You can build operational dashboards that continuously monitor AWS infrastructure usage and access. You can share those dashboards with others in your organization who might need to see this data.

Summary

In this post, you saw how you can use a simple Lambda function and an AWS Glue script to convert text files into Parquet to improve Athena query performance and data compression. The post also demonstrated how to use AWS Lambda to preprocess files in Amazon S3 and transform them into a format that is recognizable by AWS Glue crawlers.

This example used AWS CloudTrail logs, but you can apply the proposed solution to any set of files that, after preprocessing, can be cataloged by AWS Glue.


Additional Reading

Learn how to Harmonize, Query, and Visualize Data from Various Providers using AWS Glue, Amazon Athena, and Amazon QuickSight.


About the Authors

Luis Caro is a Big Data Consultant for AWS Professional Services. He works with our customers to provide guidance and technical assistance on big data projects, helping them improve the value of their solutions when using AWS.

Amazon ElastiCache Update – Online Resizing for Redis Clusters

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-elasticache-update-online-resizing-for-redis-clusters/

Amazon ElastiCache makes it easy for you to set up a fast, in-memory data store and cache. With support for the two most popular open source offerings (Redis and Memcached), ElastiCache supports the demanding needs of game leaderboards, in-memory analytics, and large-scale messaging.

Today I would like to tell you about an important addition to Amazon ElastiCache for Redis. You can already create clusters with up to 15 shards, each responsible for storing keys and values for a specific set of slots (each cluster has exactly 16,384 slots). A single cluster can expand to store 3.55 terabytes of in-memory data while supporting up to 20 million reads and 4.5 million writes per second.

Now with Online Resizing
You can now adjust the number of shards in a running ElastiCache for Redis cluster while the cluster remains online and responding to requests. This gives you the power to respond to changes in traffic and data volume without having to take the cluster offline or to start with an empty cache. You can also rebalance a running cluster to uniformly redistribute slot space without changing the number of shards.

When you initiate a resharding or rebalancing operation, ElastiCache for Redis starts by preparing a plan that will result in an even distribution of slots across the shards in the cluster. Then it transfers slots across shards, moving many in parallel for efficiency. This all happens while the cluster continues to respond to requests, with a modest impact on write throughput for writes to a slot that is in motion. The migration rate is dependent on the instance type, network speed, read/write traffic to the slots, and is generally about 1 gigabyte per minute.

The resharding and rebalancing operations apply to Redis clusters that were created with Cluster Mode enabled:

Resharding a Cluster
In general, you will know that it is time to expand a cluster via resharding when it starts to face significant memory pressure or when individual nodes are becoming bottlenecks. You can watch the cluster’s CloudWatch metrics to identify each situation:

Memory Pressure – FreeableMemory, SwapUsage, BytesUsedForCache.

CPU Bottleneck – CPUUtilization, CurrConnections, NewConnections.

Network Bottleneck – NetworkBytesIn, NetworkBytesOut.

You can use CloudWatch Dashboards to monitor these metrics, and CloudWatch Alarms to automate the resharding process.
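
As a sketch of what that automation could look like, the following boto3 call creates an alarm on FreeableMemory for one cache node; the cluster ID, threshold, and SNS topic ARN are placeholders.

import boto3

cloudwatch = boto3.client('cloudwatch')

cloudwatch.put_metric_alarm(
    AlarmName='redis-low-freeable-memory',
    Namespace='AWS/ElastiCache',
    MetricName='FreeableMemory',
    Dimensions=[{'Name': 'CacheClusterId', 'Value': 'my-redis-0001-001'}],  # placeholder node ID
    Statistic='Average',
    Period=300,
    EvaluationPeriods=3,
    Threshold=100 * 1024 * 1024,  # alarm when freeable memory drops below ~100 MB
    ComparisonOperator='LessThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:reshard-alerts'],  # placeholder topic
)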

To reshard a Redis cluster from the ElastiCache Dashboard, click on the cluster to visit the detail page, and then click on the Add shards button:

Enter the number of shards to add and (optionally) the desired Availability Zones, then click on Add:

The status of the cluster will change to modifying and the resharding process will begin. It can take anywhere from a few minutes to several hours, as indicated above. You can track the progress on the detail page for the cluster:

You can see the slots moving from shard to shard:

You can also watch the Events for the cluster:

During the resharding, you should avoid using the KEYS and SMEMBERS commands, as well as compute-intensive Lua scripts, in order to moderate the load on the cluster shards. You should avoid the FLUSHDB and FLUSHALL commands entirely; using them will interrupt and then abort the resharding process.

The status of each shard will return to available when the process is complete:

The same process takes place when you delete shards.
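
If you prefer to script these operations rather than use the console, the same resharding can be started through the API. Here is a sketch using the AWS SDK for JavaScript and the ModifyReplicationGroupShardConfiguration action; the replication group ID and target shard count are placeholders:

// Sketch: reshard a cluster-mode enabled replication group to five shards.
// The replication group ID and region are placeholders.
var AWS = require('aws-sdk');
var elasticache = new AWS.ElastiCache({ region: 'us-east-1' });

elasticache.modifyReplicationGroupShardConfiguration({
    ReplicationGroupId: 'my-redis-cluster',
    NodeGroupCount: 5,            // desired number of shards after resharding
    ApplyImmediately: true
}, function (err, data) {
    if (err) console.error(err);
    else console.log('Status:', data.ReplicationGroup.Status);
});

The cluster status changes to modifying just as it does when you start the operation from the console.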

Rebalancing Slots
You can perform this operation by heading to the cluster’s detail page and clicking on Rebalance Slot Distribution:

Things to Know
Here are a couple of things to keep in mind about this new feature:

Engine Version – Your cluster must be running version 3.2.10 of the Redis engine.

Migration Size – Slots that contain items that are larger than 256 megabytes after serialization are not migrated.

Cluster Endpoint – The cluster endpoint does not change as a result of a resharding or rebalancing.

Available Now
This feature is available now and you can start using it today.

Jeff;

 

B2 Cloud Storage Roundup

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/b2-cloud-storage-roundup/

B2 Integrations
Over the past several months, B2 Cloud Storage has continued to grow like we planted magic beans. During that time we have added a B2 Java SDK, and certified integrations with GoodSync, Arq, Panic, UpdraftPlus, Morro Data, QNAP, Archiware, Restic, and more. In addition, B2 customers like Panna Cooking, Sermon Audio, and Fellowship Church are happy they chose B2 as their cloud storage provider. If any of that sounds interesting, read on.

The B2 Java SDK

While the Backblaze B2 API is well documented and straightforward to implement, we were asked by a few of our Integration Partners if we had an SDK they could use. So we developed one as an open-source project on GitHub, where we hope interested parties will not only use our Java SDK, but make it better for everyone else.

There are different reasons one might use the Java SDK, but a couple of areas where the SDK can simplify the coding process are:

Expiring Authorization — B2 requires that an application key for a given account be reissued once a day when using the API. If the application key expires while you are in the middle of transferring files or some other B2 activity (bucket list, etc.), the SDK can be used to detect and then update the application key on the fly. Your B2-related activities will continue without incident and without you having to capture and code your own exception case.

Error Handling — There are different types of error codes B2 will return, from expired application keys to malformed requests to command time-outs. The SDK can dramatically simplify the coding needed to capture and account for the various things that can happen.
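
As a rough illustration of the boilerplate the SDK removes, here is a hand-rolled retry in Node.js that re-authorizes when a call fails because the token has expired. It is a sketch only: it assumes the axios HTTP client, uses the public v1 B2 endpoints, and the account ID and application key are placeholders.

// Sketch: retry a B2 API call once after re-authorizing when the auth token
// has expired. Credentials below are placeholders.
const axios = require('axios');

const ACCOUNT_ID = 'YOUR_ACCOUNT_ID';
const APPLICATION_KEY = 'YOUR_APPLICATION_KEY';

async function authorize() {
    // b2_authorize_account uses HTTP Basic auth and returns apiUrl and authorizationToken
    const res = await axios.get('https://api.backblazeb2.com/b2api/v1/b2_authorize_account', {
        auth: { username: ACCOUNT_ID, password: APPLICATION_KEY }
    });
    return res.data;
}

async function listBuckets(session) {
    return axios.post(session.apiUrl + '/b2api/v1/b2_list_buckets',
        { accountId: ACCOUNT_ID },
        { headers: { Authorization: session.authorizationToken } });
}

async function listBucketsWithRetry(session) {
    try {
        return await listBuckets(session);
    } catch (err) {
        // B2 reports an expired token with HTTP 401 (code "expired_auth_token")
        if (err.response && err.response.status === 401) {
            session = await authorize();   // fetch a fresh token
            return listBuckets(session);   // retry once
        }
        throw err;
    }
}

The Java SDK takes care of this kind of refresh for you, along with the other error cases described above.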

While Backblaze has created the Java SDK, developers in the GitHub community have also created other SDKs for B2, for example, for PHP (https://github.com/cwhite92/b2-sdk-php) and Go (https://github.com/kurin/blazer). Let us know in the comments about other SDKs you’d like to see, or perhaps start your own GitHub project. We will publish any updates in our next B2 roundup.

What You Can Do with Affordable and Available Cloud Storage

You’re probably aware that B2 is up to 75% less expensive than other similar cloud storage services like Amazon S3 and Microsoft Azure. Businesses and organizations are finding that projects that previously weren’t economically feasible with other Cloud Storage services are now not only possible, but a reality with B2. Here are a few recent examples:

SermonAudio wanted their media files to be readily available, but didn’t want to build and manage their own internal storage farm. Until B2, cloud storage was just too expensive to use. Now they use B2 to store their audio and video files, and also as the primary source of downloads and streaming requests from their subscribers.
Fellowship Church wanted to escape from the ever-increasing amount of time they were spending saving their data to their LTO-based system. Using B2 saved countless hours of personnel time versus LTO, fit easily into their video processing workflow, and provided instant access at any time to their media library.
Panna Cooking replaced their closet full of archive hard drives with a cost-efficient hybrid-storage solution combining 45Drives and Backblaze B2 Cloud Storage. Archived media files that used to take hours to locate are now readily available regardless of whether they reside in local storage or in the B2 Cloud.

B2 Integrations

Leading companies in backup, archive, and sync continue to add B2 Cloud Storage as a storage destination for their customers. These companies realize that by offering B2 as an option, they can dramatically lower the total cost of ownership for their customers — and that’s always a good thing.

If your favorite application is not integrated with B2, you can do something about it. One integration partner told us they received over 200 customer requests for a B2 integration. The partner got the message and the integration is currently in beta test.

Below are some of the partner integrations completed in the past few months. You can check the B2 Partner Integrations page for a complete list.

Archiware — Both P5 Archive and P5 Backup can now store data in the B2 Cloud, making your off-site media files readily available while keeping your off-site storage costs predictable and affordable.

Arq — Combine Arq and B2 for amazingly affordable backup of external drives, network drives, NAS devices, Windows PCs, Windows Servers, and Macs to the cloud.

GoodSync — Automatically synchronize and back up all your photos, music, email, and other important files between all your desktops, laptops, servers, and external drives, and sync or back up to B2 Cloud Storage for off-site storage.

QNAP — QNAP Hybrid Backup Sync consolidates backup, restoration, and synchronization functions into a single QTS application to easily transfer your data to local, remote, and cloud storage.

Morro Data — Their CloudNAS solution stores files in the cloud, caches them locally as needed, and syncs files globally among other CloudNAS systems in an organization.

Restic – Restic is a fast, secure, multi-platform command line backup program. Files are uploaded to a B2 bucket as de-duplicated, encrypted chunks. Each backup is a snapshot of only the data that has changed, making restores of a specific date or time easy.

Transmit 5 by Panic — Transmit 5, the gold standard for macOS file transfer apps, now supports B2. Upload, download, and manage files on tons of servers with an easy, familiar, and powerful UI.

UpdraftPlus — WordPress developers and admins can now use the UpdraftPlus Premium WordPress plugin to affordably back up their data to the B2 Cloud.

Getting Started with B2 Cloud Storage

If you’re using B2 today, thank you. If you’d like to try B2, but don’t know where to start, here’s a guide to getting started with the B2 Web Interface — no programming or scripting is required. You get 10 gigabytes of free storage and 1 gigabyte a day in free downloads. Give it a try.

The post B2 Cloud Storage Roundup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Amazon ElastiCache for Redis Is Now a HIPAA Eligible Service and You Can Use It to Power Real-Time Healthcare Applications

Post Syndicated from Manan Goel original https://aws.amazon.com/blogs/security/now-you-can-use-amazon-elasticache-for-redis-a-hipaa-eligible-service-to-power-real-time-healthcare-applications/

HIPAA image

Amazon ElastiCache for Redis is now a HIPAA Eligible Service and has been added to the AWS Business Associate Addendum (BAA). This means you can use ElastiCache for Redis to help you power healthcare applications as well as process, maintain, and store protected health information (PHI). ElastiCache for Redis is a Redis-compatible, fully-managed, in-memory data store and cache in the cloud that provides sub-millisecond latency to power applications. Now you can use the speed, simplicity, and flexibility of ElastiCache for Redis to build secure, fast, and internet-scale healthcare applications.

ElastiCache for Redis with HIPAA eligibility is available for all current-generation instance node types and requires Redis engine version 3.2.6. You must ensure that nodes are configured to encrypt the data in transit and at rest, and to authenticate Redis commands before the engine executes them. See Architecting for HIPAA Security and Compliance on Amazon Web Services for information about how to configure Amazon HIPAA Eligible Services to store, process, and transmit PHI.

ElastiCache for Redis uses Advanced Encryption Standard (AES)-512 symmetric keys to encrypt data on disk. The Redis backups stored in Amazon S3 are encrypted with server-side encryption (SSE) using AES-256 symmetric keys. ElastiCache for Redis uses Transport Layer Security (TLS) to encrypt data in transit. It uses the Redis AUTH token that you provide at the time of Redis cluster creation to authenticate the Redis commands coming from clients. The AUTH token is encrypted using AWS Key Management Service.
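
A sketch of how those settings come together when you create a cluster with the AWS SDK for JavaScript follows; the identifiers, node type, and AUTH token are placeholders:

// Sketch: create a Redis replication group with in-transit encryption,
// at-rest encryption, and an AUTH token. All identifiers are placeholders.
var AWS = require('aws-sdk');
var elasticache = new AWS.ElastiCache({ region: 'us-east-1' });

elasticache.createReplicationGroup({
    ReplicationGroupId: 'phi-cache',
    ReplicationGroupDescription: 'Encrypted Redis replication group',
    Engine: 'redis',
    EngineVersion: '3.2.6',               // version required for encryption support
    CacheNodeType: 'cache.r4.large',
    NumCacheClusters: 2,
    TransitEncryptionEnabled: true,       // TLS for data in transit
    AtRestEncryptionEnabled: true,        // encrypt data on disk
    AuthToken: 'replace-with-a-long-random-token-1234567890'
}, function (err, data) {
    if (err) console.error(err);
    else console.log('Status:', data.ReplicationGroup.Status);
});

Note that an AUTH token can only be used when in-transit encryption is enabled.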

There is no additional charge for using ElastiCache for Redis clusters with HIPAA eligibility. To get started, see HIPAA Compliance for Amazon ElastiCache for Redis.

– Manan

AWS HIPAA Eligibility Update (October 2017) – Sixteen Additional Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-hipaa-eligibility-post-update-october-2017-sixteen-additional-services/

Our Health Customer Stories page lists just a few of the many customers that are building and running healthcare and life sciences applications that run on AWS. Customers like Verge Health, Care Cloud, and Orion Health trust AWS with Protected Health Information (PHI) and Personally Identifying Information (PII) as part of their efforts to comply with HIPAA and HITECH.

Sixteen More Services
In my last HIPAA Eligibility Update I shared the news that we added eight additional services to our list of HIPAA eligible services. Today I am happy to let you know that we have added another sixteen services to the list, bringing the total up to 46. Here are the newest additions, along with some short descriptions and links to some of my blog posts to jog your memory:

Amazon Aurora with PostgreSQL Compatibility – This brand-new addition to Amazon Aurora allows you to encrypt your relational databases using keys that you create and manage through AWS Key Management Service (KMS). When you enable encryption for an Amazon Aurora database, the underlying storage is encrypted, as are automated backups, read replicas, and snapshots. Read New – Encryption at Rest for Amazon Aurora to learn more.

Amazon CloudWatch Logs – You can use the logs to monitor and troubleshoot your systems and applications. You can monitor your existing system, application, and custom log files in near real-time, watching for specific phrases, values, or patterns. Log data can be stored durably and at low cost, for as long as needed. To learn more, read Store and Monitor OS & Application Log Files with Amazon CloudWatch and Improvements to CloudWatch Logs and Dashboards.

Amazon Connect – This self-service, cloud-based contact center makes it easy for you to deliver better customer service at a lower cost. You can use the visual designer to set up your contact flows, manage agents, and track performance, all without specialized skills. Read Amazon Connect – Customer Contact Center in the Cloud and New – Amazon Connect and Amazon Lex Integration to learn more.

Amazon ElastiCache for Redis – This service lets you deploy, operate, and scale an in-memory data store or cache that you can use to improve the performance of your applications. Each ElastiCache for Redis cluster publishes key performance metrics to Amazon CloudWatch. To learn more, read Caching in the Cloud with Amazon ElastiCache and Amazon ElastiCache – Now With a Dash of Redis.

Amazon Kinesis Streams – This service allows you to build applications that process or analyze streaming data such as website clickstreams, financial transactions, social media feeds, and location-tracking events. To learn more, read Amazon Kinesis – Real-Time Processing of Streaming Big Data and New: Server-Side Encryption for Amazon Kinesis Streams.

Amazon RDS for MariaDB – This service lets you set up scalable, managed MariaDB instances in minutes, and offers high performance, high availability, and a simplified security model that makes it easy for you to encrypt data at rest and in transit. Read Amazon RDS Update – MariaDB is Now Available to learn more.

Amazon RDS SQL Server – This service lets you set up scalable, managed Microsoft SQL Server instances in minutes, and also offers high performance, high availability, and a simplified security model. To learn more, read Amazon RDS for SQL Server and .NET support for AWS Elastic Beanstalk and Amazon RDS for Microsoft SQL Server – Transparent Data Encryption (TDE).

Amazon Route 53 – This is a highly available Domain Name Server. It translates names like www.example.com into IP addresses. To learn more, read Moving Ahead with Amazon Route 53.

AWS Batch – This service lets you run large-scale batch computing jobs on AWS. You don’t need to install or maintain specialized batch software or build your own server clusters. Read AWS Batch – Run Batch Computing Jobs on AWS to learn more.

AWS CloudHSM – A cloud-based Hardware Security Module (HSM) for key storage and management at cloud scale. Designed for sensitive workloads, CloudHSM lets you manage your own keys using FIPS 140-2 Level 3 validated HSMs. To learn more, read AWS CloudHSM – Secure Key Storage and Cryptographic Operations and AWS CloudHSM Update – Cost Effective Hardware Key Management at Cloud Scale for Sensitive & Regulated Workloads.

AWS Key Management Service – This service makes it easy for you to create and control the encryption keys used to encrypt your data. It uses HSMs to protect your keys, and is integrated with AWS CloudTrail in order to provide you with a log of all key usage. Read New AWS Key Management Service (KMS) to learn more.

AWS Lambda – This service lets you run event-driven application or backend code without thinking about or managing servers. To learn more, read AWS Lambda – Run Code in the Cloud, AWS Lambda – A Look Back at 2016, and AWS Lambda – In Full Production with New Features for Mobile Devs.

Lambda@Edge – You can use this new feature of AWS Lambda to run Node.js functions across the global network of AWS locations without having to provision or manage servers, in order to deliver rich, personalized content to your users with low latency. Read Lambda@Edge – Intelligent Processing of HTTP Requests at the Edge to learn more.

AWS Snowball Edge – This is a data transfer device with 100 terabytes of on-board storage as well as compute capabilities. You can use it to move large amounts of data into or out of AWS, as a temporary storage tier, or to support workloads in remote or offline locations. To learn more, read AWS Snowball Edge – More Storage, Local Endpoints, Lambda Functions.

AWS Snowmobile – This is an exabyte-scale data transfer service. Pulled by a semi-trailer truck, each Snowmobile packs 100 petabytes of storage into a ruggedized 45-foot long shipping container. Read AWS Snowmobile – Move Exabytes of Data to the Cloud in Weeks to learn more (and to see some of my finest LEGO work).

AWS Storage Gateway – This hybrid storage service lets your on-premises applications use AWS cloud storage (Amazon Simple Storage Service (S3), Amazon Glacier, and Amazon Elastic File System) in a simple and seamless way, with storage for volumes, files, and virtual tapes. To learn more, read The AWS Storage Gateway – Integrate Your Existing On-Premises Applications with AWS Cloud Storage and File Interface to AWS Storage Gateway.

And there you go! Check out my earlier post for a list of resources that will help you to build applications that comply with HIPAA and HITECH.

Jeff;

 

Google Login Security for High-Risk Users

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/10/google_log-in_s.html

Google has a new login service for high-risk users. It’s good, but unforgiving.

Logging in from a desktop will require a special USB key, while accessing your data from a mobile device will similarly require a Bluetooth dongle. All non-Google services and apps will be exiled from reaching into your Gmail or Google Drive. Google’s malware scanners will use a more intensive process to quarantine and analyze incoming documents. And if you forget your password, or lose your hardware login keys, you’ll have to jump through more hoops than ever to regain access, the better to foil any intruders who would abuse that process to circumvent all of Google’s other safeguards.

It’s called Advanced Protection.

Implementing Default Directory Indexes in Amazon S3-backed Amazon CloudFront Origins Using Lambda@Edge

Post Syndicated from Ronnie Eichler original https://aws.amazon.com/blogs/compute/implementing-default-directory-indexes-in-amazon-s3-backed-amazon-cloudfront-origins-using-lambdaedge/

With the recent launch of Lambda@Edge, it’s now possible for you to provide even more robust functionality to your static websites. Amazon CloudFront is a content distribution network service. In this post, I show how you can use Lambda@Edge along with the CloudFront origin access identity (OAI) for Amazon S3 and still provide simple URLs (such as www.example.com/about/ instead of www.example.com/about/index.html).

Background

Amazon S3 is a great platform for hosting a static website. You don’t need to worry about managing servers or underlying infrastructure—you just publish your static content to an S3 bucket. S3 provides a DNS name such as <bucket-name>.s3-website-<AWS-region>.amazonaws.com. Use this name for your website by creating a CNAME record in your domain’s DNS environment (or Amazon Route 53) as follows:

www.example.com -> <bucket-name>.s3-website-<AWS-region>.amazonaws.com

You can also put CloudFront in front of S3 to further scale the performance of your site and cache the content closer to your users. CloudFront can enable HTTPS-hosted sites, by either using a custom Secure Sockets Layer (SSL) certificate or a managed certificate from AWS Certificate Manager. In addition, CloudFront also offers integration with AWS WAF, a web application firewall. As you can see, it’s possible to achieve some robust functionality by using S3, CloudFront, and other managed services and not have to worry about maintaining underlying infrastructure.

One of the key concerns that you might have when implementing any type of WAF or CDN is that you want to force your users to go through the CDN. If you implement CloudFront in front of S3, you can achieve this by using an OAI. However, in order to do this, you cannot use the HTTP endpoint that is exposed by S3’s static website hosting feature. Instead, CloudFront must use the S3 REST endpoint to fetch content from your origin so that the request can be authenticated using the OAI. This presents some challenges in that the REST endpoint does not support redirection to a default index page.

CloudFront does allow you to specify a default root object (index.html), but it only works on the root of the website (such as http://www.example.com > http://www.example.com/index.html). It does not work on any subdirectory (such as http://www.example.com/about/). If you were to attempt to request this URL through CloudFront, CloudFront would do an S3 GetObject API call against a key that does not exist.

Of course, it is a bad user experience to expect users to always type index.html at the end of every URL (or even know that it should be there). Until now, there has not been an easy way to provide these simpler URLs (equivalent to the DirectoryIndex directive in an Apache web server configuration) to users through CloudFront, at least not if you still want to be able to restrict access to the S3 origin using an OAI. However, with the release of Lambda@Edge, you can use a JavaScript function running on the CloudFront edge nodes to look for these patterns and request the appropriate object key from the S3 origin.

Solution

In this example, you use the compute power at the CloudFront edge to inspect the request as it comes in from the client, and then rewrite the request so that CloudFront requests a default index object (index.html in this case) for any request URI that ends in ‘/’.

When a request is made against a web server, the client specifies the object to obtain in the request. You can use this URI and apply a regular expression to it so that these URIs get resolved to a default index object before CloudFront requests the object from the origin. Use the following code:

'use strict';
exports.handler = (event, context, callback) => {
    
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '\/index.html');
    
    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);
    
    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;
    
    // Return to CloudFront
    return callback(null, request);

};

To get started, create an S3 bucket to be the origin for CloudFront:

Create bucket

On the other screens, you can just accept the defaults for the purposes of this walkthrough. If this were a production implementation, I would recommend enabling bucket logging and specifying an existing S3 bucket as the destination for access logs. These logs can be useful if you need to troubleshoot issues with your S3 access.

Now, put some content into your S3 bucket. For this walkthrough, create two simple webpages to demonstrate the functionality: a page that resides at the website root, and another that is in a subdirectory.

<s3bucketname>/index.html

<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Root home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the root directory.</p>
    </body>
</html>

<s3bucketname>/subdirectory/index.html

<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>

When uploading the files into S3, you can accept the defaults. You add a bucket policy as part of the CloudFront distribution creation that allows CloudFront to access the S3 origin. You should now have an S3 bucket that looks like the following:

Root of bucket

Subdirectory in bucket

Next, create a CloudFront distribution that your users will use to access the content. Open the CloudFront console, and choose Create Distribution. For Select a delivery method for your content, under Web, choose Get Started.

On the next screen, you set up the distribution. Below are the options to configure:

  • Origin Domain Name:  Select the S3 bucket that you created earlier.
  • Restrict Bucket Access: Choose Yes.
  • Origin Access Identity: Create a new identity.
  • Grant Read Permissions on Bucket: Choose Yes, Update Bucket Policy.
  • Object Caching: Choose Customize (I am changing the behavior to avoid having CloudFront cache objects, as this could affect your ability to troubleshoot while implementing the Lambda code).
    • Minimum TTL: 0
    • Maximum TTL: 0
    • Default TTL: 0

You can accept all of the other defaults. Again, this is a proof-of-concept exercise. After you are comfortable that the CloudFront distribution is working properly with the origin and Lambda code, you can re-visit the preceding values and make changes before implementing it in production.

CloudFront distributions can take several minutes to deploy (because the changes have to propagate out to all of the edge locations). After that’s done, test the functionality of the S3-backed static website. Looking at the distribution, you can see that CloudFront assigns a domain name:

CloudFront Distribution Settings

Try to access the website using a combination of various URLs:

http://<domainname>/:  Works

› curl -v http://d3gt20ea1hllb.cloudfront.net/
*   Trying 54.192.192.214...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.214) port 80 (#0)
> GET / HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< ETag: "cb7e2634fe66c1fd395cf868087dd3b9"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: -D2FSRwzfcwyKZKFZr6DqYFkIf4t7HdGw2MkUF5sE6YFDxRJgi0R1g==
< Content-Length: 209
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:16 GMT
< Via: 1.1 6419ba8f3bd94b651d416054d9416f1e.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Root home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the root directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

This is because CloudFront is configured to request a default root object (index.html) from the origin.

http://<domainname>/subdirectory/:  Doesn’t work

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/
*   Trying 54.192.192.214...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.214) port 80 (#0)
> GET /subdirectory/ HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< ETag: "d41d8cd98f00b204e9800998ecf8427e"
< x-amz-server-side-encryption: AES256
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: Iqf0Gy8hJLiW-9tOAdSFPkL7vCWBrgm3-1ly5tBeY_izU82ftipodA==
< Content-Length: 0
< Content-Type: application/x-directory
< Last-Modified: Wed, 19 Jul 2017 19:21:24 GMT
< Via: 1.1 6419ba8f3bd94b651d416054d9416f1e.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

If you use a tool such as cURL to test this, you notice that CloudFront and S3 are returning a blank response. The reason for this is that the subdirectory does exist, but it does not resolve to an S3 object. Keep in mind that S3 is an object store, so there are no real directories. User interfaces such as the S3 console present a hierarchical view of a bucket with folders based on the presence of forward slashes, but behind the scenes the bucket is just a collection of keys that represent stored objects.
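
You can see that flat keyspace for yourself by listing the bucket with the SDK. Here is a quick sketch (the bucket name is a placeholder):

// Sketch: list keys under the "subdirectory/" prefix to show that S3 stores
// flat keys rather than directories. The bucket name is a placeholder.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

s3.listObjectsV2({ Bucket: 'my-static-site-bucket', Prefix: 'subdirectory/' }, function (err, data) {
    if (err) return console.error(err);
    data.Contents.forEach(function (obj) {
        console.log(obj.Key, obj.Size);
    });
    // Typically prints "subdirectory/index.html 227"; the "subdirectory/" key
    // only exists if the console created a zero-byte placeholder object for it.
});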

http://<domainname>/subdirectory/index.html:  Works

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/index.html
*   Trying 54.192.192.130...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.130) port 80 (#0)
> GET /subdirectory/index.html HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 20 Jul 2017 20:35:15 GMT
< ETag: "ddf87c487acf7cef9d50418f0f8f8dae"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: RefreshHit from cloudfront
< X-Amz-Cf-Id: bkh6opXdpw8pUomqG3Qr3UcjnZL8axxOH82Lh0OOcx48uJKc_Dc3Cg==
< Content-Length: 227
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:45 GMT
< Via: 1.1 3f2788d309d30f41de96da6f931d4ede.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

This request works as expected because you are referencing the object directly. Now, you implement the Lambda@Edge function to return the default index.html page for any subdirectory. Looking at the example JavaScript code, here’s where the magic happens:

var newuri = olduri.replace(/\/$/, '\/index.html');

You are going to use a JavaScript regular expression to match any ‘/’ that occurs at the end of the URI and replace it with ‘/index.html’. This is equivalent to what S3 does on its own with static website hosting. However, as I mentioned earlier, you can’t rely on this if you want to use a policy on the bucket to restrict it so that users must access the bucket through CloudFront. That way, all requests to the S3 bucket must be authenticated using the S3 REST API. Because of this, you implement a Lambda@Edge function that takes any client request ending in ‘/’ and appends a default ‘index.html’ to the request before requesting the object from the origin.
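
To see what this rewrite does to a few representative URIs, you can run a quick check locally in Node.js before deploying the function:

// Quick check of the rewrite rule against a few sample URIs.
var rewrite = function (uri) {
    return uri.replace(/\/$/, '\/index.html');
};

console.log(rewrite('/'));                          // "/index.html"
console.log(rewrite('/subdirectory/'));             // "/subdirectory/index.html"
console.log(rewrite('/subdirectory/index.html'));   // unchanged
console.log(rewrite('/images/logo.png'));           // unchanged

Requests that do not end in ‘/’ pass through unchanged, so objects such as images and stylesheets are unaffected.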

In the Lambda console, choose Create function. On the next screen, skip the blueprint selection and choose Author from scratch, as you’ll use the sample code provided.

Next, configure the trigger. Choosing the empty box shows a list of available triggers. Choose CloudFront and select your CloudFront distribution ID (created earlier). For this example, leave Cache Behavior as * and CloudFront Event as Origin Request. Select the Enable trigger and replicate box and choose Next.

Lambda Trigger

Next, give the function a name and a description. Then, copy and paste the following code:

'use strict';
exports.handler = (event, context, callback) => {
    
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '\/index.html');
    
    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);
    
    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;
    
    // Return to CloudFront
    return callback(null, request);

};

Next, define a role that grants permissions to the Lambda function. For this example, choose Create new role from template, Basic Edge Lambda permissions. This creates a new IAM role for the Lambda function and grants the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        }
    ]
}

In a nutshell, these are the permissions that the function needs to create the necessary CloudWatch log group and log stream, and to put the log events so that the function is able to write logs when it executes.

After the function has been created, you can go back to the browser (or cURL) and re-run the test for the subdirectory request that failed previously:

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/
*   Trying 54.192.192.202...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.202) port 80 (#0)
> GET /subdirectory/ HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 20 Jul 2017 21:18:44 GMT
< ETag: "ddf87c487acf7cef9d50418f0f8f8dae"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: rwFN7yHE70bT9xckBpceTsAPcmaadqWB9omPBv2P6WkIfQqdjTk_4w==
< Content-Length: 227
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:45 GMT
< Via: 1.1 3572de112011f1b625bb77410b0c5cca.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

You have now configured a way for CloudFront to return a default index page for subdirectories in S3!

Summary

In this post, you used Lambda@Edge to be able to use CloudFront with an S3 origin access identity and serve a default root object on subdirectory URLs. To find out more about this use case, see Lambda@Edge integration with CloudFront in our documentation.

If you have questions or suggestions, feel free to comment below. For troubleshooting or implementation help, check out the Lambda forum.

Want to Learn More About AWS CloudHSM and Hardware Key Management? Register for and Attend this October 25 Tech Talk: “CloudHSM – Secure, Scalable Key Storage in AWS”

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/want-to-learn-more-about-aws-cloudhsm-and-hardware-key-management-register-for-and-attend-this-october-25-tech-talk-cloudhsm-secure-scalable-key-storage-in-aws/

AWS Online Tech Talks banner

As part of the AWS Online Tech Talks series, AWS will present CloudHSM – Secure, Scalable Key Storage in AWS on Wednesday, October 25. This tech talk will start at 9:00 A.M. Pacific Time and end at 9:40 A.M. Pacific Time.

Applications handling confidential or sensitive data are subject to corporate or regulatory requirements and therefore need validated control of encryption keys and cryptographic operations. AWS CloudHSM brings to your AWS resources the security and control of traditional HSMs. This Tech Talk will show how you can leverage CloudHSM to build scalable, reliable applications without sacrificing either security or performance. Attend this Tech Talk to learn how you can use CloudHSM to quickly and easily build secure, compliant, fast, and flexible applications.

You also will:

  • Learn about the challenges CloudHSM can help you address.
  • Understand how CloudHSM can secure your workloads and data.
  • Learn how to transfer and modernize workloads.

This tech talk is free. Register today.

– Craig

Security Flaw in Infineon Smart Cards and TPMs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/10/security_flaw_i_1.html

A security flaw in Infineon smart cards and TPMs allows an attacker to recover private keys from the public keys. Basically, the key generation algorithm sometimes creates public keys that are vulnerable to Coppersmith’s attack:

While all keys generated with the library are much weaker than they should be, it’s not currently practical to factorize all of them. For example, 3072-bit and 4096-bit keys aren’t practically factorable. But oddly enough, the theoretically stronger, longer 4096-bit key is much weaker than the 3072-bit key and may fall within the reach of a practical (although costly) factorization if the researchers’ method improves.

To spare time and cost, attackers can first test a public key to see if it’s vulnerable to the attack. The test is inexpensive, requires less than 1 millisecond, and its creators believe it produces practically zero false positives and zero false negatives. The fingerprinting allows attackers to expend effort only on keys that are practically factorizable.

This is the flaw in the Estonian national ID card we learned about last month.

The paper isn’t online yet. I’ll post it when it is.

Ouch. This is a bad vulnerability, and it’s in systems — like the Estonian national ID card — that are critical.
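
For readers curious about the fingerprinting, here is a rough Node.js sketch of the idea as publicly described: moduli from the affected library are congruent to a power of 65537 modulo the small primes the generator uses, so you can check whether a public key's modulus has that structure. This is illustrative only; the real detection tools use a larger, carefully chosen set of primes and additional checks. It requires a Node.js version with BigInt support.

// Sketch of a ROCA-style fingerprint check. Illustrative only.
function subgroupOf65537(r) {
    // All residues reachable as 65537^i mod r
    const seen = new Set();
    let x = 1n;
    do {
        seen.add(x);
        x = (x * 65537n) % r;
    } while (!seen.has(x));
    return seen;
}

function looksVulnerable(modulusHex) {
    const N = BigInt('0x' + modulusHex);
    const smallPrimes = [11n, 13n, 17n, 19n, 37n, 53n, 61n, 71n, 73n, 79n, 97n];
    // If N mod r falls outside <65537> for any r, the key does not match the fingerprint
    return smallPrimes.every(r => subgroupOf65537(r).has(N % r));
}

// Usage: pass the RSA modulus of a public key as a hex string.
// The value below is a dummy, not a real key.
console.log(looksVulnerable('c0ffee1234567890abcdef'));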