Welcome to Crypto Week 2019

Post Syndicated from Nick Sullivan original https://blog.cloudflare.com/welcome-to-crypto-week-2019/

The Internet is an extraordinarily complex and evolving ecosystem. Its constituent protocols range from the ancient and archaic (hello FTP) to the modern and sleek (meet WireGuard), with a fair bit of everything in between. This evolution is ongoing, and as one of the most connected networks on the Internet, Cloudflare has a duty to be a good steward of this ecosystem. We take this responsibility to heart: Cloudflare’s mission is to help build a better Internet. In this spirit, we are very proud to announce Crypto Week 2019.

Every day this week we’ll announce a new project or service that uses modern cryptography to build a more secure, trustworthy Internet. Everything we release this week will be free and immediately useful. This blog post is a fun exploration of the themes of the week.

  • Monday: Coming Soon
  • Tuesday: Coming Soon
  • Wednesday: Coming Soon
  • Thursday: Coming Soon
  • Friday: Coming Soon

The Internet of the Future

Many pieces of the Internet in use today were designed in a different era with different assumptions. The Internet’s success is based on strong foundations that support constant reassessment and improvement. Sometimes these improvements require deploying new protocols.

Performing an upgrade on a system as large and decentralized as the Internet can’t be done by decree:

  • There are too many economic, cultural, political, and technological factors at play.
  • Changes must be compatible with existing systems and protocols to even be considered for adoption.
  • To gain traction, new protocols must provide tangible improvements for users. Nobody wants to install an update that doesn’t improve their experience!

The last time the Internet had a complete reboot and upgrade was during TCP/IP flag day in 1983. Back then, the Internet (called ARPANET) had fewer than ten thousand hosts! To have an Internet-wide flag day today to switch over to a core new protocol is inconceivable; the scale and diversity of the components involved are way too massive. Too much would break. It’s challenging enough to deprecate outmoded functionality. In some ways, the open Internet is a victim of its own success. The bigger a system grows and the longer it stays the same, the harder it is to change. The Internet is like a massive barge: it takes forever to steer in a different direction and it’s carrying a lot of garbage.

[Figure: ARPANET, 1983 (Computer History Museum)]

As you would expect, many of the warts of the early Internet still remain. Both academic security researchers and real-life adversaries are still finding and exploiting vulnerabilities in the system. Many vulnerabilities are due to the fact that most of the protocols in use on the Internet have a weak notion of trust inherited from the early days. With 50 hosts online, it’s relatively easy to trust everyone, but in a world-scale system, that trust breaks down in fascinating ways. The primary tool to scale trust is cryptography, which helps provide some measure of accountability, though it has its own complexities.

In an ideal world, the Internet would provide a trustworthy substrate for human communication and commerce. Some people naïvely assume that this is the natural direction the evolution of the Internet will follow. However, constant improvement is not a given. It’s possible that the Internet of the future will actually be worse than the Internet today: less open, less secure, less private, less trustworthy. Governments, businesses such as ISPs, and even the financial institutions entrusted with our personal data all have strong incentives to weaken the Internet on a fundamental level.

In a system with as many stakeholders as the Internet, real change requires principled commitment from all invested parties. At Cloudflare, we believe everyone is entitled to an Internet built on a solid foundation of trust. Crypto Week is our way of helping nudge the Internet’s evolution in a more trust-oriented direction. Each announcement this week helps bring the Internet of the future to the present in a tangible way.

Ongoing Internet Upgrades

Before we explore the Internet of the future, let’s look at some of the previous and ongoing attempts to upgrade the Internet’s fundamental protocols.

Routing Security

As we highlighted in last year’s Crypto Week, one of the weak links on the Internet is routing. Not all networks are directly connected.

To send data from one place to another, you might have to rely on intermediary networks to pass your data along. A packet sent from one host to another may have to be passed through up to a dozen of these intermediary networks. No single network knows the full path the data will take to its destination; it only knows which network to pass it to next. The protocol that determines how packets are routed is called the Border Gateway Protocol (BGP). Generally speaking, networks use BGP to announce to each other which addresses they know how to route packets for and (dependent on a set of complex rules) these networks share what they learn with their neighbors.

Unfortunately, BGP is completely insecure:

  • Any network can announce any set of addresses to any other network, even addresses they don’t control. This leads to a phenomenon called BGP hijacking, where networks are tricked into sending data to the wrong network.
  • A BGP hijack is most often caused by accidental misconfiguration, but can also be the result of malice on the network operator’s part.
  • During a BGP hijack, a network inappropriately announces a set of addresses to other networks, which results in packets destined for the announced addresses to be routed through the illegitimate network.
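To see why a hijacker’s announcement wins, consider longest-prefix matching, which routers use to pick among overlapping announcements. The toy Python lookup below is an illustration only (real BGP involves paths, policies, and many routers, not one dictionary), and the prefixes and AS names are invented:

import ipaddress

# Routers prefer the most-specific matching prefix, so a hijacker
# announcing a /25 inside a victim's /24 attracts that traffic.
announcements = {
    ipaddress.ip_network("203.0.113.0/24"): "AS-victim",
}

def best_route(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [net for net in announcements if addr in net]
    return announcements[max(matches, key=lambda net: net.prefixlen)]

print(best_route("203.0.113.10"))  # AS-victim

# The hijacker announces a more-specific prefix covering the same hosts:
announcements[ipaddress.ip_network("203.0.113.0/25")] = "AS-hijacker"
print(best_route("203.0.113.10"))  # AS-hijacker now wins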

Understanding the risk

If the packets represent unencrypted data, this is a big problem: it allows the hijacker to read or even change the data.

Mitigating the risk

The Resource Public Key Infrastructure (RPKI) system helps bring some trust to BGP by enabling networks to utilize cryptography to digitally sign network routes with certificates, making BGP hijacking much more difficult.

This enables participants of the network to gain assurances about the authenticity of route advertisements. Certificate Transparency (CT) is a tool that enables additional trust for certificate-based systems, and Cloudflare operates the Cirrus CT log to support RPKI.

Since we announced our support of RPKI last year, routing security has made big strides. More routes are signed, more networks validate RPKI, and the software ecosystem has matured, but this work is not complete. Most networks are still vulnerable to BGP hijacking. For example, Pakistan knocked YouTube offline with a BGP hijack back in 2008, and could likely do the same today. Adoption here is driven less by providing a benefit to users than by reducing systemic risk, which is not the strongest motivating factor for adopting a complex new technology. Full routing security on the Internet could take decades.

DNS Security

The Domain Name System (DNS) is the phone book of the Internet. Or, for anyone under 25 who doesn’t remember phone books, it’s the system that takes hostnames (like cloudflare.com or facebook.com) and returns the Internet address where that host can be found. For example, as of this publication, www.cloudflare.com is 104.17.209.9 and 104.17.210.9 (IPv4) and 2606:4700::c629:d7a2, 2606:4700::c629:d6a2 (IPv6). Like BGP, DNS is completely insecure. Queries and responses sent unencrypted over the Internet are modifiable by anyone on the path.

There are many ongoing attempts to add security to DNS, such as:

  • DNSSEC that adds a chain of digital signatures to DNS responses
  • DoT/DoH that wraps DNS queries in the TLS encryption protocol (more on that later)

Both technologies are slowly gaining adoption, but have a long way to go.
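To get a feel for DoH, here is a minimal lookup against Cloudflare’s public JSON endpoint. The URL and Accept header follow Cloudflare’s documented DNS-over-HTTPS JSON API; error handling is omitted for brevity:

import json
import urllib.request

def doh_query(name: str, rtype: str = "A") -> list:
    # The query itself travels over HTTPS, so on-path eavesdroppers
    # cannot read or modify it the way they can with classic DNS.
    url = f"https://cloudflare-dns.com/dns-query?name={name}&type={rtype}"
    req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp).get("Answer", [])
    return [record["data"] for record in answer]

print(doh_query("www.cloudflare.com"))  # e.g. ['104.17.209.9', '104.17.210.9']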

[Figure: DNSSEC-signed responses served by Cloudflare]

[Figure: Cloudflare’s 1.1.1.1 resolver queries are already over 5% DoT/DoH]

Just like RPKI, securing DNS comes with a performance cost, making it less attractive to users.

The Web

Transport Layer Security (TLS) is a cryptographic protocol that gives two parties the ability to communicate over an encrypted and authenticated channel. TLS protects communications from eavesdroppers even in the event of a BGP hijack. TLS is what puts the “S” in HTTPS. TLS protects web browsing against multiple types of network adversaries.

[Figure: Requests hop from network to network over the Internet]

[Figure: For unauthenticated protocols, an attacker on the path can impersonate the server]

[Figure: Attackers can use BGP hijacking to change the path so that communication can be intercepted]

[Figure: Authenticated protocols are protected from interception attacks]

The adoption of TLS on the web is partially driven by the fact that:

  • It’s easy and free for websites to get an authentication certificate (via Let’s Encrypt, Universal SSL, etc.)
  • Browsers make TLS adoption appealing to website operators by only supporting new web features such as HTTP/2 over HTTPS.

This has led to the rapid adoption of HTTPS over the last five years.

[Figure: HTTPS adoption curve (from Google Chrome)]

To further that adoption, TLS recently got an upgrade in TLS 1.3, making it faster and more secure (a combination we love). It’s taking over the Internet!
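You can check TLS 1.3 support from a client with a few lines of Python’s standard ssl module; this sketch assumes Python 3.7+ linked against a TLS 1.3-capable OpenSSL:

import socket
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3   # refuse anything older

with socket.create_connection(("cloudflare.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="cloudflare.com") as tls:
        print(tls.version())  # 'TLSv1.3'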

[Figure: TLS 1.3 adoption over the last 12 months (from Cloudflare’s perspective)]

Despite this fantastic progress in the adoption of security for routing, DNS, and the web, there are still gaps in the trust model of the Internet. There are other things needed to help build the Internet of the future. To find and identify these gaps, we lean on research experts.

Research Farm to Table

Cryptographic security on the Internet is a hot topic and there have been many flaws and issues recently pointed out in academic journals. Researchers often study the vulnerabilities of the past and ask:

  • What other critical components of the Internet have the same flaws?
  • What underlying assumptions can subvert trust in these existing systems?

The answers to these questions help us decide what to tackle next. Some recent research topics we’ve learned about include:

  • Quantum Computing
  • Attacks on Time Synchronization
  • DNS attacks affecting Certificate issuance
  • Scaling distributed trust

Cloudflare keeps abreast of these developments and we do what we can to bring these new ideas to the Internet at large. In this respect, we’re truly standing on the shoulders of giants.

Future-proofing Internet Cryptography

The new protocols we are currently deploying (RPKI, DNSSEC, DoT/DoH, TLS 1.3) use relatively modern cryptographic algorithms published in the 1970s and 1980s.

  • The security of these algorithms is based on hard mathematical problems in the field of number theory, such as factoring and the elliptic curve discrete logarithm problem.
  • If you can solve the hard problem, you can crack the code. Using a bigger key makes the problem harder, making it more difficult to break, but also slows performance.

Modern Internet protocols typically pick keys large enough to make it infeasible to break with classical computers, but no larger. The sweet spot is around 128 bits of security, meaning a computer has to do approximately 2¹²⁸ operations to break it.

Arjen Lenstra and others created a useful measure of security levels by comparing the amount of energy it takes to break a key to the amount of water you can boil using that much energy. You can think of this as the electric bill you’d get if you run a computer long enough to crack the key.

  • 35-bit security is “Teaspoon security” – It takes about the same amount of energy to break a 35-bit key as it does to boil a teaspoon of water (pretty easy).
  • 65 bits gets you up to “Pool security” – The energy needed to boil the average amount of water in a swimming pool.
  • 105 bits is “Sea security” – The energy needed to boil the Mediterranean Sea.
  • 114 bits is “Global security” – The energy needed to boil all the water on Earth.
  • 128-bit security is safely beyond Global security – Anything larger is overkill.
  • 256-bit security corresponds to “Universal security” – The estimated mass-energy of the observable universe. So, if you ever hear someone suggest 256-bit AES, you know they mean business.
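For the curious, this scale is easy to reproduce. The Python sketch below is calibrated on a single assumption of ours: that 35-bit security corresponds exactly to boiling one 5 g teaspoon of water heated from 20°C to 100°C, ignoring vaporization. Treat the outputs as order-of-magnitude estimates, not Lenstra’s exact figures:

SPECIFIC_HEAT = 4186.0   # J/(kg*K), liquid water
DELTA_T = 80.0           # heating from 20 C to 100 C
TEASPOON_KG = 0.005      # ~5 g of water

# Calibration: brute-forcing a 35-bit key == boiling one teaspoon.
JOULES_PER_OP = (TEASPOON_KG * SPECIFIC_HEAT * DELTA_T) / 2**35

def water_boiled_kg(security_bits):
    """Mass of water the brute-force energy for this key size would boil."""
    return (2**security_bits * JOULES_PER_OP) / (SPECIFIC_HEAT * DELTA_T)

for bits in (35, 65, 105, 114, 128):
    print(f"{bits:>3}-bit security boils ~{water_boiled_kg(bits):.2e} kg of water")

With that one calibration point, 65 bits lands in the millions of kilograms (a swimming pool) and 105 bits near 10¹⁸ kg (a sea), roughly matching the categories above.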

Post-Quantum of Solace

As far as we know, the algorithms we use for cryptography are functionally uncrackable with all known algorithms that classical computers can run. Quantum computers change this calculus. Instead of transistors and bits, a quantum computer uses the effects of quantum mechanics to perform calculations that just aren’t possible with classical computers. As you can imagine, quantum computers are very difficult to build. However, despite large-scale quantum computers not existing quite yet, computer scientists have already developed algorithms that can only run efficiently on quantum computers. Surprisingly, it turns out that with a sufficiently powerful quantum computer, most of the hard mathematical problems we rely on for Internet security become easy!

Although there are still quantum-skeptics out there, some experts estimate that within 15-30 years these large quantum computers will exist, which poses a risk to every security protocol online. Progress is moving quickly; every few months a more powerful quantum computer is announced.

Luckily, there are cryptography algorithms that rely on different hard math problems that seem to be resistant to attack from quantum computers. These math problems form the basis of so-called quantum-resistant (or post-quantum) cryptography algorithms that can run on classical computers. These algorithms can be used as substitutes for most of our current quantum-vulnerable algorithms.

  • Some quantum-resistant algorithms (such as McEliece and Lamport signatures, the latter sketched below) were invented decades ago, but there’s a reason they aren’t in common use: they lack some of the nice properties of the algorithms we’re currently using, such as small key sizes and efficiency.
  • Some quantum-resistant algorithms require much larger keys to provide 128-bit security.
  • Some are very CPU intensive.
  • And some just haven’t been studied enough to know whether they’re secure.
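To make the trade-offs concrete, here is a minimal Lamport one-time signature in Python, a sketch of the scheme mentioned above rather than production code. Note the costs: the private and public keys each weigh in around 16 KB, and a key pair must never sign two different messages:

import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random 32-byte secrets; the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def message_bits(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one secret from each pair, chosen by the message's hash bits.
    return [sk[i][bit] for i, bit in enumerate(message_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(message_bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"hello post-quantum world")
print(verify(pk, b"hello post-quantum world", sig))  # True
print(verify(pk, b"tampered message", sig))          # False

The security rests only on the hash function, which is why hash-based schemes like this are believed to resist quantum attack, but the keys are thousands of times larger than an elliptic curve key.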

It is possible to swap our current set of quantum-vulnerable algorithms with new quantum-resistant algorithms, but it’s a daunting engineering task. With widely deployed protocols, it is hard to make the transition from something fast and small to something slower, bigger or more complicated without providing concrete user benefits. When exploring new quantum-resistant algorithms, minimizing user impact is of utmost importance to encourage adoption. This is a big deal, because almost all the protocols we use to protect the Internet are vulnerable to quantum computers.

Cryptography-breaking quantum computing is still in the distant future, but we must start the transition to ensure that today’s secure communications are safe from tomorrow’s quantum-powered onlookers; however, that’s not the most timely problem with the Internet. We haven’t addressed that…yet.

Attacking time

Just like DNS, BGP, and HTTP, the Network Time Protocol (NTP) is fundamental to how the Internet works. And like these other protocols, it is completely insecure.

  • Last year, Cloudflare introduced Roughtime as a mechanism for computers to access the current time from a trusted server in an authenticated way.
  • Roughtime is powerful because it provides a way to distribute trust among multiple time servers so that if one server attempts to lie about the time, it will be caught.
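The catch-a-liar property can be illustrated with a toy quorum check. This is our own sketch of the idea, not the Roughtime protocol itself (which chains signed responses so that misbehaviour is provable to third parties):

import statistics

def agreed_time(reports, tolerance=10.0):
    """Median of several servers' Unix timestamps, flagging outliers."""
    midpoint = statistics.median(reports)
    liars = [t for t in reports if abs(t - midpoint) > tolerance]
    if liars:
        print(f"server(s) reporting implausible time: {liars}")
    return midpoint

# Two honest servers and one lying by roughly 2.8 hours:
print(agreed_time([1559300000.2, 1559300000.9, 1559310000.0]))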

However, Roughtime is not exactly a secure drop-in replacement for NTP.

  • Roughtime lacks the complex mechanisms of NTP that allow it to compensate for network latency and yet maintain precise time, especially if the time servers are remote. This leads to imprecise time.
  • Roughtime also involves expensive cryptography that can further reduce precision. This lack of precision makes Roughtime useful for browsers and other systems that need coarse time to validate certificates (most certificates are valid for 3 months or more), but some systems (such as those used for financial trading) require precision to the millisecond or below.

With Roughtime we supported the time protocol of the future, but there are things we can do to help improve the health of security online today.

Some academic researchers, including Aanchal Malhotra of Boston University, have demonstrated a variety of attacks against NTP, including BGP hijacking and off-path User Datagram Protocol (UDP) attacks.

  • Some of these attacks can be avoided by connecting to an NTP server that is close to you on the Internet.
  • However, to bring cryptographic trust to time while maintaining precision, we need something in between NTP and Roughtime.
  • To solve this, it’s natural to turn to the same system of trust that enabled us to patch HTTP and DNS: Web PKI.

Attacking the Web PKI

The Web PKI is similar to the RPKI, but is more widely visible since it relates to websites rather than routing tables.

  • If you’ve ever clicked the lock icon on your browser’s address bar, you’ve interacted with it.
  • The PKI relies on a set of trusted organizations called Certificate Authorities (CAs) to issue certificates to websites and web services.
  • Websites use these certificates to authenticate themselves to clients as part of the TLS protocol in HTTPS.

[Figure: TLS provides encryption and integrity from the client to the server with the help of a digital certificate]

[Figure: TLS connections are safe against MITM, because the client doesn’t trust the attacker’s certificate]

While we were all patting ourselves on the back for moving the web to HTTPS, some researchers managed to find and exploit a weakness in the system: the process for getting HTTPS certificates.

Certificate Authorities (CAs) use a process called domain control validation (DCV) to ensure that they only issue certificates to website owners who legitimately request them.

  • Some CAs do this validation manually, which is secure, but can’t scale to the total number of websites deployed today.
  • More progressive CAs have automated this validation process, but rely on insecure methods (HTTP and DNS) to validate domain ownership.
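Here is roughly what that automated HTTP-based validation looks like, modeled on the ACME HTTP-01 flow (the well-known token path is the ACME convention; the code is a simplified sketch, not any CA’s actual implementation). Because the fetch travels over plain HTTP, a BGP or DNS hijack during validation can trick the CA:

import secrets
import urllib.request

def issue_challenge() -> str:
    # The CA generates a random token and asks the applicant to publish it.
    return secrets.token_urlsafe(32)

def validate(domain: str, token: str) -> bool:
    # The CA fetches the token over plain HTTP. If an attacker can hijack
    # this request in transit, they can pass validation fraudulently.
    url = f"http://{domain}/.well-known/acme-challenge/{token}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode().strip() == token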

Without ubiquitous cryptography in place (DNSSEC may never reach 100% deployment), there is no completely secure way to bootstrap this system. So, let’s look at how to distribute trust using other methods.

One tool at our disposal is the distributed nature of the Cloudflare network.

Cloudflare is global. We have locations all over the world connected to dozens of networks. That means we have different vantage points, resulting in different ways to traverse networks. This diversity can prove an advantage when dealing with BGP hijacking, since an attacker would have to hijack multiple routes from multiple locations to affect all the traffic between Cloudflare and other distributed parts of the Internet. The natural diversity of the network raises the cost of the attacks.

Maintaining a distributed set of connections to the Internet and using them as a quorum is a mighty paradigm for distributing trust, with or without cryptography.

Distributed Trust

This idea of distributing the source of trust is powerful. Last year we announced the Distributed Web Gateway, which:

  • Enables users to access content on the InterPlanetary File System (IPFS), a network structured to reduce the trust placed in any single party.
  • Ensures that even if a participant of the network is compromised, it can’t be used to distribute compromised content, because the network is content-addressed.
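Content addressing fits in a few lines. This sketch uses a bare SHA-256 hex digest where IPFS actually uses multihash-based content identifiers (our simplification), but the tamper-evidence property is identical:

import hashlib

def address(content: bytes) -> str:
    # The address *is* a hash of the content.
    return hashlib.sha256(content).hexdigest()

original = b"hello, distributed web"
addr = address(original)

# A fetched copy is verified against the address it was requested by,
# so a compromised node cannot substitute different content.
assert address(original) == addr
assert address(b"tampered!") != addr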
However, using content-based addressing is not the only way to distribute trust between multiple independent parties.

Another way to distribute trust is to literally split authority between multiple independent parties. We’ve explored this topic before. In the context of Internet services, this means ensuring that no single server can authenticate itself to a client on its own. For example,

  • In HTTPS the server’s private key is the lynchpin of its security. Compromising the owner of the private key (by hook or by crook) gives an attacker the ability to impersonate (spoof) that service. This single point of failure puts services at risk. You can mitigate this risk by distributing the authority to authenticate the service between multiple independently-operated services.
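As a toy illustration of splitting authority, here is a 2-of-2 XOR split of a key in Python. Real deployments use threshold signature schemes rather than raw key splitting (this is our simplification), but the trust property is the same: one compromised share reveals nothing:

import secrets

key = secrets.token_bytes(32)                         # the service's private key
share_a = secrets.token_bytes(32)                     # held by server A
share_b = bytes(k ^ a for k, a in zip(key, share_a))  # held by server B

# Either share alone is indistinguishable from random noise; only
# together do the two servers reconstruct the key.
recombined = bytes(a ^ b for a, b in zip(share_a, share_b))
assert recombined == key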

[Figure: TLS doesn’t protect against server compromise]

[Figure: With distributed trust, multiple parties combine to protect the connection]

[Figure: An attacker that has compromised one of the servers cannot break the security of the system]

The Internet barge is old and slow, and we’ve only been able to improve it through the meticulous process of patching it piece by piece. Another option is to build new secure systems on top of this insecure foundation. IPFS is doing this, and IPFS is not alone in its design. There has been more research into secure systems with decentralized trust in the last ten years than ever before.

The result is radical new protocols and designs that use exotic new algorithms. These protocols do not supplant those at the core of the Internet (like TCP/IP), but instead, they sit on top of the existing Internet infrastructure, enabling new applications, much like HTTP did for the web.

Gaining Traction

Some of the most innovative technical projects were considered failures because they couldn’t attract users. New technology has to bring tangible benefits to users to sustain it: useful functionality, content, and a decent user experience. Distributed projects, such as IPFS and others, are gaining popularity, but have not found mass adoption. This is a chicken-and-egg problem. New protocols have a high barrier to entry (users have to install new software) and, because of the small audience, there is less incentive to create compelling content. Decentralization and distributed trust are nice security features to have, but they are not products. Users still need to get some benefit out of using the platform.

An example of a system that broke this cycle is the web. In 1992 the web was hardly a cornucopia of awesomeness. What helped drive the dominance of the web was its users.

  • The growth of the user base meant more incentive for people to build services, and the availability of more services attracted more users. It was a virtuous cycle.
  • It’s hard for a platform to gain momentum, but once the cycle starts, a flywheel effect kicks in to help the platform grow.

The Distributed Web Gateway project Cloudflare launched last year in Crypto Week is our way of exploring what happens if we try to kickstart that flywheel. By providing a secure, reliable, and fast interface from the classic web with its two billion users to the content on the distributed web, we give the fledgling ecosystem an audience.

  • If the advantages provided by building on the distributed web are appealing to users, then the larger audience will help these services grow in popularity.
  • This is somewhat reminiscent of how IPv6 gained adoption. It started as a niche technology only accessible using IPv4-to-IPv6 translation services.
  • IPv6 adoption has now grown so much that it is becoming a requirement for new services. For example, Apple is requiring that all apps work in IPv6-only contexts.

Eventually, as user-side implementations of distributed web technologies improve, people may move to using the distributed web natively rather than through an HTTP gateway. Or they may not! By leveraging Cloudflare’s global network to give users access to new technologies based on distributed trust, we give these technologies a better chance at gaining adoption.

Happy Crypto Week

At Cloudflare, we always support new technologies that help make the Internet better. Part of helping make a better Internet is scaling the systems of trust that underpin web browsing and protecting them from attack. We provide the tools to create better systems of assurance with fewer points of vulnerability. We work with academic security researchers to get a vision of the future and engineer away vulnerabilities before they can become widespread. It’s a constant journey.

Cloudflare knows that none of this is possible without the work of researchers. From award-winning researchers publishing papers in top journals to the blog posts of clever hobbyists, dedicated and curious people are moving the state of knowledge of the world forward. However, the push to publish new and novel research sometimes holds researchers back from committing enough time and resources to fully realize their ideas. Great research can be powerful on its own, but it can have an even broader impact when combined with practical applications. We relish the opportunity to stand on the shoulders of these giants and use our engineering know-how and global reach to expand on their work to help build a better Internet.

So, to all of you dedicated researchers, thank you for your work! Crypto Week is yours as much as ours. If you’re working on something interesting and you want help to bring the results of your research to the broader Internet, please contact us at [email protected]. We want to help you realize your dream of making the Internet safe and trustworthy.

Security Compliance at Cloudflare

Post Syndicated from Rebecca Rogers original https://blog.cloudflare.com/security-compliance-at-cloudflare/

Cloudflare believes trust is fundamental to helping build a better Internet. One way Cloudflare is helping our customers earn their users’ trust is through industry standard security compliance certifications and regulations.

Security compliance certifications are reports created by independent, third-party auditors that validate and document a company’s commitment to security. These external auditors conduct a rigorous review of a company’s technical environment and evaluate whether there are thorough controls – or safeguards – in place to protect the security, confidentiality, and availability of information stored and processed in the environment. SOC 2 was established by the American Institute of CPAs and is important to many of our U.S. customers, as it is a standardized set of requirements a company must meet in order to comply. PCI and ISO 27001 are international standards. Cloudflare cares about achieving certifications because our adherence to these standards gives customers across the globe confidence that we are committed to security. So, the Security team has been hard at work obtaining these meaningful compliance certifications.

Since the beginning of this year, we have renewed our PCI DSS certification in February, achieved SOC 2 Type 1 compliance in March, and obtained our ISO 27001 certification in April. Today, we are proud to announce that we are SOC 2 Type 2 compliant!

Our SOC 2 Journey

SOC 2 is a compliance certification that focuses on internal controls of an organization related to five trust services criteria. These criteria are: Security, Confidentiality, Availability, Processing Integrity, and Privacy. Each criterion presents a set of control standards that are established by the American Institute of Certified Public Accountants (AICPA) and are to be used to implement controls on the information systems of a company.

Cloudflare’s Security team made the decision to evaluate our company’s controls against three of the five criteria. We decided to pursue our SOC 2 compliance by evaluating our controls around Security, Confidentiality, and Availability across our entire organization. We first worked across the company to design and implement strong controls that meet the requirements set forth by the AICPA. This took effort and collaboration between teams in Engineering, IT, Legal, and HR to create strong controls that also make sense for our environment. Our external auditors then performed an audit of Cloudflare’s controls, and determined our security controls were suitably designed as of January 31, 2019.

Three months after obtaining SOC 2 Type 1 compliance, the next step for Cloudflare was to demonstrate that the controls we designed were actually operating effectively. Our SOC 2 Type 2 audit tested the operating effectiveness of Cloudflare’s security controls over this three-month period. Cloudflare’s SOC 2 Type 2 report is available upon request and describes the design of Cloudflare’s internal control framework around security, confidentiality, and availability, and the products and services in scope for our certification.

What else?

SOC 3

In addition to SOC 2 Type 2, Cloudflare also obtained our SOC 3 report from our independent external auditors. SOC 3 is a report for public consumption containing the external auditor’s opinion and a narrative of Cloudflare’s control environment. Cloudflare’s Security team decided to obtain our SOC 3 report so that all customers and prospects could access our auditor’s opinion of our implementation of security, confidentiality, and availability controls.

ISO/IEC 27001:2013

Prior to Cloudflare’s SOC audit, Cloudflare was working to mature our organization’s Information Security Management System in order to obtain our ISO/IEC 27001:2013 certification. ISO 27001 is an international management system standard developed by the International Organization for Standardization (ISO) and is an industry-wide accepted information security certification. Cloudflare’s commitment to achieving ISO/IEC 27001:2013 certification demonstrates to our customers that we are committed to preserving the confidentiality, integrity, and availability of information on a global scale.

The primary focus of the ISO 27001:2013 requirements is the implementation of an Information Security Management System (ISMS) and a comprehensive risk management program. Cloudflare worked across the organization to implement the ISMS and ensure sensitive company information remains secure.

Cloudflare’s ISMS was assessed by a third-party auditor, A-LIGN, and we received our ISO 27001:2013 certification in April 2019. Cloudflare’s ISO 27001:2013 certificate is also available to customers upon request.

PCI DSS v3.2.1

Although Cloudflare has been PCI certified as a Level 1 Service Provider since 2014, our latest certification adheres to the newest security standards. The Payment Card Industry Data Security Standard (PCI DSS) is a global financial information security standard that ensures customers’ credit card data is safe and secure.

Maintaining PCI DSS compliance is important for Cloudflare because not only are we evaluated as a merchant, but we are also a service provider. Cloudflare’s WAF product satisfies PCI requirement 6.6, and may be used by Cloudflare’s customers as a solution to prevent web-based attacks in front of public-facing web applications.

Early in 2019, Cloudflare was audited by an independent Qualified Security Assessor to validate our adherence to the PCI DSS security requirements. Cloudflare’s latest PCI Attestation of Compliance (AOC) is available to customers upon request.

Compliance Page on the Website

Cloudflare is committed to helping our customers earn their users’ trust by ensuring our products are secure. The Security team is committed to adhering to security compliance certifications and regulations that maintain the security, confidentiality, and availability of company and client information.
In order to help our customers keep track of the latest certifications, Cloudflare has launched our Compliance certification page – www.cloudflare.com/compliance. Today, you can view our status on all compliance certifications and download our SOC 3 report.

A free Argo Tunnel for your next project

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/a-free-argo-tunnel-for-your-next-project/

Argo Tunnel lets you expose a server to the Internet without opening any ports. The service runs a lightweight process on your server that creates outbound tunnels to the Cloudflare network. Instead of managing DNS, network, and firewall complexity, Argo Tunnel helps administrators serve traffic from their origin through Cloudflare with a single command.

We built Argo Tunnel to remove the burden of securing and connecting servers to the Internet. This new model makes it easier to run a service in multi-cloud and hybrid deployments by replacing manual and error-prone work with a process that adds intelligence to the last-mile between Cloudflare and your origins or clusters. However, the service was previously only available to users with Cloudflare accounts. We want to make Argo Tunnel more accessible for any project.

Starting today, any user, even those without a Cloudflare account, can try this new method of connecting their server to the Internet. Argo Tunnel can now be used in a free model that will create a new URL, known only to you, that will proxy traffic to your server. We’re excited to make connecting a server to the Internet more accessible for everyone.

What is Argo Tunnel?

Argo Tunnel replaces legacy models of connecting a server to the Internet with a secure, persistent connection to Cloudflare. Since Cloudflare first launched in 2009, customers have added their site to our platform by changing their name servers at their domain’s registrar to ones managed by Cloudflare. Administrators then create a DNS record in our dashboard that points visitors to their domain to their origin server.

When requests are made for those domains, the queries hit our data centers first. We’re able to use that position to block malicious traffic like DDoS attacks. However, if attackers discovered that origin IP, they could bypass Cloudflare’s security features and attack the server directly. Adding additional protections against that risk introduced more hassle and configuration.

One year ago, Cloudflare launched Argo Tunnel to solve those problems. Argo Tunnel connects your origin server to the Cloudflare network by running a lightweight daemon on your machine that only makes outbound calls. The process generates DNS records in the dashboard for you, removing the need to manually configure records and origin IP addresses.

Most importantly, Argo Tunnel helps shield your origin by simplifying the firewall rules you need to configure. Argo Tunnel makes outbound calls to the Cloudflare network and proxies requests back to your server. You can then disable all ingress to the machine and ensure that Cloudflare’s security features always stand between your server and the rest of the Internet. In addition to making it secure, we made it fast. The connection uses our Argo Smart Routing technology to find the most performant path from your visitors to your origin.

How can I use the free version?

Argo Tunnel is now available to all users without a Cloudflare account. All that is needed is the Cloudflare daemon, cloudflared, running on your machine. With a single command, cloudflared will generate a random subdomain of “trycloudflare.com” and begin proxying traffic to your server.

  1. Install cloudflared on your web server or laptop; instructions are available here. If you have an older copy, you’ll first need to update to the latest version (2019.6.0).
  2. Launch a web server.
  3. Run the terminal command below to start a free tunnel. cloudflared will begin proxying requests to your localhost server; no additional flags needed.

$ cloudflared tunnel

The command above will proxy traffic to port 8080 by default, but you can specify a different port with the --url flag:

$ cloudflared tunnel --url localhost:7000

cloudflared will generate a random subdomain when connecting to the Cloudflare network and print it in the terminal for you to use. This will make whatever server you are running on your local machine accessible to the world through a public URL only you know. The output will resemble the following:

[Screenshot: terminal output showing the randomly generated trycloudflare.com URL]

How can I use it?

  • Run a web server on your laptop to share a project with collaborators on different networks
  • Test mobile browser compatibility for a new site
  • Perform speed tests from different regions

Why is it free?

We want more users to experience the speed and security improvements of Argo Tunnel (and Argo Smart Routing). We hope you’ll feel the same way about those benefits after testing it with the free version and that you’ll start using it for your production sites.

We also don’t guarantee any SLA or uptime for the free service – we plan to test new Argo Tunnel features and improvements on these free tunnels. This provides us with a group of connections to test before we deploy to production customers. Free tunnels are meant to be used for testing and development, not for deploying a production website.

What’s next?

You can read our guide here to start using the free version of Argo Tunnel. Got feedback? Please send it here.

Protecting Project Galileo websites from HTTP attacks

Post Syndicated from Maxime Guerreiro original https://blog.cloudflare.com/protecting-galileo-websites/

Yesterday, we celebrated the fifth anniversary of Project Galileo. More than 550 websites are part of this program, and they have something in common: each and every one of them has been subject to attacks in the last month. In this blog post, we will look at the security events we observed between 23 April 2019 and 23 May 2019.

Project Galileo sites are protected by the Cloudflare Firewall and Advanced DDoS Protection which contain a number of features that can be used to detect and mitigate different types of attack and suspicious traffic. The following table shows how each of these features contributed to the protection of sites on Project Galileo.

Firewall Feature          Requests Mitigated    Distinct originating IPs    Sites Affected (approx.)
Firewall Rules            78.7M                 396.5K                      ~ 30
Security Level            41.7M                 1.8M                        ~ 520
Access Rules              24.0M                 386.9K                      ~ 200
Browser Integrity Check   9.4M                  32.2K                       ~ 500
WAF                       4.5M                  163.8K                      ~ 200
User-Agent Blocking       2.3M                  1.3K                        ~ 15
Hotlink Protection        2.0M                  686.7K                      ~ 40
HTTP DoS                  1.6M                  360                         1
Rate Limit                623.5K                6.6K                        ~ 15
Zone Lockdown             9.7K                  2.8K                        ~ 10

WAF (Web Application Firewall)

Although not the most impressive in terms of blocked requests, the WAF is the most interesting as it identifies and blocks malicious requests, based on heuristics and rules that are the result of seeing attacks across all of our customers and learning from those. The WAF is available to all of our paying customers, protecting them against 0-days, SQL/XSS exploits and more. For the Project Galileo customers the WAF rules blocked more than 4.5 million requests in the month that we looked at, matching over 130 WAF rules and approximately 150k requests per day.

[Figure: Heat map showing the attacks seen on customer sites (rows) per day (columns)]

This heat map may initially appear confusing but reading one is easy once you know what to expect so bear with us! It is a table where each line is a website on Project Galileo and each column is a day. The color represents the number of requests triggering WAF rules – on a scale from 0 (white) to a lot (dark red). The darker the cell, the more requests were blocked on this day.

We observe malicious traffic on a daily basis for most websites we protect. The average Project Galileo site saw malicious traffic on 27 days of the month observed, and for almost 60% of the sites we noticed daily events.

Fortunately, the vast majority of websites only receive a few malicious requests per day, likely from automated scanners. In some cases, we notice a net increase in attacks against some websites – and a few websites are under a constant influx of attacks.

[Figure: Heat map showing the attacks blocked for each WAF rule (rows) per day (columns)]

This heat map shows the WAF rules that blocked requests by day. At first, it seems some rules are useless as they never match malicious requests, but this plot makes it obvious that some attack vectors become active all of a sudden (isolated dark cells). This is especially true for 0-days, malicious traffic starts once an exploit is published and is very active on the first few days. The dark active lines are the most common malicious requests, and these WAF rules protect against things like XSS and SQL injection attacks.

DoS (Denial of Service)

A DoS attack prevents legitimate visitors from accessing a website by flooding it with bad traffic. Due to the way Cloudflare works, websites protected by Cloudflare are immune to many DoS vectors out of the box. We block layer 3 and 4 attacks, which include SYN floods and UDP amplifications. DNS nameservers, often described as the Internet’s phone book, are fully managed and protected by Cloudflare, so visitors always know how to reach the websites.

[Figure: Line plot – requests per second to a website under DoS attack]

Can you spot the attack?

As for layer 7 attacks (for instance, HTTP floods), we rely on Gatebot, an automated tool to detect, analyse and block DoS attacks, so you can sleep. The graph shows the requests per second we received on a zone, and whether or not it reached the origin server. As you can see, the bad traffic was identified automatically by Gatebot, and more than 1.6 million requests were blocked as a result.

Firewall Rules

For websites with specific requirements we provide tools that allow customers to block traffic to precisely fit their needs. Customers can easily implement complex logic using Firewall Rules to filter out specific chunks of traffic, or block IPs, networks, and countries using Access Rules – and Project Galileo sites have done just that. Let’s see a few examples.

Firewall Rules allows website owners to challenge or block as much or as little traffic as they desire, and this can be done as a surgical tool “block just this request” or as a general tool “challenge every request”.

For instance, a well-known website used Firewall Rules to prevent twenty IPs from fetching specific pages. Three of these IPs were then used to send a total of 4.5 million requests over a short period of time, and the following chart shows the requests seen for this website. When this happened, Cloudflare mitigated the traffic, ensuring that the website remained available.

[Figure: Cumulative line plot – requests per second to a website]

Another website, built with WordPress, is using Cloudflare to cache their webpages. As POST requests are not cacheable, they always hit the origin machine and increase load on the origin server – that’s why this website is using firewall rules to block POST requests, except on their administration backend. Smart!
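To make that concrete, here is a hypothetical rule of that shape. The expression below is our own construction using field names from Cloudflare’s Firewall Rules expression language (http.request.method, http.request.uri.path) with a WordPress admin path as the assumed backend; paired with a block action, it stops POST requests everywhere except the admin paths:

http.request.method eq "POST" and not http.request.uri.path contains "/wp-admin"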

Website owners can also deny or challenge requests based on the visitor’s IP address, Autonomous System Number (ASN), or country. Dubbed Access Rules, these rules are enforced on all pages of a website – hassle-free.

For example, a news website is using Cloudflare’s Access Rules to challenge visitors from countries outside of their geographic region who are accessing their website. We enforce the rules globally even for cached resources, and take care of GeoIP database updates for them, so they don’t have to.

The Zone Lockdown utility restricts a specific URL to specific IP addresses. This is useful to protect an internal but public path from being accessed by external IP addresses. A non-profit based in the United Kingdom is using Zone Lockdown to restrict access to their WordPress admin panel and login page, hardening their website without relying on unofficial plugins. Although it does not prevent very sophisticated attacks, it shields them against automated attacks and phishing attempts – even if credentials are stolen, they can’t be used as easily.

Rate Limiting

Cloudflare acts as a CDN, caching resources and happily serving them, reducing bandwidth used by the origin server … and indirectly the costs. Unfortunately, not all requests can be cached and some requests are very expensive to handle. Malicious users may abuse this to increase load on the server, and website owners can rely on our Rate Limit to help them: they define thresholds, expressed in requests over a time span, and we make sure to enforce this threshold. A non-profit fighting against poverty relies on rate limits to protect their donation page, and we are glad to help!
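Conceptually, the enforcement is a sliding window per client. The sketch below is our own illustration of the idea in Python, not Cloudflare’s implementation; the window and threshold values are arbitrary:

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
THRESHOLD = 100          # max requests per client per window

recent = defaultdict(deque)

def allow(client_ip):
    now = time.monotonic()
    q = recent[client_ip]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()                  # forget requests outside the window
    if len(q) >= THRESHOLD:
        return False                 # over threshold: block or challenge
    q.append(now)
    return True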

Security Level

Last but not least, one of Cloudflare’s greatest assets is our threat intelligence. With such a wide lens on the threat landscape, Cloudflare combines our Firewall data with machine learning to curate our IP reputation databases. This data is provided to all Cloudflare customers, and is configured through our Security Level feature. Customers may then define their threshold sensitivity, ranging from Essentially Off to I’m Under Attack. For every incoming request, we ask visitors to complete a challenge if the score is above a customer-defined threshold. This system alone is responsible for 25% of the requests we mitigated: it’s extremely easy to use, and it constantly learns from the other protections.

Conclusion

When taken together, the Cloudflare Firewall features provide our Project Galileo customers comprehensive and effective security that enables them to ensure their important work is available. The majority of security events were handled automatically, and this is our strength – security that is always on, always available, always learning.

Updates to Serverless Architectural Patterns and Best Practices

Post Syndicated from Drew Dennis original https://aws.amazon.com/blogs/architecture/updates-to-serverless-architectural-patterns-and-best-practices/

As we sail past the halfway point between re:Invent 2018 and re:Invent 2019, I’d like to revisit some of the recent serverless announcements we’ve made. These are all complementary to the patterns discussed in the re:Invent architecture track’s Serverless Architectural Patterns and Best Practices session.

AWS Event Fork Pipelines

AWS Event Fork Pipelines was announced in March 2019. Many customers use asynchronous event-driven processing in their serverless applications to decouple application components and address high concurrency needs. And in doing so, they often find themselves needing to backup, search, analyze, or replay these asynchronous events. That is exactly what AWS Event Fork Pipelines aims to achieve. You can plug them into a new or existing SNS topic used by your application and immediately address retention and compliance needs, gain new business insights, or even improve your application’s disaster recovery abilities.

AWS Event Fork Pipelines is a suite of three applications. The first application addresses event storage and backup needs by writing all events to an S3 bucket where they can be queried with services like Amazon Athena. The second is a search and analytics pipeline that delivers events to a new or existing Amazon ES domain, enabling search and analysis of your events. Finally, the third application is an event replay pipeline that can be used to reprocess messages should a downstream failure occur in your application. AWS Event Fork Pipelines is packaged as AWS Serverless Application Model (SAM) templates and is available in the AWS Serverless Application Repository (SAR). Check out our example e-commerce application on GitHub.

Amazon API Gateway Serverless Developer Portal

If you publish APIs for developers allowing them to build new applications and capabilities with your data, you understand the need for a developer portal. Also, in March 2019, we announced some significant upgrades to the API Gateway Serverless Developer Portal. The portal’s front end is written in React and is designed to be fully customizable.

The API Gateway Serverless Developer Portal is also available in GitHub and the AWS SAR. As you can see from the architecture diagram below, it is integrated with Amazon Cognito User Pools to allow developers to sign up, receive an API Key, and register for one or more of your APIs. You can now also enable administrative scenarios from your developer portal by logging in as users belonging to the portal’s Admin group, which is created when the portal is initially deployed to your account. For example, you can control which APIs appear in a customer’s developer portal, enable SDK downloads, solicit developer feedback, and even publish updates for APIs that have been recently revised.

AWS Lambda with Amazon Application Load Balancer (ALB)

Serverless microservices have been built by our customers for quite a while, with AWS Lambda and Amazon API Gateway. At re:Invent 2018, during Dr. Werner Vogels’ keynote, a new approach to serverless microservices was announced: Lambda functions as ALB targets.

ALB’s support for Lambda targets gives customers the ability to deploy serverless code behind an ALB, alongside servers, containers, and IP addresses. With this feature, ALB path and host-based routing can be used to direct incoming requests to Lambda functions. Also, ALB can now provide an entry point for legacy applications to take on new serverless functionality, and enable migration scenarios from monolithic legacy server or container-based applications.

Use cases for Lambda targets for ALB include adding new functionality to an existing application that already sits behind an ALB. This could be request monitoring by sending HTTP headers to Elasticsearch clusters, or implementing controls that manage cookies. Check out our demo of this new feature. For additional details, take a look at the feature’s documentation.
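A Lambda function serving as an ALB target receives the HTTP request as a JSON event and must return a response object in the documented ALB shape. A minimal Python handler looks roughly like this (the body text is ours):

def handler(event, context):
    # `event` carries the HTTP request as delivered by the ALB integration.
    path = event.get("path", "/")
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "text/plain"},
        "body": f"Hello from Lambda behind ALB, path={path}\n",
    }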

Security Overview of AWS Lambda Whitepaper

Finally, I’d be remiss if I didn’t point out the great work many of my colleagues have done in releasing the Security Overview of AWS Lambda Whitepaper. It is a succinct and enlightening read for anyone wishing to better understand the Lambda runtime environment, function isolation, or data paths taken for payloads sent to the Lambda service during synchronous and asynchronous invocations. It also has some great insight into compliance, auditing, monitoring, and configuration management of your Lambda functions. A must read for anyone wishing to better understand the overall security of AWS serverless applications.

I look forward to seeing everyone at re:Invent 2019 for more exciting serverless announcements!

About the author

Drew Dennis is a Global Solutions Architect with AWS based in Dallas, TX. He enjoys all things Serverless and has delivered the Architecture Track’s Serverless Patterns and Best Practices session at re:Invent the past three years. Today, he helps automotive companies with autonomous driving research on AWS, connected car use cases, and electrification.

Technology’s Promise – Highlights from DEF CON China 1.0

Post Syndicated from Claire Tsai original https://blog.cloudflare.com/technologys-promise-def-con-china-1-0-highlights/

DEF CON is one of the largest and oldest security conferences in the world. Last year, it launched a beta event in China in hopes of bringing the local security communities closer together. This year, the organizer made things official by introducing DEF CON China 1.0 with a promise to build a forum for China where everyone can gather, connect, and grow together.

Themed “Technology’s Promise”, DEF CON China kicked off on 5/30 in Beijing and attracted participants of all ages. Watching young participants test, play and tinker with new technologies with such curiosity and excitement absolutely warmed our hearts!

It was a pleasure to participate in DEF CON China 1.0 this year and connect with local communities. There was great synergy as we exchanged ideas and learnings on cybersecurity topics. Did I mention we also spoiled ourselves with the warm hospitality, wonderful food, live music, and amazing crowd while in Beijing?

[Figure: Event Highlights: Cloudflare Team Meets with DEF CON China Visitors and Organizers (DEF CON Founder Jeff Moss and Baidu Security General Manager Jefferey Ma)]


[Figure: Youngest DEF CON China Participant Explores New Technologies on the Eve of International Children’s Day (Source: Abhinav SP | #BugZee, DEFCON China)]


[Figure: The Iconic DEF CON Badge, Designed by Joe Grand, is a Flexible Printed Circuit Board that Lights up the Interactive “Tree of Promise”]


[Figure: The Capture The Flag (CTF) Contest is a Continuation of One of the Oldest Contests at DEF CON, Dating Back to DEF CON 4 in 1996]


Cloudflare’s Mission is to Help Build a Better Internet

Founded in 2009, Cloudflare is a global company with 180 data centers across 80 countries. Our Performance and Security Services work in conjunction to reduce the latency of websites, mobile applications, and APIs end-to-end, while protecting against DDoS attacks, abusive bots, and data breaches.

We are looking forward to growing our presence in the region and continuing to serve our customers, partners, and prospects. Sign up for a free account now for a faster and safer Internet experience: cloudflare.com/sign-up.

We’re Hiring

We are a team with global vision and local insight committed to building a better Internet. We are hiring in Beijing and globally. Check out the opportunities here: cloudflare.com/careers and join us at Cloudflare today!

[Figure: The Cloudflare Team from Beijing, Singapore, and San Francisco]

Stopping SharePoint’s CVE-2019-0604

Post Syndicated from Georgie Yoxall original https://blog.cloudflare.com/stopping-cve-2019-0604/

On Saturday, 11th May 2019, we got the news of a critical web vulnerability being actively exploited in the wild by advanced persistent threats (APTs), affecting Microsoft’s SharePoint server (versions 2010 through 2019).

This was CVE-2019-0604, a Remote Code Execution vulnerability in Microsoft SharePoint Servers which was not previously known to be exploitable via the web.

Several cyber security centres including the Canadian Centre for Cyber Security and Saudi Arabia’s National Center put out alerts for this threat, indicating it was being exploited to download and execute malicious code which would in turn take complete control of servers.

The affected software versions:

  • Microsoft SharePoint Foundation 2010 Service Pack 2
  • Microsoft SharePoint Foundation 2013 Service Pack 1
  • Microsoft SharePoint Server 2010 Service Pack 2
  • Microsoft SharePoint Server 2013 Service Pack 1
  • Microsoft SharePoint Enterprise Server 2016
  • Microsoft SharePoint Server 2019

Introduction

The vulnerability was initially given a critical CVSS v3 rating of 8.8 on the Zero Day Initiative advisory (however, the advisory states authentication is required). This would imply that only an insider threat, someone who has authorisation within SharePoint, such as an employee on the local network, could exploit the vulnerability.

We discovered that was not always the case, since there were paths which could be reached without authentication, via external facing websites. Without the authentication requirement, the NIST NVD calculator determines the actual base score to be a 9.8 severity out of 10.

As part of our internal vulnerability scoring process, we decided this was critical enough to require immediate attention, for a number of reasons. First, it was a critical CVE affecting a major software ecosystem, primarily aimed at enterprise businesses. Second, there appeared to be no stable patch available at the time. And third, there were several reports of it being actively exploited in the wild by APTs.

We deployed an initial firewall rule the same day, rule 100157. This allowed us to analyse traffic and request frequency before making a decision on the default action. At the same time, it gave our customers the ability to protect their online assets from these attacks in advance of a patch.

We observed the first probes at around 4:47 PM on the 11th of May, which went on until 9:12 PM. We have reason to believe these were not successful attacks, and were simply reconnaissance probes at this point.

The vulnerable hosts exposed to the web were largely high-traffic enterprise businesses, which makes sense based on the graph below from W3Techs.

Figure 1: Depicts SharePoint’s market position (Image attributed to W3Techs)

The publicly accessible proof of concept exploit code found online did not work out of the box. Therefore it was not immediately widely used, since it required weaponisation by a more skilled adversary.

We give customers advance notice of most rule changes. However, in this case, we decided that the risk was high enough that we needed to act upon this, and so made the decision to make an immediate rule release to block this malicious traffic for all of our customers on May 13th.

We were confident enough in going default block here, as the requests we’d analysed so far did not appear to be legitimate. We took several factors into consideration to determine this, some of which are detailed below.

The bulk of the requests we’d seen so far (a couple of hundred) originated from cloud instances within the same IP ranges, and were enumerating the subdomains of many websites over a short period of time.

This is a fairly common scenario. Malicious actors will perform reconnaissance using various methods in an attempt to find a vulnerable host to attack, before actually exploiting the vulnerability. The query string parameters also appeared suspicious, having only the ones necessary to exploit the vulnerability and nothing more.

The rule was deployed in default block mode protecting our customers, before security researchers discovered how to weaponise the exploit and before a stable patch from Microsoft was widely adopted.

The vulnerability

Zero Day Initiative did a good job in drilling down on the root cause of this vulnerability, and how it could potentially be exploited in practice.

From debugging the .NET executable, they discovered the following functions could eventually reach the deserialisation call, and so may potentially be exploitable.

Figure 2: Depicts the affected function calls (Image attributed to Trend Micro Zero Day Initiative)

The most interesting ones here are the “.Page_Load” and “.OnLoad” methods, as these can be directly accessed by visiting a webpage. However, only one appears not to require authentication: ItemPicker.ValidateEntity, which can be reached via the Picker.aspx webpage.

The vulnerability lies in the following function calls:

EntityInstanceIdEncoder.DecodeEntityInstanceId(encodedId);
Microsoft.SharePoint.BusinessData.Infrastructure.EntityInstanceIdEncoder.DecodeEntityInstanceId(pe.Key);

Figure 3: PickerEntity Validation (Image attributed to Trend Micro Zero Day Initiative)

The ValidateEntity function takes “pe” (a PickerEntity object) as an argument.

After checking that pe.Key is not null and that it matches the necessary format, via a call to

Microsoft.SharePoint.BusinessData.Infrastructure.EntityInstanceIdEncoder.IsEncodedIdentifier(pe.Key)

it defines an identifierValues object from the result of

Microsoft.SharePoint.BusinessData.Infrastructure.EntityInstanceIdEncoder.DecodeEntityInstanceId(pe.Key)

which is where the actual deserialisation takes place.

Otherwise, it raises an AuthenticationException, which displays an error page to the user.

The affected function call can be seen below. First, there is a conditional check on the encodedId argument passed to DecodeEntityInstanceId(): if it begins with __, execution continues on to deserialise the XML schema with xmlSerializer.Deserialize().

Figure 4: DecodeEntityInstanceId leading to the deserialisation (Image attributed to Trend Micro Zero Day Initiative)

When reached, the encodedId (in the form of an XML serialised payload) would be deserialised, and eventually executed on the system in a SharePoint application pool context, leading to a full system compromise.

One such XML payload, which spawns a calculator (calc.exe) instance via a call to the command prompt (cmd.exe), is shown below:

<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:System="clr-namespace:System;assembly=mscorlib"
    xmlns:Diag="clr-namespace:System.Diagnostics;assembly=system">
    <ObjectDataProvider x:Key="LaunchCalch" ObjectType="{x:Type Diag:Process}" MethodName="Start">
        <ObjectDataProvider.MethodParameters>
            <System:String>cmd.exe</System:String>
            <System:String>/c calc.exe</System:String>
        </ObjectDataProvider.MethodParameters>
    </ObjectDataProvider>
</ResourceDictionary>

Analysis

When we first deployed the rule in log mode, we did not initially see many hits, other than around a hundred probes later that evening.

We believe this was largely due to the unknowns of the vulnerability and its exploitation, as a number of conditions that were not yet widely known had to be met to craft a working exploit.

It wasn’t until after we had set the rule in default drop mode that we saw the attacks really start to ramp up. On Monday the 13th, we observed our first exploit attempts, and on the 14th we saw what we believe to be individuals manually attempting to exploit sites for this vulnerability.

Given that this happened over a weekend, organisations realistically had one working day to roll out a patch before malicious actors started attempting to exploit this vulnerability.

Figure 5: Depicts requests matched, rule 100157 was set as default block early 13th May.

Further into the week, we started seeing smaller spikes for the rule. On the 16th of May, the same day the UK’s NCSC put out an alert reporting highly successful exploitation attempts against UK organisations, thousands of requests were dropped, primarily aimed at larger enterprises and government entities.

This is often the nature of such targeted attacks: malicious actors will try to automate exploits to have the biggest possible impact, and that’s exactly what we saw here.

So far into our analysis, we’ve seen malicious hits for the following paths:

  • /_layouts/15/Picker.aspx
  • /_layouts/Picker.aspx
  • /_layouts/15/downloadexternaldata.aspx

The bulk of attacks we’ve seen have been targeting the unauthenticated Picker.aspx endpoint as one would expect, using the ItemPickerDialog type:

/_layouts/15/picker.aspx?PickerDialogType=Microsoft.SharePoint.Portal.WebControls.ItemPickerDialog
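If you manage your own web infrastructure, one quick way to check whether you’ve been probed is to search your access logs for the paths above. Below is a minimal sketch that assumes a Linux reverse proxy in front of SharePoint writing combined-format logs to /var/log/nginx/access.log; the log location and format are assumptions, so adjust them for your environment.

# Search access logs for requests to the paths known to be probed
# for CVE-2019-0604 (case-insensitive, covers both layout directories)
grep -iE "/_layouts/(15/)?(picker|downloadexternaldata)\.aspx" /var/log/nginx/access.log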

We expect the vulnerability to be exploited more once a complete exploit is publicly available, so it is important to update your systems if you have not already done so. We also recommend isolating these systems to the internal network in cases where they do not need to be externally facing, to avoid unnecessary attack surface.

Sometimes it’s not practical to isolate such systems to an internal network; this is usually the case for global organisations with teams spanning multiple locations. In these cases, we highly recommend putting these systems behind an access management solution, such as Cloudflare Access. This gives you granular control over who can access resources, and has the additional benefit of auditing user access.

Microsoft initially released a patch, but it did not address all vulnerable functions, so customers were left exposed, with the only options being to virtually patch their systems or shut their services down entirely until an effective fix became available.

This is a prime example of why firewalls like Cloudflare’s WAF are critical to keeping a business online. Sometimes patching is not an option, and even when it is, it can take time to roll out effectively across an enterprise.

How to share encrypted AMIs across accounts to launch encrypted EC2 instances

Post Syndicated from Nishit Nagar original https://aws.amazon.com/blogs/security/how-to-share-encrypted-amis-across-accounts-to-launch-encrypted-ec2-instances/

Do you encrypt your Amazon Machine Images (AMIs) with AWS Key Management Service (AWS KMS) customer master keys (CMKs) for regulatory or compliance reasons? Do you launch instances with encrypted root volumes? Do you create a golden AMI and distribute it to other accounts in your organization for standardizing application-specific Amazon Elastic Compute Cloud (Amazon EC2) instance launches? If so, then we have good news for you!

We’re happy to announce that you can now share AMIs encrypted with customer-managed CMKs across accounts with a single API call. Additionally, you can launch an EC2 instance directly from an encrypted AMI that has been shared with you. Previously, this was possible for unencrypted AMIs only. Extending this capability to encrypted AMIs simplifies your AMI distribution process and reduces the snapshot storage cost associated with the older method of sharing encrypted AMIs that resulted in a copy in each of your accounts.

In this post, we demonstrate how you can share an encrypted AMI between accounts and launch an encrypted, Amazon Elastic Block Store (Amazon EBS) backed EC2 instance from the shared AMI.

Prerequisites for sharing AMIs encrypted with a customer-managed CMK

Before you can begin sharing the encrypted AMI and launching an instance from it, you’ll need to set up your AWS KMS key policy and AWS Identity and Access Management (IAM) policies.

For this walkthrough, you need two AWS accounts:

  1. A source account in which you build a custom AMI and encrypt the associated EBS snapshots.
  2. A target account in which you launch instances using the shared custom AMI with encrypted snapshots.

In this example, I’ll use fictitious account IDs 111111111111 and 999999999999 for the source account and the target account, respectively. In addition, you’ll need to create an AWS KMS customer master key (CMK) in the source account in the source region. For simplicity, I’ve created an AWS KMS CMK with the alias cmkSource under account 111111111111 in us-east-1. I’ll use this cmkSource to encrypt my AMI (ami-12345678) that I’ll share with account 999999999999. As you go through this post, be sure to change the account IDs, source account, AWS KMS CMK, and AMI ID to match your own.
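If you prefer the CLI over the console for this prerequisite, the key and alias can be created roughly as sketched below. This is a minimal sketch rather than part of the official walkthrough; the description text is illustrative.

# Create a customer-managed CMK in us-east-1 and capture its key ID
KEY_ID=$(aws kms create-key --region us-east-1 \
    --description "CMK for sharing encrypted AMIs" \
    --query 'KeyMetadata.KeyId' --output text)

# Attach the alias used throughout this walkthrough
aws kms create-alias --region us-east-1 \
    --alias-name alias/cmkSource --target-key-id "$KEY_ID"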

Create the policy setting for the source account

First, an IAM user or role in the source account needs permission to share the AMI (an EC2 ModifyImageAttribute operation). The following JSON policy document shows an example of the permissions an IAM user or role policy needs to have. In order to share ami-12345678, you’ll need to create a policy like this:


    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2: ModifyImageAttribute",
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1::image/<12345678>"
            ]
        }
 ] 
}

Second, the target account needs permission to use cmkSource for re-encrypting the snapshots. The following steps walk you through how to add the target account’s ID to the cmkSource key policy (a CLI alternative is sketched after these steps).

  1. From the IAM console, select Encryption Keys in the left pane, and then select the source account’s CMK, cmkSource, as shown in Figure 1:
     
    Figure 1: Select “cmkSource”

  2. Look for the External Accounts subsection, type the target account ID (for example, 999999999999), and then select Add External Account.
     
    Figure 2: Enter the target account ID and then select “Add External Account”
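If you’d rather script this step than use the console, a hedged sketch of the CLI equivalent follows. It assumes the key uses the default policy named “default” and that you edit the downloaded JSON by hand to add the target account as an allowed principal.

# Download the current key policy for cmkSource
aws kms get-key-policy --key-id "$KEY_ID" \
    --policy-name default --query Policy --output text > key-policy.json

# Edit key-policy.json to add "arn:aws:iam::999999999999:root" to the
# statements allowing kms:DescribeKey, kms:CreateGrant, kms:ReEncrypt*,
# and kms:Decrypt, then upload the modified policy:
aws kms put-key-policy --key-id "$KEY_ID" \
    --policy-name default --policy file://key-policy.json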

Create a policy setting for the target account

Once you’ve configured the source account, you need to configure the target account. The IAM user or role in the target account needs to be able to perform the AWS KMS DescribeKey, CreateGrant, ReEncrypt* and Decrypt operations on cmkSource in order to launch an instance from a shared encrypted AMI. The following JSON policy document shows an example of these permissions:


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:DescribeKey",
                "kms:ReEncrypt*",
                "kms:CreateGrant",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:<111111111111>:key/<key-id of cmkSource>"
            ]                                                    
        }
    ]
}

Once you’ve completed the configuration steps in the source and target accounts, then you’re ready to share and launch encrypted AMIs. The actual sharing of the encrypted AMI is no different from sharing an unencrypted AMI. If you want to use the AWS Management Console, then follow the steps described in Sharing an AMI (Console). To use the AWS Command Line Interface (AWS CLI), follow the steps described in Sharing an AMI (AWS CLI).

Below is an example of what the CLI command to share the AMI would look like:


aws ec2 modify-image-attribute --image-id <ami-12345678> --launch-permission "Add=[{UserId=<999999999999>}]"

Launch an instance from the shared encrypted AMI

To launch an instance from an AMI that was shared with you, set the shared AMI’s ID in the image-id parameter of the RunInstances API/CLI. Optionally, to re-encrypt the volumes with a custom CMK in your account, you can specify the KmsKeyId in the Block Device Mapping as follows:


    $> aws ec2 run-instances \
    --image-id <ami-12345678> \
    --count 1 \
    --instance-type m4.large \
    --region us-east-1 \
    --subnet-id subnet-aec2fc86 \
    --key-name 2016KeyPair \
    --security-group-ids sg-f7dbc78e \
    --block-device-mappings file://mapping.json

Where the mapping.json contains the following:


[
    {
        "DeviceName": "/dev/xvda",
        "Ebs": {
            "Encrypted": true,
            "KmsKeyId": "arn:aws:kms:us-east-1:<999999999999>:key/<abcd1234-a123-456a-a12b-a123b4cd56ef>"
        }
    }
]

While launching the instance from a shared encrypted AMI, you can specify a CMK of your choice. You may also choose cmkSource to encrypt volumes in your account. However, we recommend that you re-encrypt the volumes using a CMK in the target account. This protects you if the source CMK is compromised, or if the source account revokes permissions, which could cause you to lose access to any encrypted volumes you created using cmkSource.

Summary

In this blog post, we discussed how you can easily share encrypted AMIs across accounts to launch encrypted instances. AWS has extended the same capabilities to snapshots, allowing you to create encrypted EBS volumes from shared encrypted snapshots.
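For example, the snapshot analogue looks roughly like the sketch below; the snapshot ID, Availability Zone, and key ID are placeholders, not values from the walkthrough above.

# In the source account: share the encrypted snapshot with the target account
aws ec2 modify-snapshot-attribute --snapshot-id <snap-12345678> \
    --attribute createVolumePermission --operation-type add \
    --user-ids 999999999999

# In the target account: create a volume from the shared snapshot,
# re-encrypting it with a CMK owned by the target account
aws ec2 create-volume --snapshot-id <snap-12345678> \
    --availability-zone us-east-1a --encrypted \
    --kms-key-id arn:aws:kms:us-east-1:<999999999999>:key/<target-key-id>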

This feature is available through the AWS Management Console, AWS CLI, or AWS SDKs at no extra charge in all commercial AWS regions except China. If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon EC2 forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Nishit Nagar

Nishit is a Senior Product Manager at AWS.

How to quickly launch encrypted EBS-backed EC2 instances from unencrypted AMIs

Post Syndicated from Nishit Nagar original https://aws.amazon.com/blogs/security/how-to-quickly-launch-encrypted-ebs-backed-ec2-instances-from-unencrypted-amis/

An Amazon Machine Image (AMI) provides the information that you need to launch an instance (a virtual server) in your AWS environment. There are a number of AMIs on the AWS Marketplace (such as Amazon Linux, Red Hat or Ubuntu) that you can use to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance. When you launch an instance from these AMIs, the resulting volumes are unencrypted. However, for regulatory purposes or internal compliance reasons, you might need to launch instances with encrypted root volumes. Previously, this required you to follow a multi-step process in which you maintained a separate, encrypted copy of the AMI in your account in order to launch instances with encrypted volumes. Now, you no longer need to maintain multiple AMI copies for encryption. To launch an encrypted Amazon Elastic Block Store (Amazon EBS) backed instance from an unencrypted AMI, you can directly specify the encryption properties in your existing workflow (such as with the RunInstances API or the Launch Instance Wizard). This simplifies the process of launching instances with encrypted volumes and reduces your associated AMI storage costs.

In this post, we demonstrate how you can start from an unencrypted AMI and launch an encrypted EBS-backed Amazon EC2 instance. We’ll show you how to do this from both the AWS Management Console, and using the RunInstances API with the AWS Command Line Interface (AWS CLI).

Launching an instance from the AWS Management Console

  1. Sign into the AWS Management Console and open the EC2 console.
  2. Select Launch instance, then follow the prompts of the launch wizard:
    1. In step 1 of the wizard, select the Amazon Machine Image (AMI) that you want to use.
    2. In step 2 of the wizard, select your instance type.
    3. In step 3, provide additional configuration details. For details about configuring your instances, see Launching an Instance.
    4. In step 4, specify your EBS volumes. The encryption properties of the volumes will be inherited from the AMI that you’ve chosen. If you’re using an unencrypted AMI, it will show up as “Not Encrypted.” From the dropdown, you can then select a customer master key (CMK) for encrypting the volume. You may select the same CMK for each volume that you want to create, or you may use a different CMK for each volume.
       
      Figure 1: Specifying your EBS volumes

  3. Select Review and then Launch. Your instance will launch with an encrypted Amazon EBS volume that uses the CMK you selected. To learn more about the launch wizard, see Launching an Instance with Launch Wizard.

Launching an instance from the RunInstances API

From the RunInstances API/CLI, you can provide the kmsKeyID for encrypting the volumes that will be created from the AMI by specifying encryption in the BlockDeviceMapping (BDM) object. If you don’t specify the kmsKeyID in the BDM but set the encryption flag to “true,” then the AWS-managed CMK will be used to encrypt the volume.

For example, to launch an encrypted instance from an Amazon Linux AMI with an additional empty 100 GB data volume (/dev/sdb), the API call would be as follows:


    $> aws ec2 run-instances \
    --image-id ami-009d6802948d06e52 \
    --count 1 \
    --instance-type m4.large \
    --region us-east-1 \
    --subnet-id subnet-aec2fc86 \
    --key-name 2016KeyPair \
    --security-group-ids sg-f7dbc78e \
    --block-device-mappings file://mapping.json

Where the mapping.json contains the following:


[
    {
        "DeviceName": "/dev/xvda",
        "Ebs": {
                "Encrypted": true,
                "KmsKeyId": "arn:aws:kms:<us-east-1:012345678910>:key/<Example_Key_ID_12345>"
        }
    },

    {
        "DeviceName": "/dev/sdb",
        "Ebs": {
            "DeleteOnTermination": true,
            "VolumeSize": 100,
            "VolumeType": "gp2",
            "Encrypted": true,
            "KmsKeyId": "arn:aws:kms:<us-east-1:012345678910>:key/<Example_Key_ID_12345>"
        }
    }
]

You may specify different keys for different volumes. Providing a kmsKeyID without the encryption flag will result in an API error.
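For instance, if you simply want the AWS-managed default CMK, a minimal mapping.json can omit KmsKeyId entirely and set only the encryption flag. This variant is a sketch based on the behaviour described above, not a file from the original post:

[
    {
        "DeviceName": "/dev/xvda",
        "Ebs": {
            "Encrypted": true
        }
    }
]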

Conclusion

In this blog post, we demonstrated how you can quickly launch encrypted, EBS-backed instances from an unencrypted AMI in a few steps. You can also use the same process to launch EBS-backed encrypted volumes from unencrypted snapshots. This simplifies the process of launching instances with encrypted volumes and reduces your AMI storage costs.

This feature is available through the AWS Management Console, AWS CLI, or AWS SDKs at no extra charge in all commercial AWS regions except China. If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon EC2 forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Nishit Nagar

Nishit is a Senior Product Manager at AWS.

The Positive Side-Effects of Blockchain

Post Syndicated from Bozho original https://techblog.bozho.net/the-positive-side-effects-of-blockchain/

Blockchain is a relatively niche technology at the moment, and even though there’s a lot of hype, its applicability is limited. I’ve been skeptical about its ability to solve all the world’s problems, as many claim, and would rather focus it on solving particular business issues related to trust.

But I’ve been thinking about the positive side-effects, and it might actually be one of the best things that have happened to software recently. I don’t like big claims, and this sounds like one, but bear with me.

Maybe it won’t find its place in much of the business software out there. Maybe in many cases you don’t need a distributed solution because the business case does not lend itself to one. And certainly you won’t be trading virtual coins in unregulated exchanges.

But because of the hype, now everyone knows the basic concepts and building blocks of blockchain. And they are cryptographic – they are hashes, digital signatures, timestamps, merkle trees, hash chains. Every technical and non-technical person in IT has by now at least read a little bit about blockchain to understand what it is.
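To make one of those building blocks concrete, here is a toy hash chain in shell, assuming GNU coreutils for sha256sum; the entries themselves are made up. Each entry’s hash commits to the previous hash, so tampering with any earlier entry changes every hash after it.

# Build a tiny hash chain: hash(previous_hash + entry) for each entry
prev="genesis"
for entry in "alice->bob:10" "bob->carol:4"; do
    prev=$(printf '%s%s' "$prev" "$entry" | sha256sum | cut -d' ' -f1)
    echo "$entry  ->  $prev"
done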

So as a side effect, most developers and managers are now trust-conscious, and by extension, security-conscious. I know it may sound far-fetched, but before blockchain, how many developers and managers knew what a digital signature was? Hashes were somewhat more prevalent, mostly because of their (sometimes incorrect) use to store passwords, but PKI was mostly arcane knowledge.

And yes, we all know how TLS certificates work (although, do we?) and that a private key has to be created and used with them, and probably some had a theoretical understanding of digital signatures. And we knew encryption was kind of a good idea at rest and in transit. But putting that in the context of “trust”, “verifiability” and “non-repudiation” was, in my view, something that few people have done mentally.

And now, even by not using blockchain, developers and managers would have the trust concept lurking somewhere in the back of their mind. And my guess would be that more signatures, more hashes and more trusted timestamps will be used just because someone thought “hey, we can make this less prone to manipulation through this cool cryptography that I was reminded about because of blockchain”.

Blockchain won’t be the new internet, but it already has impact on the mode of thinking of people in the software industry. Or at least I hope so.

The post The Positive Side-Effects of Blockchain appeared first on Bozho's tech blog.

AWS Security releases IoT security whitepaper

Post Syndicated from Momena Cheema original https://aws.amazon.com/blogs/security/aws-security-releases-iot-security-whitepaper/

We’ve published a whitepaper, Securing Internet of Things (IoT) with AWS, to help you understand and address data security as it relates to your IoT devices and the data generated by them. The whitepaper is intended for a broad audience who is interested in learning about AWS IoT security capabilities at a service-specific level and for compliance, security, and public policy professionals.

IoT technologies connect devices and people in a multitude of ways and are used across industries. For example, IoT can help manage thermostats remotely across buildings in a city, efficiently control hundreds of wind turbines, or operate autonomous vehicles more safely. With all of the different types of devices and the data they transmit, security is a top concern.

The specific challenges that IoT technologies present have piqued the interest of governments worldwide, which are currently assessing what, if any, new regulatory requirements should take shape to keep pace with IoT innovation and the general problem of securing data. This whitepaper uses specific examples to cover recent developments published by the National Institute of Standards and Technology (NIST) and the United Kingdom’s Code of Practice that are specific to IoT.

If you have questions or want to learn more, contact your account executive, or leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author photo

Momena Cheema

Momena is passionate about evangelizing the security and privacy capabilities of AWS services through the lens of global emerging technology and trends, such as Internet of Things, artificial intelligence, and machine learning through written content, workshops, talks, and educational campaigns. Her goal is to bring the security and privacy benefits of the cloud to customers across industries in both public and private sectors.

Door Pi Plus — door security system for the elderly

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/door-pi-plus-door-security-system-elderly/

13-year-old Freddie from Monmouthshire has gained national attention for his incredible award-winning invention Door Pi Plus.

Freddie – Door Plus Pi


Door security system

Freddie spent more than twelve months building a door security system for the elderly, inspired by the desire to help his great-aunt feel more secure at home.

The invention keeps the door locked until the camera recognises the face of a family member, at which point it unlocks the door. Freddie used a Raspberry Pi to enable facial recognition technology in his impressive project.

“I’ve been building this project on and off for a year now,” says Freddie. “I started coding at my primary school Code Club, but now I mainly code at home.”

Coolest Projects UK

Freddie took part in this year’s Coolest Projects UK, entering the Hardware category of the world-leading showcase for young innovators who make stuff with technology.

Mark Feltham on Twitter

The amazing Freddie explaing his security system for dementia sufferers at #coolestprojects @Raspberry_Pi facial recognition, PIR and RFID hooked up to lock through relays, coded in #python. He’s 13… #blownaway

Martin O’Hanlon of the Raspberry Pi Foundation, and a judge at Coolest Projects UK, commented “I was blown away by the Door Pi Plus. The motivation to create something which would help others was clear, but the technical aspects of the project also really stood out, integrating lots of different technologies and making skills.

“The project used multiple Raspberry Pis to control an RFID reader, electronic door lock mechanism, cameras, motion sensors, and audio playback. The whole system sent messages to Freddie to ensure that his great-aunt would be safe and that she could get help if she needed it.”

Freddie won his Coolest Projects category to much acclaim, and went on to win the award for Junior Engineer of the Year at the Big Bang Fair and the Siemens Digital Skills Award!

Inspired by his experience making, he is now encouraging other young people to learn to code and start to make their own creations.

“Coding is cool because you can invent cool things to help you and other people around you. I do think more kids should code because lots of the jobs in the future are probably going to involve coding.”

Coolest Projects International

Freddie will participate in Coolest Projects International next, for which he won a special bursary as part of his award for winning the UK event’s Hardware category.

Not one to shy away from a challenge, Freddie decided to build a new project for the event! It’s called Safe Kids, and it’s a speed camera and ANPR system, to be installed outside primary schools.

He will be showcasing his new creation at Coolest Projects International in the RDS, Dublin on 5 May, alongside hundreds of young coders from around the globe.

Want to share your creation with the world too?

Then register your project idea for Coolest Projects International before the 14 April deadline, and get building for the event.

Participants of all ages and skill levels, and projects using all types of technology and hardware are encouraged!

The post Door Pi Plus — door security system for the elderly appeared first on Raspberry Pi.

Netflix Public Bug Bounty, 1 year later

Post Syndicated from Netflix Technology Blog original https://medium.com/netflix-techblog/netflix-public-bug-bounty-1-year-later-8f075b49d1cb?source=rss----2615bd06b42e---4

by Astha Singhal (Netflix Application Security)

As Netflix continues to create entertainment people love, the security team continues to keep our members, partners, and employees secure. The security research community has partnered with us to improve the security of the Netflix service for the past few years through our responsible disclosure and bug bounty programs. A year ago, we launched our public bug bounty program to strengthen this partnership and enable researchers across the world to more easily participate.

When we decided to go public with our bug bounty, we revamped our program terms to bring even more targets (including Netflix streaming mobile apps) in scope and set clearer guidelines for researchers that participate in our program. We have always tried to prioritize a good researcher experience in our program to keep the community engaged. For example, we maintain an average triage time of less than 48 hours for issues of all severity. Since the public launch, we have engaged with 657 researchers from around the world. We have collectively rewarded over $100,000 for over 100 valid bugs in that time.

We wanted to share a few interesting submissions that we have received over the last year:

  • We choose to focus our security resources on applications deployed via our infrastructure paved road to be able to scale our services. The bug bounty has been great at shining a light on the parts of our environment that may not be on the paved road. A researcher found an application that ran on an older Windows server that was deployed in a non-standard way making it difficult for our automated visibility services to detect it. The system had significant issues that we were grateful to hear about so we could retire the system.
  • We received a report from a researcher that found a service they didn’t believe should be available on the internet. The initial finding seemed like a low severity issue, but the researcher asked for permission to continue to explore the endpoint to look for more significant issues. We love it when researchers reach out to coordinate testing on an issue. We looked at the endpoint and determined that there was no risk of access to Netflix internal data, so we allowed the researcher to continue testing. After further testing, the researcher found a remote code execution that we then rewarded and remediated with a higher priority.
  • We always perform a full impact analysis to make sure we reward the researcher based on the actual impact vs demonstrated impact of any issue. In one example of this, a researcher found an endpoint that allowed access to an internal resource if the attacker knew a randomly generated identifier. The researcher had found the identifier via another endpoint, but had not explicitly linked the two findings. Given the potential impact, we asked the researcher to stop further testing. As we tested the first endpoint ourselves, we discovered this additional impact and issued a reward in accordance with the higher priority.

Over the past year, we have received various high quality submissions from researchers, and we want to continue to engage with them to improve Netflix security. Todayisnew has been the highest earning researcher in our program over the last year. We recently revisited our overall reward ranges to make sure we are competitive with the market for our risk profile. In 2019, we also started publishing quarterly program updates to highlight new product areas for testing. Our goal is to keep the Netflix program an interesting and fresh target for bug bounty researchers.

Going into next year, our goal is to maintain the quality of the researcher experience in our program. We are also thinking about how to extend our bug bounty coverage to our studio app ecosystem. Last year, we conducted a bug bash specifically for some of our studio apps with researchers across the world. We found some significant issues through that and are exploring extending our program to some of our studio production apps in 2019. We thank all the researchers that have engaged in our program and look forward to continued collaboration with them to secure Netflix.


Netflix Public Bug Bounty, 1 year later was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Guidelines for protecting your AWS account while using programmatic access

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/guidelines-for-protecting-your-aws-account-while-using-programmatic-access/

One of the most important things you can do as a customer to ensure the security of your resources is to maintain careful control over who has access to them. This is especially true if any of your AWS users have programmatic access. Programmatic access allows you to invoke actions on your AWS resources either through an application that you write or through a third-party tool. You use an access key ID and a secret access key to sign your requests for authorization to AWS. Programmatic access can be quite powerful, so implementing best practices to protect access key IDs and secret access keys is important in order to prevent accidental or malicious account activity. In this post, I’ll highlight some general guidelines to help you protect your account, as well as some of the options you have when you need to provide programmatic access to your AWS resources.

Protect your root account

Your AWS root account—the account that’s created when you initially sign up with AWS—has unrestricted access to all your AWS resources. There’s no way to limit permissions on a root account. For this reason, AWS always recommends that you do not generate access keys for your root account. This would give your users the power to do things like close the entire account—an ability that they probably don’t need. Instead, you should create individual AWS Identity and Access Management (IAM) users, then grant each user permissions based on the principle of least privilege: Grant them only the permissions required to perform their assigned tasks. To more easily manage the permissions of multiple IAM users, you should assign users with the same permissions to an IAM group.
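As a hedged sketch of the pattern just described (the group, user, and policy choices here are illustrative, not prescriptive), the IAM-user-plus-group setup looks like this from the CLI:

# Create a group with read-only S3 access, then add a user to it
aws iam create-group --group-name s3-readers
aws iam attach-group-policy --group-name s3-readers \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws iam create-user --user-name alice
aws iam add-user-to-group --group-name s3-readers --user-name alice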

Your root account should always be protected by Multi-Factor Authentication (MFA). This additional layer of security helps protect against unauthorized logins to your account by requiring two factors: something you know (a password) and something you have (for example, an MFA device). AWS supports virtual and hardware MFA devices, U2F security keys, and SMS text message-based MFA.

Decide how to grant access to your AWS account

To allow users access to the AWS Management Console and AWS Command Line Interface (AWS CLI), you have two options. The first one is to create identities and allow users to log in using a username and password managed by the IAM service. The second approach is to use federation to allow your users to use their existing corporate credentials to log into the AWS console and CLI.

Each approach has its use cases. Federation is generally better for enterprises that have an existing central directory or plan to need more than the current limit of 5,000 IAM users.

Note: Access to all AWS accounts is managed by AWS IAM. Regardless of the approach you choose, make sure to familiarize yourself with and follow IAM best practices.

Decide when to use access keys

Applications running outside of an AWS environment will need access keys for programmatic access to AWS resources. For example, monitoring tools running on-premises and third-party automation tools will need access keys.

However, if the resources that need programmatic access are running inside AWS, the best practice is to use IAM roles instead. An IAM role is a defined set of permissions—it’s not associated with a specific user or group. Instead, any trusted entity can assume the role to perform a specific business task.

By utilizing roles, you can grant a resource access without hardcoding an access key ID and secret access key into the configuration file. For example, you can grant an Amazon Elastic Compute Cloud (EC2) instance access to an Amazon Simple Storage Service (Amazon S3) bucket by attaching a role with a policy that defines this access to the EC2 instance. This approach improves your security, as IAM will dynamically manage the credentials for you with temporary credentials that are rotated automatically.
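A minimal sketch of that pattern follows; the role name, managed policy, and instance ID are placeholders rather than recommendations.

# 1. Create a role that EC2 instances are trusted to assume
aws iam create-role --role-name ec2-s3-read \
    --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole"
      }]
    }'

# 2. Grant the role read access to S3
aws iam attach-role-policy --role-name ec2-s3-read \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# 3. Wrap the role in an instance profile and attach it to an instance
aws iam create-instance-profile --instance-profile-name ec2-s3-read
aws iam add-role-to-instance-profile --instance-profile-name ec2-s3-read \
    --role-name ec2-s3-read
aws ec2 associate-iam-instance-profile --instance-id <i-0123456789abcdef0> \
    --iam-instance-profile Name=ec2-s3-read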

Grant least privileges to service accounts

If you decided to create service accounts (that is, accounts used for programmatic access by applications running outside of the AWS environment) and generate access keys for them, you should create a dedicated service account for each use case. This will allow you to restrict the associated policy to only the permissions needed for the particular use case, limiting the blast radius if the credentials are compromised. For example, if a monitoring tool and a release management tool both require access to your AWS environment, create two separate service accounts with two separate policies that define the minimum set of permissions for each tool.

In addition to this, it’s also a best practice to add conditions to the policy that further restrict access—such as restricting access to only the source IP address range of your clients.

Below is an example policy that represents least privilege. It grants the needed permission (PutObject) on a specific resource (an S3 bucket named “examplebucket”) while adding a further condition (the client must come from the IP range 203.0.113.0/24).


{
    "Version": "2012-10-17",
    "Id": "S3PolicyRestrictPut",
    "Statement": [
            {
            "Sid": "IPAllow",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {
                "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
            } 
        } 
    ]
}

Use temporary credentials from AWS STS

AWS Security Token Service (AWS STS) is a web service that enables you to request temporary credentials for use in your code, CLI, or third-party tools. It allows you to assume an IAM role with which you have a trusted relationship and then generate temporary, time-limited credentials based on the permissions associated with the role. These credentials can only be used during the validity period, which reduces your risk.

There are two ways to generate temporary credentials. You can generate them from the CLI, which is helpful when you need credentials for testing from your local machine or from an on-premises or third-party tool. You can also generate them from code using one of the AWS SDKs. This approach is helpful if you need credentials in your application, or if you have multiple user types that require different permission levels.

Create temporary credentials using the CLI

If you have access to the AWS CLI, you can use it to generate temporary credentials with limited permissions to use in your local testing or with third-party tools. To be able to use this approach, here’s what you need:

  • Access to the AWS CLI through your primary user account or through federation. To learn how to configure CLI access using your IAM credentials, follow this link. If you use federation, you still can use the CLI by following the instructions in this blog post.
  • An IAM role that represents the permissions needed for your test client. In the example below, I use “s3-read”. This role should have a policy attached that grants the least privileges needed for the use case.
  • A trusted relationship between the service role (“s3-read”) and your user account, to allow you to assume the service role and generate temporary credentials. Visit this link for the steps to create this trust relationship.

The example command below will generate a temporary access key ID and secret access key that are valid for 15 minutes, based on permissions associated with the role named “s3-read”. You can replace the values below with your own account number, service role, and duration, then use the secret access key and access key ID in your local clients.


aws sts assume-role --role-arn <arn:aws:iam::AWS-ACCOUNT-NUMBER:role/s3-read> --role-session-name <s3-access> --duration-seconds <900>

Here are my results from running the command:


{ "AssumedRoleUser": 
    { 
        "AssumedRoleId": "AROAIEGLQIIQUSJ2I5XRM:s3-access", 
        "Arn": "arn:aws:sts::AWS-ACCOUNT-NUMBER:assumed-role/s3-read/s3-access" 
    }, 
    "Credentials": { 
        "SecretAccessKey":"wZJph6PX3sn0ZU4g6yfXdkyXp5m+nwkEtdUHwC3w",  
        "SessionToken": "FQoGZXIvYXdzENr//////////<<REST-OF-TOKEN>>",
        "Expiration": "2018-11-02T16:46:23Z",
        "AccessKeyId": "ASIAXQZXUENECYQBAAQG" 
    } 
  }
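To use those credentials from a shell, you can export them as the standard AWS environment variables. The values below are taken from the output above; the bucket name is a placeholder:

export AWS_ACCESS_KEY_ID="ASIAXQZXUENECYQBAAQG"
export AWS_SECRET_ACCESS_KEY="wZJph6PX3sn0ZU4g6yfXdkyXp5m+nwkEtdUHwC3w"
export AWS_SESSION_TOKEN="FQoGZXIvYXdzENr//////////<<REST-OF-TOKEN>>"

# Subsequent CLI calls now run with the s3-read role's permissions
aws s3 ls s3://<examplebucket>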

Create temporary credentials from your code

If you have an application that already uses the AWS SDK, you can use AWS STS to generate temporary credentials right from the code instead of hard-coding credentials into your configurations. This approach is recommended if you have client-side code that requires credentials, or if you have multiple types of users (for example, admins, power-users, and regular users) since it allows you to avoid hardcoding multiple sets of credentials for each user type.

For more information about using temporary credentials from the AWS SDK, visit this link.

Utilize Access Advisor

The IAM console provides information about when an AWS service was last accessed by different principals. This information is called service last accessed data.

Using this tool, you can view when an IAM user, group, role, or policy last attempted to access services to which they have permissions. Based on this information, you can decide if certain permissions need to be revoked or restricted further.

Make this tool part of your periodic security check. Use it to evaluate the permissions of all your IAM entities and to revoke unused permissions until they’re needed. You can also automate the process of periodic permissions evaluation using Access Advisor APIs. If you want to learn how, this blog post is a good starting point.
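For instance, one way to pull service last accessed data from the CLI looks like the sketch below; the user ARN is a placeholder.

# Kick off a report for one IAM entity, then fetch the results
JOB_ID=$(aws iam generate-service-last-accessed-details \
    --arn arn:aws:iam::<AWS-ACCOUNT-NUMBER>:user/<username> \
    --query JobId --output text)

aws iam get-service-last-accessed-details --job-id "$JOB_ID"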

Other tools for credentials management

While least privilege access and temporary credentials are important, it’s equally important that your users are managing their credentials properly—from rotation to storage. Below is a set of services and features that can help to securely store, retrieve, and rotate credentials.

AWS Systems Manager Parameter Store

AWS Systems Manager offers a capability called Parameter Store that provides secure, centralized storage for configuration parameters and secrets across your AWS account. You can store plain text or encrypted data like configuration parameters, credentials, and license keys. Once stored, you can configure granular access to specify who can obtain these parameters in your application, adding another layer of security to protect your data.

Parameter Store is a good choice for use cases in which you need hierarchical storage for configuration data management across your account. For example, you can store database access credentials (username and password) in Parameter Store, encrypt them with an encryption key managed by AWS Key Management Service, and grant EC2 instances running your application permissions to read and decrypt those credentials.
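A minimal sketch of that flow (the parameter name and value are illustrative):

# Store a database password as an encrypted SecureString
aws ssm put-parameter --name /prod/db/password \
    --type SecureString --value '<your-password>'

# Retrieve and decrypt it where it's needed
aws ssm get-parameter --name /prod/db/password \
    --with-decryption --query Parameter.Value --output text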

For more information on using AWS Systems Manager Parameter Store, visit this link.

AWS Secrets Manager

AWS Secrets Manager is a service that allows you to centrally manage the lifecycle of secrets used in your organization, including rotation, audits, and access control. By enabling you to rotate secrets automatically, Secrets Manager can help you meet your security and compliance requirements. Secrets Manager also offers built-in integration for MySQL, PostgreSQL, and Amazon Aurora on Amazon RDS and can be extended to other services.
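As a quick illustration (the secret name and contents here are made up):

# Create a secret, then retrieve it at runtime
aws secretsmanager create-secret --name prod/db/credentials \
    --secret-string '{"username":"admin","password":"<your-password>"}'

aws secretsmanager get-secret-value --secret-id prod/db/credentials \
    --query SecretString --output text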

For more information about using AWS Secrets Manager to store and retrieve secrets, visit this link.

Amazon Cognito

Amazon Cognito lets you add user registration, sign-in, and access management features to your web and mobile applications.

Cognito can be used as an Identity Provider (IdP), where it stores and maintains users and credentials securely for your applications, or it can be integrated with OpenID Connect, SAML, and other popular web identity providers like Amazon.com.

Using Amazon Cognito, you can generate temporary access credentials for your clients to access AWS services, eliminating the need to store long-term credentials in client applications.

To learn more about using Amazon Cognito as an IdP, visit our developer guide to Amazon Cognito User Pools. If you’re interested in information about using Amazon Cognito with a third party IdP, review our guide to Amazon Cognito Identity Pools (Federated Identities).

AWS Trusted Advisor

AWS Trusted Advisor is a service that provides a real-time review of your AWS account and offers guidance on how to optimize your resources to reduce cost, increase performance, expand reliability, and improve security.

The “Security” section of AWS Trusted Advisor should be reviewed on a regular basis to evaluate the health of your AWS account. Currently, there are multiple security-specific checks, ranging from IAM access keys that haven’t been rotated to insecure security groups. Trusted Advisor is a tool to help you more easily perform a daily or weekly review of your AWS account.

git-secrets

git-secrets, available from the AWS Labs GitHub account, helps you avoid committing passwords and other sensitive credentials to a git repository. It scans commits, commit messages, and --no-ff merges to prevent your users from inadvertently adding secrets to your repositories.
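Typical setup in a repository looks like this once the tool itself is installed:

# Install the git hooks into the current repository
git secrets --install

# Register the built-in AWS credential patterns
git secrets --register-aws

# Scan the working tree for anything that matches
git secrets --scan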

Conclusion

In this blog post, I’ve introduced some options to replace long-term credentials in your applications with temporary access credentials that can be generated using various tools and services on the AWS platform. Using temporary credentials can reduce the risk of falling victim to a compromised environment, further protecting your business.

I also discussed the concept of least privilege and provided some helpful services and procedures to maintain and audit the permissions given to various identities in your environment.

If you have questions or feedback about this blog post, submit comments in the Comments section below, or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is part of our worldwide public sector Solutions Architecture team, helping higher education customers build innovative, secure, and highly available solutions using various AWS services.

Author

Joe Chapman

Joe is a Solutions Architect with Amazon Web Services. He primarily serves AWS EdTech customers, providing architectural guidance and best practice recommendations for new and existing workloads. Outside of work, he enjoys spending time with his wife and dog, and finding new adventures while traveling the world.

Add a layer of security for AWS SSO user portal sign-in with context-aware email-based verification

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/add-a-layer-of-security-for-aws-sso-user-portal-sign-in-with-context-aware-email-based-verification/

If you’re an IT administrator of a growing workforce, your users will require access to a growing number of business applications and AWS accounts. You can use AWS Single Sign-On (AWS SSO) to create and manage users centrally and grant access to AWS accounts and business applications, such as Salesforce, Box, and Slack. When you use AWS SSO, your users sign in to a central portal to access all of their AWS accounts and applications. Today, we launched email-based verification that provides an additional layer of security for users signing in to the AWS SSO user portal. AWS SSO supports a one-time passcode (OTP) sent to users’ email that they then use as a verification code during sign-in. When enabled, AWS SSO prompts users for their user name and password and then to enter a verification code that was sent to their email address. They need all three pieces of information to be able to sign in to the AWS SSO user portal.

You can enable email-based verification in context-aware or always-on mode. We recommend you enable email-based verification in context-aware mode for users created using the default AWS SSO directory. In this mode, users sign in easily with their username and password for most sign-ins, but must provide additional verification when their sign-in context changes, such as when signing in from a new device or an unknown location. Alternatively, if your company requires users to complete verification for every sign-in, you can use always-on mode.

In this post, I demonstrate how to enable verification in context-aware mode for users in your SSO directory using the AWS SSO console. I then demonstrate how to sign into the AWS SSO user portal using email-based verification.

Enable email-based verification in context-aware mode for users in your SSO directory

Before you enable email-based verification, you must ensure that all your users can access their email to retrieve their verification code. If your users require the AWS SSO user portal to access their email, do not enable email-based verification. For example, if you use AWS SSO to access Office 365, then your users may not be able to access their AWS SSO user portal when you enable email-based verification.

Follow these steps to enable email-based verification for users in your SSO directory:

  1. Sign in to the AWS SSO console. In the left navigation pane, select Settings, and then select Configure under the Two-step verification settings.
  2. Select Context-aware under Verification mode, and Email-based verification under Verification method, and then select Save changes.
     
    Figure 1: Select the verification mode and the verification method

  3. Before you choose to confirm the changes in the Enable email-based verification window, make sure that all your users can access their email to retrieve the verification code required to sign in to the AWS SSO user portal without signing in using AWS SSO. To confirm your choice, type CONFIRM (case-sensitive) in the text-entry field, and then select Confirm.
     
    Figure 2: The “Enable email-based verification” window

You’ll see that you successfully enabled email-based verification in context-aware mode for all users in your AWS SSO directory.
 

Figure 3: Verification of the settings

Next, I demonstrate how your users sign into the AWS SSO user portal with email-based verification in addition to their username and password.

Sign-in to the AWS SSO user portal with email-based verification

With email-based verification enabled in context-aware mode, users use the verification code sent to their email when there is a change in their sign-in context. Here’s how that works:

  1. Navigate to your AWS SSO user portal.
  2. Enter your email address and password, and then select Sign in.
     
    Figure 4: The “Single Sign-On” window

  3. If AWS detects a change in your sign-in context, you’ll receive an email with a 6-digit verification code that you will enter in the next step.
     
    Figure 5: Example verification email

  4. Enter the code in the Verification code box, and then select Sign in. If you haven’t received your verification code, select Resend email with a code to receive a new code, and be sure to check your spam folder. You can select This is a trusted device to mark your device as trusted so you don’t need to enter a verification code unless your sign-in context changes again, such as signing in from a new browser or an unknown location.
     
    Figure 6: Enter the verification code

The user can now access AWS accounts and business applications that the administrator has configured for them.

Summary

In this post, I shared the benefits of using email-based verification in context-aware mode. I demonstrated how you can enable email-based verification for your users through the SSO console. I also showed you how to sign into the AWS SSO user portal with email-based verification. You can also enable email-based verification for SSO users from your connected AD directory by following the process outlined above.

If you have comments, please submit them in the Comments section below. If you have issues enabling email-based verification for your users, start a thread on the AWS SSO forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

AWS re:Invent Security Recap: Launches, Enhancements, and Takeaways

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-reinvent-security-recap-launches-enhancements-and-takeaways/

For more from Steve, follow him on Twitter

Customers continue to tell me that our AWS re:Invent conference is a winner. It’s a place where they can learn, meet their peers, and rediscover the art of the possible. Of course, there is always an air of anticipation around what new AWS service releases will be announced. This time around, we went even bigger than we ever have before. There were over 50,000 people in attendance, spread across the Las Vegas strip, with over 2,000 breakout sessions and jam-packed hands-on learning opportunities, including multi-day hackathons, workshops, and bootcamps.

A big part of all this activity included sharing knowledge about the latest AWS Security, Identity and Compliance services and features, as well as announcing new technology that we’re excited to see adopted so quickly across so many use cases.

Here are the top Security, Identity and Compliance releases from re:Invent 2018:

Keynotes: All that’s new

New AWS offerings provide more prescriptive guidance

The AWS re:Invent keynotes from Andy Jassy, Werner Vogels, and Peter DeSantis, as well as my own leadership session, featured the following new releases and service enhancements. We continue to strive to make architecting easier for developers, as well as our partners and our customers, so they stay secure as they build and innovate in the cloud.

  • We launched several prescriptive security services to assist developers and customers in understanding and managing their security and compliance postures in real time. My favorite new service is AWS Security Hub, which helps you centrally manage your security and compliance controls. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions. Findings are visually summarized on integrated dashboards with actionable graphs and tables. You can also continuously monitor your environment using automated compliance checks based on the AWS best practices and industry standards your organization follows. Get started with AWS Security Hub with just a few clicks in the Management Console and once enabled, Security Hub will begin aggregating and prioritizing findings. You can enable Security Hub on a single account with one click in the AWS Security Hub console or a single API call.
  • Another prescriptive service we launched is called AWS Control Tower. One of the first things customers think about when moving to the cloud is how to set up a landing zone for their data. AWS Control Tower removes the guesswork, automating the setup of an AWS landing zone that is secure and well-architected and that supports multiple accounts. AWS Control Tower does this by using a set of blueprints that embody AWS best practices. Guardrails, both mandatory and recommended, are available for high-level, rule-based governance, allowing you to have the right operational control over your accounts. An integrated dashboard enables you to keep a watchful eye over the accounts provisioned, the guardrails that are enabled, and your overall compliance status. Sign up for the Control Tower preview here.
  • The third prescriptive service, called AWS Lake Formation, will reduce your data lake build time from months to days. Prior to AWS Lake Formation, setting up a data lake involved numerous granular tasks. Creating a data lake with Lake Formation is as simple as defining where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Get started with a preview of AWS Lake Formation here.
  • Next up, AWS IoT Greengrass enables enhanced security through hardware-root-of-trust private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing your private key on a hardware secure element adds hardware-root-of-trust security to existing AWS IoT Greengrass security features that include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. You can also use the hardware secure element to protect secrets that you deploy to your AWS IoT Greengrass device using AWS IoT Greengrass Secrets Manager. To try these security enhancements for yourself, check out https://aws.amazon.com/greengrass/.
  • You can now use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. Previously, KMS offered the ability to store keys in shared HSMs managed by KMS. However, we heard from customers that their needs were more nuanced. In particular, they needed to manage keys in single-tenant HSMs under their exclusive control. With KMS custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Get started with KMS custom key store by following the steps in this blog post (a minimal API sketch also follows this list).
  • We’re excited to announce the release of ATO on AWS to help customers and partners speed up the FedRAMP approval process (which has traditionally taken SaaS providers up to 2 years to complete). We’ve already had customers, such as Smartsheet, complete the process in less than 90 days with ATO on AWS. Customers will have access to training, tools, pre-built CloudFormation templates, control implementation details, and pre-built artifacts. Additionally, customers are able to access direct engagement and guidance from AWS compliance specialists and support from expert AWS consulting and technology partners who are a part of our Security Automation and Orchestration (SAO) initiative, including GitHub, Yubico, RedHat, Splunk, Allgress, Puppet, Trend Micro, Telos, CloudCheckr, Saint, Center for Internet Security (CIS), OKTA, Barracuda, Anitian, Kratos, and Coalfire. To get started with ATO on AWS, contact the AWS partner team at [email protected].
  • Finally, I announced our first conference dedicated to cloud security, identity and compliance: AWS re:Inforce. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. The cost for a full conference pass will be $1,099. I’m hoping to see you all there. Sign up here to be notified when registration opens.
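
For reference, here’s a minimal boto3 sketch of that single Security Hub API call, plus a query for high-severity findings once sources start reporting; the region and severity filter are illustrative choices, not requirements:

```python
import boto3

# Security Hub is enabled per account, per region.
securityhub = boto3.client("securityhub", region_name="us-east-1")

# The "single API call" mentioned above: turn on Security Hub for
# this account. EnableSecurityHub takes no required arguments.
securityhub.enable_security_hub()

# Once findings flow in from GuardDuty, Inspector, Macie, and partner
# products, pull the high-severity ones for triage.
response = securityhub.get_findings(
    Filters={"SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}]},
    MaxResults=10,
)
for finding in response["Findings"]:
    print(finding["Title"], finding["Resources"][0]["Id"])
```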
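
And a minimal boto3 sketch of the KMS custom key store flow described above; the cluster ID, certificate file, and password are placeholders for values from your own CloudHSM cluster:

```python
import boto3

kms = boto3.client("kms")

# Register an existing CloudHSM cluster with KMS as a custom key store.
store = kms.create_custom_key_store(
    CustomKeyStoreName="example-key-store",
    CloudHsmClusterId="cluster-1234567890abcdef0",         # placeholder
    TrustAnchorCertificate=open("customerCA.crt").read(),  # placeholder
    KeyStorePassword="kmsuser-password",                   # placeholder
)

# Connect the key store to the cluster. The connection is asynchronous
# and can take a while; wait for CONNECTED status (poll with
# describe_custom_key_stores) before creating keys.
kms.connect_custom_key_store(CustomKeyStoreId=store["CustomKeyStoreId"])

# Create a key whose key material is generated and held in your
# CloudHSM cluster instead of the shared KMS HSM fleet.
key = kms.create_key(
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId=store["CustomKeyStoreId"],
    Description="Key backed by my own CloudHSM cluster",
)
print(key["KeyMetadata"]["Arn"])
```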

Key re:Invent Takeaways

AWS is here to help you build

  1. Customers want to innovate, and cloud needs to securely enable this. Companies need to be able to innovate to meet rapidly evolving consumer demands. This means they need cloud security capabilities they can rely on to meet their specific security requirements, while allowing them to continue to meet and exceed customer expectations. AWS Lake Formation, AWS Control Tower, and AWS Security Hub aggregate and automate otherwise manual processes involved with setting up a secure and compliant cloud environment, giving customers greater flexibility to innovate, create, and manage their businesses.
  2. Cloud security is as much art as it is science. Getting to what you really need to know about your security posture can be a challenge. At AWS, we’ve found that the sweet spot lies in services and features that enable you to continuously gain greater depth of knowledge into your security posture, while automating mission-critical tasks that relieve you from having to constantly monitor your infrastructure. This manifests itself in having an end-to-end automated remediation workflow. I spent some time covering this in my re:Invent session, and will continue to advocate using a combination of services, such as AWS Lambda, WAF, S3, AWS CloudTrail, and AWS Config, to proactively identify, mitigate, and remediate threats that may arise as your infrastructure evolves (a sketch of one such remediation function follows this list).
  3. Remove human access to data. I’ve set a goal at AWS to reduce human access to data by 80%. While that number may sound lofty, it’s purposeful, because the only way to achieve this is through automation. There have been a number of security incidents in the news across industries, ranging from inappropriate access to personal information in healthcare, to credential stuffing in financial services. The way to protect against such incidents? Automate key security measures and minimize your attack surface by enabling access control and credential management with services like AWS IAM and AWS Secrets Manager (a minimal example also follows this list). Additional gains can be found by leveraging threat intelligence through continuous monitoring of incidents via services such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie (intelligence from these services will now be available in AWS Security Hub).
  4. Get your leadership on board with your security plan. We offer 500+ security services and features; however, new services and technology can’t be wholly responsible for implementing reliable security measures. Security teams need to set expectations with leadership early, aligning on a number of critical protocols, including how to restrict and monitor human access to data, patching and log retention duration, credential lifespan, blast radius reduction, embedded encryption throughout AWS architecture, and canaries and invariants for security functionality. It’s also important to set security Key Performance Indicators (KPIs) to continuously track. At AWS, we monitor the number of AppSec reviews, how many security checks we can automate, third-party compliance audits, metrics on internal time spent, and conformity with Service Level Agreements (SLAs). While the needs of your business may vary, we find baseline KPIs to be consistent measures of security assurance that can be easily communicated to leadership.
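
As a sketch of the remediation function mentioned in point 2: a Lambda handler that blocks public access on an S3 bucket. The event shape (a plain "bucket" key) is an assumption for illustration; a real AWS Config or CloudWatch Events trigger delivers a richer payload that you would parse instead:

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Remediate a bucket flagged as public by blocking public access.

    Wire this up behind an AWS Config rule or a CloudWatch Events rule
    watching CloudTrail so remediation happens without a human in the loop.
    """
    bucket = event["bucket"]  # assumed event shape, for illustration
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    return {"remediated": bucket}
```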
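
And for point 3, a minimal example of taking humans out of the credential path with AWS Secrets Manager: the application fetches credentials at runtime instead of a person ever handling them. The secret name is a placeholder:

```python
import boto3

secrets = boto3.client("secretsmanager")

def get_db_credentials():
    # Fetch the secret at runtime; no human ever sees or copies it.
    response = secrets.get_secret_value(SecretId="prod/app/db")  # placeholder name
    return response["SecretString"]  # e.g., a JSON blob with username/password
```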

Final Thoughts

Queen’s famous lyric, “I want it all, I want it all, and I want it now,” accurately captures the sentiment at re:Invent this year. Security will always be job zero for us, and we continue to iterate on behalf of customers so they can securely build, experiment and create … right now! AWS is trusted by many of the world’s most risk-sensitive organizations precisely because we have demonstrated this unwavering commitment to putting security above all. Still, I believe we are in the early days of innovation and adoption of the cloud, and I look forward to seeing both the gains and use cases that come out of our latest batch of tools and services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds five patents in the field of cloud security architecture. Follow Steve on Twitter

Announcing the First AWS Security Conference: AWS re:Inforce 2019

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/announcing-the-first-aws-security-conference-aws-reinforce-2019/

On the eve of re:Invent 2018, I’m pleased to announce that AWS is launching our first conference dedicated to cloud security: AWS re:Inforce. The event will offer a deep dive into the latest approaches to security best practices and risk management utilizing AWS services, features, and tools. Security is the top priority at AWS, and AWS re:Inforce is emblematic of our commitment to giving customers direct access to the latest security research and trends from subject matter experts, along with the opportunity to participate in hands-on exercises with our services.

The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. The cost for a full conference pass will be $1,099.

Over the course of this two-day conference we will offer multiple content tracks designed to meet the needs of security and compliance professionals, from C-suite executives to security engineers, developers, risk and compliance officers, and more. Our technical track will offer detailed tactical education to take your security posture from reactive to proactive. We’ll also be offering a business enablement track tailored to assisting with your strategic migration decisions. You’ll find content delivered in a number of formats to meet a diversity of learning preferences, including keynotes, breakout sessions, Q&As, hands-on workshops, simulations, training and certification, as well as our interactive Security Jam. We anticipate 100+ sessions ranging in levels of ability from beginner to expert.

AWS re:Inforce will also feature our AWS Security Competency Partners, each of whom has demonstrated success in building products and solutions on AWS to support customers in multiple domains. With hundreds of industry-leading products, these partners will give you an opportunity to learn how to enhance the security of both on-premises and cloud environments.

Finally, you’ll find sessions built around the Security Pillar of the AWS Well-Architected Framework and the Security Perspective of our Cloud Adoption Framework (CAF). These will include Identity & Access Management, Infrastructure Security, Detective Controls, Governance, Risk & Compliance, Data Protection & Privacy, Configuration & Vulnerability Management, Security Services, and Incident Response. Our automated reasoning and cryptography researchers and scientists will also be available, as will our partners in the academic community, to discuss Provable Security and additional emerging security trends.

If you’d like to sign up to be notified of when registration opens, please visit:

https://reinforce.awsevents.com

Additional information and registration details will be shared in early 2019. We look forward to seeing you all there!

– Steve Schmidt, Chief Information Security Officer
Follow Steve on Twitter.

AWS Security Profiles: Quint Van Deman, Principal Business Development Manager

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-quint-van-deman-principal-business-development-manager/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.

How long have you been at AWS, and what do you do in your current role?

I joined AWS in August 2014. I spent my first two and a half years in the Professional Services group, where I ran around the world to help some of our largest customers sort through their security and identity implementations. For the last two years, I’ve parlayed that experience into my current role of Business Development Manager for the Identity and Directory Services group. I help the product development team build services and features that address the needs I’ve seen in our customer base. We’re working on a next generation of features that we think will radically simplify the way customers implement and manage identities and permissions within the cloud environment. The other key element of my job is to find and disseminate the most innovative solutions I’m seeing today across the broadest possible set of AWS customers to help them be more successful faster.

How do you explain your job to non-tech friends?

I keep one foot in the AWS service team organizations, where they build features, and one foot in day-to-day customer engagement to understand the real-world experiences of people using AWS. I learn about the similarities and differences between how these two groups operate, and then I help service teams understand these similarities and differences, as well.

You’re a “bar raiser” for the Security Blog. What does that role entail?

The notion of being a bar raiser has a lot of different facets at Amazon. The general concept is that, as we go about certain activities — whether hiring new employees or preparing blog posts — we send things past an outside party with no team biases. As a bar raiser for the Security Blog, I don’t have a lot of incentive to get posts out because of a deadline. My role is to make sure that nothing is published until it successfully addresses a customer need. At Amazon, we put the best customer experience first. As a bar raiser, I work to hold that line, even though it might not be the fastest approach, or the path of least resistance.

What’s the most challenging part of your job?

Ruthless prioritization. One of our leadership principles at Amazon is frugality. Sometimes, that means staying in cheap hotel rooms, but more often it means frugality of resources. In my case, I’ve been given the awesome charter to serve as the Business Development Manager for our suite of Identity and Directory Services. I’m something of a one-man army the world over. But that means a lot of things come past my desk, and I have to prioritize ruthlessly to ensure I’m focusing on the things that will be most impactful for our customers.

What’s your favorite part of your job?

A lot of our customers are doing an awesome job being bar raisers themselves. They’re pushing the envelope in terms of identity-focused solutions in their own AWS environments. One fulfilling part of my work is getting to collaborate with those customers who are on the leading edge: Their AWS field teams will get ahold of me, and then I get to do two really fun things. First, I get to dive in and help these customers succeed at whatever they’re trying to do. Second, I get to learn from them. I get to examine the really amazing ideas they’ve come up with and see if we might be able to generalize their solutions and roll them out to the delight of many more AWS customers that might not have teams mature enough to build them on their own. While my title is Business Development Manager, I’m a technologist through and through. Getting to dive into these thorny technical situations and see them resolve into really great solutions is extremely rewarding.

How did you choose your particular topics for re:Invent 2018?

Over the last year, I’ve talked with lots of customers and AWS field teams. My Mastering Identity at Every Layer of the Cake session was born out of the fact that I noticed a lot of folks doing a lot of work to get identity for AWS right, but other layers of identity that are just as important weren’t getting as much attention. I made it my mission to provide a more holistic understanding of what identity in the cloud means to these customers, and over time I developed ways of articulating the topic which really seemed to resonate. My session is about sharing this understanding more broadly. It’s a 400-level talk, since I want to really dive deep with my audience. I have five embedded demos, all of which are going to show how to combine multiple features, sprinkle in a bit of code, and apply them to near universally applicable customer use cases.

Why use the metaphor of a layer cake?

I’ve found that analogies and metaphors are very effective ways of grounding someone’s mental imagery when you’re trying to relay a complex topic. Last year, my metaphor was bridges. This year, I decided to go with cake: It’s actually very descriptive of the way that our customers need to think about Identity in AWS since there are multiple layers. (Also, who doesn’t like cake? It’s delicious.)

What are you hoping that your audience will take away from the session?

Customers are spending a lot of time getting identity right at the AWS layer. And that’s a ground-level, must-do task. I’m going to put a few new patterns in the audience’s hands to do this more effectively. But as a whole, we aren’t consistently putting as much effort into the infrastructure and application layers. That’s what I’m really hoping to expose people to. We have a wealth of features that really raise the bar in terms of cloud security and identity — from how users authenticate to operating systems or databases, to how they authenticate to the applications and APIs that they put on AWS. I want to expose these capabilities to folks and paint a vivid image for them of the really powerful things that they can do today that they couldn’t have done before.

What do you want your audience to do differently after attending your session?

During the session, I’ll be taking a handful of features that are really interesting in their own right, and combining them in a way that I hope will absolutely delight my audience. For example, I’ll show how you can take AWS CloudFormation macros and AWS Identity and Access Management, layer a little bit of customization on top, and come up with something far more magical than either of the two individually. It’s an advanced use case that, with very little effort, can disproportionately improve your security posture while letting your organization move faster. That’s just one example though, and the session is going to be loaded with them, including a grand finale. I’ve already started the work to open source a lot of what I’m going to show, but even where I can’t open source, I want to paint a very clear, prescriptive blueprint for how to get there. My goal is that my audience goes back to work on Monday and, within a couple of hours, they’ve measurably moved the security bar for their organization.
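
To make that combination concrete: a CloudFormation macro is backed by a Lambda function that receives a template fragment and returns a transformed one. The sketch below is a hypothetical macro, not the session demo, that attaches a permissions boundary to every IAM role in a template so teams can self-serve roles inside guardrails; the boundary ARN is a placeholder:

```python
def handler(event, context):
    """Hypothetical CloudFormation macro handler.

    CloudFormation invokes the macro with the template fragment; the
    macro returns a transformed fragment in the documented response shape.
    """
    fragment = event["fragment"]
    boundary_arn = "arn:aws:iam::123456789012:policy/TeamBoundary"  # placeholder

    # Attach a permissions boundary to every IAM role in the template.
    for resource in fragment.get("Resources", {}).values():
        if resource.get("Type") == "AWS::IAM::Role":
            resource.setdefault("Properties", {})["PermissionsBoundary"] = boundary_arn

    return {
        "requestId": event["requestId"],
        "status": "success",
        "fragment": fragment,
    }
```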

Any tips for first-time conference attendees?

Be deliberate about going outside of your comfort zone. If you’re not working in Security, come to one of our sessions. If you do work in Security, go to some other tracks, like DevOps or Analytics, to get that cross-pollination of ideas. One of the most amazing things about AWS is how it helps dramatically lower the barrier to entry for unfamiliar technology domains and tools. A developer can ship more secure code faster by being invested in security, and a security expert can disproportionately scale their impact by applying the tools of developers or data scientists. re:Invent is an amazing place to start exploring that diversity, and if you do, I suspect you’ll find ways to immediately make yourself better at your day job.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

Complexity and human understanding have always been at odds. I see initiatives across AWS that have all kinds of awesome innovation and computer science behind them. In the coming years, I think these will mature to the point that they will be able to offload much of the natural complexity that comes with securing large-scale environments with extremely fine-grained permissions. Folks will be able to provide very simple statements or rules of how they want their environment to be, and we should be able to manage the complexity for them, and present them with a nice, clean picture they can easily understand.

What does cloud security mean to you, personally?

I see possibilities today that were herculean tasks before. For example, making sure APIs can properly authenticate and authorize each other used to be an extremely elaborate process at scale. It became such an impossible mess that only the largest of organizations with the best skills, the best technology, and the best automation were really able to achieve it. Everyone else just had to punt or put a band-aid on the problem. But in the world of the cloud, all it takes is attaching an AWS IAM role on one side, and a fairly small resource-based policy to an Amazon API Gateway API on the other. Examples like this show how we’re making security that would once have been extremely difficult for most customers to afford or implement simple to configure, get right, and deploy ubiquitously, and that’s really powerful. It’s what keeps me passionate about my work.
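
For concreteness, the "fairly small resource-based policy" can look roughly like the document below, attached to an API Gateway REST API; the caller then simply runs with the named IAM role and signs its requests with SigV4. All ARNs here are placeholders:

```python
import json

# Allow only requests signed by one specific IAM role to invoke the API.
resource_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/CallerServiceRole"},
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:123456789012:abcdef1234/*",
        }
    ],
}
print(json.dumps(resource_policy, indent=2))
```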

If you had to pick any other job, what would you want to do with your life?

I’ve got all kinds of wacky hobbies. I kiteboard, surf, work on massive renovation projects at home, hike and camp in the backcountry, and fly small airplanes. It’s an overwhelming set of hobbies that doesn’t align with my professional aptitude. But if the world were my oyster and I had to do something else, I would want to combine those hobbies into one single career that’s never before been seen.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Author

Quint Van Deman

Quint is the global business development manager for AWS Identity and Directory services. In this role, he leads the incubation, scaling, and evolution of new and existing identity-based services, as well as field enablement and strategic customer advisement for the same. Before joining the BD team, Quint was an early member of the AWS Professional Services team, where he was a Senior Consultant leading cloud transformation teams at several prominent enterprise customers, and a company-wide subject matter expert on IAM and Identity federation.

AWS Security Profiles: Henrik Johansson, Principal, Office of the CISO

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-henrik-johansson-principal-office-of-the-ciso/

In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.

How long have you been at AWS, and what do you do in your current role?

As a Principal for the Office of the CISO, I not only get to spend time directly with our customers and their executives and operational teams, but I also get to work with our own service teams and other parts of our organization. Additionally, a big part of this role involves spending time with the industry as a whole, in both small and large settings, and trying to raise the bar of the overall industry together with a number of other teams within AWS.

How do you explain your job to non-tech friends?

Whether or not someone understands what the cloud is, I try to focus on the core part of the role: I help people and organizations understand AWS Security and what it means to operate securely on the cloud. And I focus on helping the industry achieve these same goals.

What’s your favorite part of your job?

Helping customers and their executive leadership understand the benefits of cloud security and how they can improve their overall security posture by using cloud features. Getting to show them how we can help drive roadmaps and new features and functions that they can use to secure their workloads (based on their valuable feedback) is very rewarding.

Tell us about the open source communities you support. Why are they important to AWS?

The open source community is important to me for a couple of reasons. First, it helps enable and inspire innovation by inviting the community at large to expand on the various use cases our services provide. I also really appreciate how customers enable other customers by not only sharing their own innovations but also inviting others to contribute and further improve their solutions. I have a couple of open source repositories that I maintain, where I put various security automation tools that I’ve built to show various innovative ways that customers can use our services to strengthen their security posture. Even if you don’t use open source in your company, you can still look at the vast number of projects out there, both from customers and from AWS, and learn from them.

What does cloud security mean to you, personally?

For me, it represents the possibility of creating efficient, secure solutions. I’ve been working in various security roles for almost twenty-five years, and the ability we have to protect data and our infrastructure has never been stronger. We have an incredible opportunity to solve challenges that would have been insurmountable before, and this leads to one thing: trust. It allows us to earn trust from customers, trust from users, and trust from the industry. It also enables our customers to earn trust from their users.

In your opinion, what’s the biggest challenge facing cloud security right now?

The opportunities far outweigh the challenges, honestly. The different methods that customers and users have to gain visibility into what they’re actually running is mind-blowing. That visibility is a combination of knowing what you have, knowing what you run, and knowing all the ins and outs of it. I still hear people talking about that server in the corner under someone’s desk that no one else knows about. That simply doesn’t exist in the cloud, where everything is an API call away. If anything, the challenge lies in finding people who want to continue driving the innovation and solving the hard cases with all the technology that’s at our fingertips.

Five years from now, what changes do you think we’ll see across the security/compliance landscape?

One shift we’re already seeing is that compliance is becoming a natural part of the security and innovation conversation. Previously, “compliance” meant that maybe you had a specific workload that needed to be PCI-compliant, or you were under HIPAA requirements. Nowadays, compliance is a more natural part of what we do. Privacy is everywhere. It has to be everywhere, based on requirements like GDPR, but we’re seeing a lot of these “have to be” requirements turn into “want to be” requirements — we’re not distinguishing between the users that are required to be protected and the “regular” users. More and more, we’re seeing that privacy is always going to have a seat at the table, which is something we’ve always wanted.

At re:Invent 2018, you’re presenting two sessions together with Andrew Krug. How did you choose your topics?

They’re a combination of what I’m passionate about and what I see our customers need. This is the third year I’ve presented my Five New Security Automations Using AWS Security Services & Open Source session. Previously, I’ve also built boot camps and talks around secure automation, DevSecOps, and container security. But we have a big need for open source security talks that demonstrate how people can actually use open source to integrate with our services — not just as a standalone piece, but actually using open source as inspiration for what they can build on their own. That’s not to say that AWS services aren’t extremely important. They’re the driving force here. But the open source piece allows people to adapt solutions to their specific needs, further driving the use cases together with the various AWS security services.

What are you hoping that your audience will take away from your sessions?

I want my audience to walk away feeling that they learned something new, and that they can build something that they didn’t know how to before. They don’t have to take and use the specific open source tools we put out there, but I want them to see our examples as a way to learn how our services work. It doesn’t matter if you just download a sample script or run a full project or framework; it’s important to learn what’s possible with services beyond what you see in the console or in the documentation.

Any tips for first-time conference attendees?

Plan ahead, but be open to ad-hoc changes. And most importantly, wear sneakers or comfortable walking shoes. Your feet will appreciate it.

If you had to pick any other job, what would you want to do with your life?

If I picked another role at Amazon, it would definitely be a position around innovation, thinking big, and building stuff. Even if it was a job somewhere else, I’d still want it to involve building, whether woodshop projects or a robot. Innovation and building are my passions.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Author

Henrik Johansson

Henrik is a Principal in the Office of the CISO at AWS Security. With over 22 years of experience in IT focused on security and compliance, he establishes and drives CISO-level relationships as a trusted cloud security advisor, with a passion for developing services and features for security and compliance at scale.

AWS Security Profiles: Sam Koppes, Senior Product Manager

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-sam-koppes-senior-product-manager/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.

How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for a year, and I’m a Senior Product Manager for the AWS CloudTrail team. I’m responsible for product roadmap decisions, customer outreach, and for planning our engineering work.

How do you explain your job to non-tech friends?

I work on a technical product, and for any tech product, responsibility is split in half: We have product managers and engineering managers. Product managers are responsible for what the product does. They’re responsible for figuring out how it behaves, what needs it addresses, and why customers would want it. Engineering managers are responsible for figuring out how to make it. When you look to build a product, there’s always the how and the what. I’m responsible for the what.

What are you currently working on that you’re excited about?

The scale challenges that we’re facing today are extremely interesting. We’re allowing customers to build things at an absolutely unheard-of scale, and bringing security into that mix is a challenge. But it’s also one of the great opportunities for AWS — we can bring a lot of value to customers by making security as turnkey as possible so that it just comes with the additional scale and additional service areas. I want people to sleep easy at night knowing that we’ve got their backs.

What’s your favorite part of your job?

When I deliver a product, I love sending out the What’s New announcement. During our launch calls, I love collecting social media feedback to measure the impact of our products. But really, the best part is the post-launch investigation that we do, which allows us to understand whether we hit the mark or not. My team usually does a really good job of making sure that we deliver the kinds of features that our customers need, so seeing the impact we’ve had is very gratifying. It’s a privilege to get to hear about the ways we’re changing people’s lives with the new features we’re building.

How did you choose your particular topic for re:Invent this year?

My session is called Augmenting Security Posture and Improving Operational Health with AWS CloudTrail. As a service, CloudTrail has been around a while. But I’ve found that customers face knowledge gaps in terms of what to do with it. There are a lot of people out there with an impressive depth of experience, but they sometimes lack an additional breadth that would be helpful. We also have a number of new customers who want more guidance. So I’m using the session to do a reboot: I’ll start from the beginning and go through what the service is and all the things it does for you, and then I’ll highlight some of the benefits of CloudTrail that might be a little less obvious. I built the session based on discussions with customers, who frequently tell me they start using the service — and only belatedly realize that they can do much more with it beyond, say, using it as a compliance tool. When you start using CloudTrail, you start amassing a huge pile of information that can be quite valuable. So I’ll spend some time showing customers how they can use this information to enhance their security posture, to increase their operational health, and to simplify their operational troubleshooting.
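
As a small, illustrative example of that kind of operational troubleshooting (not taken from the session itself): the LookupEvents API can answer questions like "who touched this security group in the last day?" The resource ID below is a placeholder:

```python
import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client("cloudtrail")

# Ask CloudTrail for every recorded event that touched one resource
# in the last 24 hours.
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "ResourceName", "AttributeValue": "sg-0123456789abcdef0"}
    ],
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
)
for event in response["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))
```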

What are you hoping that your audience will take away from it?

I want people to walk away with two fistfuls of ideas for cool things they can do with CloudTrail. There are some new features we’re going to talk about, so even if you’re a power user, my hope is that you’ll return to work with three or four features you have a burning desire to try out.

What does cloud security mean to you, personally?

I’m very aware of the magnitude of the threats that exist today. It’s an evolving landscape. We have a lot of powerful tools and really smart people who are fighting this battle, but we have to think of it as an ongoing war. To me, the promise you should get from any provider is that of a safe haven — an eye in the storm, if you will — where you have relative calm in the midst of the chaos going on in the industry. Problems will constantly evolve. New penetration techniques will appear. But if we’re really delivering on our promise of security, our customers should feel good about the fact that they have a secure place that allows them to go about their business without spending much mental capacity worrying about it all. People should absolutely remain vigilant and focused, but they don’t have to spend all of their time and energy trying to stay abreast of what’s going on in the security landscape.

What’s the most common misperception you encounter about cloud security and compliance?

Many people think that security is a magic wand: You wave it, and it leads to a binary state of secure or not secure. And that’s just not true. A better way to think of security is as a chain that’s only as strong as its weakest link. You might find yourself in a situation where lots of people have worked very hard to build a very secure environment — but then one person comes in and builds on top of it without thinking about security, and the whole thing blows wide open. All it takes is one little hole somewhere. People need to understand that everyone has to participate in security.

In your opinion, what’s the biggest challenge that people face as they move to the cloud?

At AWS, we follow this thing called the Shared Responsibility Model: AWS is responsible for securing everything from the virtualization layer down, and customers are responsible for building secure applications. One of the biggest challenges that people face lies in understanding what it means to be secure while doing application development. Companies like AWS have invested hugely in understanding different attack vectors and learning how to lock down our systems when it comes to the foundational service we offer. But when customers build on a platform that is fundamentally very secure, we still need to make sure that we’re educating them about the kinds of things that they need to do, or not do, to ensure that they stay within this secure footprint.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

I think we’ll see a tremendous amount of growth in the application of machine learning and artificial intelligence. Historically, we’ve approached security in a very binary way: rules-based security systems in which things are either okay or not okay. And we’ve built complex systems that define “okay” based on a number of criteria. But we’ve always lacked the ability to apply a pseudo-human level of intelligence to threat detection and remediation, and today, we’re seeing that start to change. I think we’re in the early stages of a world where machine learning and artificial intelligence become a foundational, indispensable part of an effective security perimeter. Right now, we’re in a world where we can build strong defenses against known threats, and we can build effective hedging strategies to intercept things we consider risky. Beyond that, we have no real way of dynamically detecting and adapting to threat vectors as they evolve — but that’s what we’ll start to see as machine learning and artificial intelligence enter the picture.

If you had to pick any other job, what would you want to do with your life?

I have a heavy engineering background, so I could see myself becoming a very vocal and customer-obsessed engineering manager. For a more drastic career change, I’d write novels—an ability that I’ve been developing in my free time.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Author

Sam Koppes

Sam is a Senior Product Manager at Amazon Web Services. He currently works on AWS CloudTrail and has worked on AWS CloudFormation, as well. He has extensive experience in both the product management and engineering disciplines, and is passionate about making complex technical offerings easy to understand for customers.