re:Invent 2020 Liveblog: Werner Vogels Keynote

Post Syndicated from AWS News Blog Team original https://aws.amazon.com/blogs/aws/reinvent-2020-liveblog-werner-vogels-keynote/

Join us Tuesday, Dec. 15 for Dr. Werner Vogels’ Keynote as he shares how Amazon is solving today’s hardest technology problems. Jeff Barr, Martin Beeby, Steve Roberts and Channy Yun will liveblog the event, sharing all the highlights, insights and major announcements from this final keynote of re:Invent 2020.

See you here Tuesday, 7:30-10:00 AM (PST)!


How to bulk import users and groups from CSV into AWS SSO

Post Syndicated from Darryn Hendricks original https://aws.amazon.com/blogs/security/how-to-bulk-import-users-and-groups-from-csv-into-aws-sso/

When you connect an external identity provider (IdP) to AWS Single Sign-On (SSO) using the Security Assertion Markup Language (SAML) 2.0 standard, you must create all users and groups in AWS SSO before you can make any assignments to AWS accounts or applications. If your IdP supports user and group provisioning by way of the System for Cross-Domain Identity Management (SCIM), we strongly recommend using SCIM to simplify ongoing lifecycle management for your users and groups in AWS SSO.

If your IdP doesn’t yet support automatic provisioning, you will need to create your users and groups manually in AWS SSO. Although manual creation of users and groups is the least complicated option to get started, it can be tedious and prone to errors.

In this post, we show you how to use a comma-separated values (CSV) file to bulk create users and groups in AWS SSO.

How it works

AWS SSO supports automatic provisioning of user and group information from an external IdP into AWS SSO using the SCIM protocol. For this solution, you use a PowerShell script that plays the provisioning role your IdP normally would, creating users and groups from a CSV file in AWS SSO. You create and populate the CSV file with your user and group information, which the PowerShell script then reads. Next, on your Windows, Linux, or macOS system with PowerShell Core installed, you run the PowerShell script. The script reads the users and groups from the CSV file and programmatically creates them in AWS SSO using your SCIM configuration for AWS SSO.

Assumptions

In this blog post, we assume the following:

  • You already have an AWS SSO-enabled account (free). For more information, see Enable AWS SSO.
  • You have the permissions needed to add users and groups in AWS SSO.
  • You configured a SAML IdP with AWS SSO, as described in How to Configure SAML 2.0 for AWS Single Sign-On.
  • You’re using a Windows, macOS, or Linux system with PowerShell Core installed.
  • Alternatively, if you’re not using a system with PowerShell Core, you’re using a Windows 7 or later system with PowerShell 4.0 or later installed.

Note: This article was authored and the code tested on a Microsoft Windows Server 2019 system with PowerShell installed.
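
If you’re not sure which version of PowerShell your system is running, you can check it with the built-in $PSVersionTable variable before you begin:

    # Display the installed PowerShell version (works in Windows PowerShell and PowerShell Core)
    $PSVersionTable.PSVersion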

Enable automatic provisioning

In this step, you enable automatic provisioning in AWS SSO. You use the automatic provisioning endpoints for AWS SSO to connect and create users and groups in AWS SSO.

To enable automatic provisioning in AWS SSO

    1. On the AWS SSO Console, go to the Single Sign-On page and then go to Settings.
    2. Change the provisioning from Manual to SCIM by selecting Enable automatic provisioning.
Figure 1: Enable automatic provisioning

    3. Copy the SCIM endpoint and the Access token (you can have up to two access token IDs). You use these values later.
Figure 2: Copy the SCIM endpoint and access token

Bulk create users and groups into AWS SSO

In this section, you create your users and groups from a CSV file in AWS SSO. To do this, you create a CSV file with your users’ profile information (for example, first name, last name, display name, and other values). You also create a PowerShell script that connects to AWS SSO and creates the users and groups from the CSV file in AWS SSO.

To bulk create your users from a CSV file

    1. Create a file called csv-example-users.csv with the following column headings: firstName, lastName, userName, displayName, emailAddress, and memberOf.

Note: The memberOf column will include all the groups you want to add the user to in AWS SSO. If the group you plan to add a user to isn’t in AWS SSO, the script automatically creates the group for you. If you want to add a user to multiple groups, you can add the group names separated by semicolons in the memberOf column.

    2. Populate the CSV file csv-example-users.csv with the users you want to create in AWS SSO.

Note: Before you populate the CSV file, take note of the existing users, groups, and group membership in AWS SSO. Make sure that none of the users or groups in the CSV file already exists in AWS SSO.

Note: For this to work, every user in csv-example-users.csv must have a firstName, lastName, userName, displayName, and emailAddress value specified. If any of these values are missing, that user isn’t created. The userName and emailAddress values must not contain any spaces.
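
For reference, a populated csv-example-users.csv might look like the following (the names, email addresses, and group names are purely illustrative):

    firstName,lastName,userName,displayName,emailAddress,memberOf
    John,Doe,johndoe,John Doe,johndoe@example.com,Developers
    Jane,Smith,janesmith,Jane Smith,janesmith@example.com,Developers;Administrators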

Figure 3: Create the CSV file and populate it with the users to create in AWS SSO

    3. Next, create a create_users.ps1 file and copy the following PowerShell code to it. Use a text editor like Notepad or TextEdit to edit the create_users.ps1 file.
    • Replace <SCIMENDPOINT> with the SCIM endpoint value you copied earlier.
    • Replace <BEARERTOKEN> with the Access token value you copied earlier.
    • Replace <CSVLOCATION> with the location of your CSV file (for example, C:\Users\testuser\Downloads\csv-example-users.csv. Relative paths are also accepted).
    #Input SCIM configuration and CSV file location
    $Url = "<SCIMENDPOINT>"
    $Bearertoken = "<BEARERTOKEN>"
    $CSVfile = "<CSVLOCATION>"
    $Headers = @{ Authorization = "Bearer $Bearertoken" }

    #Get users from the CSV file and store them in a variable
    $Users = Import-Csv -Delimiter "," -Path "$CSVfile"

    #Collect the unique, non-empty group names referenced in the CSV
    $Groups = $Users.memberOf -split ";"
    $Groups = $Groups | Sort-Object -Unique | Where-Object { $_ -ne "" }

    #Create each group in AWS SSO
    foreach ($Group in $Groups) {
        $SSOgroup = @{
            "displayName" = $Group.trim()
        }

        #Store the group attributes in JSON format
        $Groupjson = $SSOgroup | ConvertTo-Json

        try {
            $Response = Invoke-RestMethod -ContentType application/json -Uri "$Url/Groups" -Method POST -Headers $Headers -Body $Groupjson -UseBasicParsing
            Write-Host "Create group: The group $($Group) has been created successfully." -ForegroundColor Green
        }
        catch {
            $ErrorMessage = $_.Exception.Message

            if ($ErrorMessage -eq "The remote server returned an error: (409) Conflict.") {
                Write-Host "Error creating group: A group with the name $($Group) already exists." -ForegroundColor Yellow
            }
            else {
                Write-Host "An error has occurred: $($ErrorMessage)" -ForegroundColor Red
            }
        }
    }

    #Loop through each user
    foreach ($User in $Users) {

        #Get the user attributes from each field
        $SSOuser = @{
            name        = @{ familyName = $User.lastName.trim(); givenName = $User.firstName.trim() }
            displayName = $User.displayName.trim()
            userName    = $User.userName
            emails      = @(@{ value = $User.emailAddress; type = "work"; primary = "true" })
            active      = "true"
        }

        #Store the user attributes in JSON format
        $Userjson = $SSOuser | ConvertTo-Json

        #Create the user in AWS SSO
        try {
            $Response = Invoke-RestMethod -ContentType application/json -Uri "$Url/Users" -Method POST -Headers $Headers -Body $Userjson -UseBasicParsing
            Write-Host "Create user: The user $($User.userName) has been created successfully." -ForegroundColor Green
        }
        catch {
            $ErrorMessage = $_.Exception.Message

            if ($ErrorMessage -eq "The remote server returned an error: (409) Conflict.") {
                Write-Host "Error creating user: A user with the same username $($User.userName) already exists." -ForegroundColor Yellow
            }
            else {
                Write-Host "An error has occurred: $($ErrorMessage)" -ForegroundColor Red
            }
        }

        #Look up the user ID and the groups this user belongs to
        $UserName = $User.userName
        $UserId = (Invoke-RestMethod -ContentType application/json -Uri "$Url/Users`?filter=userName%20eq%20%22$UserName%22" -Method GET -Headers $Headers).Resources.id
        $Groups = $User.memberOf -split ";"

        #Loop through each group and add the user to it
        foreach ($Group in $Groups) {

            if (-not [string]::IsNullOrWhiteSpace($Group)) {

                #Get the GroupName and GroupId
                $GroupName = $Group.trim()
                $GroupId = (Invoke-RestMethod -ContentType application/json -Uri "$Url/Groups`?filter=displayName%20eq%20%22$GroupName%22" -Method GET -Headers $Headers).Resources.id

                #Store the group membership operation in a variable
                $AddUserToGroup = @{
                    Operations = @(@{ op = "add"; path = "members"; value = @(@{ value = $UserId }) })
                }

                #Convert to JSON format
                $AddUsertoGroupjson = $AddUserToGroup | ConvertTo-Json -Depth 4

                #Add the user to the group in AWS SSO
                try {
                    $Responses = Invoke-RestMethod -ContentType application/json -Uri "$Url/Groups/$GroupId" -Method PATCH -Headers $Headers -Body $AddUsertoGroupjson -UseBasicParsing
                    Write-Host "Add user to group: The user $($User.userName) has been added successfully to group $($GroupName)." -ForegroundColor Green
                }
                catch {
                    $ErrorMessage = $_.Exception.Message

                    if ($ErrorMessage -eq "The remote server returned an error: (409) Conflict.") {
                        Write-Host "Error adding user to group: The user $($User.userName) is already added to group $($GroupName)." -ForegroundColor Yellow
                    }
                    else {
                        Write-Host "An error has occurred: $($ErrorMessage)" -ForegroundColor Red
                    }
                }
            }
        }
    }

    4. Use Windows PowerShell to run the script create_users.ps1, as shown in the following figure.

    Figure 4: Run PowerShell script to create users from CSV in AWS SSO
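
If you haven’t run local scripts on this system before, PowerShell’s execution policy may block the script. One way to handle this is to relax the policy for the current session only. The following is a minimal example, assuming create_users.ps1 is in your current working directory:

    # Optionally allow script execution for this PowerShell session only (not persisted)
    Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process

    # Run the bulk-import script
    .\create_users.ps1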

    5. Use the AWS SSO console to verify that the users were successfully created. In the AWS SSO console, select Users from the left menu, as shown in figure 5.

    Figure 5: View the newly created users in AWS SSO console

    6. Use the AWS SSO console to verify that the groups were successfully created. In the AWS SSO console, select Groups from the left menu, as shown in figure 6.

    Figure 6: View the newly created groups in AWS SSO console

Your users, groups, and group memberships have been created in AWS SSO. You can now manage access for your identities in AWS SSO across your own applications, third-party applications (SaaS), and Amazon Web Services (AWS) environments.

How to run the PowerShell scripts on Linux and macOS

While this post focuses on running the PowerShell script on a Windows system, you can also run it on a Linux or macOS system that has PowerShell Core installed. Follow the steps in this post to create the required CSV file and the script for creating users and groups and adding users to groups. Then, on your Linux or macOS system, run the PowerShell script using the following command.

pwsh -File <Path to PowerShell Script>
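
For example, if create_users.ps1 is in your current directory, the command might look like this:

    pwsh -File ./create_users.ps1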

Conclusion

In this post, we showed you how to programmatically create users and groups from a CSV file into AWS SSO. This solution isn’t a replacement for automatic provisioning. However, it can help you to quickly get up and running with AWS SSO by reducing the administration burden of manually creating users in AWS SSO.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Darryn Hendricks

Darryn is a Senior Cloud Support Engineer for AWS Single Sign-On (SSO) based in Seattle, Washington. He is passionate about Cloud computing, identities, automation and helping customers leverage these key building blocks when moving to the Cloud. Outside of work, he loves spending time with his wife and daughter.

Author

Jose Ruiz

Jose is a Senior Solutions Architect – Security Specialist at AWS. He often enjoys “the road less traveled” and knows each technology has a security story often not spoken of. He takes this perspective when working with customers on highly complex solutions and driving security at the beginning of each build.

Security updates for Monday

Post Syndicated from original https://lwn.net/Articles/840110/rss

Security updates have been issued by Debian (lxml, openexr, openssl, and openssl1.0), Fedora (libpri, libxls, mediawiki, nodejs, opensc, php-wikimedia-assert, php-zordius-lightncandy, squeezelite, and wireshark), openSUSE (curl, openssh, openssl-1_0_0, python-urllib3, and rpmlint), Red Hat (libexif, libpq, and thunderbird), Slackware (p11), SUSE (kernel, Kubernetes, etcd, helm, openssl, openssl-1_0_0, and python), and Ubuntu (linux, linux-aws, linux-aws-5.4, linux-azure, linux-azure-5.4, linux-gcp, linux-gcp-5.4, linux-hwe-5.4, linux-kvm, linux-oracle, linux-oracle-5.4, linux-raspi, linux-raspi-5.4, linux, linux-aws, linux-aws-hwe, linux-azure, linux-azure-4.15, linux-gcp, linux-gcp-4.15, linux-gke-4.15, linux-hwe, linux-kvm, linux-oracle, linux-snapdragon, and linux, linux-aws, linux-azure, linux-gcp, linux-kvm, linux-oracle, linux-raspi).

Improving Cloudflare’s products and services, one feature request at a time

Post Syndicated from Mona Hadidi original https://blog.cloudflare.com/improving-cloudflare-products-and-services-one-feature-request-at-a-time/

I started at Cloudflare in April 2018. I was excited to join an innovative company that operates with integrity and takes customer needs into account when planning product roadmaps. After 2.5 years at Cloudflare, this excitement has only grown, as it has become even clearer that our customers’ feedback is essential to our business. At an all-hands meeting this November, Michelle Zatlyn, our co-founder and COO, said that “every time we see things and approach problems from the lens of a customer, we make better decisions.” One of the ways we make these decisions is through Customer Success Managers funneling our customers’ feedback to our product and engineering teams.

As a Strategic Customer Success Manager, I meet regularly with my customers to better understand their experience with Cloudflare and work cross-functionally with our internal teams to continually improve it. One thing my customers often mention to me, regardless of industry or size, is their appreciation that their feedback is not only heard but understood and actioned. We are an engineering-driven company that remains agile enough to incorporate customer feedback into our product roadmap and development cycle when that feedback aligns with our business priorities. In fact, for us, this customer feedback loop is a priority in and of itself.

Customer Success Managers, along with Solutions Engineers and Account Executives, convert customer feedback raised in Quarterly Business Reviews or other touchpoints into feature requests routed directly to Cloudflare’s Product and Engineering teams. Here’s how it works:

  • A feature request is submitted in our internal CRM on behalf of Cloudflare customers. It includes a description of the request, details on the desired solution, any current or potential workarounds, and level of urgency.
  • All feature requests are then evaluated by our Solutions Engineering Subject Matter Experts to ensure they have the necessary data and are properly classified.
  • Product Managers then review the feature requests and connect them with our internal tracking systems.
    • Often, our Product and Engineering teams already have many of these features planned as part of our roadmap, but customer requests can influence when we take action on these items and/or how we build these products. Factors that can impact these decisions include:
      • How critical the requests are,
      • The volume of customer requests per product or feature,
      • Partnerships with customers and promises we’ve made to these customers, and
      • Strategic direction from Cloudflare leadership
  • After these feature requests are filed on behalf of our customers, our Product team may reach out to Customer Success Managers to schedule meetings with our customers to ensure they understand their specific use cases and incorporate these requirements into product development.

Let’s illustrate this process with a real-life example. One of my customers, a large financial institution (Customer A), uses Cloudflare for Secondary DNS. Secondary DNS allows an organization to use multiple providers to host and relay its DNS information. It is traditionally used as a synchronized backup for an organization’s primary DNS infrastructure. Secondary DNS offers redundancy across many different nameservers, all of which are synchronized and thus respond to queries with the same answers. Using a Secondary DNS configuration allows for resiliency, availability, and a better overall end-user experience.

This particular customer was evaluating a multi-vendor approach to DNS and HTTP services including DDoS mitigation, WAF, and CDN, potentially utilizing Cloudflare at all levels for certain web applications. Cloudflare and many other HTTP proxy services can be provisioned and enabled over DNS, responding to DNS queries with our own IP space, attracting customer traffic, and performing the required functions. Only then is this HTTP traffic sent upstream to a customer’s infrastructure. Because an organization’s Secondary DNS nameservers should all respond with the same synchronized answers, any given customer using “standard” Secondary DNS cannot also use their Secondary DNS providers for HTTP proxy services (Layer 7 DDoS mitigation, WAF, CDN, and so on). Customer A wanted to leverage our proxy services while simultaneously relying on Cloudflare’s global scale and redundancy as a (Secondary) DNS provider.

Another customer interested in this feature (Customer B) was an organization whose on-premise DNS servers had logic that automatically updated their records. They wanted a single Secondary DNS provider that could receive their automated DNS record transfers and respond to queries at scale, while also allowing them to choose which records to proxy. This would let them benefit from Cloudflare’s DNS and proxy services without having to re-architect or migrate their entire DNS infrastructure.

I filed feature requests, and our DNS team reached out to schedule time with these customers to better understand their use cases and ensure the feature we were building would support their desired configuration.

Enter Secondary DNS Override: rather than responding to every DNS query with the answer pre-defined by a customer’s DNS master, the customer instructs Cloudflare, as the Secondary DNS vendor, to respond to selected queries with Cloudflare’s own IP space, which is what enables our HTTP proxy services.

We created Secondary DNS Override to proxy traffic for Customer A’s web apps utilizing multi-CDN, allowing them to benefit from Cloudflare’s security and performance features, despite having Secondary DNS already in use. Once Secondary DNS override was implemented, any of their end-users receiving DNS responses from Cloudflare (remember, only one DNS provider of many) now experienced the benefit of Cloudflare’s HTTP proxy services. Customer B simply enabled the automatic transfer of zone files to Cloudflare and set up their on-premise infrastructure as “hidden primary”; they could now utilize Cloudflare proxy services as they had requested.

While building the feature, our DNS team and designated account teams remained in close contact with both of these customers and more to keep them updated every step of the way.

We shipped our Secondary DNS Override feature in November 2019, API-only at first, and UI feature parity followed the next quarter. Customers were able to take advantage of Secondary DNS Override via API as soon as it was available, while simultaneously giving feedback on what they hoped to see in Cloudflare’s UI. They were delighted with the consultative approach we took while building out their desired features, as we demonstrated commitment to a strong partnership.

This feature is one example of the countless requests that have resulted in products and features shipped by Cloudflare’s Product and Engineering teams. Most feature requests originate as customer feedback provided in Quarterly Business Reviews, led by Customer Success Managers as part of our Premium Success offerings, and at the annual health check as part of our Standard Success offering.

Maintaining a close relationship with our customers and ensuring they are deriving the most value from our products is of the utmost importance to Cloudflare CSMs. Tell your CSM today how Cloudflare can help you mitigate risk, increase ROI, and achieve your business objectives. To learn more about Secondary DNS Override specifically or Cloudflare in general, please visit this link and a member of our team will reach out to you!

Authentication Failure

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/authentication-failure.html

This is a weird story of a building owner commissioning an artist to paint a mural on the side of his building — except that he wasn’t actually the building’s owner.

The fake landlord met Hawkins in person the day after Thanksgiving, supplying the paint and half the promised fee. They met again a couple of days later for lunch, when the job was mostly done. Hawkins showed him photographs. The patron seemed happy. He sent Hawkins the rest of the (sorry) dough.

But when Hawkins invited him down to see the final result, his client didn’t answer the phone. Hawkins called again. No answer. Hawkins emailed. Again, no answer.

[…]

Two days later, Hawkins got a call from the real Comte. And that Comte was not happy.

Comte says that he doesn’t believe Hawkins’s story, but I don’t think I would have demanded to see a photo ID before taking the commission.

Life of a Netflix Partner Engineer — The case of extra 40 ms

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/life-of-a-netflix-partner-engineer-the-case-of-extra-40-ms-b4c2dd278513

Life of a Netflix Partner Engineer — The case of the extra 40 ms

By: John Blair, Netflix Partner Engineering

The Netflix application runs on hundreds of smart TVs, streaming sticks and pay TV set top boxes. The role of a Partner Engineer at Netflix is to help device manufacturers launch the Netflix application on their devices. In this article we talk about one particularly difficult issue that blocked the launch of a device in Europe.

The mystery begins

Towards the end of 2017, I was on a conference call to discuss an issue with the Netflix application on a new set top box. The box was a new Android TV device with 4k playback, based on Android Open Source Project (AOSP) version 5.0, aka “Lollipop”. I had been at Netflix for a few years, and had shipped multiple devices, but this was my first Android TV device.

All four players involved in the device were on the call: there was the large European pay TV company (the operator) launching the device, the contractor integrating the set-top-box firmware (the integrator), the system-on-a-chip provider (the chip vendor), and myself (Netflix).

The integrator and Netflix had already completed the rigorous Netflix certification process, but during the TV operator’s internal trial an executive at the company reported a serious issue: Netflix playback on his device was “stuttering”: video would play for a very short time, then pause, then start again, then pause. It didn’t happen all the time, but would reliably start to happen within a few days of powering on the box. They supplied a video and it looked terrible.

The device integrator had found a way to reproduce the problem: repeatedly start Netflix, start playback, then return to the device UI. They supplied a script to automate the process. Sometimes it took as long as five minutes, but the script would always reliably reproduce the bug.

Meanwhile, a field engineer for the chip vendor had diagnosed the root cause: Netflix’s Android TV application, called Ninja, was not delivering audio data quickly enough. The stuttering was caused by buffer starvation in the device audio pipeline. Playback stopped when the decoder waited for Ninja to deliver more of the audio stream, then resumed once more data arrived. The integrator, the chip vendor and the operator all thought the issue was identified and their message to me was clear: Netflix, you have a bug in your application, and you need to fix it. I could hear the stress in the voices from the operator. Their device was late and running over budget and they expected results from me.

The investigation

I was skeptical. The same Ninja application runs on millions of Android TV devices, including smart TVs and other set top boxes. If there was a bug in Ninja, why is it only happening on this device?

I started by reproducing the issue myself using the script provided by the integrator. I contacted my counterpart at the chip vendor, asked if he’d seen anything like this before (he hadn’t). Next I started reading the Ninja source code. I wanted to find the precise code that delivers the audio data. I recognized a lot, but I started to lose the plot in the playback code and I needed help.

I walked upstairs and found the engineer who wrote the audio and video pipeline in Ninja, and he gave me a guided tour of the code. I spent some quality time with the source code myself to understand its working parts, adding my own logging to confirm my understanding. The Netflix application is complex, but at its simplest it streams data from a Netflix server, buffers several seconds worth of video and audio data on the device, then delivers video and audio frames one-at-a-time to the device’s playback hardware.

A diagram showing content downloaded to a device into a streaming buffer, then copied into the device decode buffer.
Figure 1: Device Playback Pipeline (simplified)

Let’s take a moment to talk about the audio/video pipeline in the Netflix application. Everything up until the “decoder buffer” is the same on every set top box and smart TV, but moving the A/V data into the device’s decoder buffer is a device-specific routine running in its own thread. This routine’s job is to keep the decoder buffer full by calling a Netflix provided API which provides the next frame of audio or video data. In Ninja, this job is performed by an Android Thread. There is a simple state machine and some logic to handle different play states, but under normal playback the thread copies one frame of data into the Android playback API, then tells the thread scheduler to wait 15 ms and invoke the handler again. When you create an Android thread, you can request that the thread be run repeatedly, as if in a loop, but it is the Android Thread scheduler that calls the handler, not your own application.

To play a 60fps video, the highest frame rate available in the Netflix catalog, the device must render a new frame every 16.66 ms, so checking for a new sample every 15ms is just fast enough to stay ahead of any video stream Netflix can provide. Because the integrator had identified the audio stream as the problem, I zeroed in on the specific thread handler that was delivering audio samples to the Android audio service.

I wanted to answer this question: where is the extra time? I assumed some function invoked by the handler would be the culprit, so I sprinkled log messages throughout the handler, assuming the guilty code would be apparent. What was soon apparent was that there was nothing in the handler that was misbehaving, and the handler was running in a few milliseconds even when playback was stuttering.

Aha, Insight

In the end, I focused on three numbers: the rate of data transfer, the time when the handler was invoked and the time when the handler passed control back to Android. I wrote a script to parse the log output, and made the graph below which gave me the answer.

A graph showing time spent in the thread handler and audio data throughput.
Figure 2: Visualizing Audio Throughput and Thread Handler Timing

The orange line is the rate that data moved from the streaming buffer into the Android audio system, in bytes/millisecond. You can see three distinct behaviors in this chart:

  1. The two, tall spiky parts where the data rate reaches 500 bytes/ms. This phase is buffering, before playback starts. The handler is copying data as fast as it can.
  2. The region in the middle is normal playback. Audio data is moved at about 45 bytes/ms.
  3. The stuttering region is on the right, when audio data is moving at closer to 10 bytes/ms. This is not fast enough to maintain playback.

The unavoidable conclusion: the orange line confirms what the chip vendor’s engineer reported: Ninja is not delivering audio data quickly enough.

To understand why, let’s see what story the yellow and grey lines tell.

The yellow line shows the time spent in the handler routine itself, calculated from timestamps recorded at the top and the bottom of the handler. In both normal and stutter playback regions, the time spent in the handler was the same: about 2 ms. The spikes show instances when the runtime was slower due to time spent on other tasks on the device.

The real root cause

The grey line, the time between calls invoking the handler, tells a different story. In the normal playback case you can see the handler is invoked about every 15 ms. In the stutter case, on the right, the handler is invoked approximately every 55 ms. There are an extra 40 ms between invocations, and there’s no way that can keep up with playback. But why?

I reported my discovery to the integrator and the chip vendor (look, it’s the Android Thread scheduler!), but they continued to push back on the Netflix behavior. Why don’t you just copy more data each time the handler is called? This was a fair criticism, but changing this behavior involved deeper changes than I was prepared to make, and I continued my search for the root cause. I dove into the Android source code, and learned that Android Threads are a userspace construct, and the thread scheduler uses the epoll() system call for timing. I knew epoll() performance isn’t guaranteed, so I suspected something was affecting epoll() in a systematic way.

At this point I was saved by another engineer at the chip supplier, who discovered a bug that had already been fixed in the next version of Android, named Marshmallow. The Android thread scheduler changes the behavior of threads depending on whether an application is running in the foreground or the background. Threads in the background are assigned an extra 40 ms (40000000 ns) of wait time.

A bug deep in the plumbing of Android itself meant this extra timer value was retained when the thread moved to the foreground. Usually the audio handler thread was created while the application was in the foreground, but sometimes the thread was created a little sooner, while Ninja was still in the background. When this happened, playback would stutter.

Lessons learned

This wasn’t the last bug we fixed on this platform, but it was the hardest to track down. It was outside of the Netflix application, in a part of the system that was outside of the playback pipeline, and all of the initial data pointed to a bug in the Netflix application itself.

This story really exemplifies an aspect of my job I love: I can’t predict all of the issues that our partners will throw at me, and I know that to fix them I have to understand multiple systems, work with great colleagues, and constantly push myself to learn more. What I do has a direct impact on real people and their enjoyment of a great product. I know when people enjoy Netflix in their living room, I’m an essential part of the team that made it happen.


Life of a Netflix Partner Engineer — The case of extra 40 ms was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Four new products: IQaudio is now Raspberry Pi

Post Syndicated from Roger Thornton original https://www.raspberrypi.org/blog/iqaudio-is-now-raspberry-pi/

We’re delighted to round off 2020 by welcoming four of the most popular IQaudio products to the Raspberry Pi fold. DAC+, DAC Pro, DigiAMP+, and Codec Zero will all be available to buy via our network of Raspberry Pi Approved Resellers.

We’ve had a busy 2020 here at Raspberry Pi. From the High Quality Camera to 8GB Raspberry Pi 4 to Compute Module 4 and Raspberry Pi 400, this year’s products have been under development for several years, and bringing them to market required us to build new capabilities in the engineering team. Building capabilities, rather than money or engineer time, is the real rate-limiting step for introducing new Raspberry Pi products.

One market we’ve never explored is hi-fi audio; this is a world unto itself, with a very demanding customer base, and we’ve never felt we had the capabilities needed to offer something distinctive. Over time, third parties have stepped in with a variety of audio I/O devices, amplifiers, and other accessories.

IQaudio

Founded by Gordon and Sharon Garrity together with Andrew Rankin in 2015, IQaudio was one of the first companies to recognise the potential of Raspberry Pi as a platform for hi-fi audio. IQaudio products are widely used by hobbyists and businesses (in-store audio streaming being a particularly popular use case). So when the opportunity arose to acquire IQaudio’s brand and product line late last year, we jumped at it.

Today we’re relaunching four of the most popular IQaudio products, at new affordable price points, via our network of Raspberry Pi Approved Resellers.

IQaudio DAC+

Priced at just $20, DAC+ is our lowest-cost audio output HAT, supporting 24‑bit 192kHz high-resolution digital audio. It uses a Texas Instruments PCM5122 DAC to deliver stereo analogue audio to a pair of phono connectors, and also provides a dedicated headphone amplifier.

IQaudio DAC+ HAT

IQaudio DAC Pro

Priced at $25, DAC Pro is our highest-fidelity audio output HAT. It supports the same audio input formats and output connectors as DAC+, but uses a Texas Instruments PCM5242 DAC, providing an even higher signal-to-noise ratio.

IQaudio DAC Pro HAT

In combination with an optional daughter board (due for relaunch in the first quarter of 2021), DAC Pro can support balanced output from a pair of XLR connectors.

IQaudio DigiAMP+

Where DAC+ and DAC Pro are designed to be used with an external amplifier, DigiAMP+ integrates a Texas Instruments TAS5756M digital-input amplifier directly onto the HAT, allowing you to drive a pair of passive speakers at up to 35W per channel. Combined with a Raspberry Pi board, it’s a complete hi-fi that’s the size of a deck of cards.

IQaudio DigiAMP+ HAT

DigiAMP+ is priced at $30, and requires an external 12-21V 3A DC power supply, sold separately; XP Power’s VEC65US19 is a suitable supply.

IQaudio Codec Zero

Codec Zero is a $20 audio I/O HAT, designed to fit within the Raspberry Pi Zero footprint. It is built around a Dialog Semiconductor DA7212 codec and supports a range of input and output devices, from the built-in MEMS microphone to external mono electret microphones and 1.2W, 8 ohm mono speakers.

IQaudio Codec Zero HAT

Unlike the other three products, which are in stock with our Approved Resellers now, Codec Zero will ship early in the New Year.

So there you have it. Four (nearly) new Raspberry Pi accessories, just in time for Christmas – hop over and buy yours now. This is the first time we’ve brought third-party products into our line-up like this; we’d like to thank the team at IQaudio for their help in making the transition.

The post Four new products: IQaudio is now Raspberry Pi appeared first on Raspberry Pi.

Hostopia Australia Signs Deal With Equinix, Launches New Private And Hybrid Cloud Offering

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/12/hostopia-australia-signs-deal-with-equinix-launches-new-private-and-hybrid-cloud-offering/

Sydney, Australia – 02 December, 2020 – Cloud services and hosting provider Hostopia Australia today announced a long-term deal with the world’s digital infrastructure company Equinix, Inc. (Nasdaq: EQIX).

Hostopia Australia will consolidate its current footprint of five separate data centres across Sydney and Melbourne into Equinix’s state-of-the-art SY5 and ME2 International Business Exchange™ (IBX®) data centres, launched in late 2019 and early 2020 respectively. The Equinix facilities are designed, built and operated to high energy efficiency standards. The SY5 facility is Equinix’s largest data centre in Australia.

The deployment will enable Hostopia to overcome the limitations of its current data centres spread across different locations, by leveraging Equinix’s world class infrastructure and network service capabilities, thus improving the service delivery and connectivity for over 50,000 of Hostopia’s customers across ANZ.

Ross Krumbeck, CTO for Hostopia Australia commented: “This collaboration with Equinix will enable us to consolidate and reduce the number of disparate providers we were working with. Beyond a simple data centre agreement, this collaboration will allow our customers to benefit from best of breed data centre facilities with high security and compliance standards, robust power and cooling redundancy as well as abundant connectivity options.”

The deployment with Equinix will allow Hostopia Australia’s managed cloud brand, Anchor, to provide customers with superior, best-of-breed equipment and high-speed connectivity. It also means Anchor will be able to provide more tailored data centre solutions, especially for those customers with very specialised needs and requirements, as well as co-sell and build side-by-side services on Platform Equinix®.

Hostopia’s Anchor customers will also be able to access the Equinix Fabric™ – formerly known as Equinix Cloud Exchange Fabric, which allows businesses to set up on-demand and secure connections to more than 2,300 participants across all regions around the world.

Guy Danskine, Managing Director, Equinix Australia said: “Businesses across Australia are increasingly turning to hybrid cloud architectures as part of their digital transformation journeys. By deploying on Platform Equinix, Hostopia will give its customers access to our robust cloud ecosystem and benefit from improved service delivery and connectivity.”

“Beyond just supporting our existing and future customers across ANZ, Equinix’s global footprint means we’ll be able to expand our reach and support both ANZ customers looking to grow internationally, and overseas companies looking to start operations in our region. This is of utmost importance as we are just launching our private and hybrid cloud offering and expect to grow significantly in 2021,” added Krumbeck.

Launching a new private and hybrid cloud offering, supported by Equinix

Platform Equinix will provide Hostopia with extra capabilities to support the launch of its new private and hybrid cloud offering which will trade under the Anchor name.

As of today, Anchor is offering ANZ customers the opportunity to build tailored private and hybrid cloud environments in VMWare, one of Anchor’s key partners, at Equinix SY5 and ME2 facilities.

This new offering has been built to support organisations which are not ready to operate in full public cloud environments yet. Anchor’s new private and hybrid cloud offering comes with an extra managed and professional services layer:

Anchor’s cloud experts will be able to create 100% bespoke solutions based on each customer’s unique needs and requirements, and accompany them step by step in their cloud journey whether they want to stay in a private cloud environment, or move toward hybrid or public cloud.

For Darryn McCoskery, General Manager for Hostopia Australia: “As a result of the COVID-19 pandemic we’ve seen a rush of organisations either expanding their cloud footprint, or looking into kick-starting their cloud journeys. It’s important we can accompany those organisations every step of the way, no matter where they are at with their cloud journeys.

“What 2020 has proven is that cloud is not an option anymore, and as we enter 2021 it will be even more paramount to build resilience and stay relevant amid a constantly and rapidly changing economic environment.”

Hostopia is the largest hosting company in Australia and an emerging leader in Cloud Engineering Services. Hostopia provides fast, secure and scalable solutions, enabling digital success for thousands of businesses around the world. With more than 20 years of experience in the industry, Hostopia has built and procured a versatile portfolio of cloud and hosting brands. Learn more at www.hostopia.com.au.

The post Hostopia Australia Signs Deal With Equinix, Launches New Private And Hybrid Cloud Offering appeared first on AWS Managed Services by Anchor.

The 5.10 kernel has been released

Post Syndicated from original https://lwn.net/Articles/840017/rss

Linus has released the 5.10 kernel. “I pretty much always wish that the last week was even calmer than it was, and that’s true here too. There’s a fair amount of fixes in here, including a few last-minute reverts for things that didn’t get fixed, but nothing makes me go ‘we need another week’. Things look fairly normal.”

Significant changes in this release include support for the Arm memory tagging extension, restricted rings for io_uring, sleepable BPF programs, the process_madvise() system call, ext4 “fast commits”, and more. See the LWN merge-window summaries (part 1, part 2) and the KernelNewbies 5.10 page for more details.

Privacy and Compliance Reading List

Post Syndicated from Val Vesa original https://blog.cloudflare.com/privacy-and-compliance-reading-list/

Privacy matters. Privacy and Compliance are at the heart of Cloudflare’s products and solutions. We are committed to providing built-in data protection and privacy throughout our global network and for every product in our portfolio. This is why we have dedicated a whole week to highlight important aspects of how we are working to make sure privacy will stay at the core of all we do as a business.

In case you missed any of the blog posts this week addressing the topics of Privacy and Compliance, you’ll find a summary below.

Welcome to Privacy & Compliance Week: Reflecting Values at Cloudflare’s Core

We started the week with this introduction by Matthew Prince. The blog post summarizes the early decisions that the founding team made to make sure customer data is kept private, that we do not sell or rent this data to third parties, and why trust is the foundation of our business. > Read the full blog post.

Introducing the Cloudflare Data Localization Suite

Cloudflare’s network is private and compliant by design. Preserving end-user privacy is core to our mission of helping to build a better Internet; we’ve never sold personal data about customers or end-users of our network. We comply with laws like GDPR and maintain certifications such as ISO-27001. In a blog post by John Graham-Cumming, we announced the Data Localization Suite, which helps businesses get the performance and security benefits of Cloudflare’s global network while making it easy to set rules and controls at the edge about where their data is stored and protected. The Data Localization Suite is available now as an add-on for Enterprise customers. > Read the full blog post.

Privacy needs to be built into the Internet

John also reflected upon three phases of the evolution of the Internet: from its invention to the mid-1990s the race was on for expansion and connectivity. Then, as more devices and networks became interconnected, the focus shifted with the introduction of SSL in 1994 to a second phase where security became paramount. We’re now in the full swing of phase 3, where privacy is becoming more and more important than ever. > Read the full blog post.

Helping build the next generation of privacy-preserving protocols

The Internet is growing in terms of its capacity and the number of people using it and evolving in terms of its design and functionality. As a player in the Internet ecosystem, Cloudflare has a responsibility to help the Internet grow in a way that respects and provides value for its users. In this blog post, Nick Sullivan summarizes several announcements on improving Internet protocols with respect to something important to our customers and Internet users worldwide: privacy. These initiatives are focussed around: fixing one of the last information leaks in HTTPS through Encrypted Client Hello (ECH), which supersedes Encrypted SNI; making DNS even more private by supporting Oblivious DNS-over-HTTPS (ODoH); developing a superior protocol for password authentication, OPAQUE, that makes password breaches less likely to occur.  > Read the full blog post.

OPAQUE: The Best Passwords Never Leave your Device

Passwords are a problem. They are a problem for reasons that are familiar to most readers. For us at Cloudflare, the problem lies much deeper and broader. Most readers will immediately acknowledge that passwords are hard to remember and manage, especially as password requirements grow increasingly complex. Luckily there are great software packages and browser add-ons to help manage passwords. Unfortunately, the greater underlying problem is beyond the reaches of software to solve. Today’s deep-dive blog post by Tatiana Bradley into OPAQUE is one possible answer. OPAQUE is one among many examples of systems that enable a password to be useful without it ever leaving your possession. No one likes passwords, but as long as they’re in use, at least we can ensure they are never given away. > Read the full blog post.

Good-bye ESNI, hello ECH!

In this post Christopher Patton dives into Encrypted Client Hello (ECH), a new extension for TLS that promises to significantly enhance the privacy of this critical Internet protocol. Today, a number of privacy-sensitive parameters of the TLS connection are negotiated in the clear. This leaves a trove of metadata available to network observers, including the endpoints’ identities, how they use the connection, and so on. > Read the full blog post.

Improving DNS Privacy with Oblivious DoH in 1.1.1.1

Tanya Verma and Sudheesh Singanamalla wrote this blog post for our announcement of support for a new proposed DNS standard — co-authored by engineers from Cloudflare, Apple, and Fastly — that separates IP addresses from queries, so that no single entity can see both at the same time. Even better, we’ve made source code available, so anyone can try out ODoH, or run their own ODoH service! > Read the full blog post.

Deprecating the __cfduid cookie

Cloudflare never tracks end-users across sites or sells their personal data. However, we didn’t want there to be any questions about our cookie use, and we don’t want any customer to think they need a cookie banner because of what we do. Therefore, we’ve announced that Cloudflare is deprecating the __cfduid cookie. Starting on 10 May 2021, we will stop adding a “Set-Cookie” header on all HTTP responses. The last __cfduid cookies will expire 30 days after that. So why did we use the __cfduid cookie before, and why can we remove it now? Read the full blog post by Sergi Isasi to find out.

Cloudflare’s privacy-first Web Analytics is now available for everyone

In September, we announced that we’re building a new, free Web Analytics product for the whole web. In this blog post by Jon Levine, we’re announcing that anyone can now sign up to use our new Web Analytics — even without changing your DNS settings. In other words, Cloudflare Web Analytics can now be deployed by adding an HTML snippet (in the same way many other popular web analytics tools are) making it easier than ever to use privacy-first tools to understand visitor behavior.

Announcing Workplace Records for Cloudflare for Teams

As businesses worldwide have shifted to remote work, many employees have been working from “home” — wherever that may be. Some employees have taken this opportunity to venture further from where they usually are, sometimes crossing state and national borders. Businesses worldwide pay employment taxes based on where their employees do work. For most businesses and in normal times, where employees do work has been relatively easy to determine: it’s where they come into the office. But 2020 has made everything more complicated, even taxes. In this blog post by Matthew Prince and Sam Rhea, we’re announcing the beta of a new feature for Cloudflare for Teams to help solve this problem: Workplace Records. Cloudflare for Teams uses Access and Gateway logs to provide the state and country from which employees are working. Workplace Records can be used to help finance, legal, and HR departments determine where payroll taxes are due and provide a record to defend those decisions.

Securing the post-quantum world

Quantum computing will change the face of Internet security forever — particularly in the realm of cryptography, which is the way communications and information are secured across channels like the Internet. Cryptography is critical to almost every aspect of modern life, from banking to cellular communications to connected refrigerators and systems that keep subways running on time. This ultra-powerful, highly sophisticated new generation of computing has the potential to unravel decades of work that have been put into developing the cryptographic algorithms and standards we use today. When will a quantum computer be built that is powerful enough to break all modern cryptography? By some estimates, it may take 10 to 15 years. This makes deploying post-quantum cryptography as soon as possible a pressing privacy concern. Cloudflare is taking steps to accelerate this transition. Read the full blog post by Nick Sullivan to find out more.

How to Build a Global Network that Complies with Local Law

Governments around the world have long had an interest in getting access to online records. Sometimes law enforcement is looking for evidence relevant to criminal investigations. Sometimes intelligence agencies are looking to learn more about what foreign governments or actors are doing. And online service providers of all kinds often serve as an access point for those electronic records.

For service providers like Cloudflare, though, those requests can be fraught. The work that law enforcement and other government authorities do is important. At the same time, the data that law enforcement and other government authorities are seeking does not belong to us. By using our services, our customers have put us in a position of trust over that data. Maintaining that trust is fundamental to our business and our values. Alissa Starzak details in her blog post how Cloudflare works to ensure compliance with laws like GDPR, particularly in the face of legal orders that might put us in the difficult position of being required to violate it, even when that means involving the courts.

Encrypting your WAF Payloads with Hybrid Public Key Encryption (HPKE)

The Cloudflare Web Application Firewall (WAF) blocks more than 72B malicious requests per day from reaching our customers’ applications. Typically, our users can easily confirm these requests were not legitimate by checking the URL, the query parameters, or other metadata that Cloudflare provides as part of the security event log in the dashboard. Request headers may contain cookies and POST payloads may contain username and password pairs submitted during a login attempt among other sensitive data.

We recognize that providing clear visibility in any security event is a core feature of a firewall, as this allows users to better fine-tune their rules. To accomplish this, while ensuring end-user privacy, we built encrypted WAF matched payload logging. This feature will log only the specific component of the request the WAF has deemed malicious — and it is encrypted using a customer-provided key to ensure that no Cloudflare employee can examine the data. Michael Tremante goes over this in full detail, explaining how only application owners who also have access to the Cloudflare dashboard as Super Administrators will be able to configure encrypted matched payload logging.

Supporting Jurisdictional Restrictions for Durable Objects

Durable Objects, currently in limited beta, already make it easy for customers to manage state on Cloudflare Workers without worrying about provisioning infrastructure. Greg McKeon announces in this blog post the upcoming launch of Jurisdictional Restrictions for Durable Objects, which ensure that a Durable Object only stores and processes data in a given geographical region. Jurisdictional Restrictions make it easy for developers to build serverless, stateful applications that not only comply with today’s regulations but can handle new and updated policies as new regulations are added. Head over to the blog post to read more and also request an invite to the beta.

I want my Cloudflare TV

We have also had a full week of Cloudflare TV segments focused on privacy and compliance; you can get the full list and more details on our dedicated Privacy Week page.

As always, we welcome your feedback and comments and we stay committed to putting the privacy and safety of your data at the core of everything we do.
