New Microsoft Azure NVIDIA GB200 Systems Shown as Two-Thirds Cooling

Post Syndicated from Cliff Robinson original https://www.servethehome.com/new-microsoft-azure-nvidia-gb200-systems-shown/

Microsoft Azure NVIDIA GB200 systems are huge, with two-thirds of the aisle space dedicated to cooling the NVIDIA rack.

The post New Microsoft Azure NVIDIA GB200 Systems Shown as Two-Thirds Cooling appeared first on ServeTheHome.

Security updates for Tuesday

Post Syndicated from corbet original https://lwn.net/Articles/993276/

Security updates have been issued by Debian (kernel), Fedora (webkitgtk), Mageia (cups), Oracle (e2fsprogs, kernel, and kernel-container), Red Hat (buildah, container-tools:rhel8, containernetworking-plugins, git-lfs, go-toolset:rhel8, golang, grafana-pcp, podman, and skopeo), SUSE (Mesa, mozjs115, podofo, and redis7), and Ubuntu (cups and cups-filters).

Improve security incident response times by using AWS Service Catalog to decentralize security notifications

Post Syndicated from Cheng Wang original https://aws.amazon.com/blogs/security/improve-security-incident-response-times-by-using-aws-service-catalog-to-decentralize-security-notifications/

Many organizations continuously receive security-related findings that highlight resources that aren’t configured according to the organization’s security policies. The findings can come from threat detection services like Amazon GuardDuty, from cloud security posture management (CSPM) services like AWS Security Hub, or from other sources. An important question to ask is: How, and how soon, are your teams notified of these findings?

Often, security-related findings are streamed to a single centralized security team or Security Operations Center (SOC). Although it’s a best practice to capture logs, findings, and metrics in standardized locations, the centralized team might not be the best equipped to make configuration changes in response to an incident. Involving the owners or developers of the impacted applications and resources is key because they have the context required to respond appropriately. Security teams often have manual processes for locating and contacting workload owners, but they might not be up to date on the current owners of a workload. Delays in notifying workload owners can increase the time to resolve a security incident or a resource misconfiguration.

This post outlines a decentralized approach to security notifications, using a self-service mechanism powered by AWS Service Catalog to enhance response times. With this mechanism, workload owners can subscribe to receive near real-time Security Hub notifications for their AWS accounts or workloads through email. The notifications include those from Security Hub product integrations like GuardDuty, AWS Health, Amazon Inspector, and third-party products, as well as notifications of non-compliance with security standards. These notifications can better equip your teams to configure AWS resources properly and reduce the exposure time of unsecured resources.

End-user experience

After you deploy the solution in this post, users in assigned groups can access a least-privilege AWS IAM Identity Center permission set, called SubscribeToSecurityNotifications, for their AWS accounts (Figure 1). The solution can also work with existing permission sets or federated IAM roles without IAM Identity Center.

Figure 1: IAM Identity Center portal with the permission set to subscribe to security notifications

After the user chooses SubscribeToSecurityNotifications, they are redirected to an AWS Service Catalog product for subscribing to security notifications and can see instructions on how to proceed (Figure 2).

Figure 2: AWS Service Catalog product view

The user can then choose the Launch product button and enter one or more email addresses and the minimum severity level for notifications (Critical, High, Medium, or Low). If the AWS account has multiple workloads, they can choose to receive only the notifications related to the applications they own by specifying the resource tags. They can also choose to restrict security notifications to include or exclude specific security products (Figure 3).

Figure 3: Service Catalog security notifications product parameters

You can update the Service Catalog product configurations after provisioning by doing the following:

  1. In the Service Catalog console, in the left navigation menu, choose Provisioned products.
  2. Select the provisioned product, choose Actions, and then choose Update.
  3. Update the parameters you want to change.

For accounts that have multiple applications, each application owner can set up their own notifications by provisioning an additional Service Catalog product. You can use the Filter findings by tag parameters to receive notifications only for a specific application. The example shown in Figure 3 specifies that the user will receive notifications only from resources with the tag key app and the tag value BigApp1 or AnotherApp.

After confirming the subscription, the user starts to receive email notifications for new Security Hub findings in near real-time. Each email contains a summary of the finding in the subject line, the account details, the finding details, recommendations (if any), the list of resources affected with their tags, and an IAM Identity Center shortcut link to the Security Hub finding (Figure 4). The email ends with the raw JSON of the finding.

Figure 4: Sample email showing details of the security notification

Choosing the link in the email takes the user directly to the AWS account and the finding in Security Hub, where they can see more details and search for related findings (Figure 5).

Figure 5: Security Hub finding detail page, linked from the notification email

Solution overview

We’ve provided two deployment options for this solution: a simpler option and a more advanced one.

Figure 6 shows the simpler deployment option of using the requesting user’s IAM permissions to create the resources required for notifications.

Figure 6: Architecture diagram of the simpler configuration of the solution

The solution involves the following steps:

  1. Create a central Subscribe to AWS Security Hub notifications Service Catalog product in an AWS account, and share it with the entire organization in AWS Organizations or with specific organizational units (OUs). Configure the product with the names of IAM roles or IAM Identity Center permission sets that can launch the product.
  2. Users who sign in through the designated IAM roles or permission sets can access the shared Service Catalog product from the AWS Management Console and enter the required parameters such as their email address and the minimum severity level for notifications.
  3. The Service Catalog product creates an AWS CloudFormation stack, which creates an Amazon Simple Notification Service (Amazon SNS) topic and an Amazon EventBridge rule that filters new Security Hub finding events that match the user’s parameters, such as minimum severity level. The rule then formats the Security Hub JSON event message to make it human-readable by using native EventBridge input transformers. The formatted message is then sent to SNS, which emails the user.
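As an illustration of that last step, the following CloudFormation sketch shows roughly what such an SNS topic and EventBridge rule could look like. It is a minimal example under assumptions of our own: the resource names, the example email address, and the fixed CRITICAL/HIGH severity filter are placeholders, not the exact resources the Service Catalog product provisions.

Resources:
  SecurityFindingsTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Protocol: email
          Endpoint: workload-owner@example.com      # placeholder address

  SecurityFindingsTopicPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties:
      Topics:
        - !Ref SecurityFindingsTopic
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          # Allow EventBridge to publish the transformed finding to the topic.
          - Sid: AllowEventBridgePublish
            Effect: Allow
            Principal:
              Service: events.amazonaws.com
            Action: sns:Publish
            Resource: !Ref SecurityFindingsTopic

  SecurityFindingsRule:
    Type: AWS::Events::Rule
    Properties:
      Description: Forward new Security Hub findings above a severity threshold
      State: ENABLED
      EventPattern:
        source:
          - aws.securityhub
        detail-type:
          - Security Hub Findings - Imported
        detail:
          findings:
            Severity:
              Label:
                - CRITICAL
                - HIGH
      Targets:
        - Id: EmailSubscribers
          Arn: !Ref SecurityFindingsTopic
          InputTransformer:
            # Turn the raw finding JSON into a short, human-readable message.
            InputPathsMap:
              severity: $.detail.findings[0].Severity.Label
              account: $.detail.findings[0].AwsAccountId
              title: $.detail.findings[0].Title
            InputTemplate: |
              "Security Hub finding (<severity>) in account <account>: <title>"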

We also provide a more advanced and recommended deployment option, shown in Figure 7. This option involves using an AWS Lambda function to enhance messages by doing conversions from UTC to your selected time zone, setting the email subject to the finding summary, and including an IAM Identity Center shortcut link to the finding. So that your users don’t need permissions to create Lambda functions and IAM roles, a Service Catalog launch role is used to create resources on behalf of the user, and this role is restricted by using IAM permissions boundaries.

Figure 7: Architecture diagram of the advanced configuration using a Service Catalog launch role

The architecture is similar to the previous option, but with the following changes:

  1. Create a CloudFormation StackSet in advance to pre-create an IAM role and an IAM permissions boundary policy in every AWS account. The IAM role is used by the Service Catalog product as a launch role. It has permissions to create CloudFormation resources such as SNS topics, as well as to create IAM roles that are restricted by the IAM permissions boundary policy, which allows only publishing SNS messages and writing to Amazon CloudWatch Logs (a sketch of this policy follows the list below).
  2. Users who want to subscribe to security notifications require only minimal permissions; just enough to access Service Catalog and to pass the pre-created role (from the preceding step) to Service Catalog. This solution provides a sample IAM Identity Center permission set with these minimal permissions.
  3. The Service Catalog product uses a Lambda function to format the message to make it human-readable. The stack creates an IAM role, limited by the permissions boundary, and the role is assumed by the Lambda function to publish the SNS message.
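The permissions boundary described in step 1 might look something like the following sketch; the policy name is a placeholder of ours, and the actual solution may scope the actions differently.

Resources:
  NotificationPermissionsBoundary:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: SecurityNotificationsBoundary   # placeholder name
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          # Any role created by the Service Catalog product is capped to these
          # actions, no matter what its own policies grant.
          - Sid: AllowSnsPublish
            Effect: Allow
            Action: sns:Publish
            Resource: "*"
          - Sid: AllowLogWrites
            Effect: Allow
            Action:
              - logs:CreateLogGroup
              - logs:CreateLogStream
              - logs:PutLogEvents
            Resource: "*"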

Prerequisites

The solution installation requires the following:

  1. Administrator-level access to AWS Organizations. AWS Organizations must have all features enabled.
  2. Security Hub enabled in the accounts you are monitoring.
  3. An AWS account to host this solution, for example the Security Hub administrator account or a shared services account. This cannot be the management account.
  4. One or more AWS accounts to consume the Service Catalog product.
  5. Authentication that uses AWS IAM Identity Center or federated IAM role names in every AWS account for users accessing the Service Catalog product.
  6. (Optional, only required when you opt to use Service Catalog launch roles) CloudFormation StackSet creation access to either the management account or a CloudFormation delegated administrator account.
  7. This solution supports notifications from multiple AWS Regions. If you operate Security Hub in multiple Regions, evaluate the Security Hub cross-Region aggregation feature and enable it for the applicable Regions to simplify deployment.

Walkthrough

There are four steps to deploy this solution:

  1. Configure AWS Organizations to allow Service Catalog product sharing.
  2. (Optional, recommended) Use CloudFormation StackSets to deploy the Service Catalog launch IAM role across accounts.
  3. Create the Service Catalog product that allows users to subscribe to Security Hub notifications. Deploy it in the specific Region where you want to monitor your Security Hub findings, or where you enabled cross-Region aggregation.
  4. (Optional, recommended) Provision least-privileged IAM Identity Center permission sets.

Step 1: Configure AWS Organizations

Service Catalog organizations sharing in AWS Organizations must be enabled, and the account that is hosting the solution must be one of the delegated administrators for Service Catalog. This allows the Service Catalog product to be shared to other AWS accounts in the organization.

To enable this configuration, sign in to the AWS Management Console in the management AWS account, launch the AWS CloudShell service, and enter the following commands. Replace the <Account ID> variable with the ID of the account that will host the Service Catalog product.

# Enable AWS Organizations integration in Service Catalog
aws servicecatalog enable-aws-organizations-access

# Nominate the account to be one of the delegated administrators for Service Catalog
aws organizations register-delegated-administrator --account-id <Account ID> --service-principal servicecatalog.amazonaws.com

Step 2: (Optional, recommended) Deploy IAM roles across accounts with CloudFormation StackSets

The following steps create a CloudFormation StackSet to deploy a Service Catalog launch role and permissions boundary across your accounts. This is highly recommended if you plan to enable Lambda formatting, because if you skip this step, only users who have permissions to create IAM roles will be able to subscribe to security notifications.

To deploy IAM roles with StackSets

  1. Sign in to the AWS Management Console from the management AWS account, or from a CloudFormation delegated administrator account.
  2. Download the CloudFormation template for creating the StackSet.
  3. Navigate to the AWS CloudFormation page.
  4. Choose Create stack, and then choose With new resources (standard).
  5. Choose Upload a template file and upload the CloudFormation template that you downloaded earlier: SecurityHub_notifications_IAM_role_stackset.yaml. Then choose Next.
  6. Enter the stack name SecurityNotifications-IAM-roles-StackSet.
  7. Enter the following values for the parameters:
    1. AWS Organization ID: Start AWS CloudShell and enter the command provided in the parameter description to get the organization ID.
    2. Organization root ID or OU ID(s): To deploy the IAM role and permissions boundary to every account, enter the organization root ID (use CloudShell and the command in the parameter description to retrieve it). To deploy to specific OUs, enter a comma-separated list of OU IDs. Make sure that you include the OU of the account that is hosting the solution.
    3. Current Account Type: Choose either Management account or Delegated administrator account, as needed.
    4. Formatting method: Indicate whether you plan to use the Lambda formatter for Security Hub notifications, or native EventBridge formatting with no Lambda functions. If you’re unsure, choose Lambda.
  8. Choose Next, and then optionally enter tags and choose Submit. Wait for the stack creation to finish.

Step 3: Create Service Catalog product

Next, run the included installation script that creates the CloudFormation templates that are required to deploy the Service Catalog product and portfolio.

To run the installation script

  1. Sign in to the console of the AWS account and Region that will host the solution, and start the AWS CloudShell service.
  2. In the terminal, enter the following commands:
    git clone https://github.com/aws-samples/improving-security-incident-response-times-by-decentralizing-notifications.git
    
    cd improving-security-incident-response-times-by-decentralizing-notifications
    
    ./install.sh

The script will ask for the following information:

  • Whether you will be using the Lambda formatter (as opposed to the native EventBridge formatter).
  • The timezone to use for displaying dates and times in the email notifications, for example Australia/Melbourne. The default is UTC.
  • The Service Catalog provider display name, which can be your company, organization, or team name.
  • The Service Catalog product version, which defaults to v1. Increment this value if you make a change in the product CloudFormation template file.
  • Whether you deployed the IAM role StackSet in Step 2, earlier.
  • The principal type that will use the Service Catalog product. If you are using IAM Identity Center, enter IAM_Identity_Center_Permission_Set. If you have federated IAM roles configured, enter IAM role name.
  • If you entered IAM_Identity_Center_Permission_Set in the previous step, enter the IAM Identity Center URL subdomain. This is used for creating a shortcut URL link to Security Hub in the email. For example, if your URL looks like this: https://d-abcd1234.awsapps.com/start/#/, then enter d-abcd1234.
  • The principals that will have access to the Service Catalog product across the AWS accounts. If you’re using IAM Identity Center, this will be a permission set name. If you plan to deploy the provided permission set in the next step (Step 4), press Enter to accept the default value SubscribeToSecurityNotifications. Otherwise, enter an appropriate permission set name (for example AWSPowerUserAccess) or the IAM role name that your users assume.

The script creates the SecurityNotifications-SC-Bucket and SecurityNotifications-Service-Catalog CloudFormation stacks.

After the script finishes the installation, it outputs the Service Catalog Product ID, which you will need in the next step. The script then asks whether it should automatically share this Service Catalog portfolio with the entire organization or a specific account, or whether you will configure sharing to specific OUs manually.

(Optional) To manually configure sharing with an OU

  1. In the Service Catalog console, choose Portfolios.
  2. Choose Subscribe to AWS Security Hub notifications.
  3. On the Share tab, choose Add a share.
  4. Choose AWS Organization, and then select the OU. The product will be shared to the accounts and child OUs within the selected OU.
  5. Select Principal sharing, and then choose Share.

To expand this solution across Regions, enable Security Hub cross-Region aggregation. This results in the email notifications coming from the linked Regions that are configured in Security Hub, even though the Service Catalog product is instantiated in a single Region. If cross-Region aggregation isn’t enabled and you want to monitor multiple Regions, you must repeat the preceding steps in all the Regions you are monitoring.
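If you manage Security Hub through infrastructure as code, cross-Region aggregation can also be declared in a template. The snippet below is a minimal sketch that assumes the AWS::SecurityHub::FindingAggregator resource type and an ALL_REGIONS linking mode; adjust it to the Regions you actually operate in.

Resources:
  CrossRegionFindingAggregator:
    Type: AWS::SecurityHub::FindingAggregator
    Properties:
      # Link findings from all current and future Regions to the Region
      # where this stack (and the Service Catalog product) is deployed.
      RegionLinkingMode: ALL_REGIONS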

Step 4: (Optional, recommended) Provision IAM Identity Center permission sets

This step requires you to have completed Step 2 (Deploy IAM roles across accounts with CloudFormation StackSets).

If you’re using IAM Identity Center, the following steps create a custom permission set, SubscribeToSecurityNotifications, that provides least-privileged access for users to subscribe to security notifications. The permission set redirects to the Service Catalog page to launch the product.

To provision Identity Center permission sets

  1. Sign in to the AWS Management Console from the management AWS account, or from an IAM Identity Center delegated administrator
  2. Download the CloudFormation template for creating the permission set.
  3. Navigate to the AWS CloudFormation page.
  4. Choose Create stack, and then choose With new resources (standard).
  5. Choose Upload a template file and upload the CloudFormation template you downloaded earlier: SecurityHub_notifications_PermissionSets.yaml. Then choose Next.
  6. Enter the stack name SecurityNotifications-PermissionSet.
  7. Enter the following values for the parameters:
    1. AWS IAM Identity Center Instance ARN: Use the AWS CloudShell command in the parameter description to get the IAM Identity Center ARN.
    2. Permission set name: Use the default value SubscribeToSecurityNotifications.
    3. Service Catalog product ID: Use the last output line of the install.sh script in Step 3, or alternatively get the product ID from the Service Catalog console for the product account.
  8. Choose Next, optionally enter tags, and then choose Submit. Wait for the stack creation to finish.

Next, go to the IAM Identity Center console, select your AWS accounts, and assign access to the SubscribeToSecurityNotifications permission set for your users or groups.

Testing

To test the solution, sign in to an AWS account, making sure to sign in with the designated IAM Identity Center permission set or IAM role. Launch the product in Service Catalog to subscribe to Security Hub security notifications.

Wait for a Security Hub notification. For example, if you have the AWS Foundational Security Best Practices (FSBP) standard enabled, creating an S3 bucket with no server access logging enabled should generate a notification within a few minutes.
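As an example of such a trigger, the following minimal CloudFormation snippet creates a bucket with no server access logging configured; the bucket name is a placeholder, and you should delete the bucket once you have confirmed the notification arrives.

Resources:
  NotificationTestBucket:
    Type: AWS::S3::Bucket
    Properties:
      # No LoggingConfiguration is defined, so the FSBP server access logging
      # control should flag this bucket shortly after creation.
      BucketName: security-notification-test-bucket-example   # placeholder name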

Additional considerations

Keep in mind the following:

Cleanup

To remove unneeded resources after testing the solution, follow these steps:

  1. In the workload account or accounts where the product was launched:
    1. Go to the Service Catalog provisioned products page and terminate each associated provisioned product. This stops security notifications from being sent to the email address associated with the product.
  2. In the AWS account that is hosting the solution:
    1. In the Service Catalog console, choose Portfolios, and then choose Subscribe to AWS Security Hub notifications. On the Share tab, select the items in the list and choose Actions, then choose Unshare.
    2. In the CloudFormation console, delete the SecurityNotifications-Service-Catalog stack.
    3. In the Amazon S3 console, for the two buckets starting with securitynotifications-sc-bucket, select the bucket and choose Empty to empty the bucket.
    4. In the CloudFormation console, delete the SecurityNotifications-SC-Bucket stack.
  3. If applicable, go to the management account or the CloudFormation delegated administrator account and delete the SecurityNotifications-IAM-roles-StackSet stack.
  4. If applicable, go to the management account or the IAM Identity Center delegated administrator account and delete the SecurityNotifications-PermissionSet stack.

Conclusion

The solution described in this blog post enables you to set up a self-service standardized mechanism that application or workload owners can use to get security notifications within minutes through email, as opposed to being contacted by a security team later. This can help to improve your security posture by reducing the incident resolution time, which reduces the time that a security issue remains active.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.
 

Cheng Wang

Cheng is a Solutions Architect at AWS in Melbourne, Australia. With a strong consulting background, he leverages his cloud infrastructure expertise to support enterprise customers in designing and deploying cloud architectures that drive efficiency and innovation.
Karthikeyan KM

KM is a Senior Technical Account Manager who supports enterprise users at AWS. He has over 18 years of IT experience, and he enjoys building reliable, scalable, and efficient solutions.
Randy Patrick

Randy is a Senior Technical Account Manager who supports enterprise customers at AWS. He has 20 years of IT experience, which he uses to build secure, resilient, and optimized solutions. In his spare time, Randy enjoys tinkering in home theater, playing guitar, and hiking trails at the park.

Contributor

Special thanks to Rizvi Rahim, who made significant contributions to this post.

Leveraging Kubernetes virtual machines at Cloudflare with KubeVirt

Post Syndicated from Justin Cichra original https://blog.cloudflare.com/leveraging-kubernetes-virtual-machines-with-kubevirt

Cloudflare runs several multi-tenant Kubernetes clusters across our core data centers. These general-purpose clusters run on bare metal and power our control plane, analytics, and various engineering tools such as build infrastructure and continuous integration.

Kubernetes is a container orchestration platform. It enables software engineers to deploy containerized applications to a cluster of machines. This enables teams to build highly-available software on a scalable and resilient platform.

In this blog post we discuss our Kubernetes architecture, why we needed virtualization, and how we’re using it today.

Multi-tenant clusters

Multi-tenancy is a concept where one system can share its resources among a wide range of customers. This model allows us to build and manage a small number of general purpose Kubernetes clusters for our internal application teams. Keeping the number of clusters small reduces our operational toil. This model shrinks costs and increases computational efficiency by sharing hardware. Multi-tenancy also allows us to scale more efficiently. Scaling is done at either a cluster or application level. Cluster operators scale the platform by adding more hardware. Teams scale their applications by updating their Kubernetes manifests. They can scale vertically by increasing their resource requests or horizontally by increasing the number of replicas.

All of our Kubernetes clusters are multi-tenant with various components enabled for a secure and resilient platform.

Pods are secured using the latest standards recommended by the Kubernetes project. We use Pod Security Admission (PSA) and Pod Security Standards to ensure all workloads are following best practices. By default, all namespaces use the most restrictive profile, and only a few Kubernetes control plane namespaces are granted privileged access. For additional policies not covered by PSA, we built custom Validating Webhooks on top of the controller-runtime framework. PSA and our custom policies ensure clusters are secure and workloads are isolated.
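As a rough illustration of how Pod Security Admission is applied, a namespace opts in to a profile through labels such as the ones below; this is a minimal sketch, the namespace name is a placeholder, and the exact labels and versions may differ from what runs in our clusters.

apiVersion: v1
kind: Namespace
metadata:
  name: example-team          # placeholder namespace
  labels:
    # Enforce the most restrictive Pod Security Standard profile, and also
    # warn and audit on violations.
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted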

Our need for virtualization

A select number of teams needed tight integration with the Linux kernel. Examples include Docker daemons for build infrastructure and the ability to simulate servers running the software and configuration of our global network. With our pod security requirements, these workloads are not permitted to interface with the host kernel at a deep level (e.g. no iptables or sysctls). Doing so may disrupt other tenants sharing the node and open additional attack vectors if an application was compromised. A virtualization platform would enable these workloads to interact with their own kernel within a secured Kubernetes cluster.

We considered various virtualization solutions. Running a separate virtualization platform outside of Kubernetes would have worked, but would not tightly integrate containerized workloads with virtual machines. It would also be an additional operational burden on our team, as backups, alerting, and fleet management would have to exist for both our Kubernetes and virtual machine clusters.

We then looked for solutions that run virtual machines within Kubernetes. Teams could already manually deploy QEMU pods, but this was not an elegant solution. We needed a better way. There were several other options, but KubeVirt was the tool that met the majority of our requirements. Other solutions required a privileged container to run a virtual machine, but KubeVirt did not – this was a crucial requirement in our goal of creating a more secure multi-tenant cluster. KubeVirt also uses a feature of the Kubernetes API called Custom Resource Definitions (CRDs), which extends the Kubernetes API with new objects, increasing the flexibility of Kubernetes beyond its built-in types. For KubeVirt, this includes objects such as VirtualMachine and VirtualMachineInstanceReplicaSet. We felt the use of CRDs would allow KubeVirt to grow as more features were added.
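For a sense of what these custom resources look like, here is a minimal, hypothetical VirtualMachine manifest; the name, namespace, image, and sizing are placeholders rather than a real workload.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm            # placeholder name
  namespace: example-team     # placeholder namespace
spec:
  running: true
  template:
    spec:
      domain:
        resources:
          requests:
            cpu: "2"
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          # A container disk stores the VM image in a container registry.
          containerDisk:
            image: registry.example.com/images/debian-cloud:12   # placeholder image

A minimal VirtualMachine object expressed through the KubeVirt CRDs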

What is KubeVirt?

KubeVirt is a virtualization platform that enables users to run virtual machines within Kubernetes. With KubeVirt, virtual machines run alongside containerized workloads on the same platform. Kubernetes primitives such as network policies, configmaps, and services all integrate with virtual machines. KubeVirt scales with our needs and is successfully running hundreds of virtual machines across several clusters. We frequently remediate Kubernetes nodes, so virtual machines and pods are always exercising their startup/shutdown processes.

How Cloudflare uses KubeVirt

There are a number of internal projects leveraging virtual machines at Cloudflare. We’ll touch on a few of our more popular use cases:

  1. Kubernetes scalability testing

  2. Development environments

  3. Kernel and iPXE testing

  4. Build pipelines

Kubernetes scalability testing

Setup process

Our staging clusters are much smaller than our largest production clusters. They also run on bare metal and mirror the configuration we have for each production cluster. This is extremely useful when rolling out new software, operating systems, or kernel changes; however, they miss bugs that only surface at scale. We use KubeVirt to bridge this gap and virtualize Kubernetes clusters with hundreds of nodes and thousands of pods.

The setup process for virtualized clusters differs from our bare metal provisioning steps. For bare metal, we use Salt to provision clusters from start to finish. For our virtualized clusters we use Ansible and kubeadm.  Our bare metal staging clusters are responsible for testing and validating our Salt configuration. The virtualized clusters give us a vanilla Kubernetes environment without any Cloudflare customizations. Having a stock environment in addition to our Salt environment helps us isolate bugs down to a Kubernetes change, a kernel change, or a Cloudflare-specific configuration change.

Our virtualized clusters consist of a KubeVirt VirtualMachine object per node. We create three control-plane nodes and any number of worker nodes. Each virtual machine starts out as a vanilla Debian generic cloud image. Using KubeVirt’s cloud-init support, the virtual machine downloads an internal Ansible playbook which installs a recent kernel, cri-o (the container runtime we use), and kubeadm.
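As a sketch of that bootstrap step, the cloud-init payload can be attached to the VirtualMachine as a volume along these lines; the repository URL and package list are placeholders, and a matching disk named cloudinit must also be listed under the domain's disks. The playbook it pulls looks broadly like the one below.

volumes:
  - name: cloudinit
    cloudInitNoCloud:
      userData: |
        #cloud-config
        package_update: true
        packages:
          - ansible
          - git
        runcmd:
          # Fetch and run the internal playbook that installs a recent
          # kernel, cri-o, and kubeadm (the URL is a placeholder).
          - ansible-pull -U https://git.example.com/k8s/node-setup.git site.yml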

- name: Add the Kubernetes gpg key
  apt_key:
    url: https://pkgs.k8s.io/core:/stable:/{{ kube_version }}/deb/Release.key
    keyring: /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    state: present

- name: Add the Kubernetes repository
  shell: echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/{{ kube_version }}/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list

- name: Add the CRI-O gpg key
  apt_key:
    url: https://pkgs.k8s.io/addons:/cri-o:/{{ crio_version }}/deb/Release.key
    keyring: /etc/apt/keyrings/cri-o-apt-keyring.gpg
    state: present

- name: Add the CRI-O repository
  shell: echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/{{ crio_version }}/deb/ /" | tee /etc/apt/sources.list.d/cri-o.list

- name: Install CRI-O and Kubernetes packages
  apt:
    name:
      - cri-o
      - kubelet
      - kubeadm
      - kubectl
    update_cache: yes
    state: present

- name: Enable and start CRI-O service
  service:
    state: started
    enabled: yes
    name: crio.service

Ansible playbook steps to download and install Kubernetes tooling

Once each node has completed its individual playbook, we can initialize and join nodes to the cluster using another playbook that runs kubeadm. From there the cluster can be accessed by logging into a control plane node using kubectl.
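A sketch of what that second playbook might contain is shown below; the kube-control-plane group name and pod network CIDR are assumptions of this example, while kube-node matches the worker group used elsewhere in this post.

- name: Initialize the first control-plane node
  command: kubeadm init --pod-network-cidr=10.244.0.0/16
  when: inventory_hostname == groups['kube-control-plane'][0]

- name: Generate a join command on the first control-plane node
  command: kubeadm token create --print-join-command
  register: join_command
  when: inventory_hostname == groups['kube-control-plane'][0]

- name: Join worker nodes to the cluster
  command: "{{ hostvars[groups['kube-control-plane'][0]].join_command.stdout }}"
  when: inventory_hostname in groups['kube-node']

Hypothetical Ansible tasks that initialize the cluster with kubeadm and join the workers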

Simulating at scale

When losing 10s or 100s of nodes at once, Kubernetes needs to act quickly to minimize downtime. The sooner it recognizes node failure, the faster it can reroute traffic to healthy pods.

Using Kubernetes in KubeVirt we are able to simulate a large cluster undergoing a network cut and observe how Kubernetes reacts. The KubeVirt Kubernetes cluster allows us to rapidly iterate on configuration changes and code patches.

The following Ansible playbook task simulates a network segmentation failure where only the control-plane nodes remain online.

- name: Disable network interfaces on all workers
  command: ifconfig enp1s0 down
  async: 5
  poll: 0
  ignore_errors: yes
  when: inventory_hostname in groups['kube-node']

An Ansible task which disables the network on all worker nodes simultaneously.

This framework allows us to exercise the code in controller-manager, Kubernetes’s daemon that reconciles the fundamental state of the system (Nodes, Pods, etc). Our simulation platform helped us drastically shorten full traffic recovery time when a large number of Kubernetes nodes become unreachable. We upstreamed our changes to Kubernetes and more controller-manager speed improvements are coming soon.

Development environments

Compiling code on your laptop can be slow. Perhaps you’re working on a patch for a large open-source project (e.g. V8 or Clickhouse) or need more bandwidth to upload and download containers. With KubeVirt, we enable our developers to rapidly iterate on software development and testing on powerful server hardware. KubeVirt integrates with Kubernetes Persistent Volumes, which enables teams to persist their development environment across restarts.

There are a number of teams at Cloudflare using KubeVirt for a variety of development and testing environments. Most notable is a project called Edge Test Fleet, which emulates a physical server and all the software that runs Cloudflare’s global network. Teams can test their code and configuration changes against the entire software stack without reserving dedicated hardware. Cloudflare uses Salt to provision systems. It can be difficult to iterate and test Salt changes without a complete virtual environment. Edge Test Fleet makes iterating on Salt easier, ensuring states compile and render the right output. With Edge Test Fleet, new developers can better understand how Cloudflare’s global network works without touching staging or production.

Additionally, one Cloudflare team developed a framework that allows users to build and test changes to Clickhouse using a VSCode environment. This framework is generally applicable to all teams requiring a development environment. Once a template environment is provisioned, CSI Volume Cloning can duplicate a golden volume, separating persistent environments for each developer.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: devspace-jcichra-rootfs
  namespace: dev-clickhouse-vms
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-nvme
  dataSource:
    kind: PersistentVolumeClaim
    name: dev-rootfs
  resources:
    requests:
      storage: 500Gi

A PersistentVolumeClaim that clones data from another volume using CSI Volume Cloning

Kernel and iPXE testing

Unlike user space software development, when a kernel crashes, the entire system crashes. The kernel team uses KubeVirt for development. KubeVirt gives all kernel engineers, regardless of laptop OS or architecture, the same x86 environment and hypervisor. Virtual machines on server hardware can be scaled up to more cores and memory than on laptops. The Cloudflare kernel team has also found low-level issues which only surface in environments with many CPUs.

To make testing fast and easy, the kernel team serves iPXE images via an nginx Pod and Service adjacent to the virtual machine. A recent kernel and Debian image are copied to the nginx pod via kubectl cp. The iPXE file can then be referenced in the KubeVirt virtual machine definition via the DNS name for the Kubernetes Service.

interfaces:
  - name: default
    masquerade: {}
    model: e1000e
    ports:
      - port: 22
    dhcpOptions:
      bootFileName: http://httpboot.u-$K8S_USER.svc.cluster.local/boot.ipxe

When the virtual machine boots, it will get an IP address on the default interface behind NAT due to our masquerade setting. Then it will download boot.ipxe, which describes what additional files should be downloaded to start the system. In this case, the kernel (vmlinuz-amd64), Debian (baseimg-amd64.img) and additional kernel modules (modules-amd64.img) are downloaded.


UEFI iPXE boot connecting and downloading files from nginx pod in user’s namespace
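The DNS name in the bootFileName URL resolves to an ordinary Kubernetes Service in front of the nginx pod. A minimal sketch of that Service might look like the following, with the namespace shown as a placeholder for the per-user namespace.

apiVersion: v1
kind: Service
metadata:
  name: httpboot
  namespace: u-example-user    # placeholder for the per-user namespace
spec:
  # Gives the VM a stable DNS name, httpboot.<namespace>.svc.cluster.local,
  # from which to fetch boot.ipxe, the kernel, and the Debian image.
  selector:
    app: httpboot
  ports:
    - name: http
      port: 80
      targetPort: 80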

Once the system is booted, a developer can log in to the system for testing:

linux login: root
Password: 
Linux linux 6.6.35-cloudflare-2024.6.7 #1 SMP PREEMPT_DYNAMIC Mon Sep 27 00:00:00 UTC 2010 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@linux:~# 

Custom kernels can be copied to the nginx pod via kubectl cp. Restarting the virtual machine will load that new kernel for testing. When a kernel panic occurs, the virtual machine can quickly be restarted with virtctl restart linux and it will go through the iPXE boot process again.

Build pipelines

Cloudflare leverages KubeVirt to build the majority of software at Cloudflare. Virtual machines give build system users full control over their pipeline. For example, Debian packages can easily be installed and separate container daemons (such as Docker) can run, all within a Kubernetes namespace using the restricted Pod Security Standard. KubeVirt’s VirtualMachineInstanceReplicaSet concept allows us to quickly scale the number of build agents up and down to match demand. We can roll out different sets of virtual machines with varying sizes, kernels, and operating systems.

To scale efficiently, we leverage container disks to store our agent virtual machine images. Container disks allow us to store the virtual machine image (for example, a qcow image) in our container registry. This strategy works well when the state in virtual machines is ephemeral. Liveness probes detect unhealthy or broken agents, shutting down the virtual machine and replacing it with a fresh instance. Other automation limits virtual machine uptime, capping it to 3–4 hours to keep build agents fresh.
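Putting those pieces together, a build-agent pool could be described with something like the following VirtualMachineInstanceReplicaSet; the names, image, sizing, and probe are illustrative assumptions rather than our actual configuration.

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: build-agents            # placeholder name
spec:
  replicas: 10
  selector:
    matchLabels:
      app: build-agent
  template:
    metadata:
      labels:
        app: build-agent
    spec:
      domain:
        resources:
          requests:
            cpu: "8"
            memory: 32Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      # Replace agents that stop answering on SSH (illustrative probe).
      livenessProbe:
        tcpSocket:
          port: 22
        initialDelaySeconds: 120
        periodSeconds: 30
      volumes:
        - name: rootdisk
          containerDisk:
            image: registry.example.com/build/agent:latest   # placeholder image

A hypothetical replica set of ephemeral build-agent virtual machines backed by a container disk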

Next steps

We’re excited to expand our use of KubeVirt and unlock new capabilities for our internal users. KubeVirt’s Linux ARM64 support will allow us to build ARM64 packages in-cluster and simulate ARM64 systems.

Projects like KubeVirt CDI (Containerized Data Importer) will streamline our users’ virtual machine experience. Instead of users manually building container disks, we can provide a catalog of virtual machine images. It also allows us to copy virtual machine disks between namespaces.

Conclusion

KubeVirt has proven to be a great tool for virtualization in our Kubernetes-first environment. We’ve unlocked the ability to support more workloads with our multi-tenant model. The KubeVirt platform allows us to offer a single compute platform supporting containers and virtual machines. Managing it has been simple, and upgrades have been straightforward and non-disruptive. We’re exploring additional features KubeVirt offers to improve the experience for our users.

Finally, our team is expanding! We’re looking for more people passionate about Kubernetes to join our team and help us push Kubernetes to the next level.

Cloudflare acquires Kivera to add simple, preventive cloud security to Cloudflare One

Post Syndicated from Noelle Kagan original https://blog.cloudflare.com/cloudflare-acquires-kivera

We’re excited to announce that Kivera, a cloud security, data protection, and compliance company, has joined Cloudflare. This acquisition extends our SASE portfolio to incorporate inline cloud app controls, empowering Cloudflare One customers with preventative security controls for all their cloud services.

In today’s digital landscape, cloud services and SaaS (software as a service) apps have become indispensable for the daily operation of organizations. At the same time, the amount of data flowing between organizations and their cloud providers has ballooned, increasing the chances of data leakage, compliance issues, and worse, opportunities for attackers. Additionally, many companies — especially at enterprise scale — are working directly with multiple cloud providers for flexibility based on the strengths, resiliency against outages or errors, and cost efficiencies of different clouds. 

Security teams that rely on Cloud Security Posture Management (CSPM) or similar tools for monitoring cloud configurations and permissions, or on Infrastructure as Code (IaC) scanning, fall short because these tools detect issues only after misconfigurations occur, and they generate an overwhelming volume of alerts. The combination of Kivera and Cloudflare One puts preventive controls directly into the deployment process, or ‘inline’, blocking errors before they happen. This offers a proactive approach essential to protecting cloud infrastructure from evolving cyber threats, maintaining data security, and accelerating compliance. 

An early warning system for cloud security risks 

In a significant leap forward in cloud security, the combination of Kivera’s technology and Cloudflare One adds preventive, inline controls to enforce secure configurations for cloud resources. By inspecting cloud API traffic, these new capabilities equip organizations with enhanced visibility and granular controls, allowing for a proactive approach in mitigating risks, managing cloud security posture, and embracing a streamlined DevOps process when deploying cloud infrastructure.

Kivera will add the following capabilities to Cloudflare’s SASE platform:

  • One-click security: Customers benefit from immediate prevention of the most common cloud breaches caused by misconfigurations, such as accidentally allowing public access or policy inconsistencies.

  • Enforced cloud tenant control: Companies can easily draw boundaries around their cloud resources and tenants to ensure that sensitive data stays within their organization. 

  • Prevent data exfiltration: Easily set rules to prevent data being sent to unauthorized locations.

  • Reduce ‘shadow’ cloud infrastructure: Ensure that every interaction between a customer and their cloud provider is in line with preset standards. 

  • Streamline cloud security compliance: Customers can automatically assess and enforce compliance against the most common regulatory frameworks.

  • Flexible DevOps model: Enforce bespoke controls independent of public cloud setup and deployment tools, minimizing the layers of lock-in between an organization and a cloud provider.

  • Complementing other cloud security tools: Create a first line of defense for cloud deployment errors, reducing the volume of alerts for customers also using CSPM tools or Cloud Native Application Protection Platforms (CNAPPs). 


An intelligent proxy that uses a policy-based approach to enforce secure configuration of cloud resources.

Better together with Cloudflare One

As a SASE platform, Cloudflare One ensures safe access and provides data controls for cloud and SaaS apps. This integration broadens the scope of Cloudflare’s SASE platform beyond user-facing applications to incorporate increased cloud security through proactive configuration management of infrastructure services, beyond what CSPM and CASB solutions provide. With the addition of Kivera to Cloudflare One, customers now have a unified platform for all their inline protections, including cloud control, access management, and threat and data protection. All of these features are available with single-pass inspection, which is 50% faster than Secure Web Gateway (SWG) alternatives.  

With the earlier acquisition of BastionZero, a Zero Trust infrastructure access company, Cloudflare One expanded the scope of its VPN replacement solution to cover infrastructure resources as easily as it does apps and networks. Together Kivera and BastionZero enable centralized security management across hybrid IT environments, and provide a modern DevOps-friendly way to help enterprises connect and protect their hybrid infrastructure with Zero Trust best practices.

Beyond its SASE capabilities, Cloudflare One is integral to Cloudflare’s connectivity cloud, enabling organizations to consolidate IT security tools on a single platform. This simplifies secure access to resources, from developer privileged access to technical infrastructure and expanding cloud services. As Forrester echoes, “Cloudflare is a good choice for enterprise prospects seeking a high-performance, low-maintenance, DevOps-oriented solution.”

The growing threat of cloud misconfigurations

The cloud has become a prime target for cyberattacks. According to the 2023 Cloud Risk Report, CrowdStrike observed a 95% increase in cloud exploitation from 2021 to 2022, with a staggering 288% jump in cases involving threat actors directly targeting the cloud.

Misconfigurations in cloud infrastructure settings, such as improperly set security parameters and default access controls, provide adversaries with an easy path to infiltrate the cloud. According to the 2023 Thales Global Cloud Security Study, which surveyed nearly 3,000 IT and security professionals from 18 countries, 44% of respondents reported experiencing a data breach, with misconfigurations and human error identified as the leading cause, accounting for 31% of the incidents.

Further, according to Gartner, “Through 2027, 99% of records compromised in cloud environments will be the result of user misconfigurations and account compromise, not the result of an issue with the cloud provider.”1

Several factors contribute to the rise of cloud misconfigurations:

  • Rapid adoption of cloud services: Leaders are often driven by the scalability, cost-efficiency, and ability to support remote work and real-time collaboration that cloud services offer. These factors enable rapid adoption of cloud services which can lead to unintentional misconfigurations as IT teams struggle to keep up with the pace and complexity of these services. 

  • Complexity of cloud environments: Cloud infrastructure can be highly complex with multiple services and configurations to manage. For example, AWS alone offers 373 services with 15,617 actions and 140,000+ parameters, making it challenging for IT teams to manage settings accurately. 

  • Decentralized management: In large organizations, cloud infrastructure resources are often managed by multiple teams or departments. Without centralized oversight, inconsistent security policies and configurations can arise, increasing the risk of misconfigurations.

  • Continuous Integration and Continuous Deployment (CI/CD): CI/CD pipelines promote the ability to rapidly deploy, change and frequently update infrastructure. With this velocity comes the increased risk of misconfigurations when changes are not properly managed and reviewed.

  • Insufficient training and awareness: Employees may lack the cross-functional skills needed for cloud security, such as understanding networks, identity, and service configurations. This knowledge gap can lead to mistakes and increases the risk of misconfigurations that compromise security.

Common exploitation methods 

Threat actors exploit cloud services through various means, including targeting misconfigurations, abusing privileges, and bypassing encryption. Misconfigurations such as exposed storage buckets or improperly secured APIs offer attackers easy access to sensitive data and resources. Privilege abuse occurs when attackers gain unauthorized access through compromised credentials or poorly managed identity and access management (IAM) policies, allowing them to escalate their access and move laterally within the cloud environment. Additionally, unencrypted data enables attackers to intercept and decrypt data in transit or at rest, further compromising the integrity and confidentiality of sensitive information.

Here are some other vulnerabilities that organizations should address: 

  • Unrestricted access to cloud tenants: Allowing unrestricted access exposes cloud platforms to data exfiltration by malicious actors. Limiting access to approved tenants with specific IP addresses and service destinations helps prevent unauthorized access.

  • Exposed access keys: Exposed access keys can be exploited by unauthorized parties to steal or delete data. Requiring encryption for the access keys and restricting their usage can mitigate this risk.

  • Excessive account permissions: Granting excessive privileges to cloud accounts increases the potential impact of security breaches. Limiting permissions to necessary operations helps prevent lateral movement and privilege escalation by threat actors.

  • Inadequate network segmentation: Poorly managed network security groups and insufficient segmentation practices can allow attackers to move freely within cloud environments. Drawing boundaries around your cloud resources and tenants ensures that data stays within your organization.

  • Improper public access configuration: Incorrectly exposing critical services or storage resources to the internet increases the likelihood of unauthorized access and data compromise. Preventing public access drastically reduces risk.

  • Shadow cloud infrastructure: Abandoned or neglected cloud instances are often left vulnerable to exploitation, providing attackers with opportunities to access sensitive data left behind. Preventing untagged or unapproved cloud resources from being created can reduce the risk of exposure.

Limitations of existing tools 

Many organizations turn to CSPM tools to give them more visibility into cloud misconfigurations. These tools often alert teams after an issue occurs, putting security teams in a reactive mode. Remediation efforts require collaboration between security teams and developers to implement changes, which can be time-consuming and resource-intensive. This approach not only delays issue resolution but also exposes companies to compliance and legal risks, while failing to train employees on secure cloud practices. On average, it takes 207 days to identify these breaches and an additional 70 days to contain them. 

Addressing the growing threat of cloud misconfigurations requires proactive security measures and continuous monitoring. Organizations must adopt proactive security solutions that not only detect and alert but also prevent misconfigurations from occurring in the first place and enforce best practices. Creating a first line of defense for cloud deployment errors reduces the volume of alerts for customers, especially those also using CSPM tools or CNAPPs. 

By implementing these proactive strategies, organizations can safeguard their cloud environments against the evolving landscape of cyber threats, ensuring robust security and compliance while minimizing risks and operational disruptions.

What’s next for Kivera

The Kivera product will not be a point solution add-on. We’re making it a core part of our Cloudflare One offering because integrating features from products like our Secure Web Gateway gives customers a comprehensive solution that works better together.

We’re excited to welcome Kivera to the Cloudflare team. Through the end of 2024 and into early 2025, Kivera’s team will focus on integrating their preventive inline cloud app controls directly into Cloudflare One. We are looking for early access testers and teams to provide feedback about what they would like to see. If you’d like early access, please join the waitlist.

[1] Source: Outcome-Driven Metrics You Can Use to Evaluate Cloud Security Controls, Gartner, Charlie Winckless, Paul Proctor, Manuel Acosta, 09/28/2023 

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

China Possibly Hacking US “Lawful Access” Backdoor

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/10/china-possibly-hacking-us-lawful-access-backdoor.html

The Wall Street Journal is reporting that Chinese hackers (Salt Typhoon) penetrated the networks of US broadband providers, and might have accessed the backdoors that the federal government uses to execute court-authorized wiretap requests. Those backdoors have been mandated by law—CALEA—since 1994.

It’s a weird story. The first line of the article is: “A cyberattack tied to the Chinese government penetrated the networks of a swath of U.S. broadband providers.” This implies that the attack wasn’t against the broadband providers directly, but against one of the intermediary companies that sit between the government CALEA requests and the broadband providers.

For years, the security community has pushed back against these backdoors, pointing out that the technical capability cannot differentiate between good guys and bad guys. And here is one more example of a backdoor access mechanism being targeted by the “wrong” eavesdroppers.

Other news stories.

Ada Computer Science: A year in review

Post Syndicated from Dan Fisher original https://www.raspberrypi.org/blog/ada-computer-science-a-year-in-review/

With the new academic year fully under way in many parts of the world, it’s the perfect time to reflect on the growth and innovations we’ve achieved with the Ada Computer Science platform. Your feedback has helped us make improvements to better support teachers and students — here’s a look back at some of the key developments for Ada from the past 12 months.

Teachers in discussion at a Raspberry Pi Foundation teacher training event.

Supporting students through personalised learning, new resources, and new questions

We made significant improvements throughout the year to support students with exam preparation and personalised learning. We introduced over 145 new self-marking questions and updated 50 existing ones, bringing the total to more than 1000. A new type of question was also launched to help students practise writing longer responses: they label parts of a sample answer and apply a mark scheme, simulating a peer review process. You can read more about this work in the AI section below.

We updated the question finder tool with an intuitive new design. Instead of seeing ten questions at random, students can now see all the questions we have on any given topic, and can use the filters to refine their searches by qualification and difficulty level. This enables students to better personalise their revision and progress tracking.

“Ada Computer Science has been very effective for my revision. I like how it provides hints and pointers if you answer a question incorrectly.” 

– Ada Computer Science student

The ‘Representation of sound’ topic received a major update, with clearer explanations, new diagrams, and improved feedback to support students as they tackle common misconceptions in sound physics. We also refreshed the ‘Representation of numbers’ topic, adding new content and interactive quizzes to support teachers in assessing students’ understanding more effectively. 

We introduced a new database scenario titled ‘Repair & Reform’, offering an entity relationship diagram, a data dictionary, and a new SQL editor and question set to help students prepare for project-based assessments. We’ve further expanded this scenario into a full project covering all stages of development, including requirements analysis and evaluation. 

April was dedicated to gearing up for the exam season, with the introduction of revision flashcards and ready-made quizzes on key topics like bitmapped graphics and sorting algorithms. We also launched a student revision challenge, which ran from April to June and attracted over 600 participants.

“Ada Computer Science is an excellent resource to help support teachers and students. The explanations are clear and relevant, and the questions help students test their knowledge and understanding in a structured way, providing links to help them reconcile any discrepancies or misunderstandings.” 

– Patrick Kennedy, Computer Science teacher

Supporting teachers  

We expanded our efforts to support new computer science teachers with the launch of a teacher mentoring programme that offers free online drop-in sessions. We also hosted a teacher training event at the Raspberry Pi Foundation office in Cambridge (as seen in the picture below), where educators saw previews of upcoming content on AI and machine learning and contributed their own questions to the platform.

Group photo featuring computer science teachers and colleagues from the Raspberry Pi Foundation.

AI content and AI features

We continued our focus on AI and machine learning, releasing new learning resources that explore the ethical and social implications of AI alongside the practical applications of AI and machine learning models. 

To expand the Ada platform’s features, we also made considerable progress in integrating a large language model (LLM) to mark free-text responses. Our research showed that, as of June, LLM marks matched real teachers’ marks 82% of the time. In July, we received ethics approval from the University of Cambridge to add LLM-marked questions to the Ada platform. 

Computer science education in Scotland

We made significant strides towards supporting Scottish teachers and students with resources tailored to the SQA Computing Science curriculum. From September to November last year, we piloted a new set of materials specifically designed for Scottish teachers, receiving valuable feedback that we’ve used in 2024 to develop new content. More than half of the theory content for the National 5 and Higher specifications is now available on the platform. 

Teacher, in the middle of a computing lesson.

Our ‘Repair & Reform’ database scenario and project align with both SQA Higher and A level standards, providing a comprehensive resource for students preparing for project-based assessments.

Looking ahead: New resources for September and beyond

We have big plans for Ada for the next 12 months. Our focus will remain on continuously improving our resources and supporting the needs of both educators and students. 

After the positive response to our ‘Repair & Reform’ database project, our content experts are planning additional practical projects to support students and teachers. The next one will be a web project that covers HTML, CSS, JavaScript, and PHP, supporting students taking SQA qualifications in Scotland or undertaking the non-examined assessment (NEA) at A level.

We’ll be working on a number of teacher-focused improvements to the platform, which you’ll also see on Ada’s sibling site, Isaac Physics. These will include an overhaul of the markbook to make it more user-friendly, and updates to the ‘Assignments’ tool so assignments better meet the needs of teachers in schools.

We’ll be welcoming the next cohort of computer science students to the STEM SMART programme in January 2025 where, in partnership with the University of Cambridge, we’ll offer free, complementary teaching and support to UK students at state schools. Applications are now open.

Thank you to every teacher and student who has given their time in the last year to share feedback about Ada Computer Science — your insights are invaluable as we work to make high-quality computer science materials easily accessible. Here’s to another fantastic year of learning and growth!

The post Ada Computer Science: A year in review appeared first on Raspberry Pi Foundation.

Digitalisation of the construction process

Post Syndicated from Bozho original https://blog.bozho.net/blog/4403

Yesterday was Architecture Day. At the opening of ‘Ulitsa 26’ (‘Street 26’) last night, a Bulgarian architect who has lived in Sweden his whole life came up to me, described what things are like there, and explained why he has never worked in Bulgaria.

A fitting context for point 5 of the priorities: “Full digitalisation of the investment design and spatial planning processes – an end to the boxes of folders and the stamps required for every completed building.”

In the 49th Parliament, we prepared and submitted amendments to the Spatial Development Act under which the entire process required to obtain a building permit, to approve changes to detailed development plans, and so on becomes electronic.

Throughout the law we remove the words “paper copy” and “stamp”, which eases things for all participants, reduces the administrative burden, and limits corrupt practices. And there is no need to look to Sweden – in Serbia and North Macedonia this has long been electronic.

The Ministry of Regional Development and Public Works (МРРБ) should already be building the unified spatial development register, for which we secured funding under the Recovery Plan more than two years ago and which will allow the law to be implemented in full.

The post Digitalisation of the construction process was first published on БЛОГодаря.

OpenBSD 7.6 released

Post Syndicated from jzb original https://lwn.net/Articles/993203/

OpenBSD 7.6 has been released. Notable new
features include work to improve suspend/resume on modern hardware,
support for the arm64 Qualcomm Snapdragon X Elite laptops, as well as many
improvements in hardware support and driver bug fixes.

With this release all files that existed in the first commit
in the OpenBSD source repository have been updated,
modified or replaced at some point in time, reaching OpenBSD of Theseus.

See the changelog
for all changes between OpenBSD 7.5 and 7.6.

Supermicro MegaDC ARS-211M-NR Review An Ampere AmpereOne Arm Server is Here

Post Syndicated from Patrick Kennedy original https://www.servethehome.com/supermicro-megadc-ars-211m-nr-review-an-ampere-ampereone-arm-server-broadcom-nvidia-kioxia/

In our Supermicro MegaDC ARS-211M-NR review, we see how this 192-core Ampere AmpereOne 2U server performs and explore its cool new features

The post Supermicro MegaDC ARS-211M-NR Review An Ampere AmpereOne Arm Server is Here appeared first on ServeTheHome.

[$] ClassicPress: WordPress without the block editor

Post Syndicated from jzb original https://lwn.net/Articles/992219/

The recent WordPress
controversy
is not the first time there’s been tension between the
WordPress community, the interests of Automattic as a business, and Matt
Mullenweg’s leadership as WordPress’s benevolent dictator for
life (BDFL). In particular, Mullenweg’s focus on pushing WordPress to use a new
“editing experience” called Gutenberg caused significant
friction—and led to the ClassicPress fork. Users who
want to preserve the “classic” WordPress experience without straying
too far from the WordPress fold may want to look into ClassicPress.

AWS Weekly Roundup: HIPAA eligible with Amazon Q Business, Amazon DCV, AWS re:Post Agent, and more (Oct 07, 2024)

Post Syndicated from Betty Zheng (郑予彬) original https://aws.amazon.com/blogs/aws/aws-weekly-roundup-amazon-q-business-is-hipaa-eligible-amazon-dcv-aws-repost-agent-and-more-oct-07-2024/

Last Friday, I had the privilege of attending China Engineer’s Day 2024 (CED 2024) in Hangzhou as the Amazon Web Services (AWS) speaker. The event was organized by the China Computer Federation (CCF), one of the most influential professional developer communities in China.

At CED 2024, I spoke about how AI development tools can improve developer productivity. I was honored to receive a certificate of excellence from CCF, and Amazon Q garnered significant attention from the attendees.

Now, let’s turn to other exciting news in the AWS universe from last week.

Last week’s launches
Here are some launches that got my attention:

Amazon Q Business is now HIPAA eligible – Amazon Q Business has achieved Health Insurance Portability and Accountability Act (HIPAA) eligibility. This means healthcare and life sciences organizations, such as health insurance companies and healthcare providers, can now use Amazon Q Business to run sensitive workloads regulated under the US HIPAA law.

NICE DCV renamed to Amazon DCV – NICE DCV has been rebranded as Amazon DCV. This high-performance remote display protocol allows secure delivery of remote desktops and application streaming from any cloud or data center to any device, even over varying network conditions. Amazon DCV supports both Windows and major Linux distributions on the server side. Clients can use the native DCV client for Windows, Linux, or macOS, as well as web browsers, to receive desktops and application streams. The DCV server and client transfer only encrypted pixels, not data, ensuring no confidential information is downloaded. When using Amazon DCV on AWS with Amazon Elastic Compute Cloud (Amazon EC2), you can take advantage of the 108 AWS Availability Zones across 33 geographic Regions, plus 31 Local Zones. The 2024.0 release now supports the latest Ubuntu 24.04 LTS. For more details, check out Sébastien Stormacq’s launch blog post.

AWS re:Post launches re:Post Agent – AWS re:Post provides access to curated knowledge and a vibrant community that helps users become even more successful on AWS. re:Post Agent is a generative AI assistant designed to provide rapid, intelligent responses to questions in the re:Post community. It expands the available AWS knowledge base, and community experts will earn reputation points by reviewing the AI-generated answers.

Advanced configuration with Amazon Timestream for InfluxDB – This new launch introduces a feature that allows users to monitor instance CPU, memory, and disk utilization metrics directly from the AWS Management Console.

A new stop ingestion API for Amazon Bedrock Knowledge Bases – This new API allows users to halt ongoing ingestion jobs at will. It provides greater control over data ingestion workflows: users can quickly stop accidental or unwanted ingestion processes without waiting for completion. By using the new StopIngestionJob API, you can respond rapidly to evolving needs and potentially reduce costs. This capability is available in all AWS Regions where Amazon Bedrock Knowledge Bases is offered.
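As a quick sketch of how that might look in code (assuming the boto3 "bedrock-agent" client exposes a stop_ingestion_job operation with these parameter names and response shape; check the current SDK documentation before relying on this):

```python
import boto3

# Assumption: the Knowledge Bases control plane is exposed through the
# "bedrock-agent" boto3 client, and StopIngestionJob takes these identifiers.
client = boto3.client("bedrock-agent", region_name="us-east-1")

response = client.stop_ingestion_job(
    knowledgeBaseId="KB_ID",            # placeholder identifiers
    dataSourceId="DATA_SOURCE_ID",
    ingestionJobId="INGESTION_JOB_ID",
)

# Assumed response shape: the job should move toward a stopped state.
print(response["ingestionJob"]["status"])
```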

Higher storage limit for Amazon AppStream 2.0 – Amazon AppStream 2.0 has expanded the default size limit for application settings persistence from 1 GB to 5 GB. This increase allows end users to store more application data and settings without manual intervention and without affecting performance or session setup time.

There were over 40 launches and releases last week. It was difficult for me to select the important ones. In addition to those already mentioned, here’s a list of potentially important feature updates:

For a full list of AWS announcements, be sure to keep an eye on AWS’s What’s New Feed page.

Other AWS news
Here are some other noteworthy items from last week.

Amazon WorkSpaces Thin Client – Amazon WorkSpaces Thin Client inventory is now available to purchase in the UK on Amazon Business, in addition to the US, France, Germany, Italy, and Spain. It’s a sleek, cost-effective device that brings secure access to AWS end user computing services right to your fingertips. This nifty gadget is like a digital fortress, preventing unauthorized data storage and applications, while giving IT admins the tools to manage and monitor their fleet of thin clients with ease.

Helping communities impacted by Hurricane Helene – The AWS Disaster Response team is working closely with local partners and humanitarian organizations to deliver critical supplies to those in need in the Southeast. We’re also deploying AWS technology to help restore connectivity, aid relief operations on the ground, and support food distribution needs in the region.

The life of a prescription at Amazon Pharmacy – Read about the Amazon Pharmacy AI use case, which removes complexity from the process of dispensing medications and improves patients’ experiences. The system transcribes raw prescription data into standardized formats, transforms medical abbreviations into their full-text equivalents, and validates medication details against an industry database. This automated process, followed by pharmacist review, has reduced potential medication errors by 50 percent and improved processing speed by up to 90 percent, allowing pharmacists to focus on critical tasks and personalized care.
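To make the abbreviation-expansion step concrete, here is a tiny hypothetical sketch; the lookup table and function are invented for illustration and are not Amazon Pharmacy’s actual data or code:

```python
# Hypothetical sketch of expanding prescription shorthand into plain English.
# The abbreviation table below is invented for illustration only.
SIG_ABBREVIATIONS = {
    "po": "by mouth",
    "qd": "once daily",
    "bid": "twice daily",
    "prn": "as needed",
}


def expand_sig(sig: str) -> str:
    """Expand common prescription abbreviations into full-text equivalents."""
    words = sig.lower().replace(".", "").split()
    return " ".join(SIG_ABBREVIATIONS.get(word, word) for word in words)


print(expand_sig("1 tab PO BID PRN"))
# -> 1 tab by mouth twice daily as needed
```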

A thought leadership article on generative AI in WIRED magazine – Read Antje’s column in WIRED. It discusses how AWS opens the transformative power of AI to organizations of any size and level of experience. I recommend it to all AI enthusiasts and business innovators. AWS is on a mission to bring generative AI magic to businesses of all sizes, offering a buffet of AI tools for tech wizards and newcomers alike. Whether you’re a startup with big dreams or a corporate giant looking to stay ahead, AWS is rolling out the red carpet to the AI revolution. Don’t miss this chance to turn your wildest tech fantasies into reality!

Upcoming AWS events
Check your calendars and sign up for these AWS events:

AWS re:Invent 2024 – Registration is now open for the annual tech extravaganza, taking place December 2 – 6 in Las Vegas. I’m eager to learn about the new launches and excited to contribute to two chalk talks focusing on security topics (Dev311 – Enhance code security with generative AI and SEC228 – Navigate multi-level protection scheme compliance in AWS China Regions).

AWS Innovate: Migrate, Modernize, and Build – Whether you are new to the cloud or an experienced user, you will learn something new at AWS Innovate. This is a free online conference. Register for the time and region most convenient for you: North America (October 15) or Europe, Middle East & Africa (October 24).

AWS Community Days – Join community-led conferences featuring technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world. Don’t miss out on the AWS Community Days happening on October 12 in Sofia, and on October 19 in Vadodara (India), Spain, and Guatemala.

Browse more upcoming AWS-led in-person and virtual events and developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

— Betty

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

[$] In search of the AOSP community

Post Syndicated from corbet original https://lwn.net/Articles/992992/

The core of the Android operating system, as represented by the Android Open Source Project (AOSP),
can only be considered one of the most successful open-source initiatives
ever created; its user count is measured in the billions. But few would
consider it to be a truly community-oriented project. At the 2024 Linux Plumbers Conference, Chris Simmonds
asked why the AOSP community is so hard to find, and what might be done
about the situation.
