How useful do teachers find error message explanations generated by AI? Pilot research results

Post Syndicated from Veronica Cucuiat original https://www.raspberrypi.org/blog/error-message-explanations-large-language-models-teachers-views/

As discussions of how artificial intelligence (AI) will impact teaching, learning, and assessment proliferate, I was thrilled to be able to add one of my own research projects to the mix. As a research scientist at the Raspberry Pi Foundation, I’ve been working on a pilot research study in collaboration with Jane Waite to explore the topic of program error messages (PEMs). 

Computer science students at a desktop computer in a classroom.

PEMs can be a significant barrier to learning for novice coders, as they are often confusing and difficult to understand. This can hinder troubleshooting and progress in coding, and lead to frustration. 

Recently, various teams have been exploring how generative AI, specifically large language models (LLMs), can be used to help learners understand PEMs. My research in this area specifically explores secondary teachers’ views of the explanations of PEMs generated by an LLM, as an aid for learning and teaching programming, and I presented some of my results in our ongoing seminar series.

Understanding program error messages is hard at the start

I started the seminar by setting the scene and describing the current background of research on novices’ difficulty in using PEMs to fix their code, and the efforts made to date to improve these. The three main points I made were that:

  1. PEMs are often difficult to decipher, especially by novices, and there’s a whole research area dedicated to identifying ways to improve them.
  2. Recent studies have employed LLMs as a way of enhancing PEMs. However, the evidence on what makes an ‘effective’ PEM for learning is limited, variable, and contradictory.
  3. There is limited research in the context of K–12 programming education, and little research conducted in collaboration with teachers to better understand the practical and pedagogical implications of integrating LLMs into the classroom more generally.

My pilot study aims to fill this gap directly, by reporting K–12 teachers’ views of the potential use of LLM-generated explanations of PEMs in the classroom, and how their views fit into the wider theoretical paradigm of feedback literacy. 

What did the teachers say?

To conduct the study, I interviewed eight expert secondary computing educators. The interviews were semi-structured activity-based interviews, where the educators got to experiment with a prototype version of the Foundation’s publicly available Code Editor. This version of the Code Editor was adapted to generate LLM explanations when the question mark next to the standard error message is clicked (see Figure 1 for an example of an LLM-generated explanation). The Code Editor version called the OpenAI GPT-3.5 API to generate explanations based on the following prompt: “You are a teacher talking to a 12-year-old child. Explain the error {error} in the following Python code: {code}”. 
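For readers who want a feel for what the prototype did, here is a minimal sketch of such a call. The prompt template is the one quoted above; the client code and the exact model name are assumptions, not the Foundation’s actual implementation.

```python
# Illustrative sketch of the kind of call the prototype made. The prompt
# template is quoted from the study; the openai client usage and model
# name below are assumptions, not the actual implementation.

PROMPT_TEMPLATE = (
    "You are a teacher talking to a 12-year-old child. "
    "Explain the error {error} in the following Python code: {code}"
)

def build_prompt(error: str, code: str) -> str:
    """Fill the study's prompt template with a PEM and the learner's code."""
    return PROMPT_TEMPLATE.format(error=error, code=code)

def explain_error(error: str, code: str) -> str:
    """Ask an LLM for a child-friendly explanation of a Python error.
    Requires `pip install openai` and an OPENAI_API_KEY in the environment."""
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; the study only says "GPT-3.5"
        messages=[{"role": "user", "content": build_prompt(error, code)}],
    )
    return response.choices[0].message.content
```

For example, `build_prompt("NameError: name 'n' is not defined", "print(n)")` produces the exact string sent to the model, which makes the prompting strategy easy to inspect and vary in follow-up studies.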

The Foundation’s Python Code Editor with LLM feedback prototype.
Figure 1: The Foundation’s Code Editor with LLM feedback prototype.

Fifteen themes were derived from the educators’ responses and these were split into five groups (Figure 2). Overall, the educators’ views of the LLM feedback were that, for the most part, a sensible explanation of the error messages was produced. However, all educators experienced at least one example of invalid content (LLM “hallucination”). Also, despite not being explicitly requested in the LLM prompt, a possible code solution was always included in the explanation.

Themes and groups derived from teachers’ responses.
Figure 2: Themes and groups derived from teachers’ responses.

Matching the themes to PEM guidelines

Next, I investigated how the teachers’ views correlated to the research conducted to date on enhanced PEMs. I used the guidelines proposed by Brett Becker and colleagues, which consolidate a lot of the research done in this area into ten design guidelines. The guidelines offer best practices on how to enhance PEMs, based on empirical research in cognitive science and educational theory. For example, they outline that enhanced PEMs should provide scaffolding for the user, increase readability, reduce cognitive load, use a positive tone, and provide context to the error.

Out of the 15 themes identified in my study, 10 correlated closely with the guidelines. However, these were, for the most part, the themes related to the content, presentation, and validity of the explanations (Figure 3). On the other hand, the themes concerning the teaching and learning process did not fit the guidelines as well.

Correlation between teachers’ responses and enhanced PEM design guidelines.
Figure 3: Correlation between teachers’ responses and enhanced PEM design guidelines.

Does feedback literacy theory fit better?

However, when I looked at feedback literacy theory, I was able to correlate all fifteen themes — the theory fits.

Feedback literacy theory positions the feedback process (which includes explanations) as a social interaction, and accounts for the actors involved in the interaction — the student and the teacher — as well as the relationships between the student, the teacher, and the feedback. We can explain feedback literacy theory using three constructs: feedback types, student feedback literacy, and teacher feedback literacy (Figure 4). 

Feedback literacy at the intersection between feedback types, student feedback literacy, and teacher feedback literacy.
Figure 4: Feedback literacy at the intersection between feedback types, student feedback literacy, and teacher feedback literacy.

From the feedback literacy perspective, feedback can be grouped into four types: telling, guiding, developing understanding, and opening up new perspectives. The feedback type depends on the role of the student and teacher when engaging with the feedback (Figure 5). 

Feedback types as formalised by McLean, Bond, & Nicholson.
Figure 5: Feedback types as formalised by McLean, Bond, & Nicholson.

From the student perspective, the competencies and dispositions students need in order to use feedback effectively can be stated as: appreciating the feedback processes, making judgements, taking action, and managing affect. Finally, from a teacher perspective, teachers apply their feedback literacy skills across three dimensions: design, relational, and pragmatic. 

In short, according to feedback literacy theory, effective feedback processes entail well-designed feedback with a clear pedagogical purpose, as well as the competencies students and teachers need in order to make sense of the feedback and use it effectively.

A computer science teacher sits with students at computers in a classroom.

This theory therefore provided a promising lens for analysing the educators’ perspectives in my study. When the educators’ views were correlated to feedback literacy theory, I found that:

  1. Educators prefer the LLM explanations to fulfil a guiding and developing understanding role, rather than telling. For example, educators prefer to either remove or delay the code solution from the explanation, and they like the explanations to include keywords based on concepts they are teaching in the classroom to guide and develop students’ understanding rather than tell.
  2. Related to students’ feedback literacy, educators talked about the ways in which the LLM explanations help or hinder students to make judgements and action the feedback in the explanations. For example, they talked about how detailed, jargon-free explanations can help students make judgements about the feedback, but invalid explanations can hinder this process. Therefore, teachers talked about the need for ways to manage such invalid instances. However, for the most part, the educators didn’t talk about eradicating them altogether. They talked about ways of flagging them, using them as counter-examples, and having visibility of them to be able to address them with students.
  3. Finally, from a teacher feedback literacy perspective, educators discussed the need for professional development to manage feedback processes inclusive of LLM feedback (design) and address issues resulting from reduced opportunities to interact with students (relational and pragmatic). For example, if using LLM explanations results in a reduction in the time teachers spend helping students debug syntax errors from a pragmatic time-saving perspective, then what does that mean for the relationship they have with their students? 

Conclusion from the study

By correlating educators’ views to feedback literacy theory as well as enhanced PEM guidelines, we can take a broader perspective on how LLMs might not only shape the content of the explanations, but the whole social interaction around giving and receiving feedback. Investigating ways of supporting students and teachers to practise their feedback literacy skills matters just as much, if not more, than focusing on the content of PEM explanations. 

This study was a first-step exploration of eight educators’ views on the potential impact of using LLM explanations of PEMs in the classroom. Exactly what the findings of this study mean for classroom practice remains to be investigated, and we also need to examine students’ views on the feedback and its impact on their journey of learning to program. 

If you want to hear more, you can watch my seminar:

You can also read the associated paper, or find out more about the research instruments on this project website.

If any of these ideas resonated with you as an educator, student, or researcher, do reach out — we’d love to hear from you. You can contact me directly at [email protected] or drop us a line in the comments below. 

Join our next seminar

The focus of our ongoing seminar series is on teaching programming with or without AI. Check out the schedule of our upcoming seminars.

To take part in the next seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.

You can also catch up on past seminars on our blog and on the previous seminars and recordings page.


[$] Kernel developers at Cauldron

Post Syndicated from corbet original https://lwn.net/Articles/990379/

A Linux system is made up of a large number of interdependent components, all of which must support each other well. It can thus be surprising that the developers working on those components do not, it seems, often speak with each other. In the hope of improving that situation, efforts have been made in recent years to attract toolchain developers to the kernel-heavy Linux Plumbers Conference. This year, though, the opposite happened as well: the 2024 GNU Tools Cauldron hosted a discussion where kernel developers were invited to discuss their needs.

LLVM 19.1.0 released

Post Syndicated from jzb original https://lwn.net/Articles/990706/

Version 19.1.0 of the LLVM compiler suite has been released:

This is the first release in the LLVM 19.x series and represents 6 months of work by the LLVM community. During this period 1502 unique authors contributed 18925 commits (3605729 lines added and 1665792 lines removed) to LLVM.

As usual, there is a long list of changes; see the release notes for LLVM, Libc++, lld, Clang, and Extra Clang Tools for changes to each.

Security updates for Wednesday

Post Syndicated from jzb original https://lwn.net/Articles/990731/

Security updates have been issued by AlmaLinux (pcs), Debian (expat, galera-4, libreoffice, mariadb-10.5, and php-twig), Fedora (chromium), Red Hat (ghostscript and git), SUSE (gstreamer-plugins-bad, gstreamer-plugins-bad, libvpl, python-dnspython, python3, and python36), and Ubuntu (expat, frr, libxmltok, linux-xilinx-zynqmp, openssl, and quagga).

Get to know Amazon GuardDuty Runtime Monitoring for Amazon EC2

Post Syndicated from Scott Ward original https://aws.amazon.com/blogs/security/get-to-know-amazon-guardduty-runtime-monitoring-for-amazon-ec2/

In this blog post, I take you on a deep dive into Amazon GuardDuty Runtime Monitoring for EC2 instances and key capabilities that are part of the feature. Throughout the post, I provide insights around deployment strategies for Runtime Monitoring and detail how it can deliver security value by detecting threats against your Amazon Elastic Compute Cloud (Amazon EC2) instances and the workloads you run on them. This post builds on the post by Channy Yun that outlines how to enable Runtime Monitoring, how to view the findings that it produces, and how to view the coverage it provides across your EC2 instances.

Amazon Web Services (AWS) launched Amazon GuardDuty at re:Invent 2017 with a focus on providing customers managed threat detection capabilities for their AWS accounts and workloads. When enabled, GuardDuty takes care of consuming and processing the necessary log data. Since its launch, GuardDuty has continued to expand its threat detection capabilities. This expansion has included identifying new threat types that can impact customer environments, identifying new threat tactics and techniques within existing threat types, and expanding the log sources consumed by GuardDuty to detect threats across AWS resources. Examples of this expansion include the ability to detect EC2 instance credentials being used to invoke APIs from an IP address that’s owned by a different AWS account than the one that the associated EC2 instance is running in, and the ability to identify threats to Amazon Elastic Kubernetes Service (Amazon EKS) clusters by analyzing Kubernetes audit logs.

GuardDuty has continued to expand its threat detection capabilities beyond AWS log sources, providing more comprehensive coverage of customers’ AWS resources. Specifically, customers needed more visibility around threats that might occur at the operating system level of their container and compute instances. To address this need, GuardDuty released the Runtime Monitoring feature, beginning with support for Amazon EKS workloads. Runtime Monitoring provides the operating system insight that GuardDuty uses to detect potential threats to workloads running on AWS, delivering the operating system visibility that customers were asking for. At re:Invent 2023, GuardDuty expanded Runtime Monitoring to include Amazon Elastic Container Service (Amazon ECS), including serverless workloads running on AWS Fargate, and previewed support for Amazon EC2, which became generally available earlier this year. The release of EC2 Runtime Monitoring enables comprehensive compute coverage for GuardDuty across containers and EC2 instances, delivering breadth and depth for threat detection in these areas.

Features and functions

GuardDuty EC2 Runtime Monitoring relies on a lightweight security agent that collects operating system events—such as file access, processes, command line arguments, and network connections—from your EC2 instance and sends them to GuardDuty. After the operating system events are received by GuardDuty, they’re evaluated to identify potential threats related to the EC2 instance. In this section, we explore how GuardDuty is evaluating the runtime events it receives and how GuardDuty presents identified threat information.

Command arguments and event correlation

The runtime security agent enables GuardDuty to create findings that can’t be created using the foundational data sources of VPC Flow Logs, DNS logs, and CloudTrail logs. The security agent can collect detailed information about what’s happening at the instance operating system level that the foundational data sources don’t contain.

With the release of EC2 Runtime Monitoring, additional capabilities have been added to the runtime agent and to GuardDuty. The additional capabilities include collecting command arguments and correlation of events for an EC2 instance. These new capabilities help to rule out benign events and more accurately generate findings that are related to activities that are associated with a potential threat to your EC2 instance.

Command arguments

The GuardDuty security agent collects information on operating system runtime commands (curl, systemctl, cron, and so on) and uses this information to generate findings. The security agent now also collects the command arguments that were used as part of running a command. This additional information gives GuardDuty more capabilities to detect threats because of the additional context related to running a command.

For example, the agent will not only identify that systemctl (which is used to manage services on your Linux instance) was run but also which parameters the command was run with (stop, start, disable, and so on) and for which service the command was run. This level of detail helps identify that a threat actor might be changing security or monitoring services to evade detection.
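A toy sketch makes it concrete why arguments matter. The detection rule and the list of watched services below are invented for illustration and are not GuardDuty’s actual logic:

```python
# Toy illustration: with only the command name, "systemctl" is benign;
# with its arguments, stopping a security service becomes visible.
# The rule and the watched-service list are invented for illustration.
import shlex

# Security/monitoring services whose shutdown is worth flagging (illustrative).
WATCHED_SERVICES = {"apparmor", "auditd", "falcon-sensor"}

def is_suspicious_systemctl(command_line: str) -> bool:
    """Flag systemctl invocations that stop, disable, or mask a watched service."""
    argv = shlex.split(command_line)
    return (
        len(argv) >= 3
        and argv[0] == "systemctl"
        and argv[1] in {"stop", "disable", "mask"}
        and argv[2] in WATCHED_SERVICES
    )
```

Here `is_suspicious_systemctl("systemctl stop apparmor")` is flagged while `is_suspicious_systemctl("systemctl start nginx")` is not, even though both are invocations of the same command.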

Event correlation

GuardDuty can now also correlate multiple events collected using the runtime agent to identify scenarios that present themselves as a threat to your environment. There might be events that happen on your instance that, on their own, don’t present themselves as a clear threat. These are referred to as weak signals. However, when these weak signals are considered together and the sequence of commands aligns to malicious activity, GuardDuty uses that information to generate a finding. For example, a download of a file would present itself as a weak signal. If that download of a file is then piped to a shell command and the shell command begins to interact with additional operating system files or network configurations, or run known malware executables, then the correlation of all these events together can lead to a GuardDuty finding.
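The correlation idea can be illustrated with a small sketch. The signal names and the “malicious sequence” below are entirely invented; GuardDuty’s real correlation model is not public:

```python
# Toy illustration of correlating weak signals into a finding: no single
# signal is alarming, but the ordered sequence is. Signal names and the
# rule are invented; GuardDuty's actual correlation logic is not public.
from collections import defaultdict

# An ordered sequence of weak signals that, taken together, looks malicious.
SUSPICIOUS_SEQUENCE = ["file_download", "piped_to_shell", "modify_network_config"]

def correlate(events: list[tuple[str, str]]) -> list[str]:
    """events: (instance_id, signal) pairs in time order.
    Returns instance IDs whose signals contain the suspicious
    sequence, in order (other signals may be interleaved)."""
    seen = defaultdict(list)
    for instance_id, signal in events:
        seen[instance_id].append(signal)
    flagged = []
    for instance_id, signals in seen.items():
        it = iter(signals)
        # `step in it` consumes the iterator, so this checks an
        # in-order subsequence match rather than mere membership.
        if all(step in it for step in SUSPICIOUS_SEQUENCE):
            flagged.append(instance_id)
    return flagged
```

An instance that only downloads a file is never flagged; one that downloads a file, pipes it to a shell, and then rewrites network configuration is.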

GuardDuty finding types

GuardDuty Runtime Monitoring currently supports 41 finding types to indicate potential threats based on the operating system-level behavior from the hosts and containers in your Amazon EKS clusters on Amazon EC2, Amazon ECS on Fargate and Amazon EC2, and EC2 instances. These findings are based on the event types that the security agent collects and sends to the GuardDuty service.

Five of these finding types take advantage of the new capabilities of the runtime agent and GuardDuty, which were discussed in the previous section of this post. These five new finding types are the following:

  • Execution:Runtime/SuspiciousTool
  • Execution:Runtime/SuspiciousCommand
  • Execution:Runtime/MaliciousFileExecuted
  • DefenseEvasion:Runtime/SuspiciousCommand
  • DefenseEvasion:Runtime/PtraceAntiDebugging

Each GuardDuty finding begins with a threat purpose, which is aligned with MITRE ATT&CK tactics. The Execution finding types are focused on observed threats to the actual running of commands or processes that align to malicious activity. The DefenseEvasion finding types are focused on situations where commands are run that are trying to disable defense mechanisms on the instance, which would normally be used to identify or help prevent the activity of a malicious actor on your instance.

In the following sections, I go into more detail about the new Runtime Monitoring finding types and the types of malicious activities that they are identifying.

Identifying suspicious tools and commands

The SuspiciousTool, SuspiciousCommand, and PtraceAntiDebugging finding types are focused on suspicious activities, or those that are used to evade detection. The approach to identify these types of activities is similar. The SuspiciousTool finding type is focused on tools such as backdoor tools, network scanners, and network sniffers. GuardDuty helps to identify the cases where malicious activities related to these tools are occurring on your instance.

The SuspiciousCommand finding type identifies suspicious commands with the threat purposes of DefenseEvasion or Execution. The DefenseEvasion findings are an indicator of an unauthorized user trying to hide their actions. These actions could include disabling a local firewall, modifying local IP tables, or removing crontab entries. The Execution findings identify when a suspicious command has been run on your EC2 instance. The findings related to Execution could be for a single suspicious command or a series of commands, which, when combined with a series of other commands along with additional context, becomes a clearer indicator of suspicious activity. An example of an Execution finding related to combining multiple commands could be when a file is downloaded and is then run in a series of steps that align with a known malicious pattern.

For the PtraceAntiDebugging finding, GuardDuty is looking for cases where a process on your instance has used the ptrace system call with the PTRACE_TRACEME option, which causes an attached debugger to detach from the running process. This is a suspicious activity because it allows a process to evade debugging using ptrace and is a known technique that malware uses to evade detection.

Identifying running malicious files

The updated GuardDuty security agent can also identify when malicious files are run. With the MaliciousFileExecuted finding type, GuardDuty can identify when known malicious files might have been run on your EC2 instance, providing a strong indicator that malware is present on your instance. This capability is especially important because it allows you to identify known malware that might have been introduced since your last malware scan.

Finding details

All of the findings mentioned so far are consumable through the AWS Management Console for GuardDuty, through the GuardDuty APIs, as Amazon EventBridge messages, or through AWS Security Hub. The findings that GuardDuty generates are meant to not only tell you that a suspicious event has been observed on your instance, but also give you enough context to formulate a response to the finding.

The GuardDuty security agent collects a variety of events from the operating system to use for threat detection. When GuardDuty generates a finding based on observed runtime activities, it will include the details of these observed events, which can help with confirmation on what the threat is and provide you a path for possible remediation steps based on the reported threat. The information provided in a GuardDuty runtime finding can be broken down into three main categories:

  • Information about the impacted AWS resource
  • Information about the observed processes that were involved in the activity
  • Context related to the runtime events that were observed
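To show how these three categories sit together in a finding, here is a sketch that pulls them out of a finding dictionary. The nesting follows the shape of GuardDuty’s finding JSON, but the sample below is abridged and illustrative, not a real finding, and field names should be checked against the finding schema before relying on them:

```python
# Extracting the three categories of information from a GuardDuty runtime
# finding. The nesting mirrors the shape of GuardDuty's finding JSON, but
# this sample is abridged and illustrative; verify field names against
# the actual finding schema before using this in anger.

def summarize_runtime_finding(finding: dict) -> dict:
    """Return the impacted resource, process, and runtime context of a finding."""
    instance = finding.get("Resource", {}).get("InstanceDetails", {})
    runtime = finding.get("Service", {}).get("RuntimeDetails", {})
    return {
        "instance_id": instance.get("InstanceId"),   # impacted AWS resource
        "process": runtime.get("Process", {}).get("Name"),  # observed process
        "context": runtime.get("Context", {}),       # runtime event context
    }

# Abridged, made-up sample finding for illustration only.
sample = {
    "Type": "DefenseEvasion:Runtime/SuspiciousCommand",
    "Resource": {"InstanceDetails": {"InstanceId": "i-0123456789abcdef0"}},
    "Service": {
        "RuntimeDetails": {
            "Process": {"Name": "systemctl"},
            "Context": {"ServiceName": "apparmor"},
        }
    },
}
```

Grouping the fields this way mirrors how you would triage a finding: first which instance, then which process, then the context explaining why the activity was flagged.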

Impacted AWS resources

In each finding that GuardDuty produces, information about the impacted AWS resource is included. For EC2 Runtime Monitoring, the key details are about the EC2 instance itself (such as name, instance type, AMI, and AWS Region), the tags assigned to the instance, its network interfaces, and its security groups. This information helps you direct your response and investigation to the specific instance identified in the observed threat. It’s also useful for assessing key network configurations of the instance, whether to confirm that the network configuration is correct or to understand how it might factor into your response.

Process details

For each runtime finding, GuardDuty includes the details that were observed about the process attributed to the threat that the finding is for. Common items that you should expect to see include the name of the executable and the path to the executable that resulted in the finding being created, the ID of the operating system process, when the process started, and which operating system user ran the process. Additionally, process lineage is included in the finding. Process lineage helps identify operating system processes that are related to each other and provides insight into the parent processes that were run leading up to the identified process. Understanding this lineage can give you valuable insight into what the root cause of the malicious process identified in the finding might be; for example, being able to identify which other commands were run that ultimately led to the activation of the executable or command identified in the finding. If the process attributed to the finding is running inside a container, the finding also provides container details such as the container ID and image.
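The value of lineage is easy to see in miniature. The following sketch reconstructs the kind of parent chain a finding’s lineage section lets you follow; the process records here are invented for illustration:

```python
# Toy reconstruction of process lineage from pid -> record data, the kind
# of parent chain a runtime finding's lineage section lets you follow.
# The process records below are invented for illustration.

def lineage(processes: dict[int, dict], pid: int) -> list[str]:
    """Walk parent links from `pid` up to the root, returning executable names."""
    chain = []
    while pid in processes:
        record = processes[pid]
        chain.append(record["name"])
        pid = record.get("ppid", -1)
    return chain

procs = {
    1:   {"name": "systemd", "ppid": 0},
    800: {"name": "bash",    "ppid": 1},
    801: {"name": "curl",    "ppid": 800},  # the download...
    802: {"name": "sh",      "ppid": 800},  # ...and the shell it fed
}
```

Starting from the flagged process (here pid 801), the chain `curl ← bash ← systemd` immediately shows which interactive session launched the download, which is exactly the root-cause question lineage answers.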

Runtime context

Runtime context provides insight on things such as file system type, flags that were used to control the behavior of the event, name of the potentially suspicious tool, path to the script that generated the finding, and the name of the security service that was disabled. The context information in the finding is intended to help you further understand the runtime activity that was identified as a threat and determine its potential impact and what your response might be. For example, a command that is detected by using the systemctl command to disable the apparmor utility would report the process information related to running the systemctl command, and then the runtime context would contain the name of the actual service that was impacted by the systemctl command call and the options used with the command.

See Runtime Monitoring finding details for a full list of the process and context details that might be present in your runtime findings.

Responding to runtime findings

With GuardDuty findings, it’s a best practice to enable an event-based response that can be invoked as soon as a finding is generated, and this holds true for runtime-related findings as well. For every runtime finding that GuardDuty generates, a copy of the finding is sent to EventBridge. If you use Security Hub, a copy of the finding is sent to Security Hub as well. With EventBridge, you can define a rule with a pattern that matches the finding attributes you want to prioritize and respond to. This pattern could be very broad, matching all runtime-related findings, or more specific, matching only certain finding types, findings of a certain severity, or even certain attributes related to the process or runtime context of a finding.
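As one concrete illustration, a rule pattern along these lines would match higher-severity runtime findings. The `aws.guardduty` source and `GuardDuty Finding` detail-type are the values EventBridge uses for GuardDuty events; the severity threshold and the wildcard on the finding type are example choices, not recommendations:

```python
# Sketch of an EventBridge rule pattern for GuardDuty runtime findings.
# "aws.guardduty" / "GuardDuty Finding" are the values EventBridge uses
# for GuardDuty events; the severity threshold of 7 and the wildcard on
# the finding type are arbitrary examples.
import json

runtime_finding_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {
        "type": [{"wildcard": "*:Runtime/*"}],   # runtime finding types only
        "severity": [{"numeric": [">=", 7]}],    # higher-severity findings
    },
}

# With boto3, the pattern would be attached to a rule roughly like this
# (requires AWS credentials; rule name is a placeholder):
#   events = boto3.client("events")
#   events.put_rule(Name="guardduty-runtime-findings",
#                   EventPattern=json.dumps(runtime_finding_pattern))
```

Narrowing the pattern at the rule level, rather than in the target, keeps low-priority findings from invoking your response automation at all.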

After the rule pattern is established, you can define a target that the finding should be sent to. This target can be one of over 20 AWS services, which gives you lots of flexibility in routing the finding into the operational tools or processes that are used by your company. The target could be an AWS Lambda function that’s responsible for evaluating the finding, adding some additional data to the finding, and then sending it to a ticketing or chat tool. The target could be an AWS Systems Manager runbook, which would be used on the actual operating system to perform additional forensics or to isolate or disable any processes that are identified in the finding.

Many customers take a stepped approach in their response to GuardDuty findings. The first step might be to make sure that the finding is enriched with as many supporting details as possible and sent to the right individual or team. This helps whoever’s investigating the finding to confirm that the finding is a true positive, further informing the decision on what action to take.

In addition to having an event-based response to GuardDuty findings, you can investigate each GuardDuty runtime finding in the GuardDuty or Security Hub console. Through the console, you can research the details of the finding and use the information to inform the next steps to respond to or remediate the finding.

Speed to detection

Because it runs in close proximity to your workloads, the GuardDuty security agent can produce findings more quickly than processing log sources such as VPC Flow Logs and DNS logs. The security agent collects operating system events and forwards them directly to the GuardDuty service, which examines the events and generates findings sooner. This helps you formulate a response earlier so that you can isolate and stop identified threats to your EC2 instances.

Let’s examine a finding type that can be detected by both the runtime security agent and by the foundational log sources of AWS CloudTrail, VPC Flow Logs, and DNS logs. Backdoor:EC2/C&CActivity.B!DNS and Backdoor:Runtime/C&CActivity.B!DNS are the same finding with one coming from DNS logs and one coming from the runtime security agent. While GuardDuty doesn’t have a service-level agreement (SLA) on the time it takes to consume the findings for a log source or the security agent, testing for these finding types reveals that the runtime finding is generated in just a few minutes. Log file-based findings will take around 15 minutes to produce because of the latency of log file delivery and processing. In the end, these two findings mean the same thing, but the runtime finding will arrive faster and with additional process and context information, helping you implement a response to the threat sooner and improve your ability to isolate, contain, and stop the threat.

Runtime data and flow logs data

When exploring the Runtime Monitoring feature and its usefulness for your organization, a key item to understand is the foundational level of protection for your account and workloads. When you enable GuardDuty, the foundational data sources of VPC Flow Logs, DNS logs, and CloudTrail logs are also enabled, and those sources cannot be turned off without fully disabling GuardDuty. Runtime Monitoring provides contextual information that allows for more precise findings and more targeted remediation than the information in VPC Flow Logs alone. When the Runtime Monitoring agent is deployed onto an instance, the GuardDuty service still processes the VPC Flow Logs and DNS logs for that instance. If, at any point, an unauthorized user tampers with your security agent, or an instance is deployed without the security agent, GuardDuty will continue to use VPC Flow Logs and DNS logs data to monitor for potential threats and suspicious activity, providing defense in depth to help ensure you have visibility and coverage for detecting threats.

Note: GuardDuty doesn’t charge you for processing VPC Flow Logs while the Runtime Monitoring agent is active on an instance.

Deployment strategies

There are multiple strategies that you can use to install the GuardDuty security agent on an EC2 instance, and it’s important to use the one that fits best with how you deploy and maintain instances in your environment. The following agent installation options cover managed installation, tag-based installation, and manual installation techniques. The managed installation approach is a good fit for most customers, but the manual options may be better if you have existing processes that you want to maintain, or if you want finer-grained control over agent installation than the managed approach provides.

Note: GuardDuty requires that each VPC, with EC2 instances running the GuardDuty agent, has a VPC endpoint that allows the agent to communicate with the GuardDuty service. You aren’t charged for the cost of these VPC endpoints. When you’re using the GuardDuty managed agent feature, GuardDuty will automatically create and operate these VPC endpoints for you. For the other agent deployment options listed in this section, or other approaches that you take, you must manually configure the VPC endpoint for each VPC where you have EC2 instances that will run the GuardDuty agent. See Creating VPC endpoint manually for additional details.

GuardDuty-managed installation

If you want to use security agents to monitor runtime activity on your EC2 instances but don’t want to manage the installation and lifecycle of the agent on specific instances, then Automated agent configuration is the option for you. For GuardDuty to successfully manage agent installation, each EC2 instance must meet the operating system architectural requirements of the security agent. Additionally, each instance must have the Systems Manager agent installed and configured with the minimal instance permissions that Systems Manager requires.

In addition to making sure that your instances are configured correctly, you also need to enable automated agent configuration for your EC2 instances in the Runtime Monitoring section of the GuardDuty console. Figure 1 shows what this step looks like.

Figure 1: Enable GuardDuty automated agent configuration for Amazon EC2

After you have enabled automated agent configuration and have your instances correctly configured, GuardDuty will install and manage the security agent for every instance that is configured.

GuardDuty-managed with explicit tagging

If you want to selectively manage installation of the GuardDuty agent but still want automated deployment and updates, you can use inclusion or exclusion tags to control which instances the agent is installed to.

  • Inclusion tags allow you to specify which EC2 instances the GuardDuty security agent should be installed to without having to enable automated agent configuration. To use inclusion tags, each instance where the security agent should be installed needs to have a key-value pair of GuardDutyManaged/true. While you don’t need to turn on automated agent configuration to use inclusion tags, each instance that you tag for agent installation needs to have the Systems Manager agent installed and the appropriate permissions attached to the instance using an instance role.
  • Exclusion tags allow you to enable automated agent configuration and then selectively manage which instances the agent shouldn’t be deployed to. To use exclusion tags, each instance that shouldn’t have the agent installed needs to have a key-value pair of GuardDutyManaged/false.
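The tagging described above can be sketched with the AWS CLI. The instance ID below is a placeholder, and the `run` helper only prints each command so the sketch is safe to paste; replace its body with `"$@"` to execute the calls for real.

```shell
#!/bin/sh
# Print-only dry run: swap `echo "+ $*"` for `"$@"` to actually call AWS.
run() { echo "+ $*"; }

INSTANCE_ID="i-0abcd1234example"   # placeholder instance ID

# Inclusion: opt this instance in without enabling automated agent configuration
run aws ec2 create-tags --resources "$INSTANCE_ID" \
    --tags Key=GuardDutyManaged,Value=true

# Exclusion: with automated agent configuration enabled, skip this instance
run aws ec2 create-tags --resources "$INSTANCE_ID" \
    --tags Key=GuardDutyManaged,Value=false
```

Because the tag key and values are exact strings, double-check casing (GuardDutyManaged, true, false) when automating this.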

You can use selective installation for a variety of use cases. If you’re doing a proof of concept with EC2 Runtime Monitoring, you might want to deploy the solution to a subset of your instances and then gradually onboard additional instances. At times, you might want to limit agent installation to instances that are deployed into certain environments or applications that are a priority for runtime monitoring. Tagging the resources associated with these workloads helps ensure that monitoring is in place for the resources you want to prioritize. This strategy gives you more fine-grained control, but it also requires more work and planning to help ensure that it is implemented correctly.

With a tag-based strategy, it is important to understand who is allowed to add or remove tags on your EC2 instances, because this influences when security controls are enabled or disabled. A review of your IAM roles and policies for tagging permissions is recommended to help ensure that only the appropriate principals have access to this tagging capability. This IAM document provides an example of how you might limit tagging capabilities within a policy. The approach you take will depend on how you use policies within your environment.
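As an illustrative sketch (not taken from the IAM document referenced above), a deny statement like the following blocks changes to the GuardDutyManaged tag; you might attach it to principals that should not control agent rollout:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyGuardDutyManagedTagChanges",
      "Effect": "Deny",
      "Action": ["ec2:CreateTags", "ec2:DeleteTags"],
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "ForAnyValue:StringEquals": { "aws:TagKeys": "GuardDutyManaged" }
      }
    }
  ]
}
```

The aws:TagKeys condition key matches the keys being added or removed, so this statement denies tag changes only when the GuardDutyManaged key is involved.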

Manual agent installation options

If you don’t want to run the Systems Manager agent that powers automated agent configuration, or if you have your own strategy for installing and configuring software on your EC2 instances, other deployment options may be better suited to your situation. The following are several approaches you can use to manually install the GuardDuty agent for Runtime Monitoring. See Installing the security agent manually for general pointers on the recommended manual installation steps. With manual installation, you’re responsible for updating the GuardDuty security agent when new versions are released; updating the agent can often be performed using the same techniques as installing it.

EC2 Image Builder

Your EC2 deployment strategy might be to build custom Amazon EC2 machine images that are then used as the approved machine images for your organization’s workloads. One option for installing the GuardDuty runtime agent as part of a machine image build is to use EC2 Image Builder. Image Builder simplifies the building, testing, and deployment of virtual machine and container images for use on AWS. With Image Builder, you define an image pipeline that includes a recipe with a build component for installing the GuardDuty Runtime Monitoring RPM. This approach with Image Builder helps ensure that your pre-built machine image includes the necessary components for EC2 Runtime Monitoring so that the necessary security monitoring is in place as soon as your instances are launched.
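A build component for this purpose might look like the following sketch. The RPM path is a placeholder; take the real download location and verification steps for your architecture from Installing the security agent manually.

```yaml
# Sketch of an Image Builder build component (AWSTOE schemaVersion 1.0).
# The RPM path below is a placeholder, not a real artifact location.
name: install-guardduty-agent
description: Install the GuardDuty Runtime Monitoring agent into the image
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: InstallAgentRpm
        action: ExecuteBash
        inputs:
          commands:
            - sudo rpm -ivh /tmp/amazon-guardduty-agent.rpm   # placeholder path
```

You would reference a component like this from the recipe in your image pipeline so that every image built by the pipeline ships with the agent preinstalled.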

Bootstrap

Some customers prefer to configure their EC2 instances as they’re launched. This is commonly done through the user data field of an EC2 instance definition. For EC2 Runtime Monitoring agent installation, you would add the steps related to download and install of the runtime RPM as part of your user data script. The steps that you would add to your user data script are outlined in the Linux Package Managers method of Installing the security agent manually.
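As a sketch, a user data script along these lines performs the download-and-install steps at launch. The download URL is a deliberate placeholder; substitute the real RPM location and any signature verification steps from the Linux Package Managers instructions.

```shell
#!/bin/bash
# Sketch only: the RPM URL below is a placeholder. Use the download location
# and verification steps from "Installing the security agent manually".
set -euo pipefail

curl -fsSL "https://example.com/amazon-guardduty-agent.rpm" \
     -o /tmp/amazon-guardduty-agent.rpm        # placeholder URL
sudo rpm -ivh /tmp/amazon-guardduty-agent.rpm  # install the agent
```

Because user data runs once at first boot by default, bake any later agent updates into your patching process rather than relying on this script.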

Other tools

In addition to the preceding steps, there are other tools that you can use when you want to incorporate the installation of the GuardDuty runtime monitoring agent. Tools such as Packer for building EC2 images, and Ansible, Chef, and Puppet for instance automation can be used to run the necessary steps to install the runtime agent onto the necessary EC2 instances. See Installing the security agent manually for guidance on the installation commands you would use with these tools.

Conclusion

In response to customer feedback, GuardDuty has enhanced its threat detection capabilities with the Runtime Monitoring feature, and you can now use it to deploy the same security agent across different compute services in AWS for runtime threat detection. Runtime Monitoring provides an additional level of visibility that helps you achieve your security goals for your AWS workloads.

This post outlined the GuardDuty EC2 Runtime Monitoring feature, how you can implement the feature on your EC2 instances, and the security value that the feature provides. The insight provided in this post is intended to help you better understand how EC2 Runtime Monitoring can benefit you in achieving your security goals related to identifying and responding to threats.

To learn more about GuardDuty and its Runtime Monitoring capabilities, see Runtime Monitoring in GuardDuty.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Scott Ward

Scott is a Principal Solutions Architect with the External Security Services (ESS) product team and has been with Amazon for over 20 years. Scott provides technical guidance to customers on how to use security services to protect their AWS environments. Past roles include technical lead for the AWS Security Partner segment and member of the Amazon.com global financial systems technical team.

Gender reassignment for children? Did anyone understand what PP–DB does (or doesn’t) want to ban?

Post Syndicated from Svetla Encheva original https://www.toest.bg/smiana-na-pola-na-detsa/

Life is not easy for lab mice. They are subjected to all manner of experiments in which the animals not only suffer but aren’t even guaranteed to survive. Since this summer, children in Bulgaria who belong to the LGBTI community (lesbian, gay, bisexual, trans, and intersex people) have been in a similar position. At this rate, the experiments on them will continue at best until the end of the 50th National Assembly — and quite possibly into the next parliament as well.

The Vazrazhdane law and the ITN bill

On 7 August 2024, parliament adopted, at the proposal of Vazrazhdane, amendments to the Preschool and School Education Act banning so-called propaganda of “non-traditional” sexual orientation or gender identity. The wording is so broad that it covers practically any form of expression: besides propaganda itself, “popularization” and “incitement” are also banned, and anything present in a public environment such as a school can be interpreted as “popularization.”

“Non-traditional” sexual orientation, in turn, was defined as one “different from the generally accepted notions, rooted in the Bulgarian legal tradition, of emotional, romantic, sexual, or sensual attraction between persons of opposite sexes.”

Meanwhile, on 2 August, “There Is Such a People” (ITN) introduced a bill to amend the Child Protection Act (CPA). It proposes banning “the display, presentation, offering, or distribution in any other way” of “content that does not correspond to an understanding of the sex of natural persons as a biological category” in any public place that children may visit — likewise talks with such content, “advertising of gender identity as an alternative to biological sex” and similar topics, as well as any medical procedures to change the sex of children. Violators face fines of between 300 and 50,000 leva.

The ITN bill also provides for an amendment to the Health Act, again banning “the direct presentation, advertising, and performance of medical activities using methods or technologies for changing the biological sex of persons under 18.” Slavi Trifonov’s party also proposes sanctions for violators in the Criminal Code: one to six years in prison and public censure, along with revocation of the right to practice one’s profession.

PP–DB: one broken promise and two bills

The amendments to the education act drew sharp criticism after the Varna branch of Vazrazhdane initiated reprisals against local teachers and other school staff who had joined a petition against the changes.

On 22 August, “We Continue the Change” (PP) promised its voters that the party’s MPs would, within the following week, table a proposal “to repeal the ‘non-traditional’ definition [of sexual orientation – ed.], which effectively set off this dangerous slide toward fascism in Bulgaria.” The party declared:

With actions in parliament, not just with posts, we will counter the blacklists of Bulgarian teachers.

The promised “actions in parliament” never came — neither the following week nor later. On 11 September, however, MPs from PP and “Democratic Bulgaria” (DB) introduced a bill to amend the CPA. They want the act to state that every child “has the right to protection from medical interventions aimed at changing the phenotypic [i.e., visible – ed.] and other sex characteristics of its genotypic sex,” as well as to “medical, psychological, social, and expert assistance on questions related to its sex, which will strengthen its psychological development and provide treatment where necessary.”

At the same time, PP–DB also proposes an amendment to the Health Act:

The performance of medical interventions to change the phenotypic and other sex characteristics of the genotypic sex of minors is prohibited, except in cases where there is a serious danger to their life or health.

Non-compliance carries fines of 30,000 to 50,000 leva, and a repeat violation brings loss of the right to practice medicine for six months to two years. The financial penalties proposed by the coalition are far harsher than ITN’s, which at least sets a lower bound of 300 leva. But PP–DB doesn’t threaten prison.

Sex reassignment of children is not performed in Bulgaria — something PP–DB itself mentions in the explanatory notes to its proposal. So at least at first glance, the idea is to ban something that doesn’t exist anyway.

The same day, a group of five MPs led by Elisaveta Belobradova introduced yet another bill to amend the CPA. They want it to include the following paragraph:

Every child has the right to protection from political propaganda, untruths, and pre-election political and party actions that provoke fear, isolation, and loneliness in the child and affect the family environment.

Compliance is to be monitored by the bodies provided for in the act.

Belobradova announced the initiative on social media, arguing that there is no propaganda in Bulgarian schools and that no one under 18 undergoes sex reassignment in Bulgaria. It remains unclear, however, by what criterion one would judge which political actions cause children “fear, isolation, and loneliness.” The explanatory notes mention neither the anti-LGBTI amendment to the education act nor the ITN and PP–DB proposals — which have certainly caused “fear, isolation, and loneliness” in many children. The proposal is thus more of a demonstrative gesture than anything of practical value.

Does PP–DB even agree on what it is trying to ban?

After PP–DB’s proposal to ban medical interventions altering sex characteristics, discord broke out among LGBTI activists. Some consider it scarier even than Vazrazhdane’s foreign-agents bill, while others try to convince the critics that a little transphobia (hatred of trans people) is preferable to the heavy dose contained in ITN’s proposal — though nothing guarantees that parliament won’t adopt some combination of the ITN and PP–DB bills. A third camp holds that the bill’s real purpose is “to protect intersex children from forced genital mutilation.”

Intersex people are those whose sex at birth is not unambiguous — not as a matter of identity, but biologically. Until recently (and in Bulgaria to this day), a widespread practice has been to perform “normalizing” surgeries on intersex children and teenagers, without their consent, to make them resemble boys or girls. That does not mean, however, that they will feel like members of the sex they were forced to fit into.

The Council of Europe and a number of states recognize the rights of intersex people, including the right not to be subjected to medical procedures against their will. The contrary is regarded as “genital mutilation.”

A careful reading of the bill’s explanatory notes, however, turns up little related to protecting intersex children from genital mutilation. The notes correctly state that sex reassignment of children is not performed in Bulgaria. But they also mention “absurd, quasi-normative texts [presumably alluding to the ITN proposal – ed.] which […] endanger the rights of children with rare genetic diseases and of Bulgarian doctors.” Therefore, it is claimed, the proposed amendments “preserve the right to medical and psychological support and treatment in cases of proven necessity.”

Who are the children “with rare genetic diseases,” and what is this “proven necessity”? Most likely intersex children are meant, with their “normalizing” surgeries counting as a “proven necessity.”

This reading is consistent with a Facebook post by DB MP Kristina Petkova:

Yesterday we submitted our amendments to the Child Protection Act, provoked by ITN’s absurd bill, to guarantee children’s rights to treatment in cases of proven medical necessity and to protect Bulgarian doctors.

In her view, “ITN’s proposal would in practice have prevented surgical interventions on children with congenital malformations of the genitals.”

A curious detail is that Petkova herself is not among the bill’s sponsors, unlike Vyara Todeva, Ivaylo Mitkovski, and Daniel Lorer. Todeva and Mitkovski are the only PP–DB MPs who supported Vazrazhdane’s law against LGBTI propaganda in schools at first reading. Lorer, for his part, stated on bTV the empirically false claim that “everyone in Bulgaria has a subconscious fear that talking about sexual orientation at school may influence children.” According to him, parents — like himself — fear “what their child will come home from school with, and whether it might come back different, because people have talked to it about its sexual inclinations.”

It is unlikely that the main concern of these three MPs is intersex children. The bill’s explanatory notes, however, go on to say:

Many young people are exposed to various social and cultural influences, which bring with them the debate about […] ensuring protection for children and young people [and] guaranteeing their right to physical and mental health.

It is also stressed that the proposals “take into account individual rights, medical ethics, and the public interest. They provide children with the necessary protection against influences and trends.”

These passages insinuate that trans people became trans because they were “exposed” to “social and cultural influences,” and must therefore be protected from harmful “influences and trends.” In other words, trans children are the product of some fashion — a claim that modern science rejects.

In that context, one can also interpret the proposed right of the child to “psychological, social, and expert assistance on questions related to its sex, which will strengthen its psychological development.” If being trans is a matter of “influence,” then strengthening psychological development amounts to being “cured” of that influence.

Attempts to “treat” the sexual orientation or gender identity of homosexual or trans people are called conversion therapy. There is no evidence of its success; what it has contributed to is self-hatred, depression, and suicide attempts (including successful ones) among many LGBTI people. And while more and more states are banning it, Russia is making it mandatory.

The intersex activist whom the lawmakers neither ask nor hear

The subject of intersex people is complex and specialized, and in Bulgaria it is largely unknown. Many doctors still believe that “normalizing” surgeries are a necessity rather than forced genital mutilation.

It is understandable, then, that politicians who haven’t delved into the issue might believe, with the best of intentions, that leaving intersex children unoperated would endanger their health and lives. They don’t consider that, as adults, these people could decide for themselves whether they want their body to bear the external characteristics of one sex or another. That they might choose to remain as they are — like the protagonist of Jeffrey Eugenides’s novel Middlesex — or to postpone the decision indefinitely, like the heroine of the teen film Fitting In.

Bulgaria’s first (and almost only) intersex activist is Pol Naydenov. He is also the first intersex person in the country to win a court case to change his legal sex. Toest asked him whether he believes the bill proposed by PP–DB protects intersex children or legitimizes forced genital mutilation.

According to Naydenov, the bill does not protect intersex children — quite the opposite. He calls “normalizing” surgeries “Russian roulette, except the barrel is pointed at the head of an intersex child,” because someone else decides what the child’s gender identity will be and shapes its body according to their own notion of what is “best” for the child.

Naydenov recalls that even the Constitutional Court holds that intersex people have the right to determine their own sex, which contradicts forced surgeries on children. And he ironically suggests that cosmetic surgery on teenagers — breast or lip procedures, say — should also be banned, since it too is a form of altering visible sex characteristics.

Pol Naydenov also points to another problem in the bill’s wording: “And if we have a mixed karyotype, which one is the genotypic sex?” A “mixed karyotype” means that some people’s chromosomes are neither XX (female) nor XY (male) but come in other combinations, in some of which sex cannot be determined unambiguously.

The widespread belief that “normalizing” surgeries save intersex children’s lives Naydenov comments on with the words that “in some perverse sense, in our country that is actually true.” He means not the children’s state of health, but the social stigma and rejection they face — which court rulings imputing a “binary existence of the human species” and anti-LGBTI legislation further reinforce.

A meager yet harmful result

With their proposal to ban the nonexistent sex-reassignment surgeries on trans children while legitimizing the existing sex-“normalizing” surgeries on intersex children, PP–DB is trying to please everyone: homophobes and transphobes, LGBTI activists, conservative voters and liberal ones alike. In these permanently pre-electoral times, they want to show some kind of position while making no enemies.

The bill, however, insinuates that being trans is a bad idea and a matter of harmful influences. If it passes, intersex children will not be protected. Trans youths, on the other hand, may be subjected to futile but agonizing “therapies” meant to “set them straight.”

Electorally, PP–DB is unlikely to gain anything from its proposal. After such a long series of elections, the last thing voters need is more slippery, spineless, and contradictory positions. Politics is about standing up for principles and values. If you try to please everyone, you may end up being liked by no one.

My Zabbix is down, now what? Restoring Zabbix functionality

Post Syndicated from Aurea Araujo original https://blog.zabbix.com/my-zabbix-is-down-now-what/28776/

We’ve all been in a situation in which Zabbix was somehow unavailable. It can happen for a variety of reasons, and our goal is always to help you get everything back up and running as quickly as possible. In this blog post, we’ll show you what to do in the event of a Zabbix failure, and we’ll also go into detail about how to work with the Zabbix technical support team to resolve more complex issues.

Step by step: Understanding why Zabbix is unavailable

When Zabbix becomes unavailable, it’s important to follow a few key steps to try to resolve the problem as quickly as possible.

  1. Check the service status. First, verify whether your Zabbix service is truly inactive. You can do this by accessing the machine where Zabbix is installed and checking the service status with a command like systemctl status zabbix-server on Linux.
  2. Analyze the Zabbix logs. Check the Zabbix logs for any error messages or clues about what may have caused the failure.
  3. Restart the service. If the Zabbix service has stopped, try restarting it using the appropriate command for your operating system. For example, on Linux, you can use sudo systemctl restart zabbix-server.
  4. Check the database connectivity. Zabbix uses a database to store data and Zabbix server configurations. Make sure that the database is accessible and functioning properly. You can test database connectivity using tools like ping or telnet.
  5. Check your available disk space. Verify that there is available disk space on the machine where Zabbix is installed. A lack of disk space is a common cause of system failures.
  6. Evaluate dependencies. Make sure all Zabbix dependencies are installed and working correctly. This includes libraries, services, and any other software required for Zabbix to function.
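The first few checks above can be combined into a quick triage script. This is a minimal sketch that assumes a systemd-based Linux host and the default log location; each step degrades gracefully if a tool or file is absent, so it is safe to run anywhere and should be adapted to your install.

```shell
#!/bin/sh
# Quick triage for an unavailable Zabbix server. The service name and log
# path assume a default package install; adjust them for your environment.

# 1. Service status (systemd assumed; use your init system otherwise)
if command -v systemctl >/dev/null 2>&1; then
    systemctl status zabbix-server --no-pager || echo "zabbix-server is not running"
else
    echo "systemctl not found; check the service with your init system"
fi

# 2. Recent log entries (default log location)
LOG=/var/log/zabbix/zabbix_server.log
[ -r "$LOG" ] && tail -n 50 "$LOG" || echo "no readable log at $LOG"

# 5. Disk space -- a full disk is a common cause of failures
df -h
```

If the service is down and the log shows database errors, continue with the database connectivity and dependency checks from the list above.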

If the problem persists after carrying out these steps, it may be necessary to refer to the official Zabbix documentation, seek help from the official Zabbix forum, or contact the Zabbix technical support team, depending on the severity and urgency of the situation.

Making the most of a Zabbix technical support contract

If you or your company have a Zabbix technical support contract, access to our global team of technical experts is guaranteed. This is an ideal option for resolving more complex or urgent issues. Here are a few steps you can follow when contacting the Zabbix technical support team:

  1. Gather all important information. Before contacting the Zabbix technical support team, gather all relevant information about the issue you’re facing. This can include error messages, logs, screenshots, and any steps you’ve already taken to try to resolve the issue.
  2. Open a ticket with the Zabbix technical support team. Contact Zabbix technical support by opening a ticket on the Zabbix Support System. Provide all the information gathered in the previous step to help the technicians understand the problem and find a solution as quickly as possible.
  3. Explain exactly how Zabbix crashed. When describing the problem, be as precise and detailed as possible. Include information such as the Zabbix version you are using, your operating system, your network configuration, and any other relevant details that might help our team diagnose the issue.
  4. Be available to follow up on the ticket. Once you’ve opened a ticket, be available to provide additional information or clarify any questions the support technicians may have. This will help speed up the problem resolution process.
  5. Follow the Zabbix technical support team’s recommendations. After receiving recommendations, follow them carefully and test to see whether they resolve the issue. If the problem persists or if new issues arise, inform the Zabbix technical support team immediately so they can continue assisting you.

A Zabbix technical support subscription gives you access to a team of Zabbix experts who can help you configure and troubleshoot your Zabbix environment. Check out the benefits of each type of subscription on the Zabbix website and make sure you have all the support you need to keep your monitoring fully operational.

 

The post My Zabbix is down, now what? Restoring Zabbix functionality appeared first on Zabbix Blog.

Amazon S3 Express One Zone now supports AWS KMS with customer managed keys

Post Syndicated from Elizabeth Fuentes original https://aws.amazon.com/blogs/aws/amazon-s3-express-one-zone-now-supports-aws-kms-with-customer-managed-keys/

Amazon S3 Express One Zone, a high-performance, single-Availability Zone (AZ) S3 storage class, now supports server-side encryption with AWS Key Management Service (KMS) keys (SSE-KMS). S3 Express One Zone already encrypts all objects stored in S3 directory buckets with Amazon S3 managed keys (SSE-S3) by default. Starting today, you can use AWS KMS customer managed keys to encrypt data at rest, with no impact on performance. This new encryption capability gives you an additional option to meet compliance and regulatory requirements when using S3 Express One Zone, which is designed to deliver consistent single-digit millisecond data access for your most frequently accessed data and latency-sensitive applications.

S3 directory buckets allow you to specify only one customer managed key per bucket for SSE-KMS encryption. Once the customer managed key is added, you cannot edit it to use a new key. On the other hand, with S3 general purpose buckets, you can use multiple KMS keys either by changing the default encryption configuration of the bucket or during S3 PUT requests. When using SSE-KMS with S3 Express One Zone, S3 Bucket Keys are always enabled. S3 Bucket Keys are free and reduce the number of requests to AWS KMS by up to 99%, optimizing both performance and costs.

Using SSE-KMS with Amazon S3 Express One Zone
To show you this new capability in action, I first create an S3 directory bucket in the Amazon S3 console, following the steps to create an S3 directory bucket, and use apne1-az4 as the Availability Zone. In Base name, I enter s3express-kms and a suffix that includes the Availability Zone ID, which is automatically added to create the final name. Then, I select the checkbox to acknowledge that Data is stored in a single Availability Zone.

In the Default encryption section, I choose Server-side encryption with AWS Key Management Service keys (SSE-KMS). Under AWS KMS Key I can Choose from your AWS KMS keys, Enter AWS KMS key ARN, or Create a KMS key. For this example, I previously created an AWS KMS key, which I selected from the list, and then choose Create bucket.

Now, any new object I upload to this S3 directory bucket will be automatically encrypted using my AWS KMS key.
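The same default encryption can also be configured without the console. The following sketch uses the put-bucket-encryption API with the placeholder account and key identifiers used elsewhere in this post; S3 Bucket Keys are always enabled for directory buckets, so they aren’t specified here.

```shell
# Set SSE-KMS as the default encryption for the directory bucket.
# <accountId> and <keyId> are placeholders, as in the rest of this post.
aws s3api put-bucket-encryption \
  --bucket s3express-kms--apne1-az4--x-s3 \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:ap-northeast-1:<accountId>:key/<keyId>"
      }
    }]
  }'
```

Remember that for a directory bucket you can specify only one customer managed key, and it can’t be changed later.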

SSE-KMS with Amazon S3 Express One Zone in action
To use SSE-KMS with S3 Express One Zone via the AWS Command Line Interface (AWS CLI), you need an AWS Identity and Access Management (IAM) user or role with the following policy. This policy allows the CreateSession API operation, which is necessary to successfully upload and download encrypted files to and from your S3 directory bucket.

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": [
				"s3express:CreateSession"
			],
			"Resource": [
				"arn:aws:s3express:*:<account>:bucket/s3express-kms--apne1-az4--x-s3"
			]
		},
		{
			"Effect": "Allow",
			"Action": [
				"kms:Decrypt",
				"kms:GenerateDataKey"
			],
			"Resource": [
				"arn:aws:kms:*:<account>:key/<keyId>"
			]
		}
	]
}

With the PutObject command, I upload a new file named confidential-doc.txt to my S3 directory bucket.

aws s3api put-object --bucket s3express-kms--apne1-az4--x-s3 \
--key confidential-doc.txt \
--body confidential-doc.txt

If the command succeeds, I receive the following output:

{
    "ETag": "\"664469eeb92c4218bbdcf92ca559d03b\"",
    "ChecksumCRC32": "0duteA==",
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "arn:aws:kms:ap-northeast-1:<accountId>:key/<keyId>",
    "BucketKeyEnabled": true
}

Checking the object’s properties with HeadObject command, I see that it’s encrypted using SSE-KMS with the key that I created before:

aws s3api head-object --bucket s3express-kms--apne1-az4--x-s3 \
--key confidential-doc.txt

I get the following output:

 
{
    "AcceptRanges": "bytes",
    "LastModified": "2024-08-21T15:29:22+00:00",
    "ContentLength": 5,
    "ETag": "\"664469eeb92c4218bbdcf92ca559d03b\"",
    "ContentType": "binary/octet-stream",
    "ServerSideEncryption": "aws:kms",
    "Metadata": {},
    "SSEKMSKeyId": "arn:aws:kms:ap-northeast-1:<accountId>:key/<keyId>",
    "BucketKeyEnabled": true,
    "StorageClass": "EXPRESS_ONEZONE"
}

I download the encrypted object with GetObject:

aws s3api get-object --bucket s3express-kms--apne1-az4--x-s3 \
--key confidential-doc.txt output-confidential-doc.txt

As my session has the necessary permissions, the object is downloaded and decrypted automatically.

{
    "AcceptRanges": "bytes",
    "LastModified": "2024-08-21T15:29:22+00:00",
    "ContentLength": 5,
    "ETag": "\"664469eeb92c4218bbdcf92ca559d03b\"",
    "ContentType": "binary/octet-stream",
    "ServerSideEncryption": "aws:kms",
    "Metadata": {},
    "SSEKMSKeyId": "arn:aws:kms:ap-northeast-1:<accountId>:key/<keyId>",
    "BucketKeyEnabled": true,
    "StorageClass": "EXPRESS_ONEZONE"
}

For this second test, I use a different IAM user with a policy that is not granted the necessary KMS key permissions to download the object. This attempt fails with an AccessDenied error, demonstrating that the SSE-KMS encryption is functioning as intended.

An error occurred (AccessDenied) when calling the CreateSession operation: Access Denied

This demonstration shows how SSE-KMS works seamlessly with S3 Express One Zone, providing an additional layer of security while maintaining ease of use for authorized users.

Things to know
Getting started – You can enable SSE-KMS for S3 Express One Zone using the Amazon S3 console, AWS CLI, or AWS SDKs. Set the default encryption configuration of your S3 directory bucket to SSE-KMS and specify your AWS KMS key. Remember, you can only use one customer managed key per S3 directory bucket for its lifetime.

Regions – S3 Express One Zone support for SSE-KMS using customer managed keys is available in all AWS Regions where S3 Express One Zone is currently available.

Performance – Using SSE-KMS with S3 Express One Zone does not impact request latency. You’ll continue to experience the same single-digit millisecond data access.

Pricing – You pay AWS KMS charges to generate and retrieve data keys used for encryption and decryption. Visit the AWS KMS pricing page for more details. In addition, when using SSE-KMS with S3 Express One Zone, S3 Bucket Keys are enabled by default for all data plane operations except for CopyObject and UploadPartCopy, and can’t be disabled. This reduces the number of requests to AWS KMS by up to 99%, optimizing both performance and costs.

AWS CloudTrail integration – You can audit SSE-KMS actions on S3 Express One Zone objects using AWS CloudTrail. Learn more about that in my previous blog post.

– Eli.

Streamline SMS and Emailing Marketing Compliance with Amazon Comprehend

Post Syndicated from Koushik Mani original https://aws.amazon.com/blogs/messaging-and-targeting/streamline-sms-and-emailing-marketing-compliance-with-amazon-comprehend/

In today’s digital landscape, businesses rely heavily on SMS and email campaigns to engage with customers and deliver timely, relevant messages. The shift towards digital marketing has increased customer engagement, accelerated delivery, and expanded personalization options. According to 44% of Chief Marketing Officers, email and SMS marketing is essential to their digital strategies, and they allocate approximately 8% of their marketing budget to these channels. However, industries face stringent restrictions on the content they can send due to legal regulations and carrier filtering policies.

Messages related to the subjects listed below are considered restricted and are subject to heavy filtering, or may be blocked outright. Failing to comply with these restrictions can result in severe consequences, including legal action, fines, and irreparable damage to a brand’s reputation. Marketers need a solution that proactively scans campaign content and flags restricted material before it is sent, so they can reach their customers without facing penalties or losing trust:

  • Gambling
  • High-risk financial services
  • Debt forgiveness
  • S.H.A.F.T (Sex, Hate, Alcohol, Firearms, and Tobacco)
  • Illegal substances

In this blog, we will explore how to use Amazon Comprehend, Amazon S3, and AWS Lambda to proactively scan text-based marketing campaigns before publishing content. This solution enables businesses to enhance their marketing efforts while maintaining compliance with industry regulations, avoiding costly fines, and preserving their hard-earned reputation.

Solution Overview

AWS provides a robust suite of services to meet the infrastructure needs of the booming digital marketing industry, including messaging capabilities across email, SMS, push, and other channels via Amazon Simple Email Service, Amazon Simple Notification Service, or Amazon Pinpoint.

The main goal for this approach is to flag any message that contains restricted content mentioned above before distribution.

Figure 1: Architecture for proactive scanning of marketing content

Following are the high-level steps:

  1. Upload documents to be scanned to the S3 bucket.
  2. Utilize Amazon Comprehend custom classification for categorizing the documents uploaded.
  3. Create an Amazon Comprehend endpoint to perform analysis.
  4. Inference output is published to the destination S3 bucket.
  5. Utilize AWS Lambda function to consume the output from the destination S3 bucket.
  6. Send the compliant messages through various messaging channels.

Solution Walkthrough

Step 1: Upload Documents to Be Scanned to S3

  1. Sign in to the AWS Management Console and open the Amazon S3 console.
  2. In the navigation bar at the top of the page, choose the name of the currently displayed AWS Region, then choose the Region in which you want to create a bucket.
  3. In the left navigation pane, choose Buckets.
  4. Choose Create bucket.
    • The Create bucket page opens.
  5. Under General configuration, view the AWS Region where your bucket will be created.
  6. Under Bucket type, choose General purpose.
  7. For Bucket name, enter a name for your bucket.
    • The bucket name must:
      • Be unique within a partition. A partition is a grouping of Regions. AWS currently has three partitions: aws (Standard Regions), aws-cn (China Regions), and aws-us-gov (AWS GovCloud (US) Regions).
      • Be between 3 and 63 characters long.
      • Consist only of lowercase letters, numbers, dots (.), and hyphens (-). For best compatibility, we recommend that you avoid using dots (.) in bucket names, except for buckets that are used only for static website hosting.
      • Begin and end with a letter or number.
  8. In the Buckets list, choose the name of the bucket that you want to upload your folders or files to.
  9. Choose Upload.
  10. In the Upload window, do one of the following:
    • Drag and drop files and folders to the Upload window.
    • Choose Add file or Add folder, choose the files or folders to upload, and choose Open.
    • To enable versioning, under Destination, choose Enable Bucket Versioning.
    • To upload the listed files and folders without configuring additional upload options, at the bottom of the page, choose Upload.
  11. Amazon S3 uploads your objects and folders. When the upload is finished, you see a success message on the Upload: status page.
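The upload steps above can also be done programmatically. The sketch below validates a bucket name against the naming rules listed earlier (uniqueness within a partition can only be verified by AWS itself) and then uploads a file with boto3; the bucket, path, and key values you pass in are your own:

```python
import re

# Pattern for the rules above: 3-63 characters, lowercase letters,
# digits, dots, and hyphens only, beginning and ending with a letter
# or number. Partition-wide uniqueness cannot be checked locally.
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """Check a bucket name against the S3 general-purpose naming rules."""
    return bool(_BUCKET_RE.fullmatch(name))

def upload(bucket: str, path: str, key: str) -> None:
    """Upload one local file to S3 (deferred boto3 import; assumes
    credentials are already configured)."""
    if not is_valid_bucket_name(bucket):
        raise ValueError(f"invalid bucket name: {bucket!r}")
    import boto3
    boto3.client("s3").upload_file(path, bucket, key)
```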

Step 2: Creating a Custom Classification Model

Out-of-the-box models may not capture nuances and terminology specific to an organization’s industry or use case. Therefore, we train a custom model to identify compliant messages.

A custom classification model is a feature that lets you train a machine learning model to classify text based on categories specific to your use case or industry. The trained model recognizes and sorts different types of content, and it powers the endpoint used for analysis. In this solution, the custom model helps identify compliant messages before they are sent, protecting marketing companies from potential fines.

Requirements for custom classification:

  • Dataset creation
    • A CSV dataset with 1000 examples of marketing messages, each labeled as compliant (1) or non-compliant (0).
    • Designed to train a model for accurate predictions on marketing message compliance.
Figure 2: Screenshot of dataset – 20 entries of censored marketing messages

  • Creating a test dataset
    In addition to the training dataset, you can provide a test dataset to evaluate the model. Without one, Amazon Comprehend trains the model with 90 percent of the training data and reserves the remaining 10 percent for testing. If you do provide a test dataset, it must include at least one example for each unique label (0 or 1) in the training dataset.
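As a sketch of the dataset layout, Comprehend's single-label CSV format puts the label in the first column and the text in the second, with no header row. The two example messages below are invented for illustration:

```python
import csv
import io

# Invented example rows: label 1 = compliant, label 0 = non-compliant
examples = [
    (1, "Your order has shipped and arrives Friday."),
    (0, "Place your bets now - casino bonus inside!"),
]

def to_training_csv(rows) -> str:
    """Serialize (label, text) pairs into the label-first, headerless
    CSV layout used for single-label custom classification."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for label, text in rows:
        writer.writerow([label, text])
    return buf.getvalue()

if __name__ == "__main__":
    print(to_training_csv(examples))
```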
  1. Upload the dataset and the test dataset to an S3 bucket by following the steps in this user guide.
  2. In the AWS console, search for Amazon Comprehend.
  3. Once selected, select Custom classification in the left panel.
  4. Once there, select Create new model.
  5. Next, specify the model settings:
    • Model name
    • Version (optional)
    • Language: English
  6. Specify the data specifications:
    • Training model type: Plain text documents
    • Data format: CSV file
    • Classifier mode: Single-label mode
    • Training dataset: the S3 location of the dataset you uploaded in step 1
    • Test dataset: Autosplit (Amazon Comprehend automatically reserves a portion of the training data for testing), or the S3 location of your own test dataset
  7. Specify the location of the model output in S3.
  8. Create an IAM role with permissions to access the training, test, and output data locations in your S3 buckets.
  9. Once all parameters have been specified, select Create.
  10. Once the model has been created, you can view it under Custom classification. To check the accuracy and F1 score, select the version number of the model and view the model details under the Performance tab.

Step 3: Creating an Endpoint in Amazon Comprehend

Next, an endpoint needs to be created to analyze documents. To create an endpoint:

  • Select Endpoints in the left panel of Amazon Comprehend.
  • Select Create endpoint.
  • Specify the endpoint settings:
    • Provide a name.
    • Custom model type: Custom classification
    • Choose the custom model that you want to attach to the new endpoint. From the dropdown, you can search by model name.
  • Provide the number of inference units (IUs): 1
  • Once all the parameters have been provided, ensure that the Acknowledge checkbox has been selected.
  • Finally, select Create endpoint.
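The console steps above map to a single API call. This sketch builds the parameters for boto3's Comprehend `create_endpoint` call; the endpoint name and model ARN below are placeholders:

```python
def endpoint_params(name: str, model_arn: str, inference_units: int = 1) -> dict:
    """Assemble create_endpoint keyword arguments (boto3 parameter names)."""
    return {
        "EndpointName": name,
        "ModelArn": model_arn,
        "DesiredInferenceUnits": inference_units,
    }

def create_endpoint(name: str, model_arn: str) -> str:
    """Create the endpoint and return its ARN (deferred boto3 import;
    assumes credentials and Region are configured)."""
    import boto3
    resp = boto3.client("comprehend").create_endpoint(
        **endpoint_params(name, model_arn)
    )
    return resp["EndpointArn"]
```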

Step 4: Scanning Text with the Custom Classification Endpoint

Once the endpoint has been successfully created, it can be used for real-time analysis or batch-processing jobs. Below is a walkthrough of how both options can be achieved.

Real-time analysis:

  • On the left panel, select Real-time analysis.
  • Pick Analysis type: Custom, to view real-time insights from the custom model behind an endpoint you’ve created.
  • Select the custom model type.
  • Select your endpoint.
  • Insert your input text.
    • For this example, we have used a non-compliant message: Huge sale at Pine County Shooting Range 25% off for 6mm and 9mm bullets! Lazer add-ons on clearance too
  • Once inserted, click Analyze.
  • Once analyzed, you will see a confidence score under Classes. Because the dataset labels non-compliant messages as 0 and compliant messages as 1, and the inserted message is non-compliant, the result of the real-time analysis is a high confidence score for class 0 (non-compliant).
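Programmatically, the same real-time analysis is one `classify_document` call against the endpoint. The sketch below picks the highest-confidence class from the response; the endpoint ARN is a placeholder:

```python
def top_class(classes):
    """Return (name, score) for the highest-confidence entry in a
    classify_document response's 'Classes' list."""
    best = max(classes, key=lambda c: c["Score"])
    return best["Name"], best["Score"]

def classify(text: str, endpoint_arn: str):
    """Classify one message against the custom endpoint (deferred boto3
    import; the endpoint ARN is your own)."""
    import boto3
    resp = boto3.client("comprehend").classify_document(
        Text=text, EndpointArn=endpoint_arn
    )
    return top_class(resp["Classes"])
```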

Batch analysis jobs in Amazon Comprehend:

  • On the left panel in Amazon Comprehend, select Analysis jobs.
  • Select the Create job button.
  • Configure the job settings:
    • Enter the name.
    • Analysis type: Custom classification
    • Classification model: the model you created for your classifier, and the version number of that model you would like to use for this job.
  • Enter the locations of the input data and output data in the form of S3 bucket URLs.
  • Before creating the job, provide the right access permissions by creating an IAM role that grants access to the S3 input and output locations.
  • Once the batch processing job shows a status of Completed, you can view the results in the output S3 bucket identified earlier. The results are in a JSON file where each line contains the confidence scores for one marketing message.
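Once extracted from the job's output, those results can be filtered in a few lines. This sketch assumes each line is a JSON object with a `Classes` list, as in Comprehend's classification output, and keeps only lines whose top class is `1` (compliant):

```python
import json

def compliant_lines(jsonl: str, threshold: float = 0.5):
    """Yield (line_number, score) for results whose highest-confidence
    class is '1' (compliant) at or above the given threshold."""
    for i, raw in enumerate(jsonl.splitlines()):
        if not raw.strip():
            continue
        classes = json.loads(raw)["Classes"]
        best = max(classes, key=lambda c: c["Score"])
        if best["Name"] == "1" and best["Score"] >= threshold:
            yield i, best["Score"]
```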

Step 5 (optional): Publish message to communication service

The result from the batch processing is automatically uploaded to the output S3 bucket. For each json file uploaded, S3 will initiate an S3 Event Notification which will inform a Lambda function that a new S3 object has been created.

The Lambda function will evaluate the results and automatically identify the messages labeled as compliant (label 1). These compliant messages can then be published through a communication service such as Amazon Simple Email Service, Amazon Simple Notification Service, or Amazon Pinpoint, depending on the desired channel.

To set this up, create an AWS Lambda function that is triggered automatically when files are uploaded to the S3 bucket; the function reads the result files from S3 using the boto3 API (and, optionally, the Python pandas library to inspect the data).

  1. Create an IAM Role in AWS.
  2. Create an AWS S3 bucket.
  3. Create the AWS Lambda function with S3 triggers enabled.
  4. Update the Lambda code with a Python script to read the data and send the communication to customers.
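A minimal sketch of such a handler is below. It assumes the S3 object-created event layout, that the job results have already been extracted to plain JSON lines, and, for illustration only, that each result line carries the original text under a hypothetical `Message` key (in practice you would join results back to your input by file and line); the SNS topic ARN is a placeholder:

```python
import json

def pick_compliant(results):
    """Keep result records whose highest-confidence class is '1' (compliant)."""
    keep = []
    for r in results:
        best = max(r["Classes"], key=lambda c: c["Score"])
        if best["Name"] == "1":
            keep.append(r)
    return keep

def handler(event, context):
    """S3-triggered Lambda: read the results object named in the event,
    filter to compliant messages, and publish each via SNS."""
    import boto3  # available in the Lambda runtime
    s3 = boto3.client("s3")
    record = event["Records"][0]["s3"]
    body = s3.get_object(
        Bucket=record["bucket"]["name"], Key=record["object"]["key"]
    )["Body"].read().decode()
    results = [json.loads(line) for line in body.splitlines() if line.strip()]
    sns = boto3.client("sns")
    for r in pick_compliant(results):
        sns.publish(
            # Placeholder topic ARN; substitute your own
            TopicArn="arn:aws:sns:us-east-1:111122223333:marketing-topic",
            Message=r.get("Message", ""),
        )
```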

Conclusion

Proactively scanning and classifying marketing content for compliance is a critical aspect of ensuring successful digital marketing campaigns while adhering to industry regulations. Leveraging the powerful combination of Amazon Comprehend, Amazon S3, and AWS Lambda enables the automatic analysis of text-based marketing messages and flagging of any non-compliant content before sending them to your customer. Following these steps provides you with the tools and knowledge to implement proactive scanning for your marketing content. This solution will help mitigate the risks of non-compliance, avoiding costly fines and reputational damage, while freeing up time for your content creation teams to focus on ideation and crafting compelling marketing messages. Regular monitoring and fine-tuning of the custom classification model should be conducted to ensure accurate identification of non-compliant language.

To get started with proactively scanning and classifying marketing content for compliance, see Amazon Comprehend Custom Classification.

About the Authors

Caroline Des Rochers

Caroline Des Rochers

Caroline is a Solutions Architect at Amazon Web Services, based in Montreal, Canada. She works closely with customers, in both English and French, to accelerate innovation and advise them through technical challenges. In her spare time, she is a passionate skier and cyclist, always ready for new outdoor adventures.

Erika_Houde_Pearce

Erika Houde Pearce

Erika Houde-Pearce is a bilingual Solutions Architect in Montreal, guiding small and medium businesses in eastern Canada through their cloud transformations. Her expertise empowers organizations to unlock the full potential of cloud technology, accelerating their growth and agility. Away from work, she spends her spare time with her golden retriever, Scotia.

Koushik_Mani

Koushik Mani

Koushik Mani is a Solutions Architect at AWS. He had worked as a Software Engineer for two years focusing on machine learning and cloud computing use cases at Telstra. He completed his masters in computer science from University of Southern California. He is passionate about machine learning and generative AI use cases and building solutions.

[$] A discussion of Rust safety documentation

Post Syndicated from daroc original https://lwn.net/Articles/990273/


Kangrejos 2024 started off with a talk from Benno Lossin about his recent work to establish a standard for safety documentation in Rust kernel code. Lossin began his talk by giving a brief review of what safety documentation is, and why it’s needed, before moving on to the current status of his work. Safety documentation is easier to read and write when there’s a shared vocabulary for discussing common requirements; Lossin wants to establish that shared vocabulary for Rust code in the Linux kernel.

[$] Vanilla OS 2: an immutable distribution to run all software

Post Syndicated from jzb original https://lwn.net/Articles/989629/

Vanilla OS, an immutable desktop Linux distribution designed for developers and advanced users, has recently published its 2.0 “Orchid” release. Previously based on Ubuntu, Vanilla OS has now shifted to Debian unstable (“sid”). The release has made it easier to install software from other distributions’ package repositories, and it is now theoretically possible to install and run Android applications as well.
