
Streamlining evidence collection with AWS Audit Manager

Post Syndicated from Nicholas Parks original https://aws.amazon.com/blogs/security/streamlining-evidence-collection-with-aws-audit-manager/

In this post, we will show you how to deploy a solution into your Amazon Web Services (AWS) account that enables you to simply attach manual evidence to controls using AWS Audit Manager. Making evidence collection as seamless as possible minimizes audit fatigue and helps you maintain a strong compliance posture.

As an AWS customer, you can use APIs to deliver high-quality software at a rapid pace. If you have compliance-focused teams that rely on manual, ticket-based processes, you might find it difficult to document audit changes as those changes increase in velocity and volume.

As your organization works to meet audit and regulatory obligations, you can save time by incorporating audit compliance processes into a DevOps model. You can use modern services like Audit Manager to make this easier. Audit Manager automates evidence collection and generates reports, which helps reduce manual auditing efforts and enables you to scale your cloud auditing capabilities along with your business.

AWS Audit Manager uses services such as AWS Security Hub, AWS Config, and AWS CloudTrail to automatically collect and organize evidence, such as resource configuration snapshots, user activity, and compliance check results. However, for controls represented in your software or processes without an AWS service-specific metric to gather, you need to manually create and provide documentation as evidence to demonstrate that you have established organizational processes to maintain compliance. The solution in this blog post streamlines these types of activities.

Solution architecture

This solution creates an HTTPS API endpoint that allows integration with other software development lifecycle (SDLC) solutions, IT service management (ITSM) products, and clinical trial management system (CTMS) solutions that capture trial process change amendment documentation (in the case of pharmaceutical companies that use AWS to build robust pharmacovigilance solutions). The endpoint can also serve as a backend microservice to an application that allows contract research organization (CRO) investigators to add their compliance supporting documentation.

In this solution’s current form, you can submit an evidence file payload along with the assessment and control details to the API, and the solution ties all the information together for the audit report. This post and solution are directed toward engineering teams who are looking for a way to accelerate evidence collection. To maximize the effectiveness of this solution, your engineering team will also need to collaborate with cross-functional groups, such as audit and business stakeholders, to design a process and service that constructs and sends the message(s) to the API, and to scale out usage across the organization.

To download the code for this solution, and the configuration that enables you to set up auto-ingestion of manual evidence, see the aws-audit-manager-manual-evidence-automation GitHub repository.

Architecture overview

In this solution, you use an AWS Serverless Application Model (AWS SAM) template to build the solution and deploy it to your AWS account. See Figure 1 for an illustration of the high-level architecture.

Figure 1. The architecture of the AWS Audit Manager automation solution

The SAM template creates resources that support the following workflow:

  1. A client can call an Amazon API Gateway endpoint by sending a request that includes the assessment details and the evidence payload.
  2. An AWS Lambda function implements the API to handle the request.
  3. The Lambda function uploads the evidence to an Amazon Simple Storage Service (Amazon S3) bucket (3a) and uses AWS Key Management Service (AWS KMS) to encrypt the data (3b).
  4. The Lambda function also initializes the AWS Step Functions workflow.
  5. Within the Step Functions workflow, a Standard Workflow calls two Lambda functions. The first looks for a matching control within an assessment, and the second updates the control within the assessment with the evidence.
  6. When the Step Functions workflow concludes, it sends a success or failure notification to subscribers of an Amazon Simple Notification Service (Amazon SNS) topic. (A sketch of such a state machine definition follows this list.)
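
To make steps 5 and 6 concrete, the following is a minimal sketch, in Amazon States Language, of a state machine that chains the two Lambda functions and then publishes the outcome to the SNS topic. The state names and ARN placeholders are illustrative assumptions, not the repository’s actual definition, which would also branch on failure so that subscribers are notified either way.

    {
      "Comment": "Find the matching control, attach evidence, then notify",
      "StartAt": "FindMatchingControl",
      "States": {
        "FindMatchingControl": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:<region>:<account>:function:<FindControlFunction>",
          "Next": "AttachEvidence"
        },
        "AttachEvidence": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:<region>:<account>:function:<AttachEvidenceFunction>",
          "Next": "NotifyResult"
        },
        "NotifyResult": {
          "Type": "Task",
          "Resource": "arn:aws:states:::sns:publish",
          "Parameters": {
            "TopicArn": "arn:aws:sns:<region>:<account>:<ResultTopic>",
            "Message.$": "$"
          },
          "End": true
        }
      }
    }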

Deploy the solution

The project available in the aws-audit-manager-manual-evidence-automation GitHub repository contains source code and supporting files for a serverless application you can deploy with the AWS SAM command line interface (CLI). It includes the following files and folders:

  • src: Code for the application’s Lambda functions that implement the Step Functions workflow. It also includes a Step Functions definition file.
  • template.yml: A template that defines the application’s AWS resources.

Resources for this project are defined in the template.yml file. You can update the template to add AWS resources through the same deployment process that updates your application code.
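
As an illustrative excerpt only (the logical IDs and handler path below are assumptions, not copied from the repository), a SAM template for a solution like this one declares the evidence bucket and the API-facing function along these lines:

    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31

    Parameters:
      paramBucketName:
        Type: String              # evidence bucket name, supplied at deploy time

    Resources:
      EvidenceBucket:             # stores uploaded evidence files
        Type: AWS::S3::Bucket
        Properties:
          BucketName: !Ref paramBucketName

      IngestFunction:             # Lambda function behind the API Gateway endpoint
        Type: AWS::Serverless::Function
        Properties:
          Handler: src/ingest.handler
          Runtime: nodejs14.x
          Events:
            PostEvidence:
              Type: Api           # creates a REST API with a POST route
              Properties:
                Path: /
                Method: post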

Prerequisites

This solution assumes the following:

  1. AWS Audit Manager is enabled.
  2. You have already created an assessment in AWS Audit Manager.
  3. You have the necessary tools to use the AWS SAM CLI (see details in the table that follows).

For more information about setting up Audit Manager and selecting a framework, see Getting started with Audit Manager in the blog post AWS Audit Manager Simplifies Audit Preparation.

The AWS SAM CLI is an extension of the AWS CLI that adds functionality for building and testing Lambda applications. The AWS SAM CLI uses Docker to run your functions in an Amazon Linux environment that matches Lambda. It can also emulate your application’s build environment and API.

To use the AWS SAM CLI, you need the following tools:

  • AWS SAM CLI: Install the AWS SAM CLI.
  • Node.js: Install Node.js 14, including the npm package management tool.
  • Docker: Install Docker Community Edition.
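
You can verify that each tool is available by checking its version from your terminal:

    sam --version
    node --version
    docker --version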

To deploy the solution

  1. Open your terminal and use the following command to create a folder to clone the project into, then navigate to that folder. Be sure to replace <FolderName> with your own value.

    mkdir Desktop/<FolderName> && cd $_

  2. Clone the project into the folder you just created by using the following command.

    git clone https://github.com/aws-samples/aws-audit-manager-manual-evidence-automation.git

  3. Navigate into the newly created project folder by using the following command.

    cd aws-audit-manager-manual-evidence-automation

  4. In the AWS SAM shell, use the following command to build the source of your application.

    sam build

  5. In the AWS SAM shell, use the following command to package and deploy your application to AWS. Be sure to replace <DOC-EXAMPLE-BUCKET> with your own unique S3 bucket name.

    sam deploy --guided --parameter-overrides paramBucketName=<DOC-EXAMPLE-BUCKET>

  6. When prompted, enter the AWS Region where AWS Audit Manager was configured. For the rest of the prompts, leave the default values.
  7. To activate the IAM authentication feature for API Gateway, override the default value of the following parameter during deployment (a combined deploy command is shown after this list).

    paramUseIAMwithGateway=AWS_IAM
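
For example, a deployment that sets both the bucket name and the IAM authentication parameter in a single command would look like the following. Be sure to replace the placeholder values with your own.

    sam deploy --guided --parameter-overrides paramBucketName=<DOC-EXAMPLE-BUCKET> paramUseIAMwithGateway=AWS_IAM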

To test the deployed solution

After you deploy the solution, run an invocation like the one below against an assessment by using curl. Be sure to replace <YOURAPIENDPOINT> and <AWS REGION> with your own values.

curl --location --request POST \
'https://<YOURAPIENDPOINT>.execute-api.<AWS REGION>.amazonaws.com/Prod' \
--header 'x-api-key: ' \
--form 'payload=@"<PATH TO FILE>"' \
--form 'AssessmentName="GxP21cfr11"' \
--form 'ControlSetName="General requirements"' \
--form 'ControlIdName="11.100(a)"'

Check to see that your file is correctly attached to the control for your assessment.
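
Note that if you deployed with paramUseIAMwithGateway=AWS_IAM, API Gateway expects requests signed with AWS Signature Version 4 rather than anonymous calls. As a sketch, curl 7.75 or later can sign requests natively using your access keys (temporary credentials additionally require an x-amz-security-token header):

    curl --location --request POST \
    --aws-sigv4 "aws:amz:<AWS REGION>:execute-api" \
    --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
    'https://<YOURAPIENDPOINT>.execute-api.<AWS REGION>.amazonaws.com/Prod' \
    --form 'payload=@"<PATH TO FILE>"' \
    --form 'AssessmentName="GxP21cfr11"' \
    --form 'ControlSetName="General requirements"' \
    --form 'ControlIdName="11.100(a)"'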

Form-data interface parameters

The API implements a form-data interface that expects four parameters:

  1. AssessmentName: The name of the assessment in Audit Manager. In this example, the AssessmentName is GxP21cfr11.
  2. ControlSetName: The display name of a control set within an assessment. In this example, the ControlSetName is General requirements.
  3. ControlIdName: A particular control within a control set. In this example, the ControlIdName is 11.100(a).
  4. payload: The file representing the evidence to be uploaded.

As a refresher of Audit Manager concepts, evidence is collected for a particular control. Controls are grouped into control sets. Control sets can be grouped into a particular framework. The assessment is considered an implementation, or an instance, of the framework. For more information, see AWS Audit Manager concepts and terminology.
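
For reference, once the assessment, control set, and control have been resolved from their display names, attaching the uploaded file comes down to a single BatchImportEvidenceToAssessmentControl call. The following Python sketch with boto3 illustrates the lookup-then-attach flow that the two workflow Lambda functions perform; the function and variable names are assumptions, and pagination and error handling are omitted for brevity.

    import boto3

    auditmanager = boto3.client("auditmanager")

    def attach_evidence(assessment_name, control_set_name, control_id_name, s3_path):
        """Resolve IDs from display names, then attach S3-hosted manual evidence."""
        # Look up the assessment ID from its display name
        metadata = auditmanager.list_assessments()["assessmentMetadata"]
        assessment_id = next(a["id"] for a in metadata if a["name"] == assessment_name)

        # Walk the assessment's control sets to find the matching control
        assessment = auditmanager.get_assessment(assessmentId=assessment_id)["assessment"]
        for control_set in assessment["framework"]["controlSets"]:
            if control_set["name"] != control_set_name:
                continue
            for control in control_set.get("controls") or []:
                if control["name"] == control_id_name:
                    # Attach the evidence file already uploaded to S3
                    auditmanager.batch_import_evidence_to_assessment_control(
                        assessmentId=assessment_id,
                        controlSetId=control_set["id"],
                        controlId=control["id"],
                        manualEvidence=[{"s3ResourcePath": s3_path}],
                    )
                    return

For example, attach_evidence("GxP21cfr11", "General requirements", "11.100(a)", "s3://<DOC-EXAMPLE-BUCKET>/evidence.pdf") mirrors the curl invocation shown earlier.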

To clean up the deployed solution

To clean up the solution, use the following commands to delete the AWS CloudFormation stack and your S3 bucket. Be sure to replace <YourStackId> and <DOC-EXAMPLE-BUCKET> with your own values.

aws cloudformation delete-stack --stack-name <YourStackId>
aws s3 rb s3://<DOC-EXAMPLE-BUCKET> --force

Conclusion

This solution enables better coordination between your software delivery organization and your compliance professionals, so that your organization can continuously deliver new updates without overwhelming your security professionals with manual audit review tasks.

Next steps

There are various ways to extend this solution.

  1. Update the API Lambda implementation to be a webhook for your favorite software development lifecycle (SDLC) or IT service management (ITSM) solution.
  2. Modify the steps within the Step Functions state machine to more closely match your unique compliance processes.
  3. Use AWS CodePipeline to start Step Functions state machines natively (see the sketch after this list), or integrate a variation of this solution with any continuous compliance workflow that you have.
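
As a sketch of the CodePipeline option, the pipeline’s native Step Functions action can start the state machine from a stage; the action name, ARN, and input below are placeholders, not part of this solution:

    - Name: AttachComplianceEvidence
      ActionTypeId:
        Category: Invoke
        Owner: AWS
        Provider: StepFunctions
        Version: '1'
      Configuration:
        StateMachineArn: arn:aws:states:<region>:<account>:stateMachine:<StateMachineName>
        InputType: Literal
        Input: '{"AssessmentName": "GxP21cfr11"}'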

Learn more about AWS Audit Manager, DevOps, and AWS for Health, and start building!

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Nicholas Parks

Nicholas has been using AWS since 2010 across various enterprise verticals including healthcare, life sciences, financial, retail, and telecommunications. Nicholas focuses on modernizations in pursuit of new revenue as well as application migrations. He specializes in Lean, DevOps cultural change, and Continuous Delivery.

Brian Tang

Brian Tang is an AWS Solutions Architect based out of Boston, MA. He has 10 years of experience helping enterprise customers across a wide range of industries complete digital transformations by migrating business-critical workloads to the cloud. His core interests include DevOps and serverless-based solutions. Outside of work, he loves rock climbing and playing guitar.

Formative assessment in the computer science classroom

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/research-seminar-formative-assessment-computer-science-classroom/

In computing education research, considerable focus has been put on the design of teaching materials and learning resources, and on investigating how young people learn computing concepts. But there has been less focus on assessment, particularly assessment for learning, which is called formative assessment. As classroom teachers are engaged in assessment activities all the time, it’s pretty strange that researchers in school-level computing and computer science have not put more focus on it.


That’s why in our most recent seminar, we were delighted to hear about formative assessment — assessment for learning — from Dr Shuchi Grover, of Looking Glass Ventures and Stanford University in the USA. Shuchi has a long track record of work in the learning sciences (called education research in the UK), and her contribution in the area of computational thinking has been hugely influential and widely drawn on in subsequent research.

Two types of assessment

Assessment is typically divided into two types:

  1. Summative assessment (i.e. assessing what has been learned), which typically takes place through examinations, final coursework, projects, etcetera.
  2. Formative assessment (i.e. assessment for learning), which is not aimed at giving grades and typically takes place through questioning, observation, plenary classroom activities, and dialogue with students.

Through formative assessment, teachers seek to find out where students are at, in order to use that information both to direct their preparation for the next teaching activities and to give students useful feedback to help them progress. Formative assessment can be used to surface misconceptions (or alternate conceptions) and for diagnosis of student difficulties.

Venn diagram of how formative assessment practices intersect with teacher knowledge and skills

As Shuchi outlined in her talk, a variety of activities can be used for formative assessment, for example:

  • Self- and peer-assessment activities (commonly used in schools).
  • Different forms of questioning and quizzes to support learning (not graded tests).
  • Rubrics and self-explanations (for assessing projects).

A framework for formative assessment

Shuchi described her own research in this topic, including a framework she has developed for formative assessment. This comprises three pillars:

  1. Assessment design.
  2. Teacher or classroom practice.
  3. The role of the community in furthering assessment practice.

Shuchi Grover's framework for formative assessment

Shuchi’s presentation then focused on part of the first pillar in the framework: types of assessments, and particularly types of multiple-choice questions (MCQs) that can be automatically marked or graded using software tools. Tools obviously don’t replace teachers, but they can be really useful for providing timely, short-turnaround feedback to students.

As part of formative assessment, carefully chosen questions can also be used to reveal students’ misconceptions about the subject matter — these are called diagnostic questions. Shuchi discussed how in a classroom setting, teachers can employ this kind of question to help them decide what to focus on in future lessons, and to understand their students’ alternate or different conceptions of a topic. 

Formative assessment of programming skills

The remainder of the seminar focused on the formative assessment of programming skills. There are many ways of assessing developing programming skills (see Shuchi’s slides), including Parsons problems, microworlds, hotspot items, rubrics (for artifacts), and multiple-choice questions. As an MCQ example, in the figure below you can see some snippets of block-based code; students need to read each snippet and work out what the outcome of running it will be.

A multiple-choice question based on snippets of block-based code

Questions such as this highlight that it’s important for learners to engage in code comprehension and code reading activities when learning to program. This really underlines the fact that such assessment exercises can be used to support learning just as much as to monitor progress.
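
As a simple text-based illustration (our own, not taken from Shuchi’s slides), a diagnostic code-reading question might show learners a short program and ask them to predict its output before running it:

    # What does this program print?
    total = 0
    for i in range(3):
        total = total + i
    print(total)

    # (a) 3   (b) 6   (c) 0   (d) an error
    # The answer is (a): range(3) yields 0, 1, 2. A learner who chooses (b)
    # may hold the misconception that range(3) includes the number 3.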

Formative assessment: our support for teachers

Interestingly, Shuchi commented that in her experience, teachers in the UK are more used to using code reading activities than US teachers. This may be because code comprehension activities are embedded into the curriculum materials and support for pedagogy, both of which the Raspberry Pi Foundation developed as part of the National Centre for Computing Education in England. We explicitly share approaches to teaching programming that incorporate code reading, for example the PRIMM approach. Moreover, our work in the Raspberry Pi Foundation includes the Isaac Computer Science online learning platform for A level computer science students and teachers, which is centered around different types of questions designed as tools for learning.

All these materials are freely available to teachers wherever they are based.

Further work on formative assessment

Based on her work in US classrooms researching this topic, Shuchi’s call to action for teachers was to pay attention to formative assessment in computer science classrooms and to investigate what useful tools can support them to give feedback to students about their learning. 

Advice from Shuchi Grover on how to embed formative assessment in classroom practice

Shuchi is currently involved in an NSF-funded research project called CS Assess to further develop formative assessment in computer science via a community of educators. For further reading, there are two chapters related to formative assessment in computer science classrooms in the recently published book Computer Science in K-12 edited by Shuchi.

There was much to take away from this seminar, and we are really grateful to Shuchi for her input and look forward to hearing more about her developing project.

Join our next seminar

If you missed the seminar, you can find the presentation slides and a recording of Shuchi’s talk on our seminars page.

In our next seminar on Tuesday 3 November at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PT / 18:00–19:30 CEST, I will be presenting my work on PRIMM, particularly focusing on language and talk in programming lessons. To join, simply sign up with your name and email address.

Once you’ve signed up, we’ll email you the seminar meeting link and instructions for joining. If you attended this past seminar, the link remains the same.

The post Formative assessment in the computer science classroom appeared first on Raspberry Pi.