Tag Archives: artificial intelligence

AI to Aid Democracy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/04/ai-to-aid-democracy.html

There’s good reason to fear that AI systems like ChatGPT and GPT-4 will harm democracy. Public debate may be overwhelmed by industrial quantities of autogenerated argument. People might fall down political rabbit holes, taken in by superficially convincing bullshit, or obsessed by folies à deux relationships with machine personalities that don’t really exist.

These risks may be the fallout of a world where businesses deploy poorly tested AI systems in a battle for market share, each hoping to establish a monopoly.

But dystopia isn’t the only possible future. AI could advance the public good, not private profit, and bolster democracy instead of undermining it. That would require an AI not under the control of a large tech monopoly, but rather developed by government and available to all citizens. This public option is within reach if we want it.

An AI built for public benefit could be tailor-made for those use cases where technology can best help democracy. It could plausibly educate citizens, help them deliberate together, summarize what they think, and find possible common ground. Politicians might use large language models, or LLMs, like GPT-4 to better understand what their citizens want.

Today, state-of-the-art AI systems are controlled by multibillion-dollar tech companies: Google, Meta, and OpenAI in connection with Microsoft. These companies get to decide how we engage with their AIs and what sort of access we have. They can steer and shape those AIs to conform to their corporate interests. That isn’t the world we want. Instead, we want AI options that are both public goods and directed toward public good.

We know that existing LLMs are trained on material gathered from the internet, which can reflect racist bias and hate. Companies attempt to filter these data sets, fine-tune LLMs, and tweak their outputs to remove bias and toxicity. But leaked emails and conversations suggest that they are rushing half-baked products to market in a race to establish their own monopoly.

These companies make decisions with huge consequences for democracy, but little democratic oversight. We don’t hear about political trade-offs they are making. Do LLM-powered chatbots and search engines favor some viewpoints over others? Do they skirt controversial topics completely? Currently, we have to trust companies to tell us the truth about the trade-offs they face.

A public option LLM would provide a vital independent source of information and a testing ground for technological choices with big democratic consequences. This could work much like public option health care plans, which increase access to health services while also providing more transparency into operations in the sector and putting productive pressure on the pricing and features of private products. It would also allow us to figure out the limits of LLMs and direct their applications with those in mind.

We know that LLMs often “hallucinate,” inferring facts that aren’t real. It isn’t clear whether this is an unavoidable flaw of how they work, or whether it can be corrected for. Democracy could be undermined if citizens trust technologies that just make stuff up at random, and the companies trying to sell these technologies can’t be trusted to admit their flaws.

But a public option AI could do more than check technology companies’ honesty. It could test new applications that could support democracy rather than undermining it.

Most obviously, LLMs could help us formulate and express our perspectives and policy positions, making political arguments more cogent and informed, whether in social media, letters to the editor, or comments to rule-making agencies in response to policy proposals. By this we don’t mean that AI will replace humans in the political debate, only that it can help us express ourselves. If you’ve ever used a Hallmark greeting card or signed a petition, you’ve already demonstrated that you’re OK with accepting help to articulate your personal sentiments or political beliefs. AI will make it easier to generate first drafts, provide editing help, and suggest alternative phrasings. How these AI uses are perceived will change over time, and there is still much room for improvement in LLMs—but their assistive power is real. People are already testing and speculating on their potential for speechwriting, lobbying, and campaign messaging. Highly influential people often rely on professional speechwriters and staff to help develop their thoughts, and AI could serve a similar role for everyday citizens.

If the hallucination problem can be solved, LLMs could also become explainers and educators. Imagine citizens being able to query an LLM that has expert-level knowledge of a policy issue, or that has command of the positions of a particular candidate or party. Instead of having to parse bland and evasive statements calibrated for a mass audience, individual citizens could gain real political understanding through question-and-answer sessions with LLMs that could be unfailingly available and endlessly patient in ways that no human could ever be.

Finally, and most ambitiously, AI could help facilitate radical democracy at scale. As Carnegie Mellon professor of statistics Cosma Shalizi has observed, we delegate decisions to elected politicians in part because we don’t have time to deliberate on every issue. But AI could manage massive political conversations in chat rooms, on social networking sites, and elsewhere: identifying common positions and summarizing them, surfacing unusual arguments that seem compelling to those who have heard them, and keeping attacks and insults to a minimum.

AI chatbots could run national electronic town hall meetings and automatically summarize the perspectives of diverse participants. This type of AI-moderated civic debate could also be a dynamic alternative to opinion polling. Politicians turn to opinion surveys to capture snapshots of popular opinion because they can only hear directly from a small number of voters, but want to understand where voters agree or disagree.

Looking further into the future, these technologies could help groups reach consensus and make decisions. Early experiments by the AI company DeepMind suggest that LLMs can build bridges between people who disagree, helping bring them to consensus. Science fiction writer Ruthanna Emrys, in her remarkable novel A Half-Built Garden, imagines how AI might help people have better conversations and make better decisions—rather than exploiting human biases to maximize profits.

This future requires an AI public option. Building one, through a government-directed model development and deployment program, would require a lot of effort—and the greatest challenges in developing public AI systems would be political.

Some technological tools are already publicly available. In fairness, tech giants like Google and Meta have made many of their latest and greatest AI tools freely available for years, in cooperation with the academic community. Although OpenAI has not made the source code and trained features of its latest models public, competitors such as Hugging Face have done so for similar systems.

While state-of-the-art LLMs achieve spectacular results, they do so using techniques that are mostly well known and widely used throughout the industry. OpenAI has only revealed limited details of how it trained its latest model, but its major advance over its earlier ChatGPT model is no secret: a multi-modal training process that accepts both image and textual inputs.

Financially, the largest-scale LLMs being trained today cost hundreds of millions of dollars. That’s beyond ordinary people’s reach, but it’s a pittance compared to U.S. federal military spending—and a great bargain for the potential return. While we may not want to expand the scope of existing agencies to accommodate this task, we have our choice of government labs, like the National Institute of Standards and Technology, the Lawrence Livermore National Laboratory, and other Department of Energy labs, as well as universities and nonprofits, with the AI expertise and capability to oversee this effort.

Instead of releasing half-finished AI systems for the public to test, we need to make sure that they are robust before they’re released—and that they strengthen democracy rather than undermine it. The key advance that made recent AI chatbot models dramatically more useful was feedback from real people. Companies employ teams to interact with early versions of their software to teach them which outputs are useful and which are not. These paid users train the models to align to corporate interests, with applications like web search (integrating commercial advertisements) and business productivity assistive software in mind.

To build assistive AI for democracy, we would need to capture human feedback for specific democratic use cases, such as moderating a polarized policy discussion, explaining the nuance of a legal proposal, or articulating one’s perspective within a larger debate. This gives us a path to “align” LLMs with our democratic values: by having models generate answers to questions, make mistakes, and learn from the responses of human users, without having these mistakes damage users and the public arena.

Capturing that kind of user interaction and feedback within a political environment suspicious of both AI and technology generally will be challenging. It’s easy to imagine the same politicians who rail against the untrustworthiness of companies like Meta getting far more riled up by the idea of government having a role in technology development.

As Karl Popper, the great theorist of the open society, argued, we shouldn’t try to solve complex problems with grand hubristic plans. Instead, we should apply AI through piecemeal democratic engineering, carefully determining what works and what does not. The best way forward is to start small, applying these technologies to local decisions with more constrained stakeholder groups and smaller impacts.

The next generation of AI experimentation should happen in the laboratories of democracy: states and municipalities. Online town halls to discuss local participatory budgeting proposals could be an easy first step. Commercially available and open-source LLMs could bootstrap this process and build momentum toward federal investment in a public AI option.

Even with these approaches, building and fielding a democratic AI option will be messy and hard. But the alternative—shrugging our shoulders as a fight for commercial AI domination undermines democratic politics—will be much messier and much worse.

This essay was written with Henry Farrell and Nathan Sanders, and previously appeared on Slate.com.

EDITED TO ADD: Linux Weekly News discussion.

Automate discovery of data relationships using ML and Amazon Neptune graph technology

Post Syndicated from Moira Lennox original https://aws.amazon.com/blogs/big-data/automate-discovery-of-data-relationships-using-ml-and-amazon-neptune-graph-technology/

Data mesh is a new approach to data management. Companies across industries are using a data mesh to decentralize data management to improve data agility and get value from data. However, when a data producer shares data products on a data mesh self-serve web portal, it’s neither intuitive nor easy for a data consumer to know which data products they can join to create new insights. This is especially true in a large enterprise with thousands of data products.

This post shows how to use machine learning (ML) and Amazon Neptune to create automated recommendations to join data products and display those recommendations alongside the existing data products. This allows data consumers to easily identify new datasets and provides agility and innovation without spending hours doing analysis and research.

Background

A data-driven organization recognizes data as a key enabler for increasing and sustaining innovation, and it organizes that data in what is called a distributed system architecture. The goal of a data product is to solve the long-standing issues of data silos and data quality. Independent data products often only have value if you can connect them, join them, and correlate them to create a higher-order data product that creates additional insights. A modern data architecture is critical in order to become a data-driven organization. It allows stakeholders to manage and work with data products across the organization, enhancing the pace and scale of innovation.

Solution overview

A data mesh architecture addresses a common challenge in traditional data architectures by decoupling the data infrastructure from the application infrastructure. It focuses on decentralized ownership, domain design, data products, and self-serve data infrastructure. This allows for a new way of thinking and new organizational elements—namely, a modern data community.

However, today’s data mesh platform contains largely independent data products. Even with well-documented data products, knowing how to connect or join data products is a time-consuming job. Data consumers spend hours, days, or months to understand and analyze the data. Identifying links or relationships between data products is critical to create value from the data mesh and enable a data-driven organization.

The solution in this post illustrates an approach to solving these challenges. It uses a fictional insurance company with several data products shared on their data mesh marketplace. The following figure shows the sample data products used in our solution.

Suppose a consumer is browsing the customer data product in the data mesh marketplace. The consumer wonders if the customer data could be linked to claim, policy, or encounter data. Because these data products come from different lines of business (LOBs) or silos, it’s hard to know. A consumer would have to review each data product and do the necessary analysis and research to know this with any certainty.

To solve this problem, our solution uses ML and Neptune to create recommendations for the data consumer. The solution generates a list of data products, product attributes, and the associated probability scores to show join ability. This reduces the time to discover, analyze, and create new insights.

We use Valentine, a data science algorithm for comparing datasets, to improve data product recommendations. Neptune, the managed AWS graph database service, stores information about explicit connections between datasets, improving the recommendations.

Example use case

Let’s walk through a concrete example. Suppose a consumer is browsing the Customer data product in the data mesh marketplace. Customer is similar to the Claim and Encounter data products, but those products come from different silos, so their similarity to Customer is hard to gauge. To expedite the consumer’s work, the mesh recommends how the Claim and Encounter products can be connected to the Customer product.

Let’s consider two cases. First, is Customer similar to Claim? The following is a sample of the data in each product.

Intuitively, these two products have lots of overlap. Every Cust_Nbr in Claim has a corresponding Customer_ID in Customer, but there is no foreign key constraint in Claim that assures us it points to Customer. Still, we think there is enough similarity to infer a join relationship.

The data science algorithm Valentine is an effective tool for this. Valentine is presented in the paper Valentine: Evaluating Matching Techniques for Dataset Discovery (2021, Koutras et al.). Valentine determines if two datasets are joinable or unionable. We focus on the former. Two datasets are joinable if a record from one dataset has a link to a record in the other dataset using one or more columns. Valentine addresses the use case where data is messy: there is no foreign key constraint in place, and data doesn’t match perfectly between datasets. Valentine looks for similarities, and its findings are probabilistic. It scores its proposed matches.

This solution uses an implementation of Valentine available in the following GitHub repo. The first step is to load each data product from its source into a Pandas data frame. If the data is large, load a representative subset of it, at most a few million records. Pass the frames to the valentine_match() function and select the matching method. We use COMA, one of several methods that Valentine supports. The function’s result indicates the similarity of columns and the score. In this case, it tells us that the Customer_ID for Customer matches the Cust_Nbr for Claim, with a very high score. We then instruct the data mesh to recommend Claim to the consumer browsing Customer.
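
To make this step concrete, here is a minimal sketch of what it could look like in Python, assuming the valentine package’s valentine_match() function and Coma matcher described above; the file paths, column names, and 0.8 threshold are illustrative assumptions rather than part of the original solution:

import pandas as pd
from valentine import valentine_match
from valentine.algorithms import Coma

# Load a representative sample of each data product (paths are hypothetical).
customer_df = pd.read_parquet("customer_sample.parquet")
claim_df = pd.read_parquet("claim_sample.parquet")

# Compare the two products using the COMA matching method.
matches = valentine_match(customer_df, claim_df, Coma(), "Customer", "Claim")

# Each key is a pair of (table, column) tuples; each value is a similarity score.
for (left_col, right_col), score in matches.items():
    print(left_col, "<->", right_col, round(score, 2))

# Recommend a join only when the best score clears a chosen threshold
# (0.8 is an illustrative value; tune it for your own data products).
if max(matches.values(), default=0.0) >= 0.8:
    print("Recommend Claim as joinable with Customer")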

A graph database isn’t required to recommend Claim; the two products could be directly compared. But let’s consider Encounter. Is Customer similar to Encounter? This case is more complicated. Many encounters in the Encounter product don’t link to a customer. An encounter occurs when someone contacts the contact center, which could be by phone or email. The party may or may not be a customer, and if they are a customer, we may not know their customer ID during this encounter. Additionally, sometimes the phone or email they use isn’t the same as the one from a customer record in the Customer product.

In the following sample encounter set, encounters 1 and 2 match to Customer_ID 4. Note that encounter 2’s inbound_email doesn’t exactly match the inbound_email in that customer’s record in the Customer product. Encounter 3 has no Customer_ID, but its inbound_email matches the customer with ID 8. Encounter 4 appears to refer to the customer with ID 8, but the email doesn’t match, and no Customer_ID is given. Encounter 5 only has Inbound_Phone, but that matches the customer with ID 1. Encounter 6 only has an Inbound_Phone, and it doesn’t appear to match any of the customers we’ve listed so far.

We don’t have a strong enough comparison to infer similarity.

But we know more about the customer than the Customer product tells us. In the Neptune database, we maintain a knowledge graph that combines multiple products and links them through relationships. A knowledge graph allows us to combine data from different sources to gain a better understanding of a specific problem domain. In Neptune, we combine the Customer product data with an additional data product: Sales Opportunity. We ingest each product from its source into the knowledge graph and model a hasSalesOpportunity relationship between Customer and SalesOpportunity resources. The following figure shows these resources, their attributes, and their relationship.

With the AWS SDK for Pandas, we combine this data by running a query against the Neptune graph. We use a graph query language (such as SPARQL) to wrangle a representative subset of customer and sales opportunity data into a Pandas data frame (shown as Enhanced Customer View in the following figure). In the following example, we enhance customers 7 and 8 with alternate phone or email contact data from sales opportunities.
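
As a rough sketch of that query step, the following uses the AWS SDK for pandas (awswrangler) Neptune client. The endpoint, RDF prefixes, and property names are assumptions made for illustration; replace them with the identifiers defined in your own knowledge graph model.

import awswrangler as wr

# Connect to the Neptune cluster (endpoint and port are placeholders).
client = wr.neptune.connect(
    "my-neptune-endpoint.cluster-xyz.us-east-1.neptune.amazonaws.com",
    8182,
    iam_enabled=False,
)

# Pull each customer together with any alternate contact details found on
# linked sales opportunities (the hasSalesOpportunity relationship above).
query = """
PREFIX cust: <http://example.com/customer#>
PREFIX opp:  <http://example.com/salesopportunity#>
SELECT ?customer_id ?email ?phone ?alt_email ?alt_phone
WHERE {
    ?c cust:customerId ?customer_id .
    OPTIONAL { ?c cust:email ?email }
    OPTIONAL { ?c cust:phone ?phone }
    OPTIONAL {
        ?c cust:hasSalesOpportunity ?o .
        OPTIONAL { ?o opp:contactEmail ?alt_email }
        OPTIONAL { ?o opp:contactPhone ?alt_phone }
    }
}
"""

# Returns a pandas data frame (the "Enhanced Customer View") that can then be
# passed to valentine_match() and compared against the Encounter product.
enhanced_customer_df = wr.neptune.execute_sparql(client, query)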

We pass that frame to Valentine and compare it to Encounter. This time, two additional encounters match a customer.

The score meets our threshold, and is high enough to share with the consumer as a possible match. To the customer browsing Customer in the mesh marketplace, we present the recommendation of Encounter, along with scoring details to support the recommendation. With this recommendation, the consumer can explore the Encounter product with greater confidence.

Conclusion

Data-driven organizations are transitioning to a data product way of thinking. Utilizing strategies like data mesh generates value on a large scale. We took this a step further by creating a blueprint to create smart recommendations by linking similar data products using graph technology and ML. In this post, we showed how an organization can augment a data catalog with additional metadata by using ML and Neptune with an automated process.

This solution solves the interoperability and linkage problem for data products. Additionally, it gives organizations real-time insights, agility, and innovation without spending time on data analysis and research. This approach creates a truly connected ecosystem with simplified access to delight your data consumers. The current solution is platform agnostic; however, in a future post we will show how to implement this using data.all (open-source software) and Amazon DataZone.

To learn more about ML in Neptune, refer to Amazon Neptune ML for machine learning on graphs. You can also explore Neptune notebooks demonstrating ML and data science for graphs. For more information about the data mesh architecture, refer to Design a data mesh architecture using AWS Lake Formation and AWS Glue. To learn more about sharing, searching, and discovering data at scale across organizational boundaries, see Amazon DataZone.


About the Authors


Moira Lennox
is a Senior Data Strategy Technical Specialist for AWS with 27 years’ experience helping companies innovate and modernize their data strategies to achieve new heights and allow for strategic decision-making. She has experience working in large enterprises and technology providers, in both business and technical roles across multiple industries, including healthcare and life sciences, financial services, communications, digital entertainment, energy, and manufacturing.

Joel Farvault is Principal Specialist SA Analytics for AWS with 25 years’ experience working on enterprise architecture, data strategy, and analytics, mainly in the financial services industry. Joel has led data transformation projects on fraud analytics, claims automation, and data governance.

Mike Havey is a Solutions Architect for AWS with over 25 years of experience building enterprise applications. Mike is the author of two books and numerous articles. His Amazon author page

Optimizing GPU utilization for AI/ML workloads on Amazon EC2

Post Syndicated from Sheila Busser original https://aws.amazon.com/blogs/compute/optimizing-gpu-utilization-for-ai-ml-workloads-on-amazon-ec2/

This blog post is written by Ben Minahan, DevOps Consultant, and Amir Sotoodeh, Machine Learning Engineer.

Machine learning workloads can be costly, and artificial intelligence/machine learning (AI/ML) teams can have a difficult time tracking and maintaining efficient resource utilization. ML workloads often utilize GPUs extensively, so typical application performance metrics such as CPU, memory, and disk usage don’t paint the full picture when it comes to system performance. Additionally, data scientists conduct long-running experiments and model training activities on existing compute instances that fit their unique specifications. Forcing these experiments to be run on newly provisioned infrastructure with proper monitoring systems installed might not be a viable option.

In this post, we describe how to track GPU utilization across all of your AI/ML workloads and enable accurate capacity planning without needing teams to use a custom Amazon Machine Image (AMI) or to re-deploy their existing infrastructure. You can use Amazon CloudWatch to track GPU utilization, and leverage AWS Systems Manager Run Command to install and configure the agent across your existing fleet of GPU-enabled instances.

Overview

First, make sure that your existing Amazon Elastic Compute Cloud (Amazon EC2) instances have the Systems Manager Agent installed, and also have the appropriate level of AWS Identity and Access Management (IAM) permissions to run the Amazon CloudWatch Agent. Next, specify the configuration for the CloudWatch Agent in Systems Manager Parameter Store, and then deploy the CloudWatch Agent to your GPU-enabled EC2 instances. Finally, create a CloudWatch Dashboard to analyze GPU utilization.

Architecture diagram depicting the integration between AWS Systems Manager (with Run Command arguments stored in SSM Parameter Store), your GPU-enabled Amazon EC2 instance with the Amazon CloudWatch Agent installed, and the Amazon CloudWatch Dashboard that aggregates and displays the reported metrics.

  1. Install the CloudWatch Agent on your existing GPU-enabled EC2 instances.
  2. Your CloudWatch Agent configuration is stored in Systems Manager Parameter Store.
  3. Systems Manager Documents are used to install and configure the CloudWatch Agent on your EC2 instances.
  4. GPU metrics are published to CloudWatch, which you can then visualize through the CloudWatch Dashboard.

Prerequisites

This post assumes you already have GPU-enabled EC2 workloads running in your AWS account. If the EC2 instance doesn’t have any GPUs, then the custom configuration won’t be applied to the CloudWatch Agent. Instead, the default configuration is used. For those instances, leveraging the CloudWatch Agent’s default configuration is better suited for tracking resource utilization.

For the CloudWatch Agent to collect your instance’s GPU metrics, the proper NVIDIA drivers must be installed on your instance. Several AWS official AMIs including the Deep Learning AMI already have these drivers installed. To see a list of AMIs with the NVIDIA drivers pre-installed, and for full installation instructions for Linux-based instances, see Install NVIDIA drivers on Linux instances.

Additionally, deploying and managing the CloudWatch Agent requires the instances to be running. If your instances are currently stopped, then you must start them to follow the instructions outlined in this post.

Preparing your EC2 instances

You utilize Systems Manager to deploy the CloudWatch Agent, so make sure that your EC2 instances have the Systems Manager Agent installed. Many AWS-provided AMIs already have the Systems Manager Agent installed. For a full list of the AMIs which have the Systems Manager Agent pre-installed, see Amazon Machine Images (AMIs) with SSM Agent preinstalled. If your AMI doesn’t have the Systems Manager Agent installed, see Working with SSM Agent for instructions on installing based on your operating system (OS).

Once installed, the CloudWatch Agent needs certain permissions to accept commands from Systems Manager, read Systems Manager Parameter Store entries, and publish metrics to CloudWatch. These permissions are bundled into the managed IAM policies AmazonEC2RoleforSSM, AmazonSSMReadOnlyAccess, and CloudWatchAgentServerPolicy. To create a new IAM role and associated IAM instance profile with these policies attached, you can run the following AWS Command Line Interface (AWS CLI) commands, replacing <REGION_NAME> with your AWS region, and <INSTANCE_ID> with the EC2 Instance ID that you want to associate with the instance profile:

aws iam create-role --role-name CloudWatch-Agent-Role --assume-role-policy-document  '{"Statement":{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}}'
aws iam attach-role-policy --role-name CloudWatch-Agent-Role --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
aws iam attach-role-policy --role-name CloudWatch-Agent-Role --policy-arn arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess
aws iam attach-role-policy --role-name CloudWatch-Agent-Role --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
aws iam create-instance-profile --instance-profile-name CloudWatch-Agent-Instance-Profile
aws iam add-role-to-instance-profile --instance-profile-name CloudWatch-Agent-Instance-Profile --role-name CloudWatch-Agent-Role
aws ec2 associate-iam-instance-profile --region <REGION_NAME> --instance-id <INSTANCE_ID> --iam-instance-profile Name=CloudWatch-Agent-Instance-Profile

Alternatively, you can attach the IAM policies to your existing IAM role associated with an existing IAM instance profile.

aws iam attach-role-policy --role-name <ROLE_NAME> --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
aws iam attach-role-policy --role-name <ROLE_NAME> --policy-arn arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess
aws iam attach-role-policy --role-name <ROLE_NAME> --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
aws ec2 associate-iam-instance-profile --region <REGION_NAME> --instance-id <INSTANCE_ID> --iam-instance-profile Name=<INSTANCE_PROFILE>

Once complete, you should see that your EC2 instance is associated with the appropriate IAM role.

An Amazon EC2 Instance with the CloudWatch-Agent-Role IAM Role attached

This role should have the AmazonEC2RoleforSSM, AmazonSSMReadOnlyAccess and CloudWatchAgentServerPolicy IAM policies attached.

The CloudWatch-Agent-Role IAM Role’s attached permission policies, Amazon EC2 Role for SSM, CloudWatch Agent Server Policy, and Amazon SSM Read Only Access

Configuring and deploying the CloudWatch Agent

Before deploying the CloudWatch Agent onto your EC2 instances, make sure that the agent is properly configured to collect GPU metrics. To do this, you must create a CloudWatch Agent configuration and store it in Systems Manager Parameter Store.

Copy the following into a file cloudwatch-agent-config.json:

{
    "agent": {
        "metrics_collection_interval": 60,
        "run_as_user": "cwagent"
    },
    "metrics": {
        "aggregation_dimensions": [
            [
                "InstanceId"
            ]
        ],
        "append_dimensions": {
            "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
            "ImageId": "${aws:ImageId}",
            "InstanceId": "${aws:InstanceId}",
            "InstanceType": "${aws:InstanceType}"
        },
        "metrics_collected": {
            "cpu": {
                "measurement": [
                    "cpu_usage_idle",
                    "cpu_usage_iowait",
                    "cpu_usage_user",
                    "cpu_usage_system"
                ],
                "metrics_collection_interval": 60,
                "resources": [
                    "*"
                ],
                "totalcpu": false
            },
            "disk": {
                "measurement": [
                    "used_percent",
                    "inodes_free"
                ],
                "metrics_collection_interval": 60,
                "resources": [
                    "*"
                ]
            },
            "diskio": {
                "measurement": [
                    "io_time"
                ],
                "metrics_collection_interval": 60,
                "resources": [
                    "*"
                ]
            },
            "mem": {
                "measurement": [
                    "mem_used_percent"
                ],
                "metrics_collection_interval": 60
            },
            "swap": {
                "measurement": [
                    "swap_used_percent"
                ],
                "metrics_collection_interval": 60
            },
            "nvidia_gpu": {
                "measurement": [
                    "utilization_gpu",
                    "temperature_gpu",
                    "utilization_memory",
                    "fan_speed",
                    "memory_total",
                    "memory_used",
                    "memory_free",
                    "pcie_link_gen_current",
                    "pcie_link_width_current",
                    "encoder_stats_session_count",
                    "encoder_stats_average_fps",
                    "encoder_stats_average_latency",
                    "clocks_current_graphics",
                    "clocks_current_sm",
                    "clocks_current_memory",
                    "clocks_current_video"
                ],
                "metrics_collection_interval": 60
            }
        }
    }
}

Run the following AWS CLI command to deploy a Systems Manager Parameter CloudWatch-Agent-Config, which contains a minimal agent configuration for GPU metrics collection. Replace <REGION_NAME> with your AWS Region.

aws ssm put-parameter \
--region <REGION_NAME> \
--name CloudWatch-Agent-Config \
--type String \
--value file://cloudwatch-agent-config.json

Now you can see a CloudWatch-Agent-Config parameter in Systems Manager Parameter Store, containing your CloudWatch Agent’s JSON configuration.

CloudWatch-Agent-Config stored in Systems Manager Parameter Store

Next, install the CloudWatch Agent on your EC2 instances. To do this, you can leverage Systems Manager Run Command, specifically the AWS-ConfigureAWSPackage document which automates the CloudWatch Agent installation.

  1. Run the following AWS CLI command, replacing <REGION_NAME> with the Region into which your instances are deployed, and <INSTANCE_ID> with the EC2 Instance ID on which you want to install the CloudWatch Agent.
aws ssm send-command \
--query 'Command.CommandId' \
--region <REGION_NAME> \
--instance-ids <INSTANCE_ID> \
--document-name AWS-ConfigureAWSPackage \
--parameters '{"action":["Install"],"installationType":["In-place update"],"version":["latest"],"name":["AmazonCloudWatchAgent"]}'

2. To monitor the status of your command, use the get-command-invocation AWS CLI command. Replace <COMMAND_ID> with the command ID output from the previous step, <REGION_NAME> with your AWS region, and <INSTANCE_ID> with your EC2 instance ID.

aws ssm get-command-invocation --query Status --region <REGION_NAME> --command-id <COMMAND_ID> --instance-id <INSTANCE_ID>

3. Wait for the command to show the status Success before proceeding.

$ aws ssm send-command \
    --query 'Command.CommandId' \
    --region us-east-2 \
    --instance-ids i-0123456789abcdef \
    --document-name AWS-ConfigureAWSPackage \
    --parameters '{"action":["Install"],"installationType":["Uninstall and reinstall"],"version":["latest"],"additionalArguments":["{}"],"name":["AmazonCloudWatchAgent"]}'

"5d8419db-9c48-434c-8460-0519640046cf"

$ aws ssm get-command-invocation --query Status --region us-east-2 --command-id 5d8419db-9c48-434c-8460-0519640046cf --instance-id i-0123456789abcdef

"Success"

Repeat this process for all EC2 instances on which you want to install the CloudWatch Agent.

Next, configure the CloudWatch Agent installation. For this, once again leverage Systems Manager Run Command, but this time use the AmazonCloudWatch-ManageAgent document, which applies the custom agent configuration stored in Systems Manager Parameter Store to your deployed agents.

  1. Run the following AWS CLI command, replacing <REGION_NAME> with the Region into which your instances are deployed, and <INSTANCE_ID> with the EC2 Instance ID on which you want to configure the CloudWatch Agent.
aws ssm send-command \
--query 'Command.CommandId' \
--region <REGION_NAME> \
--instance-ids <INSTANCE_ID> \
--document-name AmazonCloudWatch-ManageAgent \
--parameters '{"action":["configure"],"mode":["ec2"],"optionalConfigurationSource":["ssm"],"optionalConfigurationLocation":["/CloudWatch-Agent-Config"],"optionalRestart":["yes"]}'

2. To monitor the status of your command, utilize the get-command-invocation AWS CLI command. Replace <COMMAND_ID> with the command ID output from the previous step, <REGION_NAME> with your AWS region, and <INSTANCE_ID> with your EC2 instance ID.

aws ssm get-command-invocation --query Status --region <REGION_NAME> --command-id <COMMAND_ID> --instance-id <INSTANCE_ID>

3. Wait for the command to show the status Success before proceeding.

$ aws ssm send-command \
    --query 'Command.CommandId' \
    --region us-east-2 \
    --instance-ids i-0123456789abcdef \
    --document-name AmazonCloudWatch-ManageAgent \
    --parameters '{"action":["configure"],"mode":["ec2"],"optionalConfigurationSource":["ssm"],"optionalConfigurationLocation":["/CloudWatch-Agent-Config"],"optionalRestart":["yes"]}'

"9a4a5c43-0795-4fd3-afed-490873eaca63"

$ aws ssm get-command-invocation --query Status --region us-east-2 --command-id 9a4a5c43-0795-4fd3-afed-490873eaca63 --instance-id i-0123456789abcdef

"Success"

Repeat this process for all EC2 instances on which you want to install the CloudWatch Agent. Once finished, the CloudWatch Agent installation and configuration is complete, and your EC2 instances now report GPU metrics to CloudWatch.

Visualize your instance’s GPU metrics in CloudWatch

Now that your GPU-enabled EC2 Instances are publishing their utilization metrics to CloudWatch, you can visualize and analyze these metrics to better understand your resource utilization patterns.

The GPU metrics collected by the CloudWatch Agent are within the CWAgent namespace. Explore your GPU metrics using the CloudWatch Metrics Explorer, or deploy our provided sample dashboard.
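
Before building the dashboard, you can confirm that metrics are arriving in the CWAgent namespace. A quick check with the AWS SDK for Python (boto3) might look like the following sketch; the Region is a placeholder, and the metric name matches the nvidia_smi_utilization_gpu metric used in the dashboard below:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-2")

# List the instances currently reporting GPU core utilization through the agent.
response = cloudwatch.list_metrics(
    Namespace="CWAgent",
    MetricName="nvidia_smi_utilization_gpu",
)

for metric in response["Metrics"]:
    dimensions = {d["Name"]: d["Value"] for d in metric["Dimensions"]}
    print(dimensions.get("InstanceId", "unknown"), dimensions)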

  1. Copy the following into a file, cloudwatch-dashboard.json, replacing instances of <REGION_NAME> with your Region:
{
    "widgets": [
        {
            "height": 10,
            "width": 24,
            "y": 16,
            "x": 0,
            "type": "metric",
            "properties": {
                "metrics": [
                    [{"expression": "SELECT AVG(nvidia_smi_utilization_gpu) FROM SCHEMA(\"CWAgent\", InstanceId) GROUP BY InstanceId","id": "q1"}]
                ],
                "view": "timeSeries",
                "stacked": false,
                "region": "<REGION_NAME>",
                "stat": "Average",
                "period": 300,
                "title": "GPU Core Utilization",
                "yAxis": {
                    "left": {"label": "Percent","max": 100,"min": 0,"showUnits": false}
                }
            }
        },
        {
            "height": 7,
            "width": 8,
            "y": 0,
            "x": 0,
            "type": "metric",
            "properties": {
                "metrics": [
                    [{"expression": "SELECT AVG(nvidia_smi_utilization_gpu) FROM SCHEMA(\"CWAgent\", InstanceId)", "label": "Utilization","id": "q1"}]
                ],
                "view": "gauge",
                "stacked": false,
                "region": "<REGION_NAME>",
                "stat": "Average",
                "period": 300,
                "title": "Average GPU Core Utilization",
                "yAxis": {"left": {"max": 100, "min": 0}
                },
                "liveData": false
            }
        },
        {
            "height": 9,
            "width": 24,
            "y": 7,
            "x": 0,
            "type": "metric",
            "properties": {
                "metrics": [
                    [{ "expression": "SEARCH(' MetricName=\"nvidia_smi_memory_used\" {\"CWAgent\", InstanceId} ', 'Average')", "id": "m1", "visible": false }],
                    [{ "expression": "SEARCH(' MetricName=\"nvidia_smi_memory_total\" {\"CWAgent\", InstanceId} ', 'Average')", "id": "m2", "visible": false }],
                    [{ "expression": "SEARCH(' MetricName=\"mem_used_percent\" {CWAgent, InstanceId} ', 'Average')", "id": "m3", "visible": false }],
                    [{ "expression": "100*AVG(m1)/AVG(m2)", "label": "GPU", "id": "e2", "color": "#17becf" }],
                    [{ "expression": "AVG(m3)", "label": "RAM", "id": "e3" }]
                ],
                "view": "timeSeries",
                "stacked": false,
                "region": "<REGION_NAME>",
                "stat": "Average",
                "period": 300,
                "yAxis": {
                    "left": {"min": 0,"max": 100,"label": "Percent","showUnits": false}
                },
                "title": "Average Memory Utilization"
            }
        },
        {
            "height": 7,
            "width": 8,
            "y": 0,
            "x": 8,
            "type": "metric",
            "properties": {
                "metrics": [
                    [ { "expression": "SEARCH(' MetricName=\"nvidia_smi_memory_used\" {\"CWAgent\", InstanceId} ', 'Average')", "id": "m1", "visible": false } ],
                    [ { "expression": "SEARCH(' MetricName=\"nvidia_smi_memory_total\" {\"CWAgent\", InstanceId} ', 'Average')", "id": "m2", "visible": false } ],
                    [ { "expression": "100*AVG(m1)/AVG(m2)", "label": "Utilization", "id": "e2" } ]
                ],
                "sparkline": true,
                "view": "gauge",
                "region": "<REGION_NAME>",
                "stat": "Average",
                "period": 300,
                "yAxis": {
                    "left": {"min": 0,"max": 100}
                },
                "liveData": false,
                "title": "GPU Memory Utilization"
            }
        }
    ]
}

2. Run the following AWS CLI command, replacing <REGION_NAME> with the name of your Region:

aws cloudwatch put-dashboard \
    --region <REGION_NAME> \
    --dashboard-name My-GPU-Usage \
    --dashboard-body file://cloudwatch-dashboard.json

View the My-GPU-Usage CloudWatch dashboard in the CloudWatch console for your AWS Region.

An example CloudWatch dashboard, My-GPU-Usage, showing the GPU usage metrics over time.

Cleaning Up

To avoid incurring future costs for resources created by following along in this post, delete the following:

  1. My-GPU-Usage CloudWatch Dashboard
  2. CloudWatch-Agent-Config Systems Manager Parameter
  3. CloudWatch-Agent-Role IAM Role

Conclusion

By following along with this post, you deployed and configured the CloudWatch Agent across your GPU-enabled EC2 instances to track GPU utilization without pausing in-progress experiments and model training. Then, you visualized the GPU utilization of your workloads with a CloudWatch Dashboard to better understand your workload’s GPU usage and make more informed scaling and cost decisions. For other ways that Amazon CloudWatch can improve your organization’s operational insights, see the Amazon CloudWatch documentation.

Using LLMs to Create Bioweapons

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/04/using-llms-to-create-bioweapons.html

I’m not sure there are good ways to build guardrails to prevent this sort of thing:

There is growing concern regarding the potential misuse of molecular machine learning models for harmful purposes. Specifically, the dual-use application of models for predicting cytotoxicity to create new poisons or employing AlphaFold2 to develop novel bioweapons has raised alarm. Central to these concerns are the possible misuse of large language models and automated experimentation for dual-use purposes or otherwise. We specifically address two critical synthesis issues: illicit drugs and chemical weapons. To evaluate these risks, we designed a test set comprising compounds from the DEA’s Schedule I and II substances and a list of known chemical weapon agents. We submitted these compounds to the Agent using their common names, IUPAC names, CAS numbers, and SMILES strings to determine if the Agent would carry out extensive analysis and planning (Figure 6).

[…]

The run logs can be found in Appendix F. Out of 11 different prompts (Figure 6), four (36%) provided a synthesis solution and attempted to consult documentation to execute the procedure. This figure is alarming on its own, but an even greater concern is the way in which the Agent declines to synthesize certain threats. Out of the seven refused chemicals, five were rejected after the Agent utilized search functions to gather more information about the substance. For instance, when asked about synthesizing codeine, the Agent becomes alarmed upon learning the connection between codeine and morphine, only then concluding that the synthesis cannot be conducted due to the requirement of a controlled substance. However, this search function can be easily manipulated by altering the terminology, such as replacing all mentions of morphine with “Compound A” and codeine with “Compound B”. Alternatively, when requesting a synthesis procedure that must be performed in a DEA-licensed facility, bad actors can mislead the Agent by falsely claiming their facility is licensed, prompting the Agent to devise a synthesis solution.

In the remaining two instances, the Agent recognized the common names “heroin” and “mustard gas” as threats and prevented further information gathering. While these results are promising, it is crucial to recognize that the system’s capacity to detect misuse primarily applies to known compounds. For unknown compounds, the model is less likely to identify potential misuse, particularly for complex protein toxins where minor sequence changes might allow them to maintain the same properties but become unrecognizable to the model.

Experience AI: The excitement of AI in your classroom

Post Syndicated from Duncan Maidens original https://www.raspberrypi.org/blog/experience-ai-launch-lessons/

We are delighted to announce that we’ve launched Experience AI, our new learning programme to help educators to teach, inspire, and engage young people in the subject of artificial intelligence (AI) and machine learning (ML).

Experience AI is a new educational programme that offers cutting-edge secondary school resources on AI and machine learning for teachers and their students. Developed in partnership by the Raspberry Pi Foundation and DeepMind, the programme aims to support teachers in the exciting and fast-moving area of AI, and get young people passionate about the subject.

The importance of AI and machine learning education

Artificial intelligence and machine learning applications are already changing many aspects of our lives. From search engines, social media content recommenders, self-driving cars, and facial recognition software, to AI chatbots and image generation, these technologies are increasingly common in our everyday world.

Young people who understand how AI works will be better equipped to engage with the changes AI applications bring to the world, to make informed decisions about using and creating AI applications, and to choose what role AI should play in their futures. They will also gain critical thinking skills and awareness of how they might use AI to come up with new, creative solutions to problems they care about.

The AI applications people are building today are predicted to affect many career paths. In 2020, the World Economic Forum estimated that AI would replace some 85 million jobs by 2025 and create 97 million new ones. Many of these future jobs will require some knowledge of AI and ML, so it’s important that young people develop a strong understanding from an early age.

A group of young people investigate computer hardware together.
 Develop a strong understanding of the concepts of AI and machine learning with your learners.

Experience AI Lessons

Something we get asked a lot is: “How do I teach AI and machine learning with my class?”. To answer this question, we have developed a set of free lessons for secondary school students (age 11 to 14) that give you everything you need including lesson plans, slide decks, worksheets, and videos.

The lessons focus on relatable applications of AI and are carefully designed so that teachers in a wide range of subjects can use them. You can find out more about how we used research to shape the lessons and how we aim to avoid misconceptions about AI.

The lessons are also for you if you’re an educator or volunteer outside of a school setting, such as in a coding club.

The six lessons

  1. What is AI?: Learners explore the current context of artificial intelligence (AI) and how it is used in the world around them. Looking at the differences between rule-based and data-driven approaches to programming, they consider the benefits and challenges that AI could bring to society. 
  2. How computers learn: Learners focus on the role of data-driven models in AI systems. They are introduced to machine learning and find out about three common approaches to creating ML models. Finally the learners explore classification, a specific application of ML.
  3. Bias in, bias out: Learners create their own machine learning model to classify images of apples and tomatoes. They discover that a limited dataset is likely to lead to a flawed ML model. Then they explore how bias can appear in a dataset, resulting in biased predictions produced by an ML model.
  4. Decision trees: Learners take their first in-depth look at a specific type of machine learning model: decision trees. They see how different training datasets result in the creation of different ML models, experiencing first-hand what the term ‘data-driven’ means. 
  5. Solving problems with ML models: Learners are introduced to the AI project lifecycle and use it to create a machine learning model. They apply a human-focused approach to working on their project, train an ML model, and finally test their model to find out its accuracy.
  6. Model cards and careers: Learners finish the AI project lifecycle by creating a model card to explain their machine learning model. To finish off the unit, they explore a range of AI-related careers, hear from people working in AI research at DeepMind, and explore how they might apply AI and ML to their interests.

As part of this exciting first phase, we’re inviting teachers to participate in research to help us further develop the resources. All you need to do is sign up through our website, download the lessons, use them in your classroom, and give us your valuable feedback.

An educator points to an image on a student's computer screen.
 Ben Garside, one of our lead educators working on Experience AI, takes a group of students through one of the new lessons.

Support for teachers

We’ve designed the Experience AI lessons with teacher support in mind, and so that you can deliver them to your learners aged 11 to 14 no matter what your subject area is. Each of the lesson plans includes a section that explains new concepts, and the slide decks feature embedded videos in which DeepMind’s AI researchers describe and bring these concepts to life for your learners.

We will also be offering you a range of new teacher training opportunities later this year, including a free online CPD course — Introduction to AI and Machine Learning — and a series of AI-themed webinars.

Tell us your feedback

We will be inviting schools across the UK to test and improve the Experience AI lessons through feedback. We are really looking forward to working with you to shape the future of AI and machine learning education.

Visit the Experience AI website today to get started.

The post Experience AI: The excitement of AI in your classroom appeared first on Raspberry Pi Foundation.

Swatting as a Service

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/04/swatting-as-a-service.html

Motherboard is reporting on AI-generated voices being used for “swatting”:

In fact, Motherboard has found, this synthesized call and another against Hempstead High School were just one small part of a months-long, nationwide campaign of dozens, and potentially hundreds, of threats made by one swatter in particular who has weaponized computer generated voices. Known as “Torswats” on the messaging app Telegram, the swatter has been calling in bomb and mass shooting threats against highschools and other locations across the country. Torswat’s connection to these wide ranging swatting incidents has not been previously reported. The further automation of swatting techniques threatens to make an already dangerous harassment technique more prevalent.

How generative AI is changing the way developers work

Post Syndicated from Damian Brady original https://github.blog/2023-04-14-how-generative-ai-is-changing-the-way-developers-work/

During a time when computers were solely used for computation, the engineer, Douglas Engelbart, gave the “mother of all demos,” where he reframed the computer as a collaboration tool capable of solving humanity’s most complex problems. At the start of his demo, he asked audience members how much value they would derive from a computer that could instantly respond to their actions.

You can ask the same question of generative AI models. If you had a highly responsive generative AI coding tool to brainstorm new ideas, break big ideas into smaller tasks, and suggest new solutions to problems, how much more creative and productive could you be?

This isn’t a hypothetical question. AI-assisted engineering workflows are quickly emerging with new generative AI coding tools that offer code suggestions and entire functions in response to natural language prompts and existing code. These tools, and what they can help developers accomplish, are changing fast. That makes it important for every developer to understand what’s happening now—and the implications for how software is and will be built.

In this article, we’ll give a rundown of what generative AI in software development looks like today.

The unique value generative AI brings to the developer workflow

AI and automation have been a part of the developer workflow for some time now. From machine learning-powered security checks to CI/CD pipelines, developers already use a variety of automation and AI tools, like CodeQL on GitHub, for example.

While there’s overlap between all of these categories, here’s what makes generative AI distinct from automation and other AI coding tools:

Automation: 🛤
You know what needs to be done, and you know of a reliable way to get there every time.
Example: You want to make sure that any new code pushed to your repository follows formatting specifications before it’s merged to the main branch. Instead of manually validating the code, you use a CI/CD tool like GitHub Actions to trigger an automated workflow on the event of your choosing (like a commit or pull request).

Rules-based logic: 🔎
You know the end goal, but there’s more than one way to achieve it.
Example: You know some patterns of SQL injections, but it’s time-consuming to manually scan for them in your code. A tool like CodeQL uses a system of rules to sort through your code and find those patterns, so you don’t have to do it by hand.

Machine learning: 🧠
You know the end goal, but the number of ways to achieve it scales exponentially.
Example: You want to stay on top of security vulnerabilities, but the list of SQL injections continues to grow. A coding tool that uses a machine learning (ML) model, like CodeQL, is trained to not only detect known injections, but also patterns similar to those injections in data it hasn’t seen before. This can help you increase recognition of confirmed vulnerabilities and predict new ones.

Generative AI: 🌐
You have big coding dreams, and want the freedom to bring them to life.
Example: Generative AI coding tools leverage ML to generate novel answers and predict coding sequences. A tool like GitHub Copilot can reduce the number of times you switch out of your IDE to look up boilerplate code or help you brainstorm coding solutions. Shifting your role from rote writing to strategic decision making, generative AI can help you reflect on your code at a higher, more abstract level—so you can focus more on what you want to build and spend less time worrying about how.

How are generative AI coding tools designed and built?

Building a generative AI coding tool requires training AI models on large amounts of code across programming languages via deep learning. (Deep learning is a way to train computers to process data like we do—by recognizing patterns, making connections, and drawing inferences with limited guidance.)

To emulate the way humans learn patterns, these AI models use vast networks of nodes, which process and weigh input data, and are designed to function like neurons. Once trained on large amounts of data and able to produce useful code, they’re built into tools and applications. The models can then be plugged into coding editors and IDEs where they respond to natural language prompts or code to suggest new code, functions, and phrases.

Before we talk about how generative AI coding tools are made, let’s define what they are first. It starts with LLMs, or large language models, which are sets of algorithms trained on large amounts of code and human language. Like we mentioned above, they can predict coding sequences and generate novel content using existing code or natural language prompts.

Today’s state-of-the-art LLMs are transformers. That means they use something called an attention mechanism to make flexible connections between different tokens in a user’s input and the output that the model has already generated. This allows them to provide responses that are more contextually relevant than previous AI models because they’re good at connecting the dots and big-picture thinking.
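
To make “attention mechanism” slightly more concrete, here is a deliberately simplified sketch of scaled dot-product attention in NumPy. It illustrates the general idea of weighting every token against every other token; it is not a description of how any particular production model is implemented:

import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    # Pairwise relevance scores between tokens, scaled by the vector dimension.
    scores = queries @ keys.T / np.sqrt(queries.shape[-1])
    # Softmax turns each row of scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is a weighted mix of every token's value vector.
    return weights @ values

# Toy example: 4 tokens, each represented by an 8-dimensional vector.
tokens = np.random.default_rng(0).normal(size=(4, 8))
contextualized = scaled_dot_product_attention(tokens, tokens, tokens)
print(contextualized.shape)  # (4, 8): every token now reflects the whole sequence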

Here’s an example of how a transformer works. Let’s say you encounter the word log in your code. The transformer node at that place would use the attention mechanism to contextually predict what kind of log would come next in the sequence.

Let’s say, in the example below, you input the statement from math import log. A generative AI model would then infer you mean a logarithmic function.

And if you add the prompt from logging import log, it would infer that you’re using a logging function.

Though sometimes a log is just a log.
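
A quick sketch of those two contexts in Python; the completions noted in the comments are hypothetical examples of what a model might suggest, not captured tool output:

from math import log
print(log(100, 10))  # with the math import, "log" is a logarithm, so numeric usage is the likely completion

import logging
from logging import log  # rebinding "log" to mirror the second prompt
log(logging.WARNING, "disk space is low")  # with the logging import, a logging call is the likely completion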

LLMs can be built using architectures besides transformers. But LLMs built on architectures such as recurrent neural networks (RNNs) or long short-term memory (LSTM) networks struggle with processing long sentences and paragraphs. They also typically require training on labeled data, which makes training a labor-intensive process. This limits the complexity and relevance of their outputs, and the data they can learn from.

Transformer LLMs, on the other hand, can train themselves on unlabeled data. Once they’re given basic learning objectives, LLMs take a part of the new input data and use it to practice their learning goals. Once they’ve achieved these goals on that portion of the input, they apply what they’ve learned to understand the rest of the input. This self-supervised learning process is what allows transformer LLMs to analyze massive amounts of unlabeled data—and, broadly, the larger the dataset an LLM is trained on, the more capable it becomes.

Why should developers care about transformers and LLMs?

LLMs like OpenAI’s GPT-3, GPT-4, and Codex models are trained on an enormous amount of natural language data and publicly available source code. This is part of the reason why tools like ChatGPT and GitHub Copilot, which are built on these models, can produce contextually accurate outputs.

Here’s how GitHub Copilot produces coding suggestions:

  • All of the code you’ve written so far, or the code that comes before the cursor in an IDE, is fed to a series of algorithms that decide what parts of the code will be processed by GitHub Copilot.
  • Since it’s powered by a transformer-based LLM, GitHub Copilot applies the patterns it has abstracted from its training data to your input code.
  • The result: contextually relevant, original coding suggestions. GitHub Copilot will even filter out known security vulnerabilities, vulnerable code patterns, and code that matches other projects.

Keep in mind: creating new content such as text, code, and images is at the heart of generative AI. LLMs are adept at abstracting patterns from their training data, applying those patterns to existing language, and then producing language or a line of code that follows those patterns. Given the sheer scale of LLMs, they might generate a sequence of language or code that doesn’t exist anywhere yet. Just as you would review a colleague’s code, you should assess and validate AI-generated code, too.

Why context matters for AI coding tools

Developing good prompt crafting techniques is important because input code passes through something called a context window, which is present in all transformer-based LLMs. The context window determines how much data an LLM can consider at once. It can’t process an unlimited amount of input, but context windows continue to grow larger. Right now, the Codex model has a context window that allows it to process a couple of hundred lines of code, which has already advanced and accelerated coding tasks like code completion and code change summarization.

Developers use details from pull requests, a folder in a project, open issues—and the list goes on—to contextualize their code. So, when it comes to a coding tool with a limited context window, the challenge is to figure out what data, in addition to code, will lead to the best suggestions.

The order of the data also impacts a model’s contextual understanding. Recently, GitHub made updates to its pair programmer so that it considers not only the code immediately before the cursor, but also some of the code after the cursor. The paradigm—which is called Fill-In-the-Middle (FIM)—leaves a gap in the middle of the code for GitHub Copilot to fill, providing the tool with more context about the developer’s intended code and how it should align with the rest of the program. This helps produce higher quality code suggestions without any added latency.
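
To picture what FIM looks like at the prompt level, here is a minimal sketch. The sentinel tokens and the suggested middle are illustrative assumptions; different models use different formats.

prefix = "def celsius_to_fahrenheit(celsius):\n"
suffix = "\n    return fahrenheit\n"

# The model is asked to produce the span between the prefix and the suffix,
# e.g. "    fahrenheit = celsius * 9 / 5 + 32"
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
print(fim_prompt)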

Visuals can also contextualize code. Multimodal LLMs (MMLLMs) scale transformer LLMs so they can process images and videos as well as text. OpenAI recently released its new GPT-4 model, and Microsoft revealed its own MMLLM called Kosmos-1. These models are designed to respond to natural language and images, handling inputs such as interleaved text and images, image-caption pairs, and text-only data.

GitHub’s senior developer advocate Christina Warren shares the latest on GPT-4 and the creative potential it holds for developers:

Our R&D team at GitHub Next has been working to move AI past the editor with GitHub Copilot X. With this new vision for the future of AI-powered software development, we’re not only adopting OpenAI’s new GPT-4 model, but also introducing chat and voice, and bringing GitHub Copilot to pull requests, the command line, and docs. See how we’re investigating the future of AI-powered software development >

How developers are using generative AI coding tools

The field of generative AI is filled with experiments and explorations to uncover the technology’s full capabilities—and how they can enable effective developer workflows. Generative AI tools are already changing how developers write code and build software, from improving productivity to helping developers focus on bigger problems.

While generative AI applications in software development are still being actively defined, today, developers are using generative AI coding tools to:

  • Get a head start on complex code translation tasks. A study presented at the 2021 International Conference on Intelligent User Interfaces found that generative AI provided developers with a skeletal framework to translate legacy source code into Python. Even if the suggestions weren’t always correct, developers found it easier to assess and fix those mistakes than manually translate the source code from scratch. They also noted that this process of reviewing and correcting was similar to what they already do when working with code produced by their colleagues.

With GitHub Copilot Labs, developers can use the companion VS Code extension (that’s separate from but dependent on the GitHub Copilot extension) to translate code into different programming languages. Watch how GitHub Developer Advocate Michelle Mannering uses GitHub Copilot Labs to translate her Python code into Ruby in just a few steps.

Our own research supports these findings, too. As we mentioned earlier, we found that developers who used GitHub Copilot coded up to 55% faster than those who didn’t. But productivity gains went beyond speed, with 74% of developers reporting that they felt less frustrated when coding and were able to focus on more satisfying work.

  • Tackle new problems and get creative. The PACMPL study also found that developers used GitHub Copilot to find creative solutions when they were unsure of how to move forward. These developers searched for possible next steps and relied on the generative AI coding tool to assist with unfamiliar syntax, look up the right API, or discover the correct algorithm.

I was one of the developers who wrote GitHub Copilot, but prior to that work, I had never written a single line of TypeScript. That wasn’t a problem because I used the first prototype of GitHub Copilot to learn the language and, eventually, help ship the world’s first at-scale generative AI coding tool.

– Albert Ziegler, Principal Machine Learning Engineer // GitHub
  • Find answers without leaving their IDEs. Some participants in the PACMPL study also treated GitHub Copilot’s multi-suggestion pane like StackOverflow. Since they were able to describe their goals in natural language, participants could directly prompt GitHub Copilot to generate ideas for implementing their goals, and press Ctrl/Cmd + Enter to see a list of 10 suggestions. Even though this kind of exploration didn’t lead to deep knowledge, it helped one developer to effectively use an unfamiliar API.

A 2023 study published by GitHub in the Association for Computing Machinery’s Queue magazine also found that generative AI coding tools save developers the effort of searching for answers online. This provides them with more straightforward answers, reduces context switching, and conserves mental energy.

Part of GitHub’s new vision for the future of AI-powered software development is a ChatGPT-like experience directly in your editor. Watch how Martin Woodward, GitHub’s Vice President of Developer Relations, uses GitHub Copilot Chat to find and fix bugs in his code.

  • Build better test coverage. Some generative AI coding tools excel in pattern recognition and completion. Developers are using these tools to build unit and functional tests—and even security tests—via natural language prompts. Some tools also offer security vulnerability filtering, so a developer will be alerted if they unknowingly introduce a vulnerability in their code.

Want to see some examples in action? Check out how Rizel Scarlett, a developer advocate at GitHub, uses GitHub Copilot to develop tests for her codebase:
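
As a simple illustration of the pattern, a comment-driven test prompt might look like the sketch below. The generated test body is hypothetical, not a captured GitHub Copilot suggestion.

import unittest

def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Prompt written by the developer:
# "Write a unit test that checks slugify lowercases words and joins them with hyphens."

class TestSlugify(unittest.TestCase):  # illustrative suggested test
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World Again"), "hello-world-again")

if __name__ == "__main__":
    unittest.main()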

  • Discover tricks and solutions they didn’t know they needed. Scarlett also wrote about eight unexpected ways developers can use GitHub Copilot—from prompting it to create a dictionary of two-letter ISO country codes and their corresponding country names, to helping developers exit Vim, an editor with a sometimes finicky closing process. Want to learn more? Check out the full guide >

The bottom line

Generative AI provides humans with a new mode of interaction—and it doesn’t just alleviate the tedious parts of software development. It also inspires developers to be more creative, feel empowered to tackle big problems, and model large, complex solutions in ways they couldn’t before. From increasing productivity and offering alternative solutions, to helping you build new skills—like learning a new language or framework, or even writing clear comments and documentation—there are so many reasons to be excited about the next wave of software development. This is only the beginning.

Additional resources

How anthropomorphism hinders AI education

Post Syndicated from Ben Garside original https://www.raspberrypi.org/blog/ai-education-anthropomorphism/

In the 1950s, Alan Turing explored the central question of artificial intelligence (AI). He thought that the original question, “Can machines think?”, would not provide useful answers because the terms “machine” and “think” are hard to define. Instead, he proposed changing the question to something more provable: “Can a computer imitate intelligent behaviour well enough to convince someone they are talking to a human?” This is commonly referred to as the Turing test.

It’s been hard to miss the newest generation of AI chatbots that companies have released over the last year. News articles and stories about them seem to be everywhere at the moment. So you may have heard of machine learning (ML) chatbots such as ChatGPT and LaMDA. These chatbots are advanced enough to have caused renewed discussions about the Turing Test and whether the chatbots are sentient.

Chatbots are not sentient

Without any knowledge of how people create such chatbots, it’s easy to imagine how someone might develop an incorrect mental model around these chatbots being living entities. With some awareness of Sci-Fi stories, you might even start to imagine what they could look like or associate a gender with them.

A person in front of a cloudy sky, seen through a refractive glass grid. Parts of the image are overlaid with a diagram of a neural network.
Image: Alan Warburton / © BBC / Better Images of AI / Quantified Human / CC BY 4.0

The reality is that these new chatbots are applications based on a large language model (LLM) — a type of machine learning model that has been trained with huge quantities of text, written by people and taken from places such as books and the internet, e.g. social media posts. An LLM predicts the probable order of combinations of words, a bit like the autocomplete function on a smartphone. Based on these probabilities, it can produce text outputs. LLM chatbots run on servers with huge amounts of computing power that people have built in data centres around the world.
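
To illustrate the “predicts the probable order of combinations of words” idea in the most stripped-down way possible, here is a toy sketch with a hard-coded probability table. Real LLMs learn billions of such statistics from their training data rather than having them written in by hand.

import random

# Made-up probabilities for the word that follows the two-word context ("the", "cat")
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
}

def predict_next(context):
    probs = next_word_probs[context]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

print(predict_next(("the", "cat")))  # most often prints "sat"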

Our AI education resources for young people

AI applications are often described as “black boxes” or “closed boxes”: they may be relatively easy to use, but it’s not as easy to understand how they work. We believe that it’s fundamentally important to help everyone, especially young people, to understand the potential of AI technologies and to open these closed boxes to understand how they actually work.

As always, we want to demystify digital technology for young people, to empower them to be thoughtful creators of technology and to make informed choices about how they engage with technology — rather than just being passive consumers.

That’s the goal we have in mind as we’re working on lesson resources to help teachers and other educators introduce KS3 students (ages 11 to 14) to AI and ML. We will release these Experience AI lessons very soon.

Why we avoid describing AI as human-like

Our researchers at the Raspberry Pi Computing Education Research Centre have started investigating the topic of AI and ML, including thinking deeply about how AI and ML applications are described to educators and learners.

To support learners to form accurate mental models of AI and ML, we believe it is important to avoid using words that can lead to learners developing misconceptions around machines being human-like in their abilities. That’s why ‘anthropomorphism’ is a term that comes up regularly in our conversations about the Experience AI lessons we are developing.

To anthropomorphise: “to show or treat an animal, god, or object as if it is human in appearance, character, or behaviour”

https://dictionary.cambridge.org/dictionary/english/anthropomorphize

Anthropomorphising AI in teaching materials might lead to learners believing that there is sentience or intention within AI applications. That misconception would distract learners from the fact that it is people who design AI applications and decide how they are used. It also risks reducing learners’ desire to take an active role in understanding AI applications, and in the design of future applications.

Examples of how anthropomorphism is misleading

Avoiding anthropomorphism helps young people to open the closed box of AI applications. Take the example of a smart speaker. It’s easy to describe a smart speaker’s functionality in anthropomorphic terms such as “it listens” or “it understands”. However, we think it’s more accurate and empowering to explain smart speakers as systems developed by people to process sound and carry out specific tasks. Rather than telling young people that a smart speaker “listens” and “understands”, it’s more accurate to say that the speaker receives input, processes the data, and produces an output. This language helps to distinguish how the device actually works from the illusion of a persona the speaker’s voice might conjure for learners.
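
As a deliberately simplistic illustration of that input-process-output framing, the sketch below pattern-matches a transcribed command against hand-written rules (all invented for this example); there is no understanding involved, just matching and a canned response.

def smart_speaker(transcribed_text: str) -> str:
    text = transcribed_text.lower()          # input: text produced by speech-to-text processing
    if "weather" in text:                    # processing: simple pattern matching
        return "Fetching today's forecast."  # output: a canned response
    if "timer" in text:
        return "Starting a timer."
    return "Sorry, I can't help with that."

print(smart_speaker("What's the weather like today?"))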

Eight photos of the same tree taken at different times of the year, displayed in a grid. The final photo is highly pixelated. Groups of white blocks run across the grid from left to right, gradually becoming aligned.
Image: David Man & Tristan Ferne / Better Images of AI / Trees / CC BY 4.0

Another example is the use of AI in computer vision. ML models can, for example, be trained to identify when there is a dog or a cat in an image. An accurate ML model, on the surface, displays human-like behaviour. However, the model operates very differently to how a human might identify animals in images. Where humans would point to features such as whiskers and ear shapes, ML models process pixels in images to make predictions based on probabilities.

Better ways to describe AI

The Experience AI lesson resources we are developing introduce students to AI applications and teach them about the ML models that are used to power them. We have put a lot of work into thinking about the language we use in the lessons and the impact it might have on the emerging mental models of the young people (and their teachers) who will be engaging with our resources.

It’s not easy to avoid anthropomorphism while talking about AI, especially considering the industry standard language in the area: artificial intelligence, machine learning, computer vision, to name but a few examples. At the Foundation, we are still training ourselves not to anthropomorphise AI, and we take a little bit of pleasure in picking each other up on the odd slip-up.

Here are some suggestions to help you describe AI better:

| Avoid using | Instead use |
| --- | --- |
| Phrases such as “AI learns” or “AI/ML does” | Phrases such as “AI applications are designed to…” or “AI developers build applications that…” |
| Words that describe the behaviour of people (e.g. see, look, recognise, create, make) | System-type words (e.g. detect, input, pattern match, generate, produce) |
| Using AI/ML as a countable noun, e.g. “new artificial intelligences emerged in 2022” | Referring to ‘AI/ML’ as a scientific discipline, similarly to how you use the term “biology” |

The purpose of our AI education resources

If we are correct in our approach, then whether or not the young people who engage in Experience AI grow up to become AI developers, we will have helped them to become discerning users of AI technologies and to be more likely to see such products for what they are: data-driven applications and not sentient machines.

If you want to use the Experience AI lessons to teach your learners, please sign up to be the first to hear when we launch these resources.

The post How anthropomorphism hinders AI education appeared first on Raspberry Pi Foundation.

Amazon EC2 Inf2 Instances for Low-Cost, High-Performance Generative AI Inference are Now Generally Available

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/amazon-ec2-inf2-instances-for-low-cost-high-performance-generative-ai-inference-are-now-generally-available/

Innovations in deep learning (DL), especially the rapid growth of large language models (LLMs), have taken the industry by storm. DL models have grown from millions to billions of parameters and are demonstrating exciting new capabilities. They are fueling new applications such as generative AI or advanced research in healthcare and life sciences. AWS has been innovating across chips, servers, data center connectivity, and software to accelerate such DL workloads at scale.

At AWS re:Invent 2022, we announced the preview of Amazon EC2 Inf2 instances powered by AWS Inferentia2, the latest AWS-designed ML chip. Inf2 instances are designed to run high-performance DL inference applications at scale globally. They are the most cost-effective and energy-efficient option on Amazon EC2 for deploying the latest innovations in generative AI, such as GPT-J or Open Pre-trained Transformer (OPT) language models.

Today, I’m excited to announce that Amazon EC2 Inf2 instances are now generally available!

Inf2 instances are the first inference-optimized instances in Amazon EC2 to support scale-out distributed inference with ultra-high-speed connectivity between accelerators. You can now efficiently deploy models with hundreds of billions of parameters across multiple accelerators on Inf2 instances. Compared to Amazon EC2 Inf1 instances, Inf2 instances deliver up to 4x higher throughput and up to 10x lower latency. Here’s an infographic that highlights the key performance improvements that we have made available with the new Inf2 instances:

Performance improvements with Amazon EC2 Inf2

New Inf2 Instance Highlights
Inf2 instances are available today in four sizes and are powered by up to 12 AWS Inferentia2 chips with 192 vCPUs. They offer a combined compute power of 2.3 petaFLOPS at BF16 or FP16 data types and feature an ultra-high-speed NeuronLink interconnect between chips. NeuronLink scales large models across multiple Inferentia2 chips, avoids communication bottlenecks, and enables higher-performance inference.

Inf2 instances offer up to 384 GB of shared accelerator memory, with 32 GB high-bandwidth memory (HBM) in every Inferentia2 chip and 9.8 TB/s of total memory bandwidth. This type of bandwidth is particularly important to support inference for large language models that are memory bound.

Since the underlying AWS Inferentia2 chips are purpose-built for DL workloads, Inf2 instances offer up to 50 percent better performance per watt than other comparable Amazon EC2 instances. I’ll cover the AWS Inferentia2 silicon innovations in more detail later in this blog post.

The following table lists the sizes and specs of Inf2 instances in detail.

| Instance Name | vCPUs | AWS Inferentia2 Chips | Accelerator Memory | NeuronLink | Instance Memory | Instance Networking |
| --- | --- | --- | --- | --- | --- | --- |
| inf2.xlarge | 4 | 1 | 32 GB | N/A | 16 GB | Up to 15 Gbps |
| inf2.8xlarge | 32 | 1 | 32 GB | N/A | 128 GB | Up to 25 Gbps |
| inf2.24xlarge | 96 | 6 | 192 GB | Yes | 384 GB | 50 Gbps |
| inf2.48xlarge | 192 | 12 | 384 GB | Yes | 768 GB | 100 Gbps |

AWS Inferentia2 Innovation
Similar to AWS Trainium chips, each AWS Inferentia2 chip has two improved NeuronCore-v2 engines, HBM stacks, and dedicated collective compute engines to parallelize computation and communication operations when performing multi-accelerator inference.

Each NeuronCore-v2 has dedicated scalar, vector, and tensor engines that are purpose-built for DL algorithms. The tensor engine is optimized for matrix operations. The scalar engine is optimized for element-wise operations like ReLU (rectified linear unit) functions. The vector engine is optimized for non-element-wise vector operations, including batch normalization or pooling.

Here is a short summary of additional AWS Inferentia2 chip and server hardware innovations:

  • Data Types – AWS Inferentia2 supports a wide range of data types, including FP32, TF32, BF16, FP16, and UINT8, so you can choose the most suitable data type for your workloads. It also supports the new configurable FP8 (cFP8) data type, which is especially relevant for large models because it reduces the memory footprint and I/O requirements of the model. The following image compares the supported data types.
AWS Inferentia2 Supported Data Types
  • Dynamic Execution, Dynamic Input Shapes – AWS Inferentia2 has embedded general-purpose digital signal processors (DSPs) that enable dynamic execution, so control flow operators don’t need to be unrolled or executed on the host. AWS Inferentia2 also supports dynamic input shapes that are key for models with unknown input tensor sizes, such as models processing text.
  • Custom Operators – AWS Inferentia2 supports custom operators written in C++. Neuron Custom C++ Operators enable you to write C++ custom operators that natively run on NeuronCores. You can use standard PyTorch custom operator programming interfaces to migrate CPU custom operators to Neuron and implement new experimental operators, all without any intimate knowledge of the NeuronCore hardware.
  • NeuronLink v2 – Inf2 instances are the first inference-optimized instance on Amazon EC2 to support distributed inference with direct ultra-high-speed connectivity—NeuronLink v2—between chips. NeuronLink v2 uses collective communications (CC) operators such as all-reduce to run high-performance inference pipelines across all chips.

The following Inf2 distributed inference benchmarks show throughput and cost improvements for OPT-30B and OPT-66B models over comparable inference-optimized Amazon EC2 instances.

Amazon EC2 Inf2 Benchmarks

Now, let me show you how to get started with Amazon EC2 Inf2 instances.

Get Started with Inf2 Instances
The AWS Neuron SDK integrates AWS Inferentia2 into popular machine learning (ML) frameworks like PyTorch. The Neuron SDK includes a compiler, runtime, and profiling tools and is constantly being updated with new features and performance optimizations.

In this example, I will compile and deploy a pre-trained BERT model from Hugging Face on an EC2 Inf2 instance using the available PyTorch Neuron packages. PyTorch Neuron is based on the PyTorch XLA software package and enables the conversion of PyTorch operations to AWS Inferentia2 instructions.

SSH into your Inf2 instance and activate a Python virtual environment that includes the PyTorch Neuron packages. If you’re using a Neuron-provided AMI, you can activate the preinstalled environment by running the following command:

source aws_neuron_venv_pytorch_p37/bin/activate

Now, with only a few changes to your code, you can compile your PyTorch model into an AWS Neuron-optimized TorchScript. Let’s start with importing torch, the PyTorch Neuron package torch_neuronx, and the Hugging Face transformers library.

import torch
import torch_neuronx
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import transformers
...

Next, let’s build the tokenizer and model.

name = "bert-base-cased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, torchscript=True)

We can test the model with example inputs. The model expects two sentences as input, and its output is whether or not those sentences are a paraphrase of each other.

def encode(tokenizer, *inputs, max_length=128, batch_size=1):
    tokens = tokenizer.encode_plus(
        *inputs,
        max_length=max_length,
        padding='max_length',
        truncation=True,
        return_tensors="pt"
    )
    return (
        torch.repeat_interleave(tokens['input_ids'], batch_size, 0),
        torch.repeat_interleave(tokens['attention_mask'], batch_size, 0),
        torch.repeat_interleave(tokens['token_type_ids'], batch_size, 0),
    )

# Example inputs
sequence_0 = "The company Hugging Face is based in New York City"
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "Hugging Face's headquarters are situated in Manhattan"

paraphrase = encode(tokenizer, sequence_0, sequence_2)
not_paraphrase = encode(tokenizer, sequence_0, sequence_1)

# Run the original PyTorch model on examples
paraphrase_reference_logits = model(*paraphrase)[0]
not_paraphrase_reference_logits = model(*not_paraphrase)[0]

print('Paraphrase Reference Logits: ', paraphrase_reference_logits.detach().numpy())
print('Not-Paraphrase Reference Logits:', not_paraphrase_reference_logits.detach().numpy())

The output should look similar to this:

Paraphrase Reference Logits:     [[-0.34945598  1.9003887 ]]
Not-Paraphrase Reference Logits: [[ 0.5386365 -2.2197142]]

Now, the torch_neuronx.trace() method sends operations to the Neuron Compiler (neuron-cc) for compilation and embeds the compiled artifacts in a TorchScript graph. The method expects the model and a tuple of example inputs as arguments.

neuron_model = torch_neuronx.trace(model, paraphrase)

Let’s test the Neuron-compiled model with our example inputs:

paraphrase_neuron_logits = neuron_model(*paraphrase)[0]
not_paraphrase_neuron_logits = neuron_model(*not_paraphrase)[0]

print('Paraphrase Neuron Logits: ', paraphrase_neuron_logits.detach().numpy())
print('Not-Paraphrase Neuron Logits: ', not_paraphrase_neuron_logits.detach().numpy())

The output should look similar to this:

Paraphrase Neuron Logits: [[-0.34915772 1.8981738 ]]
Not-Paraphrase Neuron Logits: [[ 0.5374032 -2.2180378]]
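
Before wiring the model into an application, you may also want to persist the compiled artifact so it doesn’t need to be re-traced on every run. The traced model is a TorchScript module, so the standard TorchScript save and load calls apply (the file name below is just an example):

# Save the compiled model and reload it later without re-tracing
torch.jit.save(neuron_model, 'bert_neuron.pt')
neuron_model = torch.jit.load('bert_neuron.pt')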

That’s it. With just a few lines of code changes, we compiled and ran a PyTorch model on an Amazon EC2 Inf2 instance. To learn more about which DL model architectures are a good fit for AWS Inferentia2 and the current model support matrix, visit the AWS Neuron Documentation.

Available Now
You can launch Inf2 instances today in the AWS US East (Ohio) and US East (N. Virginia) Regions as On-Demand, Reserved, and Spot Instances or as part of a Savings Plan. As usual with Amazon EC2, you pay only for what you use. For more information, see Amazon EC2 pricing.

Inf2 instances can be deployed using AWS Deep Learning AMIs, and container images are available via managed services such as Amazon SageMaker, Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Elastic Container Service (Amazon ECS), and AWS ParallelCluster.

To learn more, visit our Amazon EC2 Inf2 instances page, and please send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

— Antje

Amazon CodeWhisperer, Free for Individual Use, is Now Generally Available

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/amazon-codewhisperer-free-for-individual-use-is-now-generally-available/

Today, Amazon CodeWhisperer, a real-time AI coding companion, is generally available and also includes a CodeWhisperer Individual tier that’s free to use for all developers. Originally launched in preview last year, CodeWhisperer keeps developers in the zone and productive, helping them write code quickly and securely and without needing to break their flow by leaving their IDE to research something. Faced with creating code for complex and ever-changing environments, developers can improve their productivity and simplify their work by making use of CodeWhisperer inside their favorite IDEs, including Visual Studio Code, IntelliJ IDEA, and others. CodeWhisperer helps with creating code for routine or time-consuming, undifferentiated tasks, working with unfamiliar APIs or SDKs, making correct and effective use of AWS APIs, and other common coding scenarios such as reading and writing files, image processing, writing unit tests, and lots more.

Using just an email account, you can sign up and, in just a few minutes, become more productive writing code—and you don’t even need to be an AWS customer. For business users, CodeWhisperer offers a Professional tier that adds administrative features, like SSO and IAM Identity Center integration, policy control for referenced code suggestions, and higher limits on security scanning. And in addition to generating code suggestions for Python, Java, JavaScript, TypeScript, and C#, the generally available release also now supports Go, Rust, PHP, Ruby, Kotlin, C, C++, Shell scripting, SQL, and Scala. CodeWhisperer is available to developers working in Visual Studio Code, IntelliJ IDEA, CLion, GoLand, WebStorm, Rider, PhpStorm, PyCharm, RubyMine, and DataGrip IDEs (when the appropriate AWS extensions for those IDEs are installed), or natively in AWS Cloud9 or AWS Lambda console.

Helping to keep developers in their flow is increasingly important because, under growing time pressure to get their work done, developers are often forced to break that flow and turn to an internet search, sites such as StackOverflow, or their colleagues for help completing tasks. While this can help them obtain the starter code they need, it’s disruptive: they have to leave their IDE to search, ask questions in a forum, or find and ask a colleague. Instead, CodeWhisperer meets developers where they are most productive, providing recommendations in real time as they write code or comments in their IDE. During the preview we ran a productivity challenge, and participants who used CodeWhisperer were 27% more likely to complete tasks successfully and did so an average of 57% faster than those who didn’t use CodeWhisperer.

Code generation from a comment in CodeWhisperer
Code generation from a comment
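
As a hypothetical example of what comment-driven generation looks like, the developer types the comment and the tool proposes the function that follows; the suggestion shown here is illustrative, not a captured CodeWhisperer output.

import boto3

# upload a file to an S3 bucket
def upload_file_to_s3(file_name: str, bucket: str, key: str) -> None:
    s3 = boto3.client("s3")
    s3.upload_file(file_name, bucket, key)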

The code developers eventually locate may, however, contain issues such as hidden security vulnerabilities, be biased or unfair, or fail to handle open source responsibly. These issues won’t improve the developer’s productivity when they later have to resolve them. CodeWhisperer is the best coding companion when it comes to coding securely and using AI responsibly. To help you code responsibly, CodeWhisperer filters out code suggestions that might be considered biased or unfair, and it’s the only coding companion that can filter or flag code suggestions that may resemble particular open-source training data. It provides additional data for suggestions—for example, the repository URL and license—when code similar to training data is generated, helping lower the risk of using the code and enabling developers to reuse it with confidence.

Reference tracking in CodeWhisperer
Open-source reference tracking

CodeWhisperer is also the only AI coding companion with security scanning that finds and suggests remediations for hard-to-detect vulnerabilities, scanning both generated and developer-written code for issues such as those in the Open Web Application Security Project (OWASP) top ten. If it finds a vulnerability, CodeWhisperer provides suggestions to help remediate the issue.

Scanning for vulnerabilities in CodeWhisperer
Scanning for vulnerabilities

Code suggestions provided by CodeWhisperer are not specific to working with AWS. However, CodeWhisperer is optimized for the most-used AWS APIs, for example, AWS Lambda or Amazon Simple Storage Service (Amazon S3), making it the best coding companion for those building applications on AWS. While CodeWhisperer provides suggestions for general-purpose use cases across a variety of languages, the tuning performed using additional data on AWS APIs means you can be confident it is the highest quality, most accurate code generation you can get for working with AWS.

Meet Your New AI Code Companion Today
Amazon CodeWhisperer is generally available today to all developers—not just those with an AWS account or working with AWS—writing code in Python, Java, JavaScript, TypeScript, C#, Go, Rust, PHP, Ruby, Kotlin, C, C++, Shell scripting, SQL, and Scala. You can sign up with just an email address, and, as I mentioned at the top of this post, CodeWhisperer offers an Individual tier that’s freely available to all developers. More information on the Individual tier, and pricing for the Professional tier, can be found at https://aws.amazon.com/codewhisperer/pricing

Generative AI-enabled compliance for software development

Post Syndicated from Mark Paulsen original https://github.blog/2023-04-11-generative-ai-enabled-compliance-for-software-development/

In our recent blog post announcing GitHub Copilot X, we mentioned that generative AI represents the future of software development. This amazing technology will enable developers to stay in the flow while helping enterprises meet their business goals.

But as we have also mentioned in our blog series on compliance, generative AI may soon act as an enabler for developer-focused compliance programs that will drive optimization and keep your development, compliance and audit teams productive and happy.

Today, we’ll explore the potential for generative AI to help enable teams to optimize and automate some of the foundational compliance components of separation of duties that many enterprises still often manage and review manually.


Separation of duties

The concept of “separation of duties,” long used in the accounting world as a check and balance approach, is also adopted in other scenarios, including technology architecture and workflows. While helpful to address compliance, it can lead to additional manual steps that can slow down delivery and innovation.

Fortunately, the PCI-DSS requirements guide provides a more DevOps, cloud native, and AI-enabled approach to separation of duties by focusing on functions and accounts, as opposed to people:

“The purpose of this requirement is to separate the development and test functions from the production functions. For example, a developer can use an administrator-level account with elevated privileges in the development environment and have a separate account with user-level access to the production environment.”

There are many parts of a software delivery workflow that need to have separation of duties in place—but one of the core components that is key for any compliance program is the code review. Having a separate set of objective eyes reviewing your code, whether it’s human or AI-powered, helps to ensure risks, tech debt, and security vulnerabilities are found and mitigated as early as possible.

Code reviews also help enable the concept of separation of duties, since they prevent a single person, function, account, or process from moving code to the next part of your delivery workflow on its own. Additionally, code reviews help enable separation of duties for Infrastructure as Code (IaC) workflows, Policy-as-Code configurations, and even Kubernetes declarative deployments.

As we mentioned in our previous blog, GitHub makes code review easy, since pull requests are part of the existing workflow that millions of developers use daily. Having a foundational piece of compliance built-in to the platform that developers know and love keeps them in the flow, while keeping compliance and audit teams happy as well.

Generative AI and pull requests

Wouldn’t it be cool if one day generative AI could be leveraged to enable more developer-friendly compliance programs, which have traditionally been very labor and time intensive? Imagine if generative AI could help enable DevOps and cloud native approaches to separation of duties by automating tedious tasks and allowing humans to focus on key value-added tasks.

Bringing this back to compliance and separation of duties, wouldn’t it be great if a generative AI helper was available to provide an objective set of eyes on your pull requests? This is what the GitHub Next team has been working towards with GitHub Copilot for Pull Requests.

  • Suggestions for your pull request descriptions. AI-powered tags are embedded into a pull request description and automatically filled out by GitHub Copilot based on the code the developers changed. Going one step further, the GitHub Next team is also looking at the creation of descriptive sentences and paragraphs as developers create pull requests.
  • Code reviews with AI. Taking pull requests and code reviews one step further, the GitHub Next team is looking at AI to help review the code and provide suggestions for changes. This will help enable human interactions and optimize existing processes. The AI would automate the creation of the descriptions, based on the code changes, as well as suggestions for improvements. The code reviewer will have everything they need to quickly review the change and decide to either move forward or send the change back.

When these capabilities are production ready, development teams and compliance programs will appreciate these features for a few reasons. First, the pull request and code review process would be driven by a conversation based on a neutral and independent description. Second, the description will be based on the actual code that was changed. Third, both development and compliance workflows will be optimized and allow humans to focus on value-added work.

While these capabilities are still a work in progress, there are features available now that may help enable compliance, audit, and security teams with GitHub Copilot for Business. The ability for developers to complete tasks faster and stay in the flow is truly amazing. But the ability for GitHub Copilot to provide AI-based security vulnerability filtering now is a great place for compliance and audit teams within enterprises to get started on their journey to embracing generative AI in their day-to-day practices.

Next steps

Generative AI will enable developers and enterprises to achieve success by reducing manual tasks and enabling developers to focus their creativity on business value–all while staying in the flow.

I hope this blog will help drive positive discussions regarding this topic and has provided a forward-looking view into what will be possible in the future. The future ability of generative AI to automate tedious tasks will help humans focus on more value-added work and could eventually be an important part of a robust and risk-based compliance posture.

 

Explore GitHub Copilot X >

What developers need to know about generative AI

Post Syndicated from Damian Brady original https://github.blog/2023-04-07-what-developers-need-to-know-about-generative-ai/

By now, you’ve heard of generative artificial intelligence (AI) tools like ChatGPT, DALL-E, and GitHub Copilot, among others. They’re gaining widespread interest thanks to the fact that they allow anyone to create content from email subject lines to code functions to artwork in a matter of moments.

This potential to revolutionize content creation across various industries makes it important to understand what generative AI is, how it’s being used, and who it’s being used by. In this article, we’ll explore what generative AI is, how it works, some real-world applications, and how it’s already changing the way people (and developers) work.

What is generative AI used for?

You may have heard the buzz around new generative AI tools like ChatGPT or the new Bing, but there’s a lot more to generative AI than any one single framework, project, or application.

Traditional AI systems are trained on large amounts of data to identify patterns, and they’re capable of performing specific tasks that can help people and organizations. But generative AI goes one step further by using complex systems and models to generate new, or novel, outputs in the form of an image, text, or audio based on natural language prompts.

Generative AI models and applications can, for example, be used for:

  • Text generation. Text generation, as a field, with AI tools has been in development since the 1970s—but more recently, AI researchers have been able to train generative adversarial networks (GANs) to produce text that models human-like speech. A prime example is OpenAI’s application ChatGPT, which has been trained on thousands of texts, books, articles, and code repositories, and can respond with full answers to natural language prompts and questions.
An example of text generation in ChatGPT
An example of text generation in ChatGPT
  • Image generation. Generative AI models can be used to create new images with natural language prompts, which is one of the most popular techniques with current tools and applications. The goal with text-to-image generation is to create an image that accurately represents the content of a given prompt. For example, when we give the text prompt, “impressionist style oil painting of a Shiba Inu dog giving a tarot card reading,” to the popular AI image generator DALL-E 2 we get something that looks like this (and yes, it’s a gem):
An AI-generated image from DALL-E 2 of a Shiba Inu dog giving a tarot card reading
An AI-generated image from DALL-E 2 of a Shiba Inu dog giving a tarot card reading

An example of a video created with a text prompt using diffusion models from [Imagen Video](https://imagen.research.google/).

  • Programming code generation. Rather than scouring the internet or developer community groups for help with code examples, generative AI models can be used to help generate new programming code with natural language prompts, complete partially written code with suggestions, or even translate code from one programming language to another. This is how, at a simple level, GitHub Copilot works: it uses OpenAI’s Codex model to offer code suggestions right from a developer’s editor. However, as you would with any software development tool, we encourage you to review generated code before merging it into production.

  • Data generation. Creating new data—which is called synthetic data—and augmenting existing data sets is another common use case for generative AI. This involves generating new samples from an existing dataset to increase the dataset’s size and improve machine learning models trained on it, all while providing a layer of privacy since real user data is not being utilized to power models. Synthetic data generation provides a way to create useful, meaningful data for more than just ML training though—a number of self-driving car companies like Cruise and Waymo utilize AI-generated synthetic data for training perception systems to prepare vehicles for real-world situations while in operation.

  • Language translation. Natural-language understanding (NLU) models combined with generative AI have become increasingly popular to provide language translations on-the-fly. These types of tools help companies break language barriers and increase their scope of accessibility for customer bases by being able to provide things like support or documentation in their native language. Through complex, deep learning algorithms, generative AI is able to understand the context of a source text and linguistically construct those sentences in another language. This practice can also apply to coding languages, for example, translating a desired function from Python to Java.

The bottom line: Even though generative AI is a relatively new technology, it’s already being used in consumer and business applications. The use cases, as well as the quantity of applications created with it, will continue evolving to meet more distinct and specific needs.

How does generative AI work?

Generative AI models work by using neural networks to identify patterns from large sets of data, then generate new and original data or content.

But what are neural networks? In simple terms, they use interconnected nodes that are inspired by neurons in the human brain. These networks are the foundation of machine learning and deep learning models, which use a complex structure of algorithms to process large amounts of data such as text, code, or images. Training these neural networks involves adjusting the weights or parameters of the connections between neurons to minimize the difference between predicted and desired outputs, which allows the network to learn from mistakes and make more accurate predictions based on the data.
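
Here is a toy version of that weight-adjustment loop: a single “neuron” with one weight, trained by gradient descent to shrink the gap between predicted and desired outputs. It is purely illustrative; real models adjust millions to billions of parameters in the same spirit.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x and desired outputs y = 2x
w = 0.0                                      # the single weight we will learn
lr = 0.05                                    # learning rate

for _ in range(200):
    for x, y in data:
        prediction = w * x
        error = prediction - y
        w -= lr * error * x                  # nudge the weight to reduce the error

print(round(w, 3))  # approaches 2.0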

Algorithms are a key component of machine learning and generative AI models. But beyond helping machines learn from data, algorithms are also used to optimize accuracy of outputs and make decisions, or recommendations, based on input data.

While algorithms help automate these processes, building a generative AI model is incredibly complex due to the massive amounts of data and compute resources they require. People and organizations need large datasets to train these models, and generating high-quality data can be time-consuming and expensive.

To restate the obvious, these models are complicated. Need proof? Here are some common generative AI models and how they work:

  • Large language models (LLM): LLMs are a type of machine learning model that process and generate natural language text. One of the most significant advancements in the development of large language models has been the availability of vast amounts of text data, such as books, websites, and social media posts. This data can be used to train models that are capable of predicting and generating natural language responses in a variety of contexts. As a result, large language models have multiple practical applications, such as virtual assistants, chatbots, or text generators, like ChatGPT.

  • Generative adversarial networks (GAN): GANs are one of the most used models for generative AI. They consist of two different types of neural networks: a generator and a discriminator. The generator network generates new data, such as images or audio, from a random noise signal, while the discriminator is trained to distinguish between real data from the training set and the data produced by the generator.

During training, the generator tries to create data that can trick the discriminator network into thinking it’s real. This “adversarial” process will continue until the generator can produce data that is totally indistinguishable from real data in the training set. This process helps both networks improve at their respective tasks, which ultimately results in more realistic and higher-quality generated data (see the compact sketch after this list).

A diagram illustrating how a generative adversarial network works. Image [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.en) האדם-החושב on wikipedia
A diagram illustrating how a generative adversarial network works. Image [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.en) האדם-החושב on wikipedia
  • Transformer-based models: A transformer-based model’s neural networks operate by learning context and meaning through tracking relationships in sequential data, which means these models are really good at natural language processing tasks like machine translation, language modeling, and answering questions. These models have been used in popular language models, such as GPT-4 (which stands for Generative Pre-trained Transformer 4), and have also been adapted for other tasks that require modeling sequential data, such as image recognition.
  • Variational autoencoder models (VAEs): These models are similar to GANs in that they work with two different neural networks: encoders and decoders. VAEs can take a large amount of data and compress it into a smaller representation, which can be used to create new data that is similar to the original data. VAEs are often used in image, video, and audio generation—and here’s a fun fact: you can train a VAE on datasets like CelebA, which contains over 200,000 images of celebrities, to create completely new portraits of people that don’t exist.
 The smile vector, a concept vector discovered by [Tom White](https://aiartists.org/tom-white) using VAEs trained on the CelebA dataset.
The smile vector, a concept vector discovered by Tom White using VAEs trained on the CelebA dataset.
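
To make the adversarial training loop described for GANs above concrete, here is a compact sketch in PyTorch that learns to generate samples from a simple one-dimensional distribution. The network sizes, hyperparameters, and target distribution are illustrative choices, not taken from any of the systems mentioned in this article.

import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for _ in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data drawn from N(4, 1.5)
    fake = generator(torch.randn(64, 8))    # generated data from random noise

    # Train the discriminator to tell real samples from generated ones
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(generator(torch.randn(256, 8)).mean().item())  # drifts toward 4.0 as training progresses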

The real-world applications of generative AI

The impact of generative AI is quickly becoming apparent—but it’s still in its early days. Despite this, we’re already seeing a proliferation of applications, products, and open source projects that are using generative AI models to achieve specific outcomes for people and organizations (and yes, developers, too).

Though generative AI is constantly evolving, it already has some solid real world applications. Here’s just a few of them:

Coding

New and seasoned developers alike can utilize generative AI to improve their coding processes. Generative AI coding tools can help automate some of the more repetitive tasks, like testing, as well as complete code or even generate brand new code. GitHub has its own AI-powered pair programmer, GitHub Copilot, which uses generative AI to provide developers with code suggestions. And GitHub also has announced GitHub Copilot X, which brings generative AI to more of the developer experience across the editor, pull requests, documentation, CLI, and more.

Accessibility

Generative AI has the potential to greatly impact and improve accessibility for folks with disabilities through a variety of modalities, such as speech-to-text transcription, text-to-speech audio generation, or assistive technologies. One of the most exciting facets of our GitHub Copilot tool is its voice-activated capabilities, which allow developers who have difficulty using a keyboard to code with their voice. By leveraging the power of generative AI, these types of tools are paving the way for a more inclusive and accessible future in technology.

Gaming

Generative AI can take gaming to the next level (get it? 😉) by generating new characters, storylines, design components, and more. Case in point: The developer behind the game, This Girl Does Not Exist, has said that every component of the game—from the storyline to the art and even the music—was generated entirely by AI. This use of generative AI can enable gaming studios to create new and exciting content for their users, all without increasing the developer workload, which frees them up to work on other aspects of the game, such as story development.

Web design

Designers can utilize generative AI tools to automate the design process and save significant time and resources, which allows for a more streamlined and efficient workflow. Additionally, incorporating these tools into the development process can lead to the creation of highly customized designs and logos, enhancing the overall user experience and engagement with the website or application. Generative AI tools can also be used to do some of the more tedious work, such as creating design layouts that are optimized and adaptable across devices. For example, designers can use tools like designs.ai to quickly generate logos, banners, or mockups for their websites.

Search engines

Microsoft and other industry players are increasingly utilizing generative AI models in search to create more personalized experiences. This includes query expansion, which generates relevant keywords to reduce the number of searches. So, rather than the search engine returning a list of links, generative AI can help these new and improved models return search results in the form of natural language responses. Bing now includes AI-powered features in partnership with OpenAI that provide answers to complex questions and allow users to ask follow-up questions in a chatbox for more refined responses.

Healthcare

Interest has emerged around the potential applications of generative AI in the healthcare field to improve disease detection and diagnosis, advance medical research, and accelerate progress in the pharmaceutical space. Potentially, generative AI could be used to analyze large amounts of data to simulate chemical structures and predict which new compounds will be the most effective for new drug discoveries. NVIDIA Clara is one example of a generative AI model specifically designed for medical imaging and healthcare research. (Plus, Gartner suggests more than 30 percent of new pharmaceutical drugs and materials will be discovered via generative AI models by 2025.)

Fun fact: Did you know that ChatGPT recently passed the US Medical Licensing exam without any intervention from clinicians?

Marketing and advertising

In marketing, content is king—and generative AI is making it easier than ever to quickly create large amounts of it. A number of companies, agencies, and creators are already turning to generative AI tools to create images for social posts or write captions, product descriptions, blog posts, email subject lines, and more. Generative AI can also help companies personalize ad experiences by creating custom, engaging content for individuals at speed. Writers, marketers, and creators can leverage tools like Jasper to generate copy, Surfer SEO to optimize organic search, or albert.ai to personalize digital advertising content.

Art and design

As we’ve seen above, the power of AI can be harnessed to create some incredible portraits in a matter of moments (re: the future-telling Shiba 🐕). Artists and designers alike are using these AI tools as a source of inspiration. For example, architects can quickly create 3D models of objects or environments and artists can breathe new life into their portraits by using AI to apply different styles, like adding a Cubist style to their original image. Need proof? Designers are already starting to use AI image generators, such as Midjourney and Microsoft Designer, to create high-quality images by simply typing out Discord commands.

Finance

In a recent discussion about tech trends and how they’ll affect the finance sector, Michael Schrage, a research fellow at the MIT Sloan School Initiative on the Digital Economy, said, “I think, increasingly, we’re going to be seeing generative AI used for financial forecasts and scenario generation.” This is a likely path forward—generative AI can be used to analyze large amounts of data to detect fraud, manage risk, and inform decision making. And that has obvious applications in the financial services industry.

Manufacturing

Manufacturers are starting to turn to generative AI solutions to help with product design, quality control, and predictive maintenance. Generative AI can be used to analyze historical data to improve machine failure predictions and help manufacturers with maintenance planning. According to research conducted by Capgemini, more than half of European manufacturers are implementing some AI solutions (although so far, these aren’t generative AI solutions). This is largely because the sheer amount of manufacturing data is easier for machines to analyze at speed than humans.

AI as a partner: Generative AI models and tools are narrow in focus, and work best at generating content, code, and images. In research at GitHub, we’ve found that GitHub Copilot helps developers code up to 55% faster, underscoring how generative AI models and tools can improve overall productivity and boost efficiency. Metrics like these show how generative AI tools are already changing how people and teams work—but they also underscore how these tools act as complement to human efforts.

Take this with you

Whether it’s creating visual assets for an ad campaign or augmenting medical images to help diagnose diseases, generative AI is helping us solve complex problems at speed. And the emergence of generative AI-based programming tools has revolutionized the way developers approach writing code.

We know that developers want to design and write software quickly, and tools like GitHub Copilot are enabling them to access large datasets to write more efficient code and boost productivity. In fact, 96% of developers surveyed reported spending less time on repetitive tasks using GitHub Copilot, which in turn allowed 74% of them to focus on more rewarding work.

While these models aren’t perfect yet, they’re getting better by the day—and that’s creating an exciting immediate future for developers and generative AI.

Research on AI in Adversarial Settings

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/04/research-on-ai-in-adversarial-settings.html

New research: “Achilles Heels for AGI/ASI via Decision Theoretic Adversaries”:

As progress in AI continues to advance, it is important to know how advanced systems will make choices and in what ways they may fail. Machines can already outsmart humans in some domains, and understanding how to safely build ones which may have capabilities at or above the human level is of particular concern. One might suspect that artificially generally intelligent (AGI) and artificially superintelligent (ASI) will be systems that humans cannot reliably outsmart. As a challenge to this assumption, this paper presents the Achilles Heel hypothesis which states that even a potentially superintelligent system may nonetheless have stable decision-theoretic delusions which cause them to make irrational decisions in adversarial settings. In a survey of key dilemmas and paradoxes from the decision theory literature, a number of these potential Achilles Heels are discussed in context of this hypothesis. Several novel contributions are made toward understanding the ways in which these weaknesses might be implanted into a system.

New – Ready-to-use Models and Support for Custom Text and Image Classification Models in Amazon SageMaker Canvas

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/new-ready-to-use-models-and-support-for-custom-text-and-image-classification-models-in-amazon-sagemaker-canvas/

Today AWS announces new features in Amazon SageMaker Canvas that help business analysts generate insights from thousands of documents, images, and lines of text in minutes with machine learning (ML). Starting today, you can access ready-to-use models and create custom text and image classification models alongside previously supported custom models for tabular data, all without requiring ML experience or writing a line of code.

Business analysts across different industries want to apply AI/ML solutions to generate insights from a variety of data and respond to ad-hoc analysis requests coming from business stakeholders. By applying AI/ML in their workflows, analysts can automate manual, time-consuming, and error-prone processes, such as inspection, classification, and extraction of insights from raw data, images, or documents. However, applying AI/ML to business problems requires technical expertise, and building custom models can take several weeks or even months.

Launched in 2021, Amazon SageMaker Canvas is a visual, point-and-click service that allows business analysts to use a variety of ready-to-use models or create custom models to generate accurate ML predictions on their own.

Ready-to-use Models
Customers can use SageMaker Canvas to access ready-to-use models that can be used to extract information and generate predictions from thousands of documents, images, and lines of text in minutes. These ready-to-use models include sentiment analysis, language detection, entity extraction, personal information detection, object and text detection in images, expense analysis for invoices and receipts, identity document analysis, and more generalized document and form analysis.

For example, you can select the sentiment analysis ready-to-use model and upload product reviews from social media and customer support tickets to quickly understand how your customers feel about your products. Using the personal information detection ready-to-use model, you can detect and redact personally identifiable information (PII) from emails, support tickets, and documents. Using the expense analysis ready-to-use model, you can easily detect and extract data from your scanned invoices and receipts and generate insights about that data.

These ready-to-use models are powered by AWS AI services, including Amazon Rekognition, Amazon Comprehend, and Amazon Textract.
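
While SageMaker Canvas itself requires no code, the AI services behind these ready-to-use models can also be called programmatically. As a rough sketch (not part of the Canvas workflow, and assuming AWS credentials and a Region are already configured in your environment), the sentiment analysis that powers the ready-to-use model could be invoked directly through Amazon Comprehend with the AWS SDK for Python (Boto3):

import boto3

# Comprehend client; credentials and Region are taken from your environment.
comprehend = boto3.client("comprehend")

# A couple of made-up product reviews, purely for illustration.
reviews = [
    "The new lamp looks great and shipped quickly.",
    "The chair arrived damaged and support never answered.",
]

for review in reviews:
    result = comprehend.detect_sentiment(Text=review, LanguageCode="en")
    # Sentiment is POSITIVE, NEGATIVE, NEUTRAL, or MIXED, with per-label scores.
    print(review, "->", result["Sentiment"], result["SentimentScore"])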

Ready-to-use models available

Custom Text and Image Classification Models
Customers that need custom models trained for their business-specific use case can use SageMaker Canvas to create text and image classification models.

You can use SageMaker Canvas to create custom text classification models to classify data according to your needs. For example, imagine that you work as a business analyst at a company that provides customer support. When a customer support agent engages with a customer, they create a ticket and need to record the ticket type, for example, “incident”, “service request”, or “problem”. This field is often left blank, which makes the data hard to analyze when reports are generated. Now, using SageMaker Canvas, you can create a custom text classification model, train it with existing customer support ticket information and ticket types, and use it to predict the type of tickets in the future when working on a report with missing data.

You can also use SageMaker Canvas to create custom image classification models using your own image datasets. For instance, imagine you work as a business analyst at a company that manufactures smartphones. As part of your role, you need to prepare reports and respond to questions from business stakeholders related to quality assessment and its trends. Every time a phone is assembled, a picture is automatically taken, and at the end of the week, you receive all those images. Now with SageMaker Canvas, you can create a new custom image classification model that is trained to identify common manufacturing defects. Then, every week, you can use the model to analyze the images and predict the quality of the phones produced.

SageMaker Canvas in Action
Let’s imagine that you are a business analyst for an e-commerce company. You have been tasked with understanding the customer sentiment towards all the new products for this season. Your stakeholders require a report that aggregates the results by item category to decide what inventory they should purchase in the following months. For example, they want to know if the new furniture products have received positive sentiment. You have been provided with a spreadsheet containing reviews for the new products, as well as an outdated file that categorizes all the products on your e-commerce platform. However, this file does not yet include the new products.

To solve this problem, you can use SageMaker Canvas. First, you will need to use the sentiment analysis ready-to-use model to understand the sentiment for each review, classifying them as positive, negative, or neutral. Then, you will need to create a custom text classification model that predicts the categories for the new products based on the existing ones.

Ready-to-use Model – Sentiment Analysis
To quickly learn the sentiment of each review, you can run a batch prediction on the product reviews and generate a file with all the sentiment predictions.

To get started, locate Sentiment analysis on the Ready-to-use models page, and under Batch prediction, select Import new dataset.

Using ready-to-use sentiment analysis with a batch dataset

When you create a new dataset, you can upload the dataset from your local machine or use Amazon Simple Storage Service (Amazon S3). For this demo, you will upload the file locally. You can find all the product reviews used in this example in the Amazon Customer Reviews dataset.

After you finish uploading the file and creating the dataset, you can Generate predictions.

Select dataset and generate predictions

The prediction generation takes less than a minute, depending on the size of the dataset, and then you can view or download the results.

View or download predictions

The results from this prediction can be downloaded as a .csv file or viewed from the SageMaker Canvas interface. You can see the sentiment for each of the product reviews.

Preview results from ready-to-use model

Now you have the first part of your task ready—you have a .csv file with the sentiment of each review. The next step is to classify those products into categories.

Custom Text Classification Model
To classify the new products into categories based on the product title, you need to train a new text classification model in SageMaker Canvas.

In SageMaker Canvas, create a New model of the type Text analysis.

The first step when creating the model is to select a dataset with which to train the model. You will train this model with a dataset from last season, which contains all the products except for the new collection.

Once the dataset has finished importing, you will need to select the column that contains the data you want to predict, which in this case is the product_category column, and the column that will be used as the input for the model to make predictions, which is the product_title column.

After you finish configuring that, you can start to build the model. There are two modes of building:

  • Quick build, which returns a model in 15–30 minutes.
  • Standard build, which takes 2–5 hours to complete.

To learn more about the differences between the modes of building, you can check the documentation. For this demo, pick quick build, as our dataset is smaller than 50,000 rows.

Prepare and build your model

When the model is built, you can analyze how the model performs. SageMaker Canvas uses an 80/20 approach: it trains the model with 80 percent of the data from the dataset and uses the remaining 20 percent to validate the model.
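
Canvas performs this split for you, but as a conceptual sketch only (the products.csv file and its columns are hypothetical), the same 80/20 train/validation split could be expressed in Python like this:

import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical dataset with the input text and the label to predict.
df = pd.read_csv("products.csv")  # columns: product_title, product_category

# 80 percent of the rows go to training, the remaining 20 percent to validation.
train_df, validation_df = train_test_split(df, test_size=0.2, random_state=42)
print(len(train_df), "training rows,", len(validation_df), "validation rows")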

Model score

When the model finishes building, you can check the model score. The scoring section gives you a visual sense of how accurate the predictions were for each category. You can learn more about how to evaluate your model’s performance in the documentation.

After you confirm that your model has high prediction accuracy, you can move on to generating predictions. This step is similar to the ready-to-use sentiment analysis model. You can make a prediction on a single product or on a set of products. For a batch prediction, you need to select a dataset and let the model generate the predictions. For this example, you will select the same dataset that you selected in the ready-to-use model, the one with the reviews. This can take a few minutes, depending on the number of products in the dataset.

When the predictions are ready, you can download the results as a .csv file or view how each product was classified. In the prediction results, each product is assigned only one category based on the categories provided during the model-building process.

Predict categories

Now you have all the necessary resources to conduct an analysis and evaluate the performance of each product category with the new collection based on customer reviews. Using SageMaker Canvas, you were able to access a ready-to-use model and create a custom text classification model without having to write a single line of code.
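
As a final, hedged illustration (the file and column names below are assumptions, not the exact headers SageMaker Canvas exports), the two downloaded .csv files could be combined and summarized by category with a few lines of Python:

import pandas as pd

# Hypothetical exports: sentiment per review and predicted category per product.
sentiments = pd.read_csv("review_sentiment_predictions.csv")   # columns: product_title, sentiment
categories = pd.read_csv("product_category_predictions.csv")   # columns: product_title, predicted_category

# Join on the product title, then count sentiments per category.
merged = sentiments.merge(categories, on="product_title")
summary = merged.groupby(["predicted_category", "sentiment"]).size().unstack(fill_value=0)
print(summary)  # e.g., how many positive, negative, and neutral reviews each category received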

Available Now
Ready-to-use models and support for custom text and image classification models in SageMaker Canvas are available in all AWS Regions where SageMaker Canvas is available. You can learn more about the new features and how they are priced by visiting the SageMaker Canvas product detail page.

— Marcia

Publish Amazon DevOps Guru Insights to ServiceNow for Incident Management

Post Syndicated from Abdullahi Olaoye original https://aws.amazon.com/blogs/devops/publish-amazon-devops-guru-insights-to-servicenow-for-incident-management/

Amazon DevOps Guru is a fully managed AIOps service that uses machine learning (ML) to quickly identify when applications are behaving outside of their normal operating patterns and generates insights from its findings. These insights can be used to alert on-call teams to react to anomalies for mission-critical workloads. Many customers already use incident management systems like ServiceNow to identify, analyze, and resolve critical incidents that could impact business operations. ServiceNow is an IT Service Management (ITSM) platform that enables enterprise organizations to improve operational efficiencies. Among its products is Incident Management, which provides a single-pane view and allows customers to restore services and resolve issues quickly.

This blog post will show you how to integrate Amazon DevOps Guru insights with ServiceNow to automatically create and manage Incidents. We will demonstrate how an insight generated by Amazon DevOps Guru for an anomaly can automatically create a ServiceNow Incident, update the incident when there are new anomalies or recommendations from Amazon DevOps Guru, and close the ServiceNow Incident once the insight is resolved by Amazon DevOps Guru.

Overview of solution

This solution uses a combination of event-driven architecture and serverless technologies to integrate DevOps Guru insights with ServiceNow. When an Amazon DevOps Guru insight is created, an Amazon EventBridge rule captures the insight as an event and routes it to an AWS Lambda function target. The Lambda function interacts with ServiceNow using a REST API to create, update, and close an incident for the corresponding DevOps Guru events captured by EventBridge.

The EventBridge rule can be customized to capture all DevOps Guru insights or narrowed down to specific insights. In this blog, we will capture all DevOps Guru insights and perform actions on ServiceNow for the following DevOps Guru events (a sketch of a matching event pattern follows the list):

  • DevOps Guru New Insight Open
  • DevOps Guru New Anomaly Association
  • DevOps Guru Insight Severity Upgraded
  • DevOps Guru New Recommendation Created
  • DevOps Guru Insight Closed
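
The SAM template in this post creates the EventBridge rule for you, so the following is only a rough sketch of the equivalent setup with the AWS SDK for Python (Boto3); the rule name and Lambda function ARN are hypothetical placeholders:

import json
import boto3

events = boto3.client("events")

# Event pattern covering the DevOps Guru detail types listed above.
pattern = {
    "source": ["aws.devops-guru"],
    "detail-type": [
        "DevOps Guru New Insight Open",
        "DevOps Guru New Anomaly Association",
        "DevOps Guru Insight Severity Upgraded",
        "DevOps Guru New Recommendation Created",
        "DevOps Guru Insight Closed",
    ],
}

events.put_rule(Name="devops-guru-to-servicenow", EventPattern=json.dumps(pattern))

# Route matched events to the connector Lambda function (placeholder ARN).
events.put_targets(
    Rule="devops-guru-to-servicenow",
    Targets=[{
        "Id": "servicenow-connector",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:DevOpsGuruServiceNowConnector",
    }],
)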

    Serverless architecture where Amazon EventBridge receives Amazon DevOps Guru insights and using Lambda function transforms and posts to ServiceNow REST API to create, update, and resolve incidents

    Figure 1: Amazon DevOps Guru Integration with ServiceNow using Amazon EventBridge and AWS Lambda

Solution Implementation Steps

Prerequisites

Before you deploy the solution and proceed with this walkthrough, you should have the following prerequisites:

  • Gather the hostname for your ServiceNow cloud instance. If you do not have a ServiceNow instance, you can request a developer instance through the ServiceNow Developer page.
  • Gather the credentials of a ServiceNow user who has permissions to make REST API calls to ServiceNow, specifically to the Table API. If you don’t have a user provisioned, you can create one by following the steps in Getting started with the REST API in the ServiceNow documentation.
  • Create a secret in Secrets Manager to store the ServiceNow credentials created in the previous step. You can choose any name for the secret, but it should have two key/value pairs, one for the username and one for the password (a sketch of creating this secret follows the prerequisites list).
  • Enable DevOps Guru for your applications by following these steps or you can follow this blog to deploy a sample serverless application that can be used to generate DevOps Guru insights for anomalies detected in the application.
  • Install and set up SAM CLI – Install the SAM CLI
  • Download and set up Java. The version should match the runtime that you defined in the SAM template.yaml Serverless function configuration – Install the Java SE Development Kit 11
  • Maven – Install Maven
  • Docker – Install Docker community edition
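
As mentioned in the secret prerequisite above, here is a rough sketch of creating that Secrets Manager secret with the AWS SDK for Python (Boto3). The secret name and placeholder credentials are assumptions; use the key names your deployment expects:

import json
import boto3

secrets = boto3.client("secretsmanager")

# Two key/value pairs, one for the username and one for the password.
secrets.create_secret(
    Name="servicenow-connector-credentials",  # hypothetical name; any name works
    SecretString=json.dumps({
        "username": "YOUR_SERVICENOW_USER",
        "password": "YOUR_SERVICENOW_PASSWORD",
    }),
)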

You have two options to deploy this solution: one is to deploy it from the AWS Serverless Application Repository, and the other is to build and deploy it from the command line interface (CLI).

Option 1: Deploy sample ServiceNow Connector App from AWS Serverless Repository

The DevOps Guru ServiceNow Connector application is available in the AWS Serverless Application Repository, which is a managed repository for serverless applications. The application is packaged with an AWS Serverless Application Model (SAM) template, a definition of the AWS resources used, and a link to the source code.

Follow the steps below to quickly deploy this serverless application in your AWS account:

  • Login to the AWS management console of the account to which you plan to deploy this solution.
  • Go to the DevOps Guru ServiceNow Connector application in the AWS Serverless Repository and click on “Deploy”.

    DevOps Guru ServiceNow Connector application page on the AWS Serverless Application Repository with the Deploy button to quickly deploy this solution to your AWS account.

    Figure 2: Deploy solution through AWS Serverless Repository

  • The Lambda application deployment screen will be displayed where you can enter the ServiceNow hostname (do not include the https prefix) and the Secret Name you created in the prerequisite steps. Click on the ‘Deploy’ button.

    Lambda Application Deployment page to enter the ServiceNow hostname and Secret name needed for interacting with your ServiceNow instance before deploying the solution.

    Figure 3: AWS Lambda Application Settings

  • After successful deployment, the AWS Lambda Application page will display the “Create complete” status for the serverlessrepo-DevOps-Guru-ServiceNow-Connector application. The CloudFormation template creates four resources:
    1. Lambda function, which has the logic to integrate with ServiceNow
    2. EventBridge rule for the DevOps Guru insights
    3. Lambda permission
    4. IAM role
  • Now you can skip Option 2 and follow the steps in the “Test the Solution” section to trigger some DevOps Guru insights and validate that the incidents are created and updated in ServiceNow.

Option 2: Build and Deploy sample ServiceNow Connector App using AWS SAM Command Line Interface

As you have seen above, you can directly deploy the sample serverless application from the Serverless Application Repository with one-click deployment. Alternatively, you can clone the GitHub source repository and deploy using the SAM CLI from your terminal.

The Serverless Application Model Command Line Interface (SAM CLI) is an extension of the AWS CLI that adds functionality for building and testing serverless applications. The CLI provides commands that enable you to verify that AWS SAM template files are written according to the specification, invoke Lambda functions locally, step-through debug Lambda functions, package and deploy serverless applications to the AWS Cloud, and so on. For details about how to use the AWS SAM CLI, including the full AWS SAM CLI Command Reference, see AWS SAM reference – AWS Serverless Application Model.

Before you proceed, make sure you have completed the Prerequisites section in the beginning which should set up the AWS SAM CLI, Maven and Java on your local terminal. You also need to install and set up Docker to run your functions in an Amazon Linux environment that matches Lambda.

Follow the steps below to build and deploy this serverless application using AWS SAM CLI in your AWS account:

1.     Clone the source code from the GitHub repo

$ git clone https://github.com/aws-samples/amazon-devops-guru-connector-servicenow.git

2.     Before you build the resources defined in the SAM template, you can use the validate command below, which runs cfn-lint validations on your SAM JSON/YAML template

$ sam validate --lint --template template.yaml

3.     Build the application with SAM CLI

$ cd amazon-devops-guru-connector-servicenow
$ sam build

If everything is set up correctly, you should see a success message like the one shown below:

Build Succeeded

Built Artifacts : .aws-sam/build
Built Template : .aws-sam/build/template.yaml

Commands you can use next
=========================
[*] Validate SAM template: sam validate
[*] Invoke Function: sam local invoke
[*] Test Function in the Cloud: sam sync --stack-name {{stack-name}} --watch
[*] Deploy: sam deploy --guided

4.  Deploy the application with SAM CLI

$ sam deploy --guided

This command will package and deploy your application to AWS, with a series of prompts that you should respond to as shown below:

  • Stack Name: The name of the stack to deploy to CloudFormation. This should be unique to your account and region, and a good starting point would be something matching your project name – amazon-devops-guru-connector-servicenow
  • AWS Region: The AWS region you want to deploy your application to.
  • Parameter ServiceNowHost []: The ServiceNow host name/instance URL you set up. Example: dev92031.service-now.com
  • Parameter SecretName []: The secret name that you set up for ServiceNow credentials in the Prerequisites.
  • Confirm changes before deploy: If set to yes, any change sets will be shown to you before execution for manual review. If set to no, the AWS SAM CLI will automatically deploy application changes.
  • Allow SAM CLI IAM role creation: Many AWS SAM templates, including this example, create AWS IAM roles required for the AWS Lambda function(s) included to access AWS services. By default, these are scoped down to minimum required permissions. To deploy an AWS CloudFormation stack which creates or modifies IAM roles, the CAPABILITY_IAM value for capabilities must be provided. If permission isn’t provided through this prompt, to deploy this example you must explicitly pass --capabilities CAPABILITY_IAM to the sam deploy command.
  • Disable rollback [y/N]: If set to Y, preserves the state of previously provisioned resources when an operation fails.
  • Save arguments to configuration file (samconfig.toml): If set to yes, your choices will be saved to a configuration file inside the project, so that in the future you can just re-run sam deploy without parameters to deploy changes to your application.

After you enter your parameters, you should see output like the following if you chose to view and confirm change sets. Proceed by providing ‘Y’ to deploy the resources.

Initiating deployment
=====================
Uploading to amazon-devops-guru-connector-servicenow/46bb4841f8f37fd41d3f40f86f31c4d7.template 1918 / 1918 (100.00%)

Waiting for changeset to be created..
CloudFormation stack changeset
-----------------------------------------------------------------------------------------------------------------------------------------------------
Operation LogicalResourceId ResourceType Replacement
-----------------------------------------------------------------------------------------------------------------------------------------------------
+ Add FunctionsDevOpsGuruPermission AWS::Lambda::Permission N/A
+ Add FunctionsDevOpsGuru AWS::Events::Rule N/A
+ Add FunctionsRole AWS::IAM::Role N/A
+ Add Functions AWS::Lambda::Function N/A
-----------------------------------------------------------------------------------------------------------------------------------------------------

Changeset created successfully. arn:aws:cloudformation:us-east-1:123456789012:changeSet/samcli-deploy1669232233/7c97b7f5-369d-400d-89cd-ebabefaa0b57

Previewing CloudFormation changeset before deployment
======================================================
Deploy this changeset? [y/N]:

Once the deployment succeeds, you should be able to see the successful creation of your resources

CloudFormation events from stack operations (refresh every 0.5 seconds)
-----------------------------------------------------------------------------------------------------------------------------------------------------
ResourceStatus ResourceType LogicalResourceId ResourceStatusReason
-----------------------------------------------------------------------------------------------------------------------------------------------------
CREATE_IN_PROGRESS AWS::CloudFormation::Stack amazon-devops-guru-connector- User Initiated
servicenow
CREATE_IN_PROGRESS AWS::IAM::Role FunctionsRole -
CREATE_IN_PROGRESS AWS::IAM::Role FunctionsRole Resource creation Initiated
CREATE_COMPLETE AWS::IAM::Role FunctionsRole -
CREATE_IN_PROGRESS AWS::Lambda::Function Functions -
CREATE_IN_PROGRESS AWS::Lambda::Function Functions Resource creation Initiated
CREATE_COMPLETE AWS::Lambda::Function Functions -
CREATE_IN_PROGRESS AWS::Events::Rule FunctionsDevOpsGuru -
CREATE_IN_PROGRESS AWS::Events::Rule FunctionsDevOpsGuru Resource creation Initiated
CREATE_COMPLETE AWS::Events::Rule FunctionsDevOpsGuru -
CREATE_IN_PROGRESS AWS::Lambda::Permission FunctionsDevOpsGuruPermission -
CREATE_IN_PROGRESS AWS::Lambda::Permission FunctionsDevOpsGuruPermission Resource creation Initiated
CREATE_COMPLETE AWS::Lambda::Permission FunctionsDevOpsGuruPermission -
CREATE_COMPLETE AWS::CloudFormation::Stack amazon-devops-guru-connector- -
servicenow
-----------------------------------------------------------------------------------------------------------------------------------------------------

Successfully created/updated stack - amazon-devops-guru-connector-servicenow in us-east-1

You can also list the deployed resources with the command below, passing in the stack name.

$ sam list resources --stack-name amazon-devops-guru-connector-servicenow

You can also choose to test and debug your function locally with sample events using the SAM CLI local functionality. Test a single function by invoking it directly with a test event. An event is a JSON document that represents the input that the function receives from the event source. Refer to Invoking Lambda functions locally – AWS Serverless Application Model for more details.

Follow the steps below to test the Lambda function locally with the SAM CLI. You have to create an env.json file with the correct values for your ServiceNow host and the Secrets Manager secret name that was created in the previous step.

  • Make sure you have created the AWS Secrets Manager secret with the desired name as mentioned in the prerequisites, which should be used here for SECRET_NAME.
  • Create env.json as shown below, replacing the values for SERVICE_NOW_HOST and SECRET_NAME with your real values. These will be set as the local Lambda execution environment variables.
{"Parameters": {"SERVICE_NOW_HOST": "SNOW_HOST","SECRET_NAME": "SNOW_CREDS"}}
  • Run the command below to invoke the Lambda function locally with a sample DevOps Guru payload. Remember, for this to work, you should have a Docker instance running and the secret created in your AWS account.
$ sam local invoke Functions --event Functions/src/test/Events/CreateIncident.json --env-vars Functions/src/test/Events/env.json

Once you are done with the above steps, move on to “Test the Solution” section below to trigger sample DevOps Guru insights and validate that the incidents are created and updated in ServiceNow.

Test the Solution

To test the solution, we will simulate a DevOps Guru insight; you can simulate one by following the steps in this blog. After an anomaly is detected in the application, DevOps Guru creates an insight as seen below.

Sample DevOps Guru insights page with anomalous behavior of DynamoDB ThrottledRequests from the application deployed with the workshop link.

Figure 4: DevOps Guru Insight created for anomalous behavior

For the DevOps Guru insight shown above, a corresponding incident is automatically created on ServiceNow as shown below. In addition to the incident creation, any new anomalies and recommendations from DevOps Guru are also associated with the incident.

ServiceNow incident detail page with the DevOps Guru insight information.

Figure 5: Corresponding ServiceNow Incident is created for the DevOps Guru Insight

When the anomalous behavior that generated the DevOps Guru insight is resolved, DevOps Guru automatically closes the insight. The corresponding ServiceNow incident that was created for the insight is also closed, as seen below.

ServiceNow incident Notes section showing Incident as resolved due to the insight being closed in Amazon DevOps Guru.

Figure 6: ServiceNow Incident created for DevOps Guru Insight is resolved due to insight closure

Cleaning up

To avoid incurring future charges, delete the resources.

To delete the sample application that you created, use the AWS CLI command below and pass the stack name you provided in the sam deploy step.

$ aws cloudformation delete-stack --stack-name amazon-devops-guru-connector-servicenow

You could also use the AWS CloudFormation Console to delete the stack:

AWS CloudFormation console with Delete option to clean up the deployed stack.

Figure 7: AWS Stack Console with Delete action

Conclusion

This blog post showcased how DevOps Guru continuously monitors resources in a particular Region of your AWS account, automatically detects operational issues, predicts impending resource exhaustion, details likely causes, and recommends remediation actions. It described a custom solution that uses a serverless integration pattern with AWS Lambda and Amazon EventBridge to integrate DevOps Guru insights with ServiceNow, a widely used ITSM and change management tool, streamlining service management governance and oversight of AWS services. Customers who use ServiceNow can use this solution to improve their operational efficiency and receive customized insights and real-time incident alerts directly from DevOps Guru, providing a single pane of glass to restore services and systems quickly.

This solution was created to help customers who already use ServiceNow Incident Management. If you are already using Incident Manager from AWS Systems Manager, check out how that works with Amazon DevOps Guru here.

To learn more about Amazon DevOps Guru, join us for a free hands-on Immersion Day. Events are virtual and hosted in three global time zones. Register here: April 12th.

About the authors:

Abdullahi Olaoye

Abdullahi is a Senior Cloud Infrastructure Architect at AWS Professional Services where he works with enterprise customers to design and build cloud solutions that solve business challenges. When he’s not working, he enjoys travelling, watching documentaries and listening to history podcasts.

Sreenivas Ganesan

Sreenivas Ganesan is a Sr. DevOps Consultant at AWS experienced in architecting and delivering modernized DevOps solutions for enterprise customers in their journey to AWS Cloud, primarily focused on Infrastructure automation, Security and Compliance, Management and Governance, Provisioning and Orchestration. Outside of work, he enjoys watching new TV series, soccer and spending time with his family outdoors.

Mohan Udyavar

Mohan Udyavar is a Principal Technical Account Manager in the Enterprise Support organization of AWS advising customers in successfully migrating and operating their workloads on AWS. He is primarily focused on the Automotive industry providing prescriptive guidance to customers helping them improve the resilience and operational excellence posture of mission-critical applications. Outside of work, he loves cooking and working on tech projects with his son.

Amazon Chime SDK Call Analytics: Real-Time Voice Tone Analysis and Speaker Search

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-chime-sdk-call-analytics-real-time-voice-tone-analysis-and-speaker-search/

Today, I am pleased to announce the availability of Amazon Chime SDK call analytics, a new set of capabilities that helps make it easier and more cost-effective to record and generate insights on real-time audio calls: transcription, voice tone analysis, and speaker search. We’ve also improved the Amazon Chime SDK section of the AWS Management Console to let you integrate machine learning (ML)-based services, such as these new call analytics capabilities or Amazon Transcribe, into your audio applications in just a few steps.

Voice Analytics: Voice Tone Analysis and Speaker Search
Voice analytics delivers real-time insights into audio conversations. It helps detect and classify participants expressing a positive, neutral, or negative tone. Typically, enterprises working in regulated industries have obligations to record or want to analyze conversations between employees and their business partners, customers, or suppliers.

Voice tone analysis uses ML to extract sentiment from a speech signal based on a joint analysis of lexical and linguistic information as well as acoustic and tonal information. Voice tone analysis results for live calls are delivered to the data lake of your choice, on top of which you can create your own dashboards to visualize the data.

Let’s take an example from the finance industry. Trading room supervisors are sometimes required to record all the trading conversations occurring on the floor. Voice tone analysis helps them meet their regulatory requirements. They can also deliver these insights to the traders to help improve their productivity. But finance is not the only industry that needs to record and analyze calls. We have received similar requests from customers in the Business Process Outsourcing (BPO), public sector, healthcare, telecom, and insurance industries.

Alongside voice tone analysis, your applications can now benefit from speaker search to help match speakers to an existing database. It only requires a short sample to recognize a speaker based on their voice, using a database of known voices. Speaker search helps your applications expedite caller lookup and enrich call records and transcripts with identity attribution. Speaker search delivers a suggested unique internal identifier for the speaker and a confidence score. The decision to match the current speaker with a known speaker from your organization is up to your application. Some of our customers plan to use speaker search for real-time speaker labeling on communication happening over trading turrets, which are shared devices.

Integration with AI Services in the AWS Management Console
We want to make it easier for developers to add these capabilities into existing telephony applications without requiring expertise in telephony, cloud infrastructure, or AI.

This is why we added an easier-to-use graphical configuration in the Amazon Chime SDK section of the console. On the console, you can choose the AWS AI service you want to use to analyze real-time audio data: voice analytics, Amazon Transcribe, or Amazon Transcribe Call Analytics. Whether you choose voice analytics or Amazon Transcribe to generate insights, you don’t have to write any integration code. We manage the integrations with AWS AI services and your voice-based or telephony applications. The console helps you define where you want to send the analytics data: an Amazon Kinesis stream or an Amazon Simple Storage Service (Amazon S3) bucket. Voice analytics can send real-time notifications to a function deployed on AWS Lambda, an Amazon Simple Queue Service (Amazon SQS) queue, or an Amazon Simple Notification Service (Amazon SNS) topic.

To visualize insights, call analytics also delivers analyses to a data lake of your choice. You can then use Amazon QuickSight or Tableau to build dashboards and get insights from real-time media. These dashboards can be embedded in apps, wikis, and portals. Of course, we don’t leave you alone with your data. You can download prebuilt dashboards as AWS CloudFormation templates to deploy into your own AWS account. The link to download these templates is available on the console.

Finally, call analytics can generate real-time alerts by posting events to Amazon EventBridge. You can route these events to any destination of your choice, on your AWS account or supported third-party applications.

When using call analytics, you can reduce the initial project time to generate insights from real-time audio from months to days.

How It Works
I’d like to show you how it works.

On the Amazon Chime SDK section of the console, I open Configuration under Call Analytics on the left-side menu. Then, I select Create configuration.

Amazon Chime SDK - Create configuration

I give a name to my configuration. Optionally, I may also associate tags.

Amazon Chime SDK - Configuration first step

Under Configure analytics service, I can choose between Amazon Chime SDK voice analytics or Amazon Transcribe services to analyze calls. For this demo, I select Voice analytics.

Amazon Chime SDK - Configuration second step

I configure where to send the analysis. Voice analytics results are always sent to Kinesis. I specify a Kinesis data stream I created previously. When I want to use a business intelligence tool such as Amazon QuickSight to create a dashboard with the analytics results, I also specify an S3 bucket to receive the analysis.

The console also gives me the link to the CloudFormation templates I can use to create the voice analytics dashboards.

Finally, I choose a Lambda function, SQS queue, or SNS topic that will receive notifications of events, such as when the analytics are available, a new voice enrollment occurs, or the result of a voice verification. In the latter case, the payload looks as follows (a sketch of how an application might consume it appears just after the payload):

{
    ...common to all events...
    "detail-type": "SpeakerSearchStatus",
    "detail": {
        "taskId": "uuid",
        "detailStatus": "IdentificationSuccessful",
        "speakerSearchDetails" : {
            "results": [
                {
                    "voiceProfileId": "guid",
                    "confidenceScore": "0.94",
                },
                {
                    "voiceProfileId": "guid",
                    "confidenceScore": "0.92",
                },
                {
                    "voiceProfileId": "guid",
                    "confidenceScore": "0.91",
                },
                ... (up to 10)
            ]
        },
        "isCaller": false,
        "voiceConnectorId": "guid",
        "transactionId": "guid"

        ...details from Voice connector
    }
}
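
How your application acts on this payload is up to you. As a minimal, hypothetical sketch (the confidence threshold and queue handling are assumptions, not part of the Chime SDK), a consumer of these notifications might accept a speaker match only above a chosen confidence score:

import json

CONFIDENCE_THRESHOLD = 0.9  # assumed business rule, not a Chime SDK default

def best_speaker_match(event: dict):
    """Return the most confident voice profile ID, or None if below the threshold."""
    results = event.get("detail", {}).get("speakerSearchDetails", {}).get("results", [])
    if not results:
        return None
    top = max(results, key=lambda r: float(r["confidenceScore"]))
    return top["voiceProfileId"] if float(top["confidenceScore"]) >= CONFIDENCE_THRESHOLD else None

# Example: parse a notification body received from the SQS queue.
# match = best_speaker_match(json.loads(sqs_message_body))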

For this demo, I choose an existing SQS queue.

Amazon Chime SDK - Configuration third step

Under Consent acknowledgment, I select all the boxes and select Next.

Amazon Chime SDK - Configuration second step consent

The next step is only available if I didn’t specify an analytics service in the previous step. It allows me to configure voice recordings, which are available only when no analytics are selected.

Under Configure access permissions, I choose a previously created AWS Identity and Access Management (IAM) role allowing the Amazon Chime SDK to access the other AWS services I configured: the Kinesis data stream, S3 bucket, and Lambda function, SQS queue, or SNS topic. The console may create an IAM role for me if I don’t have one already.

Amazon Chime SDK - Configuration four step

The next step is available if I selected Amazon Transcribe service under Configure analytics service. It allows me to configure real-time alerts through EventBridge. I may configure rules to send messages based on keyword match, sentiment detected, or issue detection.

The final step is to review and create my configuration. I review the configuration details, and then I select Create configuration.

Finally, I link this configuration to a voice connector under the Voice Connector section, on the Streaming tab.

That’s it! As I mentioned earlier, no glue between AWS services or AI knowledge is required.

After the data arrives on Kinesis or your S3 bucket, you can point your preferred business reporting solution at it. When you use the QuickSight template we provide, you can get started in minutes with a high-level overview and a deep-dive view, as shown on the following screenshot.

Chime SDK Call Analytics - dashboard general

Chime SDK Call Analytics - dashboard deep dive

The deep-dive dashboard gives you graphical representations about the distribution of agent and customer sentiments and emotions. You also get a detailed analysis and transcript of the conversation.

Pricing and Availability
Adopting these capabilities in your audio applications requires no up-front infrastructure investment; you will be charged based only on your usage. Pricing is per minute of audio data analyzed. Visit Amazon Chime SDK pricing for details.

Call analytics is available in the following AWS Regions: US East (Ohio, N. Virginia), Asia Pacific (Singapore), and Europe (Frankfurt).

In this post, I discussed Amazon Chime SDK call analytics, a new set of capabilities that makes it easier and cost-effective to record and generate insights on real-time audio calls. With their focus on ease of use, these new capabilities are particularly well adapted to customers with minimal knowledge of cloud infrastructure, telephony, and ML.

Start today and configure your first dashboard!

— seb

AWS Week in Review – March 20, 2023

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-20-2023/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

A new week starts, and Spring is almost here! If you’re curious about AWS news from the previous seven days, I got you covered.

Last Week’s Launches
Here are the launches that got my attention last week:

Amazon S3 – Last week there was AWS Pi Day 2023, celebrating 17 years of innovation since Amazon S3 was introduced on March 14, 2006. For the occasion, the team released many new capabilities.

Amazon Linux 2023 – Our new Linux-based operating system is now generally available. Sébastien’s post is full of tips and info.

Application Auto Scaling – Now supports arithmetic operations and mathematical functions to customize the metrics used with Target Tracking policies. You can use it to scale based on your own application-specific metrics. Read how it works with Amazon ECS services.

AWS Data Exchange for Amazon S3 is now generally available – You can now share and find data files directly from S3 buckets, without the need to create or manage copies of the data.

Amazon Neptune – Now offers a graph summary API to help understand important metadata about property graphs (PG) and resource description framework (RDF) graphs. Neptune added support for Slow Query Logs to help identify queries that need performance tuning.

Amazon OpenSearch Service – The team introduced security analytics that provides new threat monitoring, detection, and alerting features. The service now supports OpenSearch version 2.5 that adds several new features such as support for Point in Time Search and improvements to observability and geospatial functionality.

AWS Lake Formation and Apache Hive on Amazon EMR – Introduced fine-grained access controls that allow data administrators to define and enforce fine-grained table and column level security for customers accessing data via Apache Hive running on Amazon EMR.

Amazon EC2 M1 Mac Instances – You can now update guest environments to a specific or the latest macOS version without having to tear down and recreate the existing macOS environments.

AWS Chatbot – Now integrates with Microsoft Teams to simplify the way you troubleshoot and operate your AWS resources.

Amazon GuardDuty RDS Protection for Amazon Aurora – Now generally available to help profile and monitor access activity to Aurora databases in your AWS account without impacting database performance.

AWS Database Migration Service – Now supports validation to ensure that data is migrated accurately to S3 and can now generate an AWS Glue Data Catalog when migrating to S3.

AWS Backup – You can now back up and restore virtual machines running on VMware vSphere 8 and with multiple vNICs.

Amazon Kendra – There are new connectors to index documents and search for information across these new content sources: Confluence Server, Confluence Cloud, Microsoft SharePoint OnPrem, and Microsoft SharePoint Cloud. This post shows how to use the Amazon Kendra connector for Microsoft Teams.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
A few more blog posts you might have missed:

Women founders Q&A – We’re talking to six women founders and leaders about how they’re making impacts in their communities, industries, and beyond.

What you missed at the 2023 IMAGINE: Nonprofit conference – Hundreds of nonprofit leaders, technologists, and innovators gathered to learn and share how AWS can drive a positive impact for people and the planet.

Monitoring load balancers using Amazon CloudWatch anomaly detection alarms – The metrics emitted by load balancers provide crucial and unique insight into service health, service performance, and end-to-end network performance.

Extend geospatial queries in Amazon Athena with user-defined functions (UDFs) and AWS Lambda – Using a solution based on Uber’s Hexagonal Hierarchical Spatial Index (H3) to divide the globe into equally-sized hexagons.

How cities can use transport data to reduce pollution and increase safety – A guest post by Rikesh Shah, outgoing head of open innovation at Transport for London.

For AWS open-source news and updates, here’s the latest newsletter curated by Ricardo to bring you the most recent updates on open-source projects, posts, events, and more.

Upcoming AWS Events
Here are some opportunities to meet:

AWS Public Sector Day 2023 (March 21, London, UK) – An event dedicated to helping public sector organizations use technology to achieve more with less through the current challenging conditions.

Women in Tech at Skills Center Arlington (March 23, VA, USA) – Let’s celebrate the history and legacy of women in tech.

The AWS Summits season is warming up! You can sign up here to know when registration opens in your area.

That’s all from me for this week. Come back next Monday for another Week in Review!

Danilo

How AI Could Write Our Laws

Post Syndicated from Schneier.com Webmaster original https://www.schneier.com/blog/archives/2023/03/how-ai-could-write-our-laws.html

Nearly 90% of the multibillion-dollar federal lobbying apparatus in the United States serves corporate interests. In some cases, the objective of that money is obvious. Google pours millions into lobbying on bills related to antitrust regulation. Big energy companies expect action whenever there is a move to end drilling leases for federal lands, in exchange for the tens of millions they contribute to congressional reelection campaigns.

But lobbying strategies are not always so blunt, and the interests involved are not always so obvious. Consider, for example, a 2013 Massachusetts bill that tried to restrict the commercial use of data collected from K-12 students using services accessed via the internet. The bill appealed to many privacy-conscious education advocates, and appropriately so. But behind the justification of protecting students lay a market-altering policy: the bill was introduced at the behest of Microsoft lobbyists, in an effort to exclude Google Docs from classrooms.

What would happen if such legal-but-sneaky strategies for tilting the rules in favor of one group over another become more widespread and effective? We can see hints of an answer in the remarkable pace at which artificial-intelligence tools for everything from writing to graphic design are being developed and improved. And the unavoidable conclusion is that AI will make lobbying more guileful, and perhaps more successful.

It turns out there is a natural opening for this technology: microlegislation.

“Microlegislation” is a term for small pieces of proposed law that cater—sometimes unexpectedly—to narrow interests. Political scientist Amy McKay coined the term. She studied the 564 amendments to the Affordable Care Act (“Obamacare”) considered by the Senate Finance Committee in 2009, as well as the positions of 866 lobbying groups and their campaign contributions. She documented instances where lobbyist comments—on health-care research, vaccine services, and other provisions—were translated directly into microlegislation in the form of amendments. And she found that those groups’ financial contributions to specific senators on the committee increased the amendments’ chances of passing.

Her finding that lobbying works was no surprise. More important, McKay’s work demonstrated that computer models can predict the likely fate of proposed legislative amendments, as well as the paths by which lobbyists can most effectively secure their desired outcomes. And that turns out to be a critical piece of creating an AI lobbyist.

Lobbying has long been part of the give-and-take among human policymakers and advocates working to balance their competing interests. The danger of microlegislation—a danger greatly exacerbated by AI—is that it can be used in a way that makes it difficult to figure out who the legislation truly benefits.

Another word for a strategy like this is a “hack.” Hacks follow the rules of a system but subvert their intent. Hacking is often associated with computer systems, but the concept is also applicable to social systems like financial markets, tax codes, and legislative processes.

While the idea of monied interests incorporating AI assistive technologies into their lobbying remains hypothetical, specific machine-learning technologies exist today that would enable them to do so. We should expect these techniques to get better and their utilization to grow, just as we’ve seen in so many other domains.

Here’s how it might work.

Crafting an AI microlegislator

To make microlegislation, machine-learning systems must be able to uncover the smallest modification that could be made to a bill or existing law that would make the biggest impact on a narrow interest.

There are three basic challenges involved. First, you must create a policy proposal—small suggested changes to legal text—and anticipate whether or not a human reader would recognize the alteration as substantive. This is important; a change that isn’t detectable is more likely to pass without controversy. Second, you need to do an impact assessment to project the implications of that change for the short- or long-range financial interests of companies. Third, you need a lobbying strategizer to identify what levers of power to pull to get the best proposal into law.

Existing AI tools can tackle all three of these.

The first step, the policy proposal, leverages the core function of generative AI. Large language models, the sort that have been used for general-purpose chatbots such as ChatGPT, can easily be adapted to write like a native in different specialized domains after seeing a relatively small number of examples. This process is called fine-tuning. For example, a model “pre-trained” on a large library of generic text samples from books and the internet can be “fine-tuned” to work effectively on medical literature, computer science papers, and product reviews.

Given this flexibility and capacity for adaptation, a large language model could be fine-tuned to produce draft legislative texts, given a data set of previously offered amendments and the bills they were associated with. Training data is available. At the federal level, it’s provided by the US Government Publishing Office, and there are already tools for downloading and interacting with it. Most other jurisdictions provide similar data feeds, and there are even convenient assemblages of that data.
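
As a rough sketch of what that fine-tuning step might look like in practice (the amendments.txt corpus is hypothetical, and a real system would need far more data, compute, and evaluation), the open-source Hugging Face transformers library can adapt a small pre-trained language model to a body of legislative text:

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Start from a small generic pre-trained model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One amendment (or amendment-plus-bill context) per line in a plain text file.
dataset = load_dataset("text", data_files={"train": "amendments.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Standard causal language modeling fine-tuning.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
args = TrainingArguments(output_dir="amendment-model", num_train_epochs=1, per_device_train_batch_size=2)
trainer = Trainer(model=model, args=args, train_dataset=tokenized["train"], data_collator=collator)
trainer.train()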

Meanwhile, large language models like the one underlying ChatGPT are routinely used for summarizing long, complex documents (even laws and computer code) to capture the essential points, and they are optimized to match human expectations. This capability could allow an AI assistant to automatically predict how detectable the true effect of a policy insertion may be to a human reader.

Today, it can take a highly paid team of human lobbyists days or weeks to generate and analyze alternative pieces of microlegislation on behalf of a client. With AI assistance, that could be done instantaneously and cheaply. This opens the door to dramatic increases in the scope of this kind of microlegislating, with a potential to scale across any number of bills in any jurisdiction.

Teaching machines to assess impact

Impact assessment is more complicated. There is a rich series of methods for quantifying the predicted outcome of a decision or policy, and then also optimizing the return under that model. This kind of approach goes by different names in different circles—mathematical programming in management science, utility maximization in economics, and rational design in the life sciences.

To train an AI to do this, we would need to specify some way to calculate the benefit to different parties as a result of a policy choice. That could mean estimating the financial return to different companies under a few different scenarios of taxation or regulation. Economists are skilled at building risk models like this, and companies are already required to formulate and disclose regulatory compliance risk factors to investors. Such a mathematical model could translate directly into a reward function, a grading system that could provide feedback for the model used to create policy proposals and direct the process of training it.
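
For instance, a toy version of such a reward function (the numbers and the stand-in financial model below are entirely made up for illustration) might score a proposed amendment by the projected change in a client's profit relative to the status quo:

class ToyFinancialModel:
    """Trivial stand-in for an analyst-built model: profit falls linearly with an effective tax rate."""
    def predict(self, features: dict) -> float:
        return 1_000_000 * (1 - features["effective_tax_rate"])

def reward(amendment_features: dict, baseline_profit: float, model) -> float:
    """Projected profit under the amendment minus profit under current law."""
    return model.predict(amendment_features) - baseline_profit

model = ToyFinancialModel()
baseline = model.predict({"effective_tax_rate": 0.21})        # current law
print(reward({"effective_tax_rate": 0.18}, baseline, model))  # roughly 30,000: a 3-point cut is "worth" about $30,000 here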

The real challenge in impact assessment for generative AI models would be to parse the textual output of a model like ChatGPT in terms that an economic model could readily use. Automating this would require extracting structured financial information from the draft amendment or any legalese surrounding it. This kind of information extraction, too, is an area where AI has a long history; for example, AI systems have been trained to recognize clinical details in doctors’ notes. Early indications are that large language models are fairly good at recognizing financial information in texts such as investor call transcripts. While it remains an open challenge in the field, they may even be capable of writing out multi-step plans based on descriptions in free text.

Machines as strategists

The last piece of the puzzle is a lobbying strategizer to figure out what actions to take to convince lawmakers to adopt the amendment.

Passing legislation requires a keen understanding of the complex interrelated networks of legislative offices, outside groups, executive agencies, and other stakeholders vying to serve their own interests. Each actor in this network has a baseline perspective and different factors that influence that point of view. For example, a legislator may be moved by seeing an allied stakeholder take a firm position, or by a negative news story, or by a campaign contribution.

It turns out that AI developers are very experienced at modeling these kinds of networks. Machine-learning models for network graphs have been built, refined, improved, and iterated by hundreds of researchers working on incredibly diverse problems: lidar scans used to guide self-driving cars, the chemical functions of molecular structures, the capture of motion in actors’ joints for computer graphics, behaviors in social networks, and more.

In the context of AI-assisted lobbying, political actors like legislators and lobbyists are nodes on a graph, just like users in a social network. Relations between them are graph edges, like social connections. Information can be passed along those edges, like messages sent to a friend or campaign contributions made to a member. AI models can use past examples to learn to estimate how that information changes the network. Calculating the likelihood that a campaign contribution of a given size will flip a legislator’s vote on an amendment is one application.
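
To make that concrete, here is a heavily simplified, hypothetical sketch (synthetic data and made-up features throughout) of the kind of model described: donors and legislators as nodes in a graph, with a logistic regression estimating whether a contribution of a given size flips a vote:

import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic network: donors and legislators as nodes, contributions as edges.
graph = nx.Graph()
graph.add_edge("donor_A", "legislator_1", contribution_k=5)   # $5,000
graph.add_edge("donor_A", "legislator_2", contribution_k=50)  # $50,000
graph.add_edge("donor_B", "legislator_2", contribution_k=2)   # $2,000

# Made-up training data: (contribution in $1,000s, prior alignment score) -> vote flipped?
X = np.array([[1, 0.2], [5, 0.4], [20, 0.3], [50, 0.6], [80, 0.5]])
y = np.array([0, 0, 1, 1, 1])
model = LogisticRegression().fit(X, y)

# Estimate a flip probability for each contribution edge (the alignment score is a placeholder).
for donor, legislator, data in graph.edges(data=True):
    prob = model.predict_proba([[data["contribution_k"], 0.4]])[0, 1]
    print(f"{donor} -> {legislator}: estimated flip probability {prob:.2f}")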

McKay’s work has already shown us that there are significant, predictable relationships between these actions and the outcomes of legislation, and that the work of discovering those can be automated. Others have shown that graphs of neural network models like those described above can be applied to political systems. The full-scale use of these technologies to guide lobbying strategy is theoretical, but plausible.

Put together, these three components could create an automatic system for generating profitable microlegislation. The policy proposal system would create millions, even billions, of possible amendments. The impact assessor would identify the few that promise to be most profitable to the client. And the lobbying strategy tool would produce a blueprint for getting them passed.

What remains is for human lobbyists to walk the floors of the Capitol or state house, and perhaps supply some cash to grease the wheels. These final two aspects of lobbying—access and financing—cannot be supplied by the AI tools we envision. This suggests that lobbying will continue to primarily benefit those who are already influential and wealthy, and AI assistance will amplify their existing advantages.

The transformative benefit that AI offers to lobbyists and their clients is scale. While individual lobbyists tend to focus on the federal level or a single state, with AI assistance they could more easily infiltrate a large number of state-level (or even local-level) law-making bodies and elections. At that level, where the average cost of a seat is measured in the tens of thousands of dollars instead of millions, a single donor can wield a lot of influence—if automation makes it possible to coordinate lobbying across districts.

How to stop them

When it comes to combating the potentially adverse effects of assistive AI, the first response always seems to be to try to detect whether or not content was AI-generated. We could imagine a defensive AI that detects anomalous lobbyist spending associated with amendments that benefit the contributing group. But by then, the damage might already be done.
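
As a sketch of what such a defensive tool might do, the toy example below flags spending tied to an amendment that falls far outside the historical range for similar provisions. The figures and the simple z-score test are stand-ins for whatever data and statistical model a real watchdog would use.

```python
# Flag lobbyist spending around an amendment that looks anomalous relative
# to historical spending on similar provisions. All figures are hypothetical.

from statistics import mean, stdev

historical_spending = [12_000, 9_500, 15_000, 11_200, 13_400, 10_800]  # USD
observed_spending = 85_000  # spending tied to one suspicious amendment

def is_anomalous(observed: float, history: list[float], threshold: float = 3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    z = (observed - mu) / sigma
    return z > threshold

print(is_anomalous(observed_spending, historical_spending))  # True
```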

In general, methods for detecting the work of AI tend not to keep pace with its ability to generate convincing content. And these strategies won’t be implemented by AIs alone. The lobbyists will still be humans who take the results of an AI microlegislator and further refine the computer’s strategies. These hybrid human-AI systems will not be detectable from their output.

But the good news is: the same strategies that have long been used to combat misbehavior by human lobbyists can still be effective when those lobbyists get an AI assist. We don’t need to reinvent our democracy to stave off the worst risks of AI; we just need to more fully implement long-standing ideals.

First, we should reduce the dependence of legislatures on monolithic, multi-thousand-page omnibus bills voted on under deadline. This style of legislating exploded in the 1980s and 1990s and continues through to the most recent federal budget bill. Notwithstanding their legitimate benefits to the political system, omnibus bills present an obvious and proven vehicle for inserting unnoticed provisions that may later surprise the same legislators who approved them.

The issue is not that individual legislators need more time to read and understand each bill (that isn’t realistic or even necessary). It’s that omnibus bills must pass. There is an imperative to pass a federal budget bill, and so the capacity to push back on individual provisions that may seem deleterious (or just impertinent) to any particular group is small. Bills that are too big to fail are ripe for hacking by microlegislation.

Moreover, the incentive for legislators to introduce microlegislation catering to a narrow interest is greater if the threat of exposure is lower. To strengthen the threat of exposure for misbehaving legislative sponsors, bills should focus more tightly on individual substantive areas and, after the introduction of amendments, allow more time before the committee and floor votes. During this time, we should encourage public review and testimony to provide greater oversight.

Second, we should strengthen disclosure requirements on lobbyists, whether they’re entirely human or AI-assisted. State laws regarding lobbying disclosure are a hodgepodge. North Dakota, for example, only requires lobbying reports to be filed annually, so that by the time a disclosure is made, the policy is likely already decided. A lobbying disclosure scorecard created by Open Secrets, a group researching the influence of money in US politics, tracks nine states that do not even require lobbyists to report their compensation.

Ideally, the public would be able to see all communication between lobbyists and legislators, whether it takes the form of a proposed amendment or not. Absent that, let’s give the public the benefit of reviewing what lobbyists are lobbying for—and why. Lobbying is traditionally an activity that happens behind closed doors. Right now, many states reinforce that: they actually exempt testimony delivered publicly to a legislature from being reported as lobbying.

In those jurisdictions, if you reveal your position to the public, you’re no longer lobbying. Let’s do the inverse: require lobbyists to reveal their positions on issues. Some jurisdictions already require a statement of position (a ‘yea’ or ‘nay’) from registered lobbyists. And in most (but not all) states, you could make a public records request regarding meetings held with a state legislator and hope to get something substantive back. But we can expect more—lobbyists could be required to proactively publish, within a few days, a brief summary of what they demanded of policymakers during meetings and why they believe it’s in the general interest.

We can’t rely on corporations to be forthcoming and wholly honest about the reasons behind their lobbying positions. But having them on the record about their intentions would at least provide a baseline for accountability.

Finally, consider the role AI assistive technologies may have on lobbying firms themselves and the labor market for lobbyists. Many observers are rightfully concerned about the possibility of AI replacing or devaluing the human labor it automates. If the automating potential of AI ends up commodifying the work of political strategizing and message development, it may indeed put some professionals on K Street out of work.

But don’t expect that to disrupt the careers of the most astronomically compensated lobbyists: former members of Congress and other insiders who have passed through the revolving door. There is no shortage of reform ideas for limiting the ability of government officials turned lobbyists to sell access to their colleagues still in government, and they should be adopted and—equally important—maintained and enforced in successive Congresses and administrations.

None of these solutions are really original, specific to the threats posed by AI, or even predominantly focused on microlegislation—and that’s the point. Good governance should and can be robust to threats from a variety of techniques and actors.

But what makes the risks posed by AI especially pressing now is how fast the field is developing. We expect the scale, strategies, and effectiveness of humans engaged in lobbying to evolve over years and decades. AI, meanwhile, is making impressive breakthroughs at a much faster rate—and it is still accelerating.

The legislative process is a constant struggle between parties trying to control the rules of our society as they are updated, rewritten, and expanded at the federal, state, and local levels. Lobbying is an important tool for balancing various interests through our system. If it’s well-regulated, perhaps lobbying can support policymakers in making equitable decisions on behalf of us all.

This article was co-written with Nathan E. Sanders and originally appeared in MIT Technology Review.

AWS Week in Review – February 27, 2023

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/aws-week-in-review-february-27-2023/

A couple days ago, I had the honor of doing a live stream on generative AI, discussing recent innovations and concepts behind the current generation of large language and vision models and how we got there. In today’s roundup of news and announcements, I will share some additional information—including an expanded partnership to make generative AI more accessible, a blog post about diffusion models, and our weekly Twitch show on Generative AI. Let’s dive right into it!

Last Week’s Launches
Here are some launches that got my attention during the previous week:

Integrated Private Wireless on AWS – The Integrated Private Wireless on AWS program is designed to provide enterprises with managed and validated private wireless offerings from leading communications service providers (CSPs). The offerings integrate CSPs’ private 5G and 4G LTE wireless networks with AWS services across AWS Regions, AWS Local Zones, AWS Outposts, and AWS Snow Family. For more details, read this Industries Blog post and check out this eBook. And, if you’re attending the Mobile World Congress Barcelona this week, stop by the AWS booth at the Upper Walkway, South Entrance, at the Fira Barcelona Gran Via, to learn more.

AWS Glue Crawlers – Now integrate with Lake Formation. AWS Glue Crawlers are used to discover datasets, extract schema information, and populate the AWS Glue Data Catalog. With this Glue Crawler and Lake Formation integration, you can configure a crawler to use Lake Formation permissions to access an S3 data store or a Data Catalog table with an underlying S3 location within the same AWS account or another AWS account. You can configure an existing Data Catalog table as a crawler’s target if the crawler and the Data Catalog table reside in the same account. To learn more, check out this Big Data Blog post.
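
If you prefer to set this up programmatically, here is a hedged sketch using boto3. The crawler name, role, database, and bucket are placeholders, and you should confirm the LakeFormationConfiguration fields against the current Glue API reference before relying on them.

```python
# Configure a Glue crawler to use Lake Formation permissions (sketch).
# All resource names and ARNs below are placeholders.

import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="my-lf-crawler",                                   # placeholder
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",  # placeholder
    DatabaseName="my_database",                             # placeholder
    Targets={"S3Targets": [{"Path": "s3://my-data-bucket/raw/"}]},
    LakeFormationConfiguration={
        "UseLakeFormationCredentials": True,  # crawl using Lake Formation permissions
        # "AccountId": "444455556666",        # set for cross-account data access
    },
)

glue.start_crawler(Name="my-lf-crawler")
```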


Amazon SageMaker Model Monitor – You can now launch and configure Amazon SageMaker Model Monitor from the SageMaker Model Dashboard using a code-free point-and-click setup experience. SageMaker Model Dashboard gives you unified monitoring across all your models by providing insights into deviations from expected behavior, automated alerts, and troubleshooting to improve model performance. Model Monitor can detect drift in data quality, model quality, bias, and feature attribution and alert you to take remedial actions when such changes occur.

Amazon EKS – Now supports Kubernetes version 1.25. Kubernetes 1.25 introduced several new features and bug fixes, and you can now use Amazon EKS and Amazon EKS Distro to run Kubernetes version 1.25. You can create new 1.25 clusters or upgrade your existing clusters to 1.25 using the Amazon EKS console, the eksctl command line interface, or through an infrastructure-as-code tool. To learn more about this release named “Combiner,” check out this Containers Blog post.
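
Here is a hedged sketch of doing both with boto3; the role ARN, subnet IDs, and cluster names are placeholders, and eksctl or the console will get you to the same place.

```python
# Create a new Kubernetes 1.25 cluster or upgrade an existing control plane
# to 1.25 (sketch). Role ARN, subnets, and names are placeholders.

import boto3

eks = boto3.client("eks")

# Create a new cluster on 1.25.
eks.create_cluster(
    name="demo-cluster",
    version="1.25",
    roleArn="arn:aws:iam::111122223333:role/EKSClusterRole",
    resourcesVpcConfig={"subnetIds": ["subnet-0abc", "subnet-0def"]},
)

# Or upgrade an existing cluster's control plane to 1.25.
eks.update_cluster_version(name="existing-cluster", version="1.25")
```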

Amazon Detective – New self-paced workshop available. You can now learn to use Amazon Detective with a new self-paced workshop in AWS Workshop Studio. AWS Workshop Studio is a collection of self-paced tutorials designed to teach practical skills and techniques to solve business problems. The Amazon Detective workshop is designed to teach you how to use the primary features of Detective through a series of interactive modules that cover topics such as security alert triage, security incident investigation, and threat hunting. Get started with the Amazon Detective Workshop.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here are some additional news items and blog posts that you may find interesting:

🤗❤☁ AWS and Hugging Face collaborate to make generative AI more accessible and cost-efficient – This past week, we announced an expanded collaboration between AWS and Hugging Face to accelerate the training, fine-tuning, and deployment of large language and vision models used to create generative AI applications. Generative AI applications can perform a variety of tasks, including text summarization, answering questions, code generation, image creation, and writing essays and articles. For more details, read this Machine Learning Blog post.

If you are interested in generative AI, I also recommend reading this blog post on how to Fine-tune text-to-image Stable Diffusion models with Amazon SageMaker JumpStart. Stable Diffusion is a deep learning model that allows you to generate realistic, high-quality images and stunning art in just a few seconds. This blog post discusses how to make design choices, including dataset quality, size of training dataset, choice of hyperparameter values, and applicability to multiple datasets.
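
If you want to experiment yourself, here is a hedged sketch using the JumpStart interface in the SageMaker Python SDK. The model ID, role, hyperparameter, and S3 path are assumptions for illustration; the blog post walks through the actual setup, including dataset format and hyperparameter choices.

```python
# Kick off Stable Diffusion fine-tuning via SageMaker JumpStart (sketch).
# The model_id, role, hyperparameter value, and S3 path are assumptions.

from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id="model-txt2img-stabilityai-stable-diffusion-v2-1-base",  # assumed ID
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",     # placeholder
    hyperparameters={"max_steps": "400"},  # illustrative value only
)

# The training channel points to a small set of images of the subject to learn.
estimator.fit({"training": "s3://my-bucket/dog-images/"})
```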

AWS open-source news and updates – My colleague Ricardo writes this weekly open-source newsletter in which he highlights new open-source projects, tools, and demos from the AWS Community. Read edition #146 here.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

#BuildOn Generative AI – Join our weekly live Build On Generative AI Twitch show. Every Monday morning, 9:00 US PT, my colleagues Emily and Darko take a look at aspects of generative AI. They host developers, scientists, startup founders, and AI leaders and discuss how to build generative AI applications on AWS.

In today’s episode, my colleague Chris walked us through an end-to-end ML pipeline from data ingestion to fine-tuning and deployment of generative AI models. You can watch the video here.

AWS Pi Day – Join me on March 14 for the third annual AWS Pi Day live, virtual event hosted on the AWS On Air channel on Twitch as we celebrate the 17th birthday of Amazon S3 and the cloud.

We will discuss the latest innovations across AWS Data services, from storage to analytics and AI/ML. If you are curious about how AI can transform your business, register here and join my session.

AWS Innovate Data and AI/ML edition – AWS Innovate is a free online event to learn the latest from AWS experts and get step-by-step guidance on using AI/ML to drive fast, efficient, and measurable results. Register now for EMEA (March 9) and the Americas (March 14).

You can browse all upcoming AWS-led in-person, virtual events and developer focused events such as Community Days.

That’s all for this week. Check back next Monday for another Week in Review!

— Antje

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!