Tag Archives: artificial intelligence

AWS Week in Review – AWS Glue Crawlers Now Supports Apache Iceberg, Amazon RDS Updates, and More – July 10, 2023

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/aws-week-in-review-aws-glue-crawlers-now-supports-apache-iceberg-amazon-rds-updates-and-more-july-10-2023/

The US celebrated Independence Day last week on July 4 with fireworks and barbecues across the country. But fireworks weren’t the only thing that launched last week. Let’s have a look!

Last Week’s Launches
Here are some launches that got my attention:

AWS Glue – AWS Glue crawlers now support Apache Iceberg tables. Apache Iceberg is an open-source table format for data stored in data lakes. You can now automatically register Apache Iceberg tables in the AWS Glue Data Catalog by running a Glue crawler. You can then query Glue Data Catalog Iceberg tables across various analytics engines and apply AWS Lake Formation fine-grained permissions when querying from Amazon Athena. Check out the AWS Glue Crawler documentation to learn more.
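
If you manage your catalog with scripts, a rough boto3 sketch of registering Iceberg tables with a crawler could look like the following. The crawler name, role ARN, database, and S3 path are placeholders, and the IcebergTargets shape is my reading of the updated CreateCrawler API:

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Create a crawler that scans an S3 prefix for Apache Iceberg table metadata
# and registers the tables it finds in the Glue Data Catalog.
glue.create_crawler(
    Name="iceberg-sales-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",  # placeholder role ARN
    DatabaseName="sales_lake",
    Targets={
        "IcebergTargets": [
            {
                "Paths": ["s3://my-data-lake/warehouse/sales/"],  # placeholder path
                "MaximumTraversalDepth": 10,
            }
        ]
    },
)

glue.start_crawler(Name="iceberg-sales-crawler")
```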

Amazon Relational Database Service (Amazon RDS) for PostgreSQL – PostgreSQL 16 Beta 2 is now available in the Amazon RDS Database Preview Environment. The PostgreSQL community released PostgreSQL 16 Beta 2 on June 29, 2023, which enables logical replication from standbys and includes numerous performance improvements. You can deploy PostgreSQL 16 Beta 2 in the preview environment and start evaluating the pre-release of PostgreSQL 16 on Amazon RDS for PostgreSQL.

In addition, Amazon RDS for PostgreSQL Multi-AZ Deployments with two readable standbys now supports logical replication. With logical replication, you can stream data changes from Amazon RDS for PostgreSQL to other databases for use cases such as data consolidation for analytical applications, change data capture (CDC), replicating select tables rather than the entire database, or for replicating data between different major versions of PostgreSQL. Check out the Amazon RDS User Guide for more details.
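
Under the hood this uses standard PostgreSQL publications and subscriptions. A minimal sketch (hostnames, credentials, and table names are placeholders, and the source instance needs the rds.logical_replication parameter enabled in its parameter group):

```python
import psycopg2

# On the source Amazon RDS for PostgreSQL database: publish the table(s) to replicate.
src = psycopg2.connect(host="source-db.xxxxxxxx.us-east-1.rds.amazonaws.com",
                       dbname="appdb", user="postgres", password="***")
src.autocommit = True
with src.cursor() as cur:
    cur.execute("CREATE PUBLICATION orders_pub FOR TABLE orders;")

# On the target database (for example, an analytics cluster): subscribe to the publication.
dst = psycopg2.connect(host="analytics-db.example.com",
                       dbname="analytics", user="postgres", password="***")
dst.autocommit = True  # CREATE SUBSCRIPTION cannot run inside a transaction block
with dst.cursor() as cur:
    cur.execute(
        "CREATE SUBSCRIPTION orders_sub "
        "CONNECTION 'host=source-db.xxxxxxxx.us-east-1.rds.amazonaws.com "
        "dbname=appdb user=replicator password=***' "
        "PUBLICATION orders_pub;"
    )
```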

Amazon CloudWatch – Amazon CloudWatch now supports Service Quotas in cross-account observability. With this, you can track and visualize resource utilization and limits across various AWS services from multiple AWS accounts within a Region using a central monitoring account. You no longer have to track quotas by logging in to individual accounts; instead, you can create dashboards and alarms for AWS service quota usage across all your source accounts from a central monitoring account. Set up CloudWatch cross-account observability to get started.
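
As a rough sketch of the kind of alarm you could define in the monitoring account, the example below watches Lambda concurrent executions in one source account against its quota, using the AWS/Usage metrics and the SERVICE_QUOTA metric math function. The account ID, dimensions, and threshold are illustrative, and the AccountId field on the metric query is an assumption about the cross-account API shape:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # monitoring account

cloudwatch.put_metric_alarm(
    AlarmName="lambda-concurrency-near-quota",
    EvaluationPeriods=3,
    DatapointsToAlarm=3,
    Threshold=80.0,                        # alarm above 80% of the quota
    ComparisonOperator="GreaterThanThreshold",
    Metrics=[
        {
            "Id": "usage",
            "ReturnData": False,
            "AccountId": "111122223333",   # source account being observed (assumed field)
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Usage",
                    "MetricName": "ResourceCount",
                    "Dimensions": [
                        {"Name": "Service", "Value": "Lambda"},
                        {"Name": "Resource", "Value": "ConcurrentExecutions"},
                        {"Name": "Type", "Value": "Resource"},
                        {"Name": "Class", "Value": "None"},
                    ],
                },
                "Period": 300,
                "Stat": "Maximum",
            },
        },
        {
            "Id": "pct_of_quota",
            "Expression": "(usage / SERVICE_QUOTA(usage)) * 100",
            "Label": "Percent of quota used",
            "ReturnData": True,
        },
    ],
)
```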

Amazon SageMaker – You can now associate a SageMaker Model Card with a specific model version in SageMaker Model Registry. This lets you establish a single source of truth for your registered model versions, with comprehensive, centralized, and standardized documentation across all stages of the model’s journey on SageMaker, facilitating discoverability and promoting governance, compliance, and accountability throughout the model lifecycle. Learn more about SageMaker Model Cards in the developer guide.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here are some additional blog posts and news items that you might find interesting:

Building generative AI applications for your startup – In this AWS Startups Blog post, Hrushikesh explains various approaches to building generative AI applications and reviews their key components. Read the full post for the details.

Components of the generative AI landscape.

How Alexa learned to speak with an Irish accent – If you’re curious how Amazon researchers used voice conversation to generate Irish-accented training data in Alexa’s own voice, check out this Amazon Science Blog post. 

AWS open-source news and updates – My colleague Ricardo writes this weekly open-source newsletter in which he highlights new open-source projects, tools, and demos from the AWS Community.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS Global Summits – Check your calendars and sign up for the AWS Summit close to where you live or work: Hong Kong (July 20), New York City (July 26), Taiwan (August 2-3), São Paulo (August 3), and Mexico City (August 30).

AWS Community Days – Join a community-led conference run by AWS user group leaders in your region: Malaysia (July 22), Philippines (July 29-30), Colombia (August 12), and West Africa (August 19).

AWS re:Invent 2023 (November 27 – December 1) – Join us to hear the latest from AWS, learn from experts, and connect with the global cloud community. Registration is now open.

You can browse all upcoming in-person and virtual events.

That’s all for this week. Check back next Monday for another Week in Review!

— Antje

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

The AI Dividend

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/the-ai-dividend.html

For four decades, Alaskans have opened their mailboxes to find checks waiting for them, their cut of the black gold beneath their feet. This is Alaska’s Permanent Fund, funded by the state’s oil revenues and paid to every Alaskan each year. We’re now in a different sort of resource rush, with companies peddling bits instead of oil: generative AI.

Everyone is talking about these new AI technologies—like ChatGPT—and AI companies are touting their awesome power. But they aren’t talking about how that power comes from all of us. Without all of our writings and photos that AI companies are using to train their models, they would have nothing to sell. Big Tech companies are currently taking the work of the American people, without our knowledge and consent, without licensing it, and are pocketing the proceeds.

You are owed profits for your data that powers today’s AI, and we have a way to make that happen. We call it the AI Dividend.

Our proposal is simple, and harkens back to the Alaskan plan. When Big Tech companies produce output from generative AI that was trained on public data, they would pay a tiny licensing fee, by the word or pixel or relevant unit of data. Those fees would go into the AI Dividend fund. Every few months, the Commerce Department would send out the entirety of the fund, split equally, to every resident nationwide. That’s it.

There’s no reason to complicate it further. Generative AI needs a wide variety of data, which means all of us are valuable—not just those of us who write professionally, or prolifically, or well. Figuring out who contributed to which words the AIs output would be both challenging and invasive, given that even the companies themselves don’t quite know how their models work. Paying the dividend to people in proportion to the words or images they create would just incentivize them to create endless drivel, or worse, use AI to create that drivel. The bottom line for Big Tech is that if their AI model was created using public data, they have to pay into the fund. If you’re an American, you get paid from the fund.

Under this plan, hobbyists and American small businesses would be exempt from fees. Only Big Tech companies—those with substantial revenue—would be required to pay into the fund. And they would pay at the point of generative AI output, such as from ChatGPT, Bing, Bard, or their embedded use in third-party services via Application Programming Interfaces.

Our proposal also includes a compulsory licensing plan. By agreeing to pay into this fund, AI companies will receive a license that allows them to use public data when training their AI. This won’t supersede normal copyright law, of course. If a model starts producing copyrighted material beyond fair use, that’s a separate issue.

Using today’s numbers, here’s what it would look like. The licensing fee could be small, starting at $0.001 per word generated by AI. A similar type of fee would be applied to other categories of generative AI outputs, such as images. That’s not a lot, but it adds up. Since most of Big Tech has started integrating generative AI into products, these fees would mean an annual dividend payment of a couple hundred dollars per person.

The idea of paying you for your data isn’t new, and some companies have tried to do it themselves for users who opted in. And the idea of the public being repaid for use of their resources goes back to well before Alaska’s oil fund. But generative AI is different: It uses data from all of us whether we like it or not, it’s ubiquitous, and it’s potentially immensely valuable. It would cost Big Tech companies a fortune to create a synthetic equivalent to our data from scratch, and synthetic data would almost certainly result in worse output. They can’t create good AI without us.

Our plan would apply to generative AI used in the US. It also only issues a dividend to Americans. Other countries can create their own versions, applying a similar fee to AI used within their borders. Just as an American company collects VAT for services sold in Europe but not in the US, each country can manage its AI policy independently.

Don’t get us wrong; this isn’t an attempt to strangle this nascent technology. Generative AI has interesting, valuable, and possibly transformative uses, and this policy is aligned with that future. Even with the fees of the AI Dividend, generative AI will be cheap and will only get cheaper as technology improves. There are also risks—both everyday and esoteric—posed by AI, and the government may need to develop policies to remedy any harms that arise.

Our plan can’t make sure there are no downsides to the development of AI, but it would ensure that all Americans will share in the upsides—particularly since this new technology isn’t possible without our contribution.

This essay was written with Barath Raghavan, and previously appeared on Politico.com.

Building Generative AI into Marketing Strategies: A Primer

Post Syndicated from nnatri original https://aws.amazon.com/blogs/messaging-and-targeting/building-generative-ai-into-marketing-strategies-a-primer/

Introduction

Artificial Intelligence has undoubtedly shaped many industries and is poised to be one of the most transformative technologies in the 21st century. Among these is the field of marketing where the application of generative AI promises to transform the landscape. This blog post explores how generative AI can revolutionize marketing strategies, offering innovative solutions and opportunities.

According to Harvard Business Review, marketing’s core activities, such as understanding customer needs, matching them to products and services, and persuading people to buy, can be dramatically enhanced by AI. A 2018 McKinsey analysis of more than 400 advanced use cases showed that marketing was the domain where AI would contribute the greatest value. The ability to leverage AI can not only help automate and streamline processes but also deliver personalized, engaging content to customers. It enhances the ability of marketers to target the right audience, predict consumer behavior, and provide personalized customer experiences. AI allows marketers to process and interpret massive amounts of data, converting it into actionable insights and strategies, thereby redefining the way businesses interact with customers.

Generating content is just one part of the equation. AI-generated content, no matter how good, is of little use if it doesn’t reach the intended audience at the right time. Integrating the generated content into an automated marketing pipeline that not only understands the customer profile but also delivers a personalized experience at the right point of interaction is crucial to getting the intended action from the customer.

Amazon Web Services (AWS) provides a robust platform for implementing generative AI in marketing strategies. AWS offers a range of AI and machine learning services that can be leveraged for various marketing use cases, from content creation to customer segmentation and personalized recommendations. Two services that are instrumental in delivering content to customers and can be easily integrated with other generative AI services are Amazon Pinpoint and Amazon Simple Email Service (SES). By integrating generative AI with Amazon Pinpoint and Amazon SES, marketers can automate the creation of personalized messages for their customers, enhancing the effectiveness of their campaigns. This combination allows for a seamless blend of AI-powered content generation and targeted, data-driven customer engagement.

As we delve deeper into this blog post, we’ll explore the mechanics of generative AI, its benefits and how AWS services can facilitate its integration into marketing communications.

What is Generative AI?

Generative AI is a subset of artificial intelligence that leverages machine learning techniques to generate new data instances that resemble your training data. It works by learning the underlying patterns and structures of the input data, and then uses this understanding to generate new, similar data. This is achieved through the use of models like Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformer models.

What do Generative AI buzzwords mean?

In the world of AI, buzzwords are abundant. Terms like “deep learning”, “neural networks”, “machine learning”, “generative AI”, and “large language models” are often used interchangeably, but they each have distinct meanings. Understanding these terms is crucial for appreciating the capabilities and limitations of different AI technologies.

Machine Learning (ML) is a subset of AI that involves the development of algorithms that allow computers to learn from and make decisions or predictions based on data. These algorithms can be ‘trained’ on a dataset and then used to predict or classify new data. Machine learning models can be broadly categorized into supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.

Deep Learning is a subset of machine learning that uses neural networks with many layers (hence “deep”) to model and understand complex patterns. These layers of neurons process different features, and their outputs are combined to produce a final result. Deep learning models can handle large amounts of data and are particularly good at processing images, speech, and text.

Generative AI refers specifically to AI models that can generate new data that mimic the data they were trained on. This is achieved through the use of models like Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). Generative AI can create anything from written content to visual designs, and even music, making it a versatile tool in the hands of marketers.

Large Language Models (LLMs) are a type of generative AI that are trained on a large corpus of text data and can generate human-like text. They predict the probability of a word given the previous words used in the text. They are particularly useful in applications like text completion, translation, summarization, and more. While they are a type of generative AI, they are specifically designed for handling text data.

Simply put, large language models are a subset of generative AI, which is in turn a subset of machine learning, and all of these fall under the umbrella term of artificial intelligence.

What are the problems with generative AI and marketing?

While generative AI holds immense potential for transforming marketing strategies, it’s important to be aware of its limitations and potential pitfalls, especially when it comes to content generation and customer engagement. Here are some common challenges that marketers should be aware of:

Bias in Generative AI – Generative AI models learn from the data they are trained on. If the training data is biased, the AI model will likely reproduce these biases in its output. For example, if a model is trained primarily on data from one demographic, it may not accurately represent other demographics, leading to marketing campaigns that are ineffective or offensive. Imagine you are trying to generate an image for a campaign targeting women: a generative AI model might not generate images of women in jobs like doctor, lawyer, or judge, leaving your campaign biased and unrepresentative.

Insensitivity to Cultural Nuances – Generative AI models may not fully understand cultural nuances or sensitive topics, which can lead to content that is insensitive or even harmful. For instance, a generative AI model used to create social media posts for a global brand may inadvertently generate content that is seen as disrespectful or offensive by certain cultures or communities.

Potential for Inappropriate or Offensive Content – Generative AI models can sometimes generate content that is inappropriate or offensive. This is often because the models do not fully understand the context in which certain words or phrases should be used. It’s important to have safeguards in place to review and approve content before it’s published. A common problem with LLMs is hallucination, where the model presents false information as if it were accurate. A marketing team might mistakenly publish auto-generated promotional content that offers a 20% discount on an item when no such promotion was approved. Without safeguards in place, this could have a disastrous effect and erode customers’ trust.

Intellectual Property and Legal Concerns – Generative AI models can create new content, such as images, music, videos, and text, which raises questions of ownership and potential copyright infringement. Because the field is relatively new, legal discussions are still ongoing about the implications of using generative AI, such as who owns AI-generated content and when it infringes copyright.

Not a Replacement for Human Creativity – Finally, while generative AI can automate certain aspects of marketing campaigns, it cannot replace the creativity or emotional connection that marketers bring to crafting compelling campaigns. The most successful marketing campaigns touch the hearts of customers, and while generative AI is very capable of replicating human content, it still falls short of that “human touch”.

In conclusion, while generative AI offers exciting possibilities for marketing, it’s important to approach its use with a clear understanding of its limitations and potential pitfalls. By doing so, marketers can leverage the benefits of generative AI while mitigating risks.

How can I use generative AI in marketing communications?

Amazon Web Services (AWS) provides a comprehensive suite of services that facilitate the use of generative AI in marketing. These services are designed to handle a variety of tasks, from data processing and storage to machine learning and analytics, making it easier for marketers to implement and benefit from generative AI technologies.

Overview of Relevant AWS Services

AWS offers several services that are particularly relevant for generative AI in marketing:

  • Amazon Bedrock: This service makes foundation models (FMs) accessible via an API. Bedrock offers the ability to access a range of powerful FMs for text and images, including Amazon’s Titan FMs. With Bedrock’s serverless experience, customers can easily find the right model for what they’re trying to get done, get started quickly, privately customize FMs with their own data, and easily integrate and deploy them into their applications using the AWS tools and capabilities they are familiar with.
  • Amazon Titan Models: These are two new large language models (LLMs) that AWS is announcing. The first is a generative LLM for tasks such as summarization, text generation, classification, open-ended Q&A, and information extraction. The second is an embeddings LLM that translates text inputs into numerical representations (known as embeddings) that contain the semantic meaning of the text. In response to the pitfalls mentioned above around Generative AI hallucinations and inaccurate information, AWS is actively working on improving accuracy and ensuring its Titan models produce high-quality responses, said Bratin Saha, an AWS vice president.
  • Amazon SageMaker: This fully managed service enables data scientists and developers to build, train, and deploy machine learning models quickly. SageMaker includes modules that can be used for generative AI, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
  • Amazon Pinpoint: This flexible and scalable outbound and inbound marketing communications service enables businesses to engage with customers across multiple messaging channels. Amazon Pinpoint is designed to scale with your business, allowing you to send messages to a large number of users in a short amount of time. It integrates with AWS’s generative AI services to enable personalized, AI-driven marketing campaigns.
  • Amazon Simple Email Service (SES): This cost-effective, flexible, and scalable email service enables marketers to send transactional emails, marketing messages, and other types of high-quality content to their customers. SES integrates with other AWS services, making it easy to send emails from applications being hosted on services such as Amazon EC2. SES also works seamlessly with Amazon Pinpoint, allowing for the creation of customer engagement communications that drive user activity and engagement.

How to build Generative AI into marketing communications

Dynamic Audience Targeting and Segmentation: Generative AI can help marketers dynamically target and segment their audience. It can analyze customer data and behavior to identify patterns and trends, which can then be used to create more targeted marketing campaigns. Using Amazon SageMaker or the soon-to-be-available Amazon Bedrock and Amazon Titan models, generative AI can suggest labels for customers based on unstructured data. According to McKinsey, generative AI can analyze data and identify consumer behavior patterns to help marketers create appealing content that resonates with their audience.

Personalized Marketing: Generative AI can be used to automate the creation of marketing content. This includes generating text for blogs, social media posts, and emails, as well as creating images and videos. This can save marketers a significant amount of time and effort, allowing them to focus on other aspects of their marketing strategy. Where it really shines is the ability to productionize marketing content creation, reducing the need for marketers to create multiple copies for different customer segments. Previously, marketers would need to write a separate copy for each fine-grained customer segment (for example, attriting customers between the ages of 25 and 34 who love food). Generative AI can automate this process, creating that content dynamically and programmatically and sending it out to the most relevant segments via Amazon Pinpoint or Amazon SES.
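
As a minimal sketch of that last delivery step, assuming a hypothetical generate_copy() helper that wraps whichever model you use for content generation, the personalized copy could be handed to Amazon SES (or, similarly, to Amazon Pinpoint) for sending:

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")

def generate_copy(profile):
    # Placeholder for a generative AI call (for example, a SageMaker endpoint).
    return f"Hi {profile['first_name']}, we picked a few dishes we think you'll love..."

segment = [{"email": "jane@example.com", "first_name": "Jane"}]  # illustrative segment

for customer in segment:
    ses.send_email(
        Source="marketing@example.com",                # must be a verified SES identity
        Destination={"ToAddresses": [customer["email"]]},
        Message={
            "Subject": {"Data": "A menu picked just for you"},
            "Body": {"Text": {"Data": generate_copy(customer)}},
        },
    )
```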

Marketing Automation: Generative AI can automate various aspects of marketing, such as email marketing, social media marketing, and search engine marketing. This includes automating the creation and distribution of marketing content, as well as analyzing the performance of marketing campaigns. Amazon Pinpoint currently automates customer communications using journeys, which are customized, multi-step engagement experiences. Generative AI could create a Pinpoint journey based on customer engagement data, engagement parameters, and a prompt. This enables generative AI to not only personalize the content but also create a personalized omnichannel experience that can extend over a period of time. It then becomes possible for journeys to be created dynamically by generative AI and A/B tested on the fly to achieve an optimal pre-defined key performance indicator (KPI).

A Sample Generative AI Use Case in Marketing Communications

AWS services are designed to work together, making it easy to implement generative AI in your marketing strategies. For instance, you can use Amazon SageMaker to build and train your generative AI models which assist with automating marketing content creation, and Amazon Pinpoint or Amazon SES to deliver the content to your customers.

Companies using AWS can theoretically supplement their existing workloads with generative AI capabilities without the need for migration. The following reference architecture outlines a sample use case and showcases how generative AI can be integrated into your customer journeys built on the AWS Cloud. An e-commerce company can potentially receive many complaint emails a day. Companies spend a lot of money to acquire customers, so it’s important to think about how to turn that negative experience into a positive one.

Generative AI marketing solution architecture

When an email is received via Amazon SES (1), its content can be passed to generative AI models that use GANs to help with sentiment analysis (2). An article published by Amazon Science uses GANs for sentiment analysis in cases where a lack of data is a problem. Alternatively, you can use Amazon Comprehend at this step and run A/B tests between the two models. The limitation of Amazon Comprehend is the limited customization you can apply to the model to fit your business needs.

Once the email’s sentiment is determined, the sentiment event is logged into Pinpoint (3), which then triggers an automatic winback journey (4).
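
A hedged sketch of steps 2 and 3, using Amazon Comprehend as the drop-in sentiment model and Pinpoint’s PutEvents API to record the result against the customer’s endpoint (the endpoint and application IDs, and the event names, are placeholders):

```python
import boto3
from datetime import datetime, timezone

comprehend = boto3.client("comprehend")
pinpoint = boto3.client("pinpoint")

def handle_inbound_email(email_text, endpoint_id, application_id):
    # Step 2: score the sentiment of the complaint email.
    sentiment = comprehend.detect_sentiment(Text=email_text, LanguageCode="en")["Sentiment"]

    # Step 3: log the sentiment as a custom event so a journey with an
    # event-based entry condition (step 4) can pick the customer up.
    pinpoint.put_events(
        ApplicationId=application_id,
        EventsRequest={
            "BatchItem": {
                endpoint_id: {
                    "Endpoint": {},
                    "Events": {
                        "inbound-email": {
                            "EventType": f"email.sentiment.{sentiment.lower()}",
                            "Timestamp": datetime.now(timezone.utc).isoformat(),
                        }
                    },
                }
            }
        },
    )
```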

Generative AI (for example, Hugging Face’s BLOOM text generation models) can again be used here to dynamically create the content without needing to wait for the marketer’s input (5). Whereas marketers would previously need to write a separate copy for each fine-grained customer segment (for example, attriting customers between the ages of 25 and 34 who love food), generative AI provides the opportunity to create that content dynamically, on the fly, from the inputs above.

Once the campaign content has been generated, the model feeds the template back into Amazon Pinpoint (6), which then sends the personalized copy to the customer (7).

Result: Another customer is saved from attrition!

Conclusion

The landscape of generative AI is vast and ever-evolving, offering a plethora of opportunities for marketers to enhance their strategies and deliver more personalized, engaging content. AWS plays a pivotal role in this landscape, providing a comprehensive suite of services that facilitate the implementation of generative AI in marketing. From building and training AI models with Amazon SageMaker to delivering personalized messages with Amazon Pinpoint and Amazon SES, AWS provides the tools and infrastructure needed to harness the power of generative AI.

The potential of generative AI in relation to the marketer is immense. It offers the ability to automate content creation, personalize customer interactions, and derive valuable insights from data, among other benefits. However, it’s important to remember that while generative AI can automate certain aspects of marketing, it is not a replacement for human creativity and intuition. Instead, it should be viewed as a tool that can augment human capabilities and free up time for marketers to focus on strategy and creative direction.

Get started with Generative AI in marketing communications

As we conclude this exploration of generative AI and its applications in marketing, we encourage you to:

  • Brainstorm potential Generative AI use cases for your business. Consider how you can leverage generative AI to enhance your marketing strategies. This could involve automating content creation, personalizing customer interactions, or deriving insights from data.
  • Start leveraging generative AI in your marketing strategies with AWS today. AWS provides a comprehensive suite of services that make it easy to implement generative AI in your marketing strategies. By integrating these services into your workflows, you can enhance personalization, improve customer engagement, and drive better results from your campaigns.
  • Watch out for the next part in the series of integrating Generative AI into Amazon Pinpoint and SES. We will delve deeper into how you can leverage Amazon Pinpoint and SES together with generative AI to enhance your marketing campaigns. Stay tuned!

The journey into the world of generative AI is just beginning. As technology continues to evolve, so too will the opportunities for marketers to leverage AI to enhance their strategies and deliver more personalized, engaging content. We look forward to exploring this exciting frontier with you.

About the Author

Tristan (Tri) Nguyen

Tristan (Tri) Nguyen is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. At work, he specializes in technical implementation of communications services in enterprise systems and architecture/solutions design. In his spare time, he enjoys chess, rock climbing, hiking and triathlon.

Class-Action Lawsuit for Scraping Data without Permission

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/class-action-lawsuit-for-scraping-data-without-permission.html

I have mixed feelings about this class-action lawsuit against OpenAI and Microsoft, claiming that it “scraped 300 billion words from the internet” without either registering as a data broker or obtaining consent. On the one hand, I want this to be a protected fair use of public data. On the other hand, I want us all to be compensated for our uniquely human ability to generate language.

There’s an interesting wrinkle on this. A recent paper showed that using AI generated text to train another AI invariably “causes irreversible defects.” From a summary:

The tails of the original content distribution disappear. Within a few generations, text becomes garbage, as Gaussian distributions converge and may even become delta functions. We call this effect model collapse.

Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah. This will make it harder to train newer models by scraping the web, giving an advantage to firms which already did that, or which control access to human interfaces at scale. Indeed, we already see AI startups hammering the Internet Archive for training data.

This is the same idea that Ted Chiang wrote about: that ChatGPT is a “blurry JPEG of all the text on the Web.” But the paper includes the math that proves the claim.

What this means is that text from before last year—text that is known human-generated—will become increasingly valuable.

Generative AI with Large Language Models — New Hands-on Course by DeepLearning.AI and AWS

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/generative-ai-with-large-language-models-new-hands-on-course-by-deeplearning-ai-and-aws/

Generative AI has taken the world by storm, and we’re starting to see the next wave of widespread adoption of AI with the potential for every customer experience and application to be reinvented with generative AI. Generative AI lets you create new content and ideas including conversations, stories, images, videos, and music. Generative AI is powered by very large machine learning models that are pre-trained on vast amounts of data, commonly referred to as foundation models (FMs).

A subset of FMs called large language models (LLMs) are trained on trillions of words across many natural-language tasks. These LLMs can understand, learn, and generate text that’s nearly indistinguishable from text produced by humans. And not only that, LLMs can also engage in interactive conversations, answer questions, summarize dialogs and documents, and provide recommendations. They can power applications across many tasks and industries including creative writing for marketing, summarizing documents for legal, market research for financial, simulating clinical trials for healthcare, and code writing for software development.

Companies are moving rapidly to integrate generative AI into their products and services. This increases the demand for data scientists and engineers who understand generative AI and how to apply LLMs to solve business use cases.

This is why I’m excited to announce that DeepLearning.AI and AWS are jointly launching a new hands-on course, Generative AI with large language models, on Coursera’s education platform that prepares data scientists and engineers to become experts in selecting, training, fine-tuning, and deploying LLMs for real-world applications.

DeepLearning.AI was founded in 2017 by machine learning and education pioneer Andrew Ng with the mission to grow and connect the global AI community by delivering world-class AI education.

Generative AI with large language models

DeepLearning.AI teamed up with generative AI specialists from AWS including Chris Fregly, Shelbee Eigenbrode, Mike Chambers, and me to develop and deliver this course for data scientists and engineers who want to learn how to build generative AI applications with LLMs. We developed the content for this course under the guidance of Andrew Ng and with input from various industry experts and applied scientists at Amazon, AWS, and Hugging Face.

Course Highlights
This is the first comprehensive Coursera course focused on LLMs that details the typical generative AI project lifecycle, including scoping the problem, choosing an LLM, adapting the LLM to your domain, optimizing the model for deployment, and integrating into business applications. The course not only focuses on the practical aspects of generative AI but also highlights the science behind LLMs and why they’re effective.

The on-demand course is broken down into three weeks of content with approximately 16 hours of videos, quizzes, labs, and extra readings. The hands-on labs, hosted by AWS Partner Vocareum, let you apply the techniques directly in an AWS environment provided with the course and include all resources needed to work with the LLMs and explore their effectiveness.

In just three weeks, the course prepares you to use generative AI for business and real-world applications. Let’s have a quick look at each week’s content.

Week 1 – Generative AI use cases, project lifecycle, and model pre-training
In week 1, you will examine the transformer architecture that powers many LLMs, see how these models are trained, and consider the compute resources required to develop them. You will also explore how to guide model output at inference time using prompt engineering and by specifying generative configuration settings.

In the first hands-on lab, you’ll construct and compare different prompts for a given generative task. In this case, you’ll summarize conversations between multiple people. For example, imagine summarizing support conversations between you and your customers. You’ll explore prompt engineering techniques, try different generative configuration parameters, and experiment with various sampling strategies to gain intuition on how to improve the generated model responses.
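
To give a flavor of what that looks like outside the lab environment, here is a small sketch using an open model from the Hugging Face transformers library (FLAN-T5 is chosen purely for illustration; the course environment may use a different model) that compares two prompt styles and exposes a few generative configuration settings:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"  # illustrative open model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dialogue = ("Customer: My package never arrived.\n"
            "Agent: I'm sorry about that. I'll reship it today at no charge.")

# Two prompt variants for the same summarization task.
prompts = [
    f"{dialogue}\n\nSummary:",                                                   # zero-shot
    f"Summarize the following support conversation.\n\n{dialogue}\n\nSummary:",  # instruction-style
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    # Generative configuration settings: sampling, temperature, top_p, output length.
    output = model.generate(**inputs, max_new_tokens=50,
                            do_sample=True, temperature=0.7, top_p=0.9)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```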

Week 2 – Fine-tuning, parameter-efficient fine-tuning (PEFT), and model evaluation
In week 2, you will explore options for adapting pre-trained models to specific tasks and datasets through a process called fine-tuning. A variant of fine-tuning, called parameter efficient fine-tuning (PEFT), lets you fine-tune very large models using much smaller resources—often a single GPU. You will also learn about the metrics used to evaluate and compare the performance of LLMs.

In the second lab, you’ll get hands-on with parameter-efficient fine-tuning (PEFT) and compare the results to prompt engineering from the first lab. This side-by-side comparison will help you gain intuition into the qualitative and quantitative impact of different techniques for adapting an LLM to your domain specific datasets and use cases.
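
For orientation, a LoRA-based PEFT setup with the Hugging Face peft library might look roughly like the sketch below; the base model and hyperparameters are illustrative, not the course’s exact configuration:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")  # illustrative

# LoRA trains small low-rank adapter matrices instead of the full model weights.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                        # rank of the adapter matrices
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "v"],   # attention projections in T5-style models
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # typically well under 1% of the parameters
```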

Week 3 – Fine-tuning with reinforcement learning from human feedback (RLHF), retrieval-augmented generation (RAG), and LangChain
In week 3, you will make the LLM responses more humanlike and align them with human preferences using a technique called reinforcement learning from human feedback (RLHF). RLHF is key to improving the model’s honesty, harmlessness, and helpfulness. You will also explore techniques such as retrieval-augmented generation (RAG) and libraries such as LangChain that allow the LLM to integrate with custom data sources and APIs to improve the model’s response further.

In the final lab, you’ll get hands-on with RLHF. You’ll fine-tune the LLM using a reward model and a reinforcement-learning algorithm called proximal policy optimization (PPO) to increase the harmlessness of your model responses. Finally, you will evaluate the model’s harmlessness before and after the RLHF process to gain intuition into the impact of RLHF on aligning an LLM with human values and preferences.

Enroll Today
Generative AI with large language models is an on-demand, three-week course for data scientists and engineers who want to learn how to build generative AI applications with LLMs.

Enroll for generative AI with large language models today.

— Antje

AI as Sensemaking for Public Comments

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/06/ai-as-sensemaking-for-public-comments.html

It’s become fashionable to think of artificial intelligence as an inherently dehumanizing technology, a ruthless force of automation that has unleashed legions of virtual skilled laborers in faceless form. But what if AI turns out to be the one tool able to identify what makes your ideas special, recognizing your unique perspective and potential on the issues where it matters most?

You’d be forgiven if you’re distraught about society’s ability to grapple with this new technology. So far, there’s no lack of prognostications about the democratic doom that AI may wreak on the US system of government. There are legitimate reasons to be concerned that AI could spread misinformation, break public comment processes on regulations, inundate legislators with artificial constituent outreach, help to automate corporate lobbying, or even generate laws in a way tailored to benefit narrow interests.

But there are reasons to feel more sanguine as well. Many groups have started demonstrating the potential beneficial uses of AI for governance. A key constructive-use case for AI in democratic processes is to serve as discussion moderator and consensus builder.

To help democracy scale better in the face of growing, increasingly interconnected populations—as well as the wide availability of AI language tools that can generate reams of text at the click of a button—the US will need to leverage AI’s capability to rapidly digest, interpret and summarize this content.

There are two different ways to approach the use of generative AI to improve civic participation and governance. Each is likely to lead to a drastically different experience for public policy advocates and other people trying to have their voices heard in a future system where AI chatbots are both the dominant readers and writers of public comment.

For example, consider individual letters to a representative, or comments as part of a regulatory rulemaking process. In both cases, we the people are telling the government what we think and want.

For more than half a century, agencies have been using human power to read through all the comments received and to generate summaries of, and responses to, their major themes. To be sure, digital technology has helped.

In 2021, the Council of Federal Chief Data Officers recommended modernizing the comment review process by implementing natural language processing tools for removing duplicates and clustering similar comments in processes governmentwide. These tools are simplistic by the standards of 2023 AI. They work by assessing the semantic similarity of comments based on metrics like word frequency (how often did you say “personhood”?), clustering similar comments, and giving reviewers a sense of which topics they relate to.
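
In code, that 2021-era approach amounts to something like the following sketch (scikit-learn, with a handful of made-up comments standing in for a real docket):

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "Please protect the wetlands near my family's farm.",
    "Protect the wetlands near farms like mine.",
    "The proposed rule will raise costs for small manufacturers.",
    "I support stronger reporting requirements for lenders.",
    "Reporting requirements should be stronger for lenders.",
    "This rule ignores the burden on rural hospitals.",
]

# Word-frequency (TF-IDF) representation of each comment.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)

# Near-duplicates: pairs whose cosine similarity is close to 1.
similarity = cosine_similarity(vectors)

# Cluster the remaining comments into broad topics for reviewers.
topics = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
print(list(zip(topics, comments)))
```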

Think of this approach as collapsing public opinion. These tools take a big, hairy mass of comments from thousands of people and condense them into a tidy set of essential reading that generally suffices to represent the broad themes of community feedback. This is far easier for a small agency staff or legislative office to handle than it would be for staffers to actually read through that many individual perspectives.

But what’s lost in this collapsing is individuality, personality, and relationships. The reviewer of the condensed comments may miss the personal circumstances that led so many commenters to write in with a common point of view, and may overlook the arguments and anecdotes that might be the most persuasive content of the testimony.

Most importantly, the reviewers may miss out on the opportunity to recognize committed and knowledgeable advocates, whether interest groups or individuals, who could have long-term, productive relationships with the agency.

These drawbacks have real ramifications for the potential efficacy of those thousands of individual messages, undermining what all those people were doing it for. Still, practicality tips the balance toward some kind of summarization approach. A passionate letter of advocacy doesn’t hold any value if regulators or legislators simply don’t have time to read it.

There is another approach. In addition to collapsing testimony through summarization, government staff can use modern AI techniques to explode it. They can automatically recover and recognize a distinctive argument from one piece of testimony that does not exist in the thousands of other testimonies received. They can discover the kinds of constituent stories and experiences that legislators love to repeat at hearings, town halls and campaign events. This approach can sustain the potential impact of individual public comment to shape legislation even as the volumes of testimony may rise exponentially.

In computing, there is a rich history of that type of automation task in what is called outlier detection. Traditional methods generally involve finding a simple model that explains most of the data in question, like a set of topics that well describe the vast majority of submitted comments. But then they go a step further by isolating those data points that fall outside the mold—comments that don’t use arguments that fit into the neat little clusters.
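
A simple version of that “step further”, continuing in the spirit of the clustering sketch above, is to score each comment by its distance to the nearest topic centroid and surface the ones that fit no mold (the comments here are again invented):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

comments = [
    "Please protect the wetlands near my family's farm.",
    "Protect the wetlands near farms like mine.",
    "I support stronger reporting requirements for lenders.",
    "Reporting requirements should be stronger for lenders.",
    "My grandmother's pharmacy closed because of a rule just like this one.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# Distance from each comment to its nearest cluster centroid; large distances
# flag comments whose arguments don't fit any of the common clusters.
distance_to_nearest = kmeans.transform(vectors).min(axis=1)
most_distinctive = np.argsort(distance_to_nearest)[::-1][:2]
for i in most_distinctive:
    print(comments[i])
```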

State-of-the-art AI language models aren’t necessary for identifying outliers in text document data sets, but using them could bring a greater degree of sophistication and flexibility to this procedure. AI language models can be tasked to identify novel perspectives within a large body of text through prompting alone. You simply need to tell the AI to find them.

In the absence of that ability to extract distinctive comments, lawmakers and regulators have no choice but to prioritize based on other factors. If there is nothing better, “who donated the most to our campaign” or “which company employs the most of my former staffers” become reasonable metrics for prioritizing public comments. AI can help elected representatives do much better.

If Americans want AI to help revitalize the country’s ailing democracy, they need to think about how to align the incentives of elected leaders with those of individuals. Right now, as much as 90% of constituent communications are mass emails organized by advocacy groups, and they go largely ignored by staffers. People are channeling their passions into vast digital warehouses where algorithms box up their expressions so they don’t have to be read. As a result, the incentive for citizens and advocacy groups is to fill that box up to the brim, so someone will notice it’s overflowing.

A talented, knowledgeable, engaged citizen should be able to articulate their ideas and share their personal experiences and distinctive points of view in a way that is both included with everyone else’s comments, where it contributes to summarization, and recognized individually among the other comments. An effective comment summarization process would extricate those unique points of view from the pile and put them into lawmakers’ hands.

This essay was written with Nathan Sanders, and previously appeared in the Conversation.

Amazon OpenSearch Service’s vector database capabilities explained

Post Syndicated from Jon Handler original https://aws.amazon.com/blogs/big-data/amazon-opensearch-services-vector-database-capabilities-explained/

OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, security monitoring, and observability applications, licensed under the Apache 2.0 license. It comprises OpenSearch, a search engine that delivers low-latency search and aggregations; OpenSearch Dashboards, a visualization and dashboarding tool; and a suite of plugins that provide advanced capabilities like alerting, fine-grained access control, observability, security monitoring, and vector storage and processing. Amazon OpenSearch Service is a fully managed service that makes it simple to deploy, scale, and operate OpenSearch in the AWS Cloud.

As an end-user, when you use OpenSearch’s search capabilities, you generally have a goal in mind—something you want to accomplish. Along the way, you use OpenSearch to gather information in support of achieving that goal (or maybe the information is the original goal). We’ve all become used to the “search box” interface, where you type some words, and the search engine brings back results based on word-to-word matching. Let’s say you want to buy a couch in order to spend cozy evenings with your family around the fire. You go to Amazon.com, and you type “a cozy place to sit by the fire.” Unfortunately, if you run that search on Amazon.com, you get items like fire pits, heating fans, and home decorations—not what you intended. The problem is that couch manufacturers probably didn’t use the words “cozy,” “place,” “sit,” and “fire” in their product titles or descriptions.

In recent years, machine learning (ML) techniques have become increasingly popular to enhance search. Among them are the use of embedding models, a type of model that can encode a large body of data into an n-dimensional space where each entity is encoded into a vector, a data point in that space, and organized such that similar entities are closer together. An embedding model, for instance, could encode the semantics of a corpus. By searching for the vectors nearest to an encoded document — k-nearest neighbor (k-NN) search — you can find the most semantically similar documents. Sophisticated embedding models can support multiple modalities, for instance, encoding the image and text of a product catalog and enabling similarity matching on both modalities.

A vector database provides efficient vector similarity search by providing specialized indexes like k-NN indexes. It also provides other database functionality like managing vector data alongside other data types, workload management, access control, and more. OpenSearch’s k-NN plugin provides core vector database functionality for OpenSearch, so when your customer searches for “a cozy place to sit by the fire” in your catalog, you can encode that prompt and use OpenSearch to perform a nearest neighbor query to surface that 8-foot, blue couch with designer-arranged photographs in front of fireplaces.
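
Concretely, a k-NN enabled index and query look roughly like the sketch below, using the opensearch-py client; the domain endpoint, field names, and 384-dimensional vectors are placeholders, and authentication is omitted:

```python
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
                    use_ssl=True)  # hypothetical endpoint; add auth as appropriate

# An index with vector search enabled and one 384-dimensional vector field.
client.indices.create(
    index="products",
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                "title": {"type": "text"},
                "title_vector": {"type": "knn_vector", "dimension": 384},
            }
        },
    },
)

# query_vector would be the embedding of "a cozy place to sit by the fire"
# produced by the same model that embedded the product titles.
query_vector = [0.1] * 384
results = client.search(
    index="products",
    body={"size": 5, "query": {"knn": {"title_vector": {"vector": query_vector, "k": 5}}}},
)
```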

Using OpenSearch Service as a vector database

With OpenSearch Service’s vector database capabilities, you can implement semantic search, Retrieval Augmented Generation (RAG) with LLMs, recommendation engines, and rich media search.

Semantic search

With semantic search, you improve the relevance of retrieved results using language-based embeddings on search documents. You enable your search customers to use natural language queries, like “a cozy place to sit by the fire” to find their 8-foot-long blue couch. For more information, refer to Building a semantic search engine in OpenSearch to learn how semantic search can deliver a 15% relevance improvement, as measured by normalized discounted cumulative gain (nDCG) metrics compared with keyword search. For a concrete example, our Improve search relevance with ML in Amazon OpenSearch Service workshop explores the difference between keyword and semantic search, based on a Bidirectional Encoder Representations from Transformers (BERT) model, hosted by Amazon SageMaker to generate vectors and store them in OpenSearch. The workshop uses product question answers as an example to show how keyword search using the keywords/phrases of the query leads to some irrelevant results. Semantic search is able to retrieve more relevant documents by matching the context and semantics of the query. The following diagram shows an example architecture for a semantic search application with OpenSearch Service as the vector database.

Architecture diagram showing how to use Amazon OpenSearch Service to perform semantic search to improve relevance

Retrieval Augmented Generation with LLMs

RAG is a method for building trustworthy generative AI chatbots using generative LLMs like OpenAI ChatGPT or Amazon Titan Text. With the rise of generative LLMs, application developers are looking for ways to take advantage of this innovative technology. One popular use case involves delivering conversational experiences through intelligent agents. Perhaps you’re a software provider with knowledge bases for product information, customer self-service, or industry domain knowledge like tax reporting rules or medical information about diseases and treatments. A conversational search experience provides an intuitive interface for users to sift through information through dialog and Q&A. Generative LLMs on their own are prone to hallucinations—a situation where the model generates a believable but factually incorrect response. RAG solves this problem by complementing generative LLMs with an external knowledge base that is typically built using a vector database hydrated with vector-encoded knowledge articles.

As illustrated in the following diagram, the query workflow starts with a question that is encoded and used to retrieve relevant knowledge articles from the vector database. Those results are sent to the generative LLM whose job is to augment those results, typically by summarizing the results as a conversational response. By complementing the generative model with a knowledge base, RAG grounds the model on facts to minimize hallucinations. You can learn more about building a RAG solution in the Retrieval Augmented Generation module of our semantic search workshop.
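
Stripped to its essentials, the retrieval half of that workflow can be sketched as follows; embed and generate stand in for whatever embedding model and generative LLM you choose, and the index and field names (knowledge-base, article_vector, article_text) are assumptions:

```python
def answer_with_rag(question, client, embed, generate, index="knowledge-base"):
    """Minimal RAG loop: retrieve relevant articles, then ask the LLM to answer from them."""
    query_vector = embed(question)

    hits = client.search(
        index=index,
        body={"size": 3,
              "query": {"knn": {"article_vector": {"vector": query_vector, "k": 3}}}},
    )["hits"]["hits"]

    context = "\n\n".join(hit["_source"]["article_text"] for hit in hits)
    prompt = ("Answer the question using only the context below. "
              "If the context is not sufficient, say so.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    return generate(prompt)
```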

Architecture diagram showing how to use Amazon OpenSearch Service to perform retrieval-augmented generation

Recommendation engine

Recommendations are a common component in the search experience, especially for ecommerce applications. Adding a user experience feature like “more like this” or “customers who bought this also bought that” can drive additional revenue through getting customers what they want. Search architects employ many techniques and technologies to build recommendations, including Deep Neural Network (DNN) based recommendation algorithms such as the two-tower neural net model, YoutubeDNN. A trained embedding model encodes products, for example, into an embedding space where products that are frequently bought together are considered more similar, and therefore are represented as data points that are closer together in the embedding space. Another possibility is that product embeddings are based on co-rating similarity instead of purchase activity. You can employ this affinity data through calculating the vector similarity between a particular user’s embedding and vectors in the database to return recommended items. The following diagram shows an example architecture of building a recommendation engine with OpenSearch as a vector store.

Architecture diagram showing how to use Amazon OpenSearch Service as a recommendation engine

Media search

Media search enables users to query the search engine with rich media like images, audio, and video. Its implementation is similar to semantic search—you create vector embeddings for your search documents and then query OpenSearch Service with a vector. The difference is that you use a computer vision deep neural network (for example, a Convolutional Neural Network (CNN) such as ResNet) to convert images into vectors. The following diagram shows an example architecture of building an image search with OpenSearch as the vector store.

Architecture diagram showing how to use Amazon OpenSearch Service to search rich media like images, videos, and audio files

Understanding the technology

OpenSearch uses approximate nearest neighbor (ANN) algorithms from the NMSLIB, FAISS, and Lucene libraries to power k-NN search. These search methods employ ANN to improve search latency for large datasets. Of the three search methods the k-NN plugin provides, approximate k-NN offers the best search scalability for large datasets. The engine details are as follows:

  • Non-Metric Space Library (NMSLIB) – NMSLIB implements the HNSW ANN algorithm
  • Facebook AI Similarity Search (FAISS) – FAISS implements both HNSW and IVF ANN algorithms
  • Lucene – Lucene implements the HNSW algorithm

Each of the three engines used for approximate k-NN search has its own attributes that make one more sensible to use than the others in a given situation. You can follow the general information in this section to help determine which engine will best meet your requirements.

In general, NMSLIB and FAISS should be selected for large-scale use cases. Lucene is a good option for smaller deployments, but offers benefits like smart filtering where the optimal filtering strategy—pre-filtering, post-filtering, or exact k-NN—is automatically applied depending on the situation. The following table summarizes the differences between each option.

|  | NMSLIB-HNSW | FAISS-HNSW | FAISS-IVF | Lucene-HNSW |
| --- | --- | --- | --- | --- |
| Max Dimension | 16,000 | 16,000 | 16,000 | 1,024 |
| Filter | Post filter | Post filter | Post filter | Filter while search |
| Training Required | No | No | Yes | No |
| Similarity Metrics | l2, innerproduct, cosinesimil, l1, linf | l2, innerproduct | l2, innerproduct | l2, cosinesimil |
| Vector Volume | Tens of billions | Tens of billions | Tens of billions | < Ten million |
| Indexing Latency | Low | Low | Lowest | Low |
| Query Latency & Quality | Low latency & high quality | Low latency & high quality | Low latency & low quality | High latency & high quality |
| Vector Compression | Flat | Flat, Product Quantization | Flat, Product Quantization | Flat |
| Memory Consumption | High | High, Low with PQ | Medium, Low with PQ | High |

Approximate and exact nearest-neighbor search

The OpenSearch Service k-NN plugin supports three different methods for obtaining the k-nearest neighbors from an index of vectors: approximate k-NN, score script (exact k-NN), and painless extensions (exact k-NN).

Approximate k-NN

The first method takes an approximate nearest neighbor approach—it uses one of several algorithms to return the approximate k-nearest neighbors to a query vector. Usually, these algorithms sacrifice indexing speed and search accuracy in return for performance benefits such as lower latency, smaller memory footprints, and more scalable search. Approximate k-NN is the best choice for searches over large indexes (that is, hundreds of thousands of vectors or more) that require low latency. You should not use approximate k-NN if you want to apply a filter on the index before the k-NN search, which greatly reduces the number of vectors to be searched. In this case, you should use either the score script method or painless extensions.

Score script

The second method extends the OpenSearch Service score script functionality to run a brute force, exact k-NN search over knn_vector fields or fields that can represent binary objects. With this approach, you can run k-NN search on a subset of vectors in your index (sometimes referred to as a pre-filter search). This approach is preferred for searches over smaller bodies of documents or when a pre-filter is needed. Using this approach on large indexes may lead to high latencies.
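
A hedged example of such a pre-filtered, exact search, reusing the products index from the earlier sketch (the in_stock field is illustrative): the filter narrows the candidate set, and the knn_score script ranks the survivors by exact distance.

```python
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
                    use_ssl=True)  # hypothetical endpoint; add auth as appropriate

query_vector = [0.1] * 384  # embedding of the search phrase

results = client.search(
    index="products",
    body={
        "size": 5,
        "query": {
            "script_score": {
                # Pre-filter: only score documents that pass this query.
                "query": {"bool": {"filter": {"term": {"in_stock": True}}}},
                "script": {
                    "source": "knn_score",
                    "lang": "knn",
                    "params": {
                        "field": "title_vector",
                        "query_value": query_vector,
                        "space_type": "l2",
                    },
                },
            }
        },
    },
)
```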

Painless extensions

The third method adds the distance functions as painless extensions that you can use in more complex combinations. Similar to the k-NN score script, you can use this method to perform a brute force, exact k-NN search across an index, which also supports pre-filtering. This approach has slightly slower query performance compared to the k-NN score script. If your use case requires more customization over the final score, you should use this approach over score script k-NN.

Vector search algorithms

The simplest way to find similar vectors is to use k-nearest neighbors (k-NN) algorithms, which compute the distance between a query vector and the other vectors in the vector database. As we mentioned earlier, the score script k-NN and painless extensions search methods use exact k-NN algorithms under the hood. However, for extremely large datasets with high dimensionality, this creates a scaling problem that reduces the efficiency of the search. Approximate nearest neighbor (ANN) search methods can overcome this by employing tools that restructure indexes more efficiently and reduce the dimensionality of searchable vectors. There are several families of ANN search algorithms, for example locality-sensitive hashing, tree-based, cluster-based, and graph-based approaches. OpenSearch implements two ANN algorithms: Hierarchical Navigable Small Worlds (HNSW) and Inverted File System (IVF). For a more detailed explanation of how the HNSW and IVF algorithms work in OpenSearch, see the blog post "Choose the k-NN algorithm for your billion-scale use case with OpenSearch".

Hierarchical Navigable Small Worlds

The HNSW algorithm is one of the most popular algorithms out there for ANN search. The core idea of the algorithm is to build a graph with edges connecting index vectors that are close to each other. Then, on search, this graph is partially traversed to find the approximate nearest neighbors to the query vector. To steer the traversal towards the query’s nearest neighbors, the algorithm always visits the closest candidate to the query vector next.

Inverted File

The IVF algorithm separates your index vectors into a set of buckets, then, to reduce your search time, only searches through a subset of these buckets. However, if the algorithm just randomly split up your vectors into different buckets, and only searched a subset of them, it would yield a poor approximation. The IVF algorithm uses a more elegant approach. First, before indexing begins, it assigns each bucket a representative vector. When a vector is indexed, it gets added to the bucket that has the closest representative vector. This way, vectors that are closer to each other are placed roughly in the same or nearby buckets.
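The bucketing idea is easy to illustrate in a few lines of NumPy. The sketch below is a conceptual toy, not the FAISS implementation: it picks representative vectors at random (real IVF learns them with k-means during the required training step), assigns each indexed vector to its nearest representative's bucket, and answers queries by scanning only the few closest buckets.

```python
# Conceptual sketch of the IVF idea described above (not the FAISS code):
# vectors are bucketed by their nearest representative vector, and a query
# searches only the nprobe buckets whose representatives are closest to it.
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.normal(size=(10_000, 64))                            # indexed vectors
# Pick representatives at random here; real IVF learns them with k-means.
centroids = vectors[rng.choice(len(vectors), 16, replace=False)]

# "Indexing": assign every vector to the bucket with the closest representative.
assignments = np.argmin(np.linalg.norm(vectors[:, None] - centroids[None], axis=2), axis=1)
buckets = {b: np.where(assignments == b)[0] for b in range(len(centroids))}

def ivf_search(query, k=5, nprobe=3):
    # Search only the nprobe buckets whose representatives are nearest the query.
    nearest_buckets = np.argsort(np.linalg.norm(centroids - query, axis=1))[:nprobe]
    candidates = np.concatenate([buckets[b] for b in nearest_buckets])
    dists = np.linalg.norm(vectors[candidates] - query, axis=1)
    return candidates[np.argsort(dists)[:k]]

print(ivf_search(rng.normal(size=64)))
```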

Vector similarity metrics

All search engines use a similarity metric to rank and sort results and bring the most relevant results to the top. When you use a plain text query, the similarity is computed with a term-based metric such as TF-IDF, which weighs the importance of the terms in the query and generates a score based on the number of textual matches. When your query includes a vector, the similarity metrics are spatial in nature, taking advantage of proximity in the vector space. OpenSearch supports several similarity or distance measures (a short numeric sketch follows the list):

  • Euclidean distance – The straight-line distance between points.
  • L1 (Manhattan) distance – The sum of the differences of all of the vector components. L1 distance measures how many orthogonal city blocks you need to traverse from point A to point B.
  • L-infinity (chessboard) distance – The number of moves a King would make on an n-dimensional chessboard. It differs from Euclidean distance on the diagonals—a diagonal step on a 2-dimensional chessboard is 1.41 Euclidean units and 2 Manhattan units away, but only 1 L-infinity unit away, because a King covers it in a single move.
  • Inner product – The product of the magnitudes of two vectors and the cosine of the angle between them. Usually used for natural language processing (NLP) vector similarity.
  • Cosine similarity – The cosine of the angle between two vectors in a vector space.
  • Hamming distance – For binary-coded vectors, the number of bits that differ between the two vectors.
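The numeric sketch referenced above shows how each measure can be computed directly with NumPy for a pair of small example vectors (the vectors themselves are arbitrary).

```python
# A small numeric sketch of the distance/similarity measures listed above,
# computed with NumPy for two arbitrary example vectors.
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 0.0, 4.0])

euclidean = np.linalg.norm(a - b)              # straight-line (L2) distance
manhattan = np.sum(np.abs(a - b))              # L1 / city-block distance
chebyshev = np.max(np.abs(a - b))              # L-infinity / chessboard distance
inner_product = np.dot(a, b)                   # |a| * |b| * cos(theta)
cosine_similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

bits_a = np.array([1, 0, 1, 1], dtype=np.uint8)
bits_b = np.array([1, 1, 0, 1], dtype=np.uint8)
hamming = np.count_nonzero(bits_a != bits_b)   # number of differing bits

print(euclidean, manhattan, chebyshev, inner_product, cosine_similarity, hamming)
```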

Advantage of OpenSearch as a vector database

When you use OpenSearch Service as a vector database, you can take advantage of the service’s features like usability, scalability, availability, interoperability, and security. More importantly, you can use OpenSearch’s search features to enhance the search experience. For example, you can use Learning to Rank in OpenSearch to integrate user clickthrough behavior data into your search application and improve search relevance. You can also combine OpenSearch’s text search and vector search capabilities to search documents by both keyword and semantic similarity, and use other fields in the index to filter documents and improve relevance. For advanced users, a hybrid scoring model can combine OpenSearch’s text-based relevance score, computed with the Okapi BM25 function, with its vector search score to improve the ranking of your search results.

Scale and limits

OpenSearch as a vector database supports billions of vector records. Keep in mind the following considerations about the number of vectors and dimensions when sizing your cluster.

Number of vectors

OpenSearch VectorDB takes advantage of the sharding capabilities of OpenSearch and can scale to billions of vectors at single-digit millisecond latencies by sharding vectors and scaling horizontally as you add more nodes. The number of vectors that can fit on a single machine is a function of the off-heap memory available on the machine. The number of nodes required depends on the amount of memory that can be used for the algorithm per node and the total amount of memory required by the algorithm. The more nodes, the more memory and the better the performance. The amount of memory available per node is computed as memory_available = (node_memory - jvm_size) * circuit_breaker_limit, with the following parameters:

  • node_memory – The total memory of the instance.
  • jvm_size – The OpenSearch JVM heap size. This is set to half of the instance’s RAM, capped at approximately 32 GB.
  • circuit_breaker_limit – The native memory usage threshold for the circuit breaker. This is set to 0.5.

Total cluster memory estimation depends on the total number of vector records and the algorithm used. HNSW and IVF have different memory requirements. You can refer to Memory Estimation for more details.
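As a back-of-the-envelope illustration of how these formulas combine, the sketch below estimates the native memory needed for one billion 768-dimension vectors under HNSW and the number of 64 GiB data nodes that would imply. The node size, the HNSW parameter m, and the per-vector estimate of roughly 1.1 * (4 * dimension + 8 * m) bytes are assumptions based on commonly cited memory estimation guidance; check the current OpenSearch documentation before sizing a real cluster.

```python
# Back-of-the-envelope cluster sizing sketch. The node size and the HNSW
# per-vector estimate (~1.1 * (4 * dimension + 8 * m) bytes) are assumptions
# for illustration only.
GIB = 1024**3

node_memory = 64 * GIB                          # total instance RAM (assumed)
jvm_size = min(node_memory // 2, 32 * GIB)      # half of RAM, capped near 32 GB
circuit_breaker_limit = 0.5

memory_available_per_node = (node_memory - jvm_size) * circuit_breaker_limit

num_vectors = 1_000_000_000
dimension = 768
m = 16                                          # HNSW graph degree parameter (assumed)

hnsw_memory = 1.1 * (4 * dimension + 8 * m) * num_vectors   # bytes, approximate

nodes_needed = hnsw_memory / memory_available_per_node
print(f"~{hnsw_memory / GIB:.0f} GiB of native memory, ~{nodes_needed:.0f} data nodes")
```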

Number of dimensions

OpenSearch’s current dimension limit for the vector field knn_vector is 16,000 dimensions. Each dimension is represented as a 32-bit float. The more dimensions, the more memory you’ll need to index and search. The number of dimensions is usually determined by the embedding models that translate the entity to a vector. There are a lot of options to choose from when building your knn_vector field. To determine the correct methods and parameters to choose, refer to Choosing the right method.

Customer stories

Amazon Music

Amazon Music is always innovating to provide customers with unique and personalized experiences. One of Amazon Music’s approaches to music recommendations is a remix of a classic Amazon innovation, item-to-item collaborative filtering, and vector databases. Using data aggregated based on user listening behavior, Amazon Music has created an embedding model that encodes music tracks and customer representations into a vector space where neighboring vectors represent tracks that are similar. 100 million songs are encoded into vectors, indexed into OpenSearch, and served across multiple geographies to power real-time recommendations. OpenSearch currently manages 1.05 billion vectors and supports a peak load of 7,100 vector queries per second to power Amazon Music recommendations.

The item-to-item collaborative filter continues to be among the most popular methods for online product recommendations because of its effectiveness at scaling to large customer bases and product catalogs. OpenSearch makes the recommender easier to operationalize and more scalable by providing scale-out infrastructure and k-NN indexes that grow linearly with the number of tracks and support similarity search in logarithmic time.

The following figure visualizes the high-dimensional space created by the vector embedding.

A visualization of the vector encoding of Amazon Music entries in the large vector space

Brand protection at Amazon

Amazon strives to deliver the world’s most trustworthy shopping experience, offering customers the widest possible selection of authentic products. To earn and maintain our customers’ trust, we strictly prohibit the sale of counterfeit products, and we continue to invest in innovations that ensure only authentic products reach our customers. Amazon’s brand protection programs build trust with brands by accurately representing and completely protecting their brand. We strive to ensure that public perception mirrors the trustworthy experience we deliver. Our brand protection strategy focuses on four pillars: (1) Proactive Controls, (2) Powerful Tools to Protect Brands, (3) Holding Bad Actors Accountable, and (4) Protecting and Educating Customers. Amazon OpenSearch Service is a key part of Amazon’s Proactive Controls.

In 2022, Amazon’s automated technology scanned more than 8 billion attempted changes daily to product detail pages for signs of potential abuse. Our proactive controls found more than 99% of blocked or removed listings before a brand ever had to find and report them. These listings were suspected of being fraudulent, infringing, counterfeit, or at risk of other forms of abuse. To perform these scans, Amazon created tooling that uses advanced and innovative techniques, including advanced machine learning models, to automate the detection of intellectual property infringements in listings across Amazon’s stores globally. A key technical challenge in implementing such an automated system is the ability to search for protected intellectual property within a vast billion-vector corpus in a fast, scalable, and cost-effective manner. Leveraging Amazon OpenSearch Service’s scalable vector database capabilities and distributed architecture, we successfully developed an ingestion pipeline that has indexed a total of 68 billion 128- and 1,024-dimension vectors into OpenSearch Service to enable brands and automated systems to conduct infringement detection in real time through a highly available and fast (sub-second) search API.

Conclusion

Whether you’re building a generative AI solution, searching rich media and audio, or bringing more semantic search to your existing search-based application, OpenSearch is a capable vector database. OpenSearch supports a variety of engines, algorithms, and distance measures that you can employ to build the right solution. It provides a scalable engine that can support vector search at low latency and up to billions of vectors. With OpenSearch and its vector DB capabilities, your users can find that 8-foot blue couch easily, and relax by a cozy fire.


About the Authors

Jon Handler is a Senior Principal Solutions Architect at Amazon Web Services based in Palo Alto, CA. Jon works closely with OpenSearch and Amazon OpenSearch Service, providing help and guidance to a broad range of customers who have search and log analytics workloads that they want to move to the AWS Cloud. Prior to joining AWS, Jon’s career as a software developer included four years of coding a large-scale eCommerce search engine. Jon holds a Bachelor of Arts from the University of Pennsylvania, and a Master of Science and a Ph.D. in Computer Science and Artificial Intelligence from Northwestern University.

Jianwei Li is a Principal Analytics Specialist TAM at Amazon Web Services. Jianwei provides consulting services that help customers design and build modern data platforms. Jianwei has worked in the big data domain as a software developer, consultant, and tech leader.

Dylan Tong is a Senior Product Manager at AWS. He works with customers to help drive their success on the AWS platform through thought leadership and guidance on designing well architected solutions. He has spent most of his career building on his expertise in data management and analytics by working for leaders and innovators in the space.

Vamshi Vijay Nakkirtha is a Software Engineering Manager working on the OpenSearch Project and Amazon OpenSearch Service. His primary interests include distributed systems. He is an active contributor to various plugins, like k-NN, GeoSpatial, and dashboard-maps.

On the Need for an AI Public Option

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/06/on-the-need-for-an-ai-public-option.html

Artificial intelligence will bring great benefits to all of humanity. But do we really want to entrust this revolutionary technology solely to a small group of US tech companies?

Silicon Valley has produced no small number of moral disappointments. Google retired its “don’t be evil” pledge before firing its star ethicist. Self-proclaimed “free speech absolutist” Elon Musk bought Twitter in order to censor political speech, retaliate against journalists, and ease access to the platform for Russian and Chinese propagandists. Facebook lied about how it enabled Russian interference in the 2016 US presidential election and paid a public relations firm to blame Google and George Soros instead.

These and countless other ethical lapses should prompt us to consider whether we want to give technology companies further abilities to learn our personal details and influence our day-to-day decisions. Tech companies can already access our daily whereabouts and search queries. Digital devices monitor more and more aspects of our lives: We have cameras in our homes and heartbeat sensors on our wrists sending what they detect to Silicon Valley.

Now, tech giants are developing ever more powerful AI systems that don’t merely monitor you; they actually interact with you—and with others on your behalf. If searching on Google in the 2010s was like being watched on a security camera, then using AI in the late 2020s will be like having a butler. You will willingly include them in every conversation you have, everything you write, every item you shop for, every want, every fear, everything. It will never forget. And, despite your reliance on it, it will be surreptitiously working to further the interests of one of these for-profit corporations.

There’s a reason Google, Microsoft, Facebook, and other large tech companies are leading the AI revolution: Building a competitive large language model (LLM) like the one powering ChatGPT is incredibly expensive. It requires upward of $100 million in computational costs for a single model training run, in addition to access to large amounts of data. It also requires technical expertise, which, while increasingly open and available, remains heavily concentrated in a small handful of companies. Efforts to disrupt the AI oligopoly by funding start-ups are self-defeating as Big Tech profits from the cloud computing services and AI models powering those start-ups—and often ends up acquiring the start-ups themselves.

Yet corporations aren’t the only entities large enough to absorb the cost of large-scale model training. Governments can do it, too. It’s time to start taking AI development out of the exclusive hands of private companies and bringing it into the public sector. The United States needs a government-funded-and-directed AI program to develop widely reusable models in the public interest, guided by technical expertise housed in federal agencies.

So far, the AI regulation debate in Washington has focused on the governance of private-sector activity—which the US Congress is in no hurry to advance. Congress should not only hurry up and push AI regulation forward but also go one step further and develop its own programs for AI. Legislators should reframe the AI debate from one about public regulation to one about public development.

The AI development program could be responsive to public input and subject to political oversight. It could be directed to respond to critical issues such as privacy protection, underpaid tech workers, AI’s horrendous carbon emissions, and the exploitation of unlicensed data. Compared to keeping AI in the hands of morally dubious tech companies, the public alternative is better both ethically and economically. And the switch should take place soon: By the time AI becomes critical infrastructure, essential to large swaths of economic activity and daily life, it will be too late to get started.

Other countries are already there. China has heavily prioritized public investment in AI research and development by betting on a handpicked set of giant companies that are ostensibly private but widely understood to be an extension of the state. The government has tasked Alibaba, Huawei, and others with creating products that support the larger ecosystem of state surveillance and authoritarianism.

The European Union is also aggressively pushing AI development. The European Commission already invests 1 billion euros per year in AI, with a plan to increase that figure to 20 billion euros annually by 2030. The money goes to a continent-wide network of public research labs, universities, and private companies jointly working on various parts of AI. The Europeans’ focus is on knowledge transfer, developing the technology sector, use of AI in public administration, mitigating safety risks, and preserving fundamental rights. The EU also continues to be at the cutting edge of aggressively regulating both data and AI.

Neither the Chinese nor the European model is necessarily right for the United States. State control of private enterprise remains anathema in American political culture and would struggle to gain mainstream traction. The tech companies—and their supporters in both US political parties—are opposed to robust public governance of AI. But Washington can take inspiration from China and Europe’s long-range planning and leadership on regulation and public investment. With boosters pointing to hundreds of trillions of dollars of global economic value associated with AI, the stakes of international competition are compelling. As in energy and medical research, which have their own federal agencies in the Department of Energy and the National Institutes of Health, respectively, there is a place for AI research and development inside government.

Beside the moral argument against letting private companies develop AI, there’s a strong economic argument in favor of a public option as well. A publicly funded LLM could serve as an open platform for innovation, helping any small business, nonprofit, or individual entrepreneur to build AI-assisted applications.

There’s also a practical argument. Building AI is within public reach because governments don’t need to own and operate the entire AI supply chain. Chip and computer production, cloud data centers, and various value-added applications—such as those that integrate AI with consumer electronics devices or entertainment software—do not need to be publicly controlled or funded.

One reason to be skeptical of public funding for AI is that it might result in a lower quality and slower innovation, given greater ethical scrutiny, political constraints, and fewer incentives due to a lack of market competition. But even if that is the case, it would be worth broader access to the most important technology of the 21st century. And it is by no means certain that public AI has to be at a disadvantage. The open-source community is proof that it’s not always private companies that are the most innovative.

Those who worry about the quality trade-off might suggest a public buyer model, whereby Washington licenses or buys private language models from Big Tech instead of developing them itself. But that doesn’t go far enough to ensure that the tools are aligned with public priorities and responsive to public needs. It would not give the public detailed insight into or control of the inner workings and training procedures for these models, and it would still require strict and complex regulation.

There is political will to take action to develop AI via public, rather than private, funds—but this does not yet equate to the will to create a fully public AI development agency. A task force created by Congress recommended in January a $2.6 billion federal investment in computing and data resources to prime the AI research ecosystem in the United States. But this investment would largely serve to advance the interests of Big Tech, leaving the opportunity for public ownership and oversight unaddressed.

Nonprofit and academic organizations have already created open-access LLMs. While these should be celebrated, they are not a substitute for a public option. Nonprofit projects are still beholden to private interests, even if they are benevolent ones. These private interests can change without public input, as when OpenAI effectively abandoned its nonprofit origins, and we can’t be sure that their founding intentions or operations will survive market pressures, fickle donors, and changes in leadership.

The US government is by no means a perfect beacon of transparency, a secure and responsible store of our data, or a genuine reflection of the public’s interests. But the risks of placing AI development entirely in the hands of demonstrably untrustworthy Silicon Valley companies are too high. AI will impact the public like few other technologies, so it should also be developed by the public.

This essay was written with Nathan Sanders, and appeared in Foreign Policy.

Survey reveals AI’s impact on the developer experience

Post Syndicated from Inbal Shani original https://github.blog/2023-06-13-survey-reveals-ais-impact-on-the-developer-experience/


Developers today do more than just write and ship code—they’re expected to navigate a number of tools, environments, and technologies, including the new frontier of generative artificial intelligence (AI) coding tools. But the most important thing for developers isn’t story points or the speed of deployments. It’s the developer experience, which determines how efficiently and productively developers can exceed standards, enter a flow state, and drive impact.

I say this not only as GitHub’s chief product officer, but as a long-time developer who has worked across every part of the stack. Decades ago, when I earned my master’s in mechanical engineering, I became one of the first technologists to apply AI in the lab. Back then, it would take our models five days to process our larger datasets—which is striking considering the speed of today’s AI models. I yearned for tools that would make me more efficient and shorten my time to production. This is why I’m passionate about developer experience (DevEx) and have made it my focus as GitHub’s chief product officer.

Amid the rapid advancements in generative AI, we wanted to get a better understanding from developers about how new tools—and current workflows—are impacting the overall developer experience. As a starting point, we focused on some of the biggest components of the developer experience: developer productivity, team collaboration, AI, and how developers think they can best drive impact in enterprise environments.

To do so, we partnered with Wakefield Research to survey 500 U.S.-based developers at enterprise companies. In the following report, we’ll show how organizations can remove barriers to help enterprise engineering teams drive innovation and impact in this new age of software development. Ultimately, the way to innovate at scale is to empower developers by improving their productivity, increasing their satisfaction, and enabling them to do their best work—every day. After all, there can be no progress without developers who are empowered to drive impact.

Inbal Shani
Chief Product Officer // GitHub

Learn how generative AI is changing the developer experience

Discover how generative AI is changing software development in a pre-recorded session from GitHub.

Watch the video >

Why developer experience matters

At GitHub, we’re aware there’s often a significant gap between the day-to-day reality for most developers and “conversations about ‘what developers want.’”

With this survey, we wanted to better understand the typical experience for developers—and identify key ways companies can empower their developers and achieve greater success.

One big takeaway: It starts with investing in a great developer experience. And collaboration, as we learned from our research, is at the core of how developers want to work and what makes them most productive, satisfied, and impactful.

A diagram of a formula behind the developer experience that accounts for productivity, impact, satisfaction, and collaboration.
C = Collaboration, the multiplier across the entire developer experience.

DevEx is a formula that takes into account:

  • How simple and fast it is for a developer to implement a change on a codebase—or be productive.
  • How frictionless it is to move from idea through production to impact.
  • How positively or negatively the work environment, workflows, and tools affect developer satisfaction.

For leaders, developer experience is about creating a collaborative environment where developers can be their most productive, impactful, and satisfied at work. For developers, collaboration is one of the most important parts of the equation.

Current performance metrics fall short of developer expectations

Developers say performance metrics don’t meet expectations

The way developers are currently evaluated doesn’t align with how they think their performance should be measured.

  • For instance, the developers we surveyed say they’re currently measured by the number of incidents they resolve. But developers believe that how they handle those bugs and issues is more important to their performance. This aligns with the belief that code quality, rather than code quantity, should remain a top performance metric.
  • Developers also believe collaboration and communication should be just as important as code quality in terms of performance measures. Their ability to collaborate and communicate with others is essential to their job, but only 33% of developers report that their companies use it as a performance metric.
Key survey findings showing what developers say their managers use to measure their performance and what developers think will matter more when they start using AI coding tools.
Metrics currently used to measure performance, compared with metrics developers think should be used to measure their performance.
More than output quantity and efficiency, code quality and collaboration are the most
important performance metrics, according to the developers we surveyed.
A chart showing what developers say their teams spend the most time doing at work.
The top ranked responses that developers say their teams are working the most on including writing code and finding and fixing security vulnerabilities.

Developers want more opportunities to upskill and drive impact

When developers are asked about what makes a positive impact on their workday, they rank learning new skills (43%), getting feedback from end users (39%), and automated tests (38%), and designing solutions to novel problems (36%) as top contenders.

A ranked list of the tasks 500 U.S.-based developers say have the most positive impact on their workdays.
The top tasks developers say positively impact their workdays.

But developers say they’re spending most of their time writing code and tests, then waiting for that code to be reviewed or builds and tests to be executed.

On a typical day, the enterprise developers we surveyed report their teams are busy with a variety of tasks, including writing code, fixing security vulnerabilities, and getting feedback from end users, among other things. Developers also report that they spend a similar amount of time across these tasks, indicating that they’re stretched thin throughout the day.

A ranked list of the top tasks developers and software engineers say they spend the most time working on each day.
The tasks developers say they spend the most time working on each day.

Notably, developers say they spend the same amount of time waiting for builds and tests as they do writing new code.

  • This suggests that wait times for builds and tests are still a persistent problem despite investments in DevOps tools over the past decade.
  • Developers also continue to face obstacles, such as waiting on code review, builds, and test runs, which can hinder their ability to learn new skills and design solutions to novel problems, and our research suggests that these factors can have the biggest impact on their overall satisfaction.

Developers want feedback from end users, but face challenges

Developers say getting feedback from end users (39%) is the second-most important thing that positively impacts their workdays—but it’s often challenging for development teams to get that feedback directly.

  • Product managers and marketing teams often act as intermediaries, making it difficult for developers to directly receive end-user feedback.
  • Developers would ideally receive feedback from automated and validation tests to improve their work, but sometimes these tests are sent to other teams before being handed off to engineering teams.

The top two daily tasks for development teams include writing code (32%) and finding and fixing security vulnerabilities (31%).

  • This shows the increased importance developers have placed on security and underscores how companies are prioritizing security.
  • It also demonstrates the critical role that enterprise development teams play in meeting policy and board edicts around security.

The bottom line
Developers want to upskill, design solutions, get feedback from end users, and be evaluated on their communication skills. However, wait times on builds and tests, as well as the current performance metrics they’re evaluated on, are getting in the way.

Collaboration is the cornerstone of the developer experience

Developers thrive in collaborative environments

In our survey of enterprise engineers, developers say they work with an average of 21 other developers on a typical project—and 52% report working with other teams daily or weekly. Notably, they rank regular touchpoints as the most important factor for effective collaboration.

A survey finding that developers at enterprise companies often work with an average of 21 developers on other projects and often work on a daily or weekly basis with colleagues.
Developers in enterprise settings often work with an average of 21 other developers on a daily or weekly cadence.

But developers also have a holistic view of collaboration—it’s defined not only by talking and meeting with others, but also by uninterrupted work time, access to fully configured developer environments, and formal mentor-mentee relationships.

  • Specified blocks with no team communication give developers the time and space to write code and work towards team goals.
  • Access to fully configured developer environments promotes consistency throughout the development process. It also helps developers collaborate faster and avoid hearing the infamous line, “But it worked on my machine.”
  • Mentorships can help developers upskill and build interpersonal skills that are essential in a collaborative work environment.

It’s important to note these factors can also negatively impact a developer’s work day—which suggests that ineffective meetings can serve to distract rather than help developers (something we’ve found in previous research).

The key factors developers in a survey say contribute most highly to effective team collaboration including meetings, dedicated time for individual work, and access to fully configured dev environments.

Our survey indicates the factors most important to effective collaboration are so critical that when they’re not done effectively, they have a noticeable, negative impact on a developer’s work.

A ranked list of the top tasks developers in a survey reported as having a negative impact on their overall workday experience.
The tasks developers say most often have a negative impact on their workday experience.
Developers work with an average of 21 people on any given project. They need the time and tools for success—including regular touchpoints, heads-down time, access to fully-configured dev environments, and formal mentor-mentee relationships.

We wanted to learn more about how developers collaborate

So, we sourced some answers from our followers on Twitter. We asked developers what tips they have for effective collaboration. Here’s what one developer had to say:

Twitter user Colby Ray had multiple points in response to our prompt. Click the image to read his tweet.

We also asked what makes for a productive and valuable meeting:

Twitter user kettenaito had several points in response to our prompt. Click the image to read on Twitter.

Twitter user Mateus Feira had several points in response to our prompt. Click the image to read on Twitter.

Effective collaboration improves code quality

As developer experience continues to be defined, so, too, will successful developer collaboration. Too many pings and messages can affect flow, but there’s still a need to stay in touch. In our survey, developers say effective collaboration results in improved test coverage and faster, cleaner, more secure code writing—which are best practices for any development team. This shows that when developers work effectively with others, they believe they build better and more secure software.

Developers in a survey report that collaboration positively impacts how they write code, how fast they can ship it, and more.
Developers widely view effective collaboration as helping to improve what they ship and how often they ship it.

Developers we surveyed believe collaboration and communication—along with code quality—should be the top priority for evaluation.

  • From DevOps to agile methodologies, developers and the greater business world have been talking about the importance of collaboration for a long time.
  • But developers are still not being measured on it.
Developers in a survey respond to a question about what metrics they believe their companies should use to measure their performance and productivity.
The metrics that developers think their managers should use to evaluate their performance and productivity.

We asked developers to share their ideas for measuring how well they collaborate. Here’s what one developer had to say:

Twitter user Andrew DiMola had several points in response to our prompt. Click to read on Twitter.

  • The takeaway: Companies and engineering managers should encourage regular team communication, and set time to check in–especially in remote environments–but respect developers’ need to work and focus.
Developers think regular touchpoints with their teams including meetings, asynchronous communication, and innersource practices help organizations collaborate at scale.
Developers believe that effective and regular touchpoints with their colleagues are critical for effective team collaboration.

4 tips for engineering managers to improve collaboration

At GitHub, our researchers, developers, product teams, and analysts are dedicated to studying and improving developer productivity and satisfaction. Here are their tips for engineering leaders who want to improve collaboration among developers:

  1. Make collaboration a goal in performance objectives. This builds the space and expectation that people will collaborate. This could be in the form of lunch and learns, joint projects, etc.
  2. Define and scope what collaboration looks like in your organization. Let people know when they’re being informed about something vs. being consulted about something. A matrix outlining roles and responsibilities helps define each person’s role and is something GitHub teams have implemented.
  3. Give developers time to converse and get to know one another. In particular, remote or hybrid organizations need to dedicate a portion of a developer’s time and virtual space to building relationships. Check out the GitHub guides to remote work.
  4. Identify principal and distinguished engineers. Academic research supports the positive impact of change agents in organizations—and how they should be the people who are exceptionally great at collaboration. It’s a matter of identifying your distinguished engineers and elevating them to a place where they can model desired behaviors.

The bottom line
Effective developer collaboration improves code quality and should be a performance measure. Regular touchpoints, heads-down time, access to fully configured dev environments, and formal mentor-mentee relationships result in improved test coverage and faster, cleaner, more secure code writing.

AI improves individual performance and team collaboration

Developers are already using AI coding tools at work

A staggering 92% of U.S.-based developers working in large companies report using an AI coding tool either at work or in their personal time—and 70% say they see significant benefits to using these tools.

  • AI is here to stay—and it’s already transforming how developers approach their day-to-day work. That makes it critical for businesses and engineering leaders to adopt enterprise-grade AI tools to avoid their developers using non-approved applications. Companies should also establish governance standards for using AI tools to ensure that they are used ethically and effectively.
92% of developers in a survey say they're already using AI coding tools at work.
Almost all developers are already using AI coding tools at and outside of work.

70% of developers see a benefit to using AI coding tools at work.

Almost all (92%) developers use AI coding tools at work—and a majority (67%) have used these tools in both a work setting and during their personal time. Curiously, only 6% of developers in our survey say they solely use these tools outside of work.

Developers believe AI coding tools will enhance their performance

With most developers experimenting with AI tools in the workplace, our survey results suggest it’s not just idle interest leading developers to use AI. Rather, it’s a recognition that AI coding tools will help them meet performance standards.

  • In our survey, developers say AI coding tools can help them meet existing performance standards with improved code quality, faster outputs, and fewer production-level incidents. They also believe that these metrics should be used to measure their performance beyond code quantity.
The metrics developers say their managers use to measure their productivity vs. the metrics developers think their managers should use to measure their productivity if they use AI coding tools.
Developers widely think that AI coding tools will layer into their existing workflows and bring greater efficiencies—but they do not think AI will change how software is made.

Around one-third of developers report that their managers currently assess their performance based on the volume of code they produce—and an equal number anticipate that this will persist when they start using AI-based coding tools.

  • Notably, the quantity of code a developer produces may not necessarily correspond to its business value.
  • Stay smart. With the increase of AI tooling being used in software development—which often contributes to code volume—engineering leaders will need to ask whether measuring code volume is still the best way to measure productivity and output.

Developers think AI coding tools will lead to greater team collaboration

Beyond improving individual performance, more than 4 in 5 developers surveyed (81%) say AI coding tools will help increase collaboration within their teams and organizations.

  • In fact, security reviews, planning, and pair programming are the most significant points of collaboration and the tasks that development teams are expected to, and should, work on with the help of AI coding tools. This also indicates that code and security reviews will remain important as developers increase their use of AI coding tools in the workplace.
Developers believe that AI coding tools will make engineering teams more collaborative as the quality of code produced becomes ever more important.
Developers think their teams will need to become more collaborative as they start using AI coding tools.
Sometimes, developers can do the same thing with one line or multiple lines of code. Even still, one-third of developers in our survey say their managers measure their performance based on how much code they produce.

Notably, developers believe AI coding tools will give them more time to focus on solution design. This has direct organizational benefits and means developers believe they’ll spend more time designing new features and products with AI instead of writing boilerplate code.

  • Developers are already using generative AI coding tools to automate parts of their workflow, which frees up time for more collaborative projects like security reviews, planning, and pair programming.
Developers think AI coding tools will help them upskill, become more productive, and focus on higher-value problem solving.
Developers believe that AI coding tools will help them focus on higher-value problem solving.

Developers think AI increases productivity and prevents burnout

Not only can AI coding tools help improve overall productivity, but they can also provide upskilling opportunities to help create a smarter workforce according to the developers we surveyed.

  • 57% of developers believe AI coding tools help them improve their coding language skills—which is the top benefit they see. Beyond the prospect of acting as an upskilling aid, developers also say AI coding tools can help reduce cognitive effort, and since mental capacity and time are both finite resources, 41% of developers believe that AI coding tools can help prevent burnout.
  • In previous research we conducted, 87% of developers reported that the AI coding tool GitHub Copilot helped them preserve mental effort while completing more repetitive tasks. This shows that AI coding tools allow developers to preserve cognitive effort and focus on more challenging and innovative aspects of software development or research and development.
  • AI coding tools help developers upskill while they work. Across our survey, developers consistently rank learning new skills as the number one contributor to a positive workday. But 30% also say learning and development can have a negative impact on their overall workday, which suggests some developers view learning and development as adding more work to their workdays. Notably, developers say the top benefit of AI coding tools is learning new skills—and these tools can help developers learn while they work, instead of making learning and development an additional task.

AI is improving the developer experience across the board

Developers in our survey suggest they can better meet standards around code quality, completion time, and the number of incidents when using AI coding tools—all of which are measures developers believe are key areas for evaluating their performance.

AI coding tools can also help reduce the likelihood of coding errors and improve the accuracy of code—which ultimately leads to more reliable software, increased application performance, and better performance numbers for developers. As AI technology continues to advance, it is likely that these coding tools will have an even greater impact on developer performance and upskilling.

AI coding tools are layering into existing developer workflows and creating greater efficiencies

Developers believe that AI coding tools will increase their productivity—but our survey suggests that developers don’t think these tools are fundamentally altering the software development lifecycle. Instead, developers suggest they’re bringing greater efficiencies to it.

  • The use of automation and AI has been a part of the developer workflow for a considerable amount of time, with developers already utilizing a range of automated and AI-powered tools, such as machine learning-based security checks and CI/CD pipelines.
  • Rather than completely overhauling operations, these tools create greater efficiencies within existing workflows, and that frees up more time for developers to concentrate on developing solutions.

The bottom line
Almost all developers (92%) are using AI coding at work—and they say these tools not only improve day-to-day tasks but enable upskilling opportunities, too. Developers see material benefits to using AI tools including improved performance and coding skills, as well as increased team collaboration.

The path forward

Developer satisfaction, productivity, and organizational impact are all positioned to get a boost from AI coding tools—and that will have a material impact on the overall developer experience.

With 92% of developers already saying they use AI coding tools at work and in their personal time, it’s clear that AI is here to stay. 70% of the developers we surveyed say they already see significant benefits when using AI coding tools, and 81% expect AI coding tools to make their teams more collaborative—which is a net benefit for companies looking to improve both developer velocity and the developer experience.

Notably, 57% of developers believe that AI could help them upskill—and hold the potential to build learning and development into their daily workflow. With all of this in mind, technical leaders should start exploring AI as a solution to improve satisfaction, productivity, and the overall developer experience.

In addition to exploring AI tools, here are three takeaways engineering and business leaders should consider to improve the developer experience:

  1. Help your developers enter a flow state with tools, processes, and practices that help them be productive, drive impact, and do creative and meaningful work.
  2. Empower collaboration by breaking down organizational silos and providing developers with the opportunity to communicate efficiently.
  3. Make room for upskilling within developer workflows through key investments in AI to help your organization experiment and innovate for the future.

Methodology

This report draws on a survey conducted online by Wakefield Research on behalf of GitHub from March 14, 2023 through March 29, 2023 among 500 non-student, U.S.-based developers who are not managers and work at companies with 1,000-plus employees. For a complete survey methodology, please contact [email protected].

How we’re learning to explain AI terms for young people and educators

Post Syndicated from Veronica Cucuiat original https://www.raspberrypi.org/blog/explaining-ai-terms-young-people-educators/

What do we talk about when we talk about artificial intelligence (AI)? It’s becoming a cliche to point out that, because the term “AI” is used to describe so many different things nowadays, it’s difficult to know straight away what anyone means when they say “AI”. However, it’s true that without a shared understanding of what AI and related terms mean, we can’t talk about them, or educate young people about the field.

A group of young people demonstrate a project at Coolest Projects.

So when we started designing materials for the Experience AI learning programme in partnership with leading AI unit Google DeepMind, we decided to create short explanations of key AI and machine learning (ML) terms. The explanations are doubly useful:

  1. They ensure that we give learners and teachers a consistent and clear understanding of the key terms across all our Experience AI resources. Within the Experience AI Lessons for Key Stage 3 (age 11–14), these key terms are also correlated to the target concepts and learning objectives presented in the learning graph. 
  2. They help us talk about AI and AI education in our team. Thanks to sharing an understanding of what terms such as “AI”, “ML”, “model”, or “training” actually mean and how to best talk about AI, our conversations are much more productive.

As an example, here is our explanation of the term “artificial intelligence” for learners aged 11–14:

Artificial intelligence (AI) is the design and study of systems that appear to mimic intelligent behaviour. Some AI applications are based on rules. More often now, AI applications are built using machine learning that is said to ‘learn’ from examples in the form of data. For example, some AI applications are built to answer questions or help diagnose illnesses. Other AI applications could be built for harmful purposes, such as spreading fake news. AI applications do not think. AI applications are built to carry out tasks in a way that appears to be intelligent.

You can find 32 explanations in the glossary that is part of the Experience AI Lessons. Here’s an insight into how we arrived at the explanations.

Reliable sources

In order to ensure the explanations are as precise as possible, we first identified reliable sources. These included among many others:

Explaining AI terms to Key Stage 3 learners: Some principles

Vocabulary is an important part of teaching and learning. When we use vocabulary correctly, we can support learners to develop their understanding. If we use it inconsistently, this can lead to alternate conceptions (misconceptions) that can interfere with learners’ understanding. You can read more about this in our Pedagogy Quick Read on alternate conceptions.

Some of our principles for writing explanations of AI terms were that the explanations need to: 

  • Be accurate
  • Be grounded in education research best practice
  • Be suitable for our target audience (Key Stage 3 learners, i.e. 11- to 14-year-olds)
  • Be free of terms that have alternative meanings in computer science, such as “algorithm”

We engaged in an iterative process of writing explanations, gathering feedback from our team and our Experience AI project partners at Google DeepMind, and adapting the explanations. Then we went through the feedback and adaptation cycle until we all agreed that the explanations met our principles.

A real banana and an image of a banana shown on the screen of a laptop are both labelled "Banana".
Image: Max Gruber / Better Images of AI / Ceci n’est pas une banane / CC-BY 4.0

An important part of what emerged as a result, aside from the explanations of AI terms themselves, was a blueprint for how not to talk about AI. One aspect of this is avoiding anthropomorphism, detailed by Ben Garside from our team here.

As part of designing the Experience AI Lessons, creating the explanations helped us to:

  • Decide which technical details we needed to include when introducing AI concepts in the lessons
  • Figure out how to best present these technical details
  • Settle debates about where it would be appropriate, given our understanding and our learners’ age group, to abstract or leave out details

Using education research to explain AI terms

One of the ways education research informed the explanations was that we used semantic waves to structure each term’s explanation in three parts: 

  1. Top of the wave: The first one or two sentences are a high-level abstract explanation of the term, kept as short as possible, while introducing key words and concepts.
  2. Bottom of the wave: The middle part of the explanation unpacks the meaning of the term using a common example, in a context that’s familiar to a young audience. 
  3. Top of the wave: The final one or two sentences repack what was explained in the example in a more abstract way again to reconnect with the term. The end part should be a repeat of the top of the wave at the beginning of the explanation. It should also add further information to lead to another concept. 

Most explanations also contain ‘middle of the wave’ sentences, which add additional abstract content, bridging the ‘bottom of the wave’ concrete example to the ‘top of the wave’ abstract content.

Here’s the “artificial intelligence” explanation broken up into the parts of the semantic wave:

  • Artificial intelligence (AI) is the design and study of systems that appear to mimic intelligent behaviour. (top of the wave)
  • Some AI applications are based on rules. More often now, AI applications are built using machine learning that is said to ‘learn’ from examples in the form of data. (middle of the wave)
  • For example, some AI applications are built to answer questions or help diagnose illnesses. Other AI applications could be built for harmful purposes, such as spreading fake news (bottom of the wave)
  • AI applications do not think. (middle of the wave)
  • AI applications are built to carry out tasks in a way that appears to be intelligent. (top of the wave)
Our “artificial intelligence” explanation broken up into the parts of the semantic wave. Red = top of the wave; yellow = middle of the wave; green = bottom of the wave

Was it worth our time?

Some of the explanations went through 10 or more iterations before we agreed they were suitable for publication. After months of thinking about, writing, correcting, discussing, and justifying the explanations, it’s tempting to wonder whether I should have just prompted an AI chatbot to generate the explanations for me.

A set of three images illustrating a decision tree model: an original photograph of a tree in a green field under a blue sky, alongside two simplified versions generated by a decision tree trained to predict pixel colour values from the photograph.
Rens Dimmendaal & Johann Siemens / Better Images of AI / Decision Tree reversed / CC-BY 4.0

I tested this idea by getting a chatbot to generate an explanation of “artificial intelligence” using the prompt “Explain what artificial intelligence is, using vocabulary suitable for KS3 students, avoiding anthropomorphism”. The result included quite a few inconsistencies with our principles, as well as a couple of technical inaccuracies. Perhaps I could have tweaked the prompt for the chatbot in order to get a better result. However, relying on a chatbot’s output would mean missing out on some of the value of doing the work of writing the explanations in collaboration with my team and our partners.

The visible result of that work is the explanations themselves. The invisible result is the knowledge we all gained, and the coherence we reached as a team, both of which enabled us to create high-quality resources for Experience AI. We wouldn’t have gotten to know what resources we wanted to write without writing the explanations ourselves and improving them over and over. So yes, it was worth our time.

What do you think about the explanations?

The process of creating and iterating the AI explanations highlights how opaque the field of AI still is, and how little we yet know about how best to teach and learn about it. At the Raspberry Pi Foundation, we now know just a bit more about that and are excited to share the results with teachers and young people.

You can access the Experience AI Lessons and the glossary with all our explanations at experience-ai.org. The glossary of AI explanations is just in its first published version: we will continue to improve it as we find out more about how to best support young people to learn about this field.

Let us know what you think about the explanations and whether they’re useful in your teaching. Onwards with the exciting work of establishing how to successfully engage young people in learning about and creating with AI technologies.

The post How we’re learning to explain AI terms for young people and educators appeared first on Raspberry Pi Foundation.

AI-Generated Steganography

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/06/ai-generated-steganography.html

New research suggests that AIs can produce perfectly secure steganographic images:

Abstract: Steganography is the practice of encoding secret information into innocuous content in such a manner that an adversarial third party would not realize that there is hidden meaning. While this problem has classically been studied in security literature, recent advances in generative models have led to a shared interest among security and machine learning researchers in developing scalable steganography techniques. In this work, we show that a steganography procedure is perfectly secure under Cachin (1998)’s information theoretic-model of steganography if and only if it is induced by a coupling. Furthermore, we show that, among perfectly secure procedures, a procedure is maximally efficient if and only if it is induced by a minimum entropy coupling. These insights yield what are, to the best of our knowledge, the first steganography algorithms to achieve perfect security guarantees with non-trivial efficiency; additionally, these algorithms are highly scalable. To provide empirical validation, we compare a minimum entropy coupling-based approach to three modern baselines—arithmetic coding, Meteor, and adaptive dynamic grouping—using GPT-2, WaveRNN, and Image Transformer as communication channels. We find that the minimum entropy coupling-based approach achieves superior encoding efficiency, despite its stronger security constraints. In aggregate, these results suggest that it may be natural to view information-theoretic steganography through the lens of minimum entropy coupling.

News article.

EDITED TO ADD (6/13): Comments.

Open-Source LLMs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/06/open-source-llms.html

In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn’t just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out. Training speeds have hugely increased, and the size of the models themselves has shrunk to the point that you can create and run them on a laptop. The world of AI research has dramatically changed.

This development hasn’t made the same splash as other corporate announcements, but its effects will be much greater. It will wrest power from the large tech corporations, resulting in both much more innovation and a much more challenging regulatory landscape. The large corporations that had controlled these models warn that this free-for-all will lead to potentially dangerous developments, and problematic uses of the open technology have already been documented. But those who are working on the open models counter that a more democratic research environment is better than having this powerful technology controlled by a small number of corporations.

The power shift comes from simplification. The LLMs built by OpenAI and Google rely on massive data sets, measured in the tens of billions of bytes, computed on by tens of thousands of powerful specialized processors producing models with billions of parameters. The received wisdom is that bigger data, bigger processing, and larger parameter sets were all needed to make a better model. Producing such a model requires the resources of a corporation with the money and computing power of a Google or Microsoft or Meta.

But building on public models like Meta’s LLaMA, the open-source community has innovated in ways that allow results nearly as good as the huge models—but run on home machines with common data sets. What was once the reserve of the resource-rich has become a playground for anyone with curiosity, coding skills, and a good laptop. Bigger may be better, but the open-source community is showing that smaller is often good enough. This opens the door to more efficient, accessible, and resource-friendly LLMs.

More importantly, these smaller and faster LLMs are much more accessible and easier to experiment with. Rather than needing tens of thousands of machines and millions of dollars to train a new model, an existing model can now be customized on a mid-priced laptop in a few hours. This fosters rapid innovation.

It also takes control away from large companies like Google and OpenAI. By providing access to the underlying code and encouraging collaboration, open-source initiatives empower a diverse range of developers, researchers, and organizations to shape the technology. This diversification of control helps prevent undue influence, and ensures that the development and deployment of AI technologies align with a broader set of values and priorities. Much of the modern internet was built on open-source technologies from the LAMP (Linux, Apache, mySQL, and PHP/PERL/Python) stack—a suite of applications often used in web development. This enabled sophisticated websites to be easily constructed, all with open-source tools that were built by enthusiasts, not companies looking for profit. Facebook itself was originally built using open-source PHP.

But being open-source also means that there is no one to hold responsible for misuse of the technology. When vulnerabilities are discovered in obscure bits of open-source technology critical to the functioning of the internet, often there is no entity responsible for fixing the bug. Open-source communities span countries and cultures, making it difficult to ensure that any country’s laws will be respected by the community. And having the technology open-sourced means that those who wish to use it for unintended, illegal, or nefarious purposes have the same access to the technology as anyone else.

This, in turn, has significant implications for those who are looking to regulate this new and powerful technology. Now that the open-source community is remixing LLMs, it’s no longer possible to regulate the technology by dictating what research and development can be done; there are simply too many researchers doing too many different things in too many different countries. The only governance mechanism available to governments now is to regulate usage (and only for those who pay attention to the law), or to offer incentives to those (including startups, individuals, and small companies) who are now the drivers of innovation in the arena. Incentives for these communities could take the form of rewards for the production of particular uses of the technology, or hackathons to develop particularly useful applications. Sticks are hard to use—instead, we need appealing carrots.

It is important to remember that the open-source community is not always motivated by profit. The members of this community are often driven by curiosity, the desire to experiment, or the simple joys of building. While there are companies that profit from supporting software produced by open-source projects like Linux, Python, or the Apache web server, those communities are not profit driven.

And there are many open-source models to choose from. Alpaca, Cerebras-GPT, Dolly, HuggingChat, and StableLM have all been released in the past few months. Most of them are built on top of LLaMA, but some have other pedigrees. More are on their way.

The large tech monopolies that have been developing and fielding LLMs—Google, Microsoft, and Meta—are not ready for this. A few weeks ago, a Google employee leaked a memo in which an engineer tried to explain to his superiors what an open-source LLM means for their own proprietary tech. The memo concluded that the open-source community has lapped the major corporations and has an overwhelming lead on them.

This isn’t the first time companies have ignored the power of the open-source community. Sun never understood Linux. Netscape never understood the Apache web server. Open source isn’t very good at original innovations, but once an innovation is seen and picked up, the community can be a pretty overwhelming thing. The large companies may respond by trying to retrench and pulling their models back from the open-source community.

But it’s too late. We have entered an era of LLM democratization. By showing that smaller models can be highly effective, enabling easy experimentation, diversifying control, and providing incentives that are not profit motivated, open-source initiatives are moving us into a more dynamic and inclusive AI landscape. This doesn’t mean that some of these models won’t be biased, or wrong, or used to generate disinformation or abuse. But it does mean that controlling this technology is going to take an entirely different approach than regulating the large players.

This essay was written with Jim Waldo, and previously appeared on Slate.com.

EDITED TO ADD (6/4): Slashdot thread.

On the Catastrophic Risk of AI

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/06/on-the-catastrophic-risk-of-ai.html

Earlier this week, I signed on to a short group statement, coordinated by the Center for AI Safety:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The press coverage has been extensive, and surprising to me. The New York Times headline is “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.” BBC: “Artificial intelligence could lead to extinction, experts warn.” Other headlines are similar.

I actually don’t think that AI poses a risk to human extinction. I think it poses a similar risk to pandemics and nuclear war—which is to say, a risk worth taking seriously, but not something to panic over. Which is what I thought the statement said.

In my talk at the RSA Conference last month, I talked about the power level of our species becoming too great for our systems of governance. Talking about those systems, I said:

Now, add into this mix the risks that arise from new and dangerous technologies such as the internet or AI or synthetic biology. Or molecular nanotechnology, or nuclear weapons. Here, misaligned incentives and hacking can have catastrophic consequences for society.

That was what I was thinking about when I agreed to sign on to the statement: “Pandemics, nuclear weapons, AI—yeah, I would put those three in the same bucket. Surely we can spend the same effort on AI risk as we do on future pandemics. That’s a really low bar.” Clearly I should have focused on the word “extinction,” and not the relative comparisons.

Seth Lazar, Jeremy Howard, and Arvind Narayanan wrote:

We think that, in fact, most signatories to the statement believe that runaway AI is a way off yet, and that it will take a significant scientific advance to get there—one that we cannot anticipate, even if we are confident that it will someday occur. If this is so, then at least two things follow.

I agree with that, and with their follow up:

First, we should give more weight to serious risks from AI that are more urgent. Even if existing AI systems and their plausible extensions won’t wipe us out, they are already causing much more concentrated harm, they are sure to exacerbate inequality and, in the hands of power-hungry governments and unscrupulous corporations, will undermine individual and collective freedom.

This is what I wrote in Click Here to Kill Everybody (2018):

I am less worried about AI; I regard fear of AI more as a mirror of our own society than as a harbinger of the future. AI and intelligent robotics are the culmination of several precursor technologies, like machine learning algorithms, automation, and autonomy. The security risks from those precursor technologies are already with us, and they’re increasing as the technologies become more powerful and more prevalent. So, while I am worried about intelligent and even driverless cars, most of the risks are already prevalent in Internet-connected drivered cars. And while I am worried about robot soldiers, most of the risks are already prevalent in autonomous weapons systems.

Also, as roboticist Rodney Brooks pointed out, “Long before we see such machines arising there will be the somewhat less intelligent and belligerent machines. Before that there will be the really grumpy machines. Before that the quite annoying machines. And before them the arrogant unpleasant machines.” I think we’ll see any new security risks coming long before they get here.

I do think we should worry about catastrophic AI and robotics risk. It’s the fact that they affect the world in a direct, physical manner—and that they’re vulnerable to class breaks.

(Other things to read: David Chapman is good on scary AI. And Kieran Healy is good on the statement.)

Okay, enough. I should also learn not to sign on to group statements.

On the Poisoning of LLMs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/on-the-poisoning-of-llms.html

Interesting essay on the poisoning of LLMs—ChatGPT in particular:

Given that we’ve known about model poisoning for years, and given the strong incentives the black-hat SEO crowd has to manipulate results, it’s entirely possible that bad actors have been poisoning ChatGPT for months. We don’t know because OpenAI doesn’t talk about their processes, how they validate the prompts they use for training, how they vet their training data set, or how they fine-tune ChatGPT. Their secrecy means we don’t know if ChatGPT has been safely managed.

They’ll also have to update their training data set at some point. They can’t leave their models stuck in 2021 forever.

Once they do update it, we only have their word—pinky-swear promises—that they’ve done a good enough job of filtering out keyword manipulations and other training data attacks, something that the AI researcher El Mahdi El Mhamdi posited is mathematically impossible in a paper he worked on while he was at Google.

Real-time time series anomaly detection for streaming applications on Amazon Kinesis Data Analytics

Post Syndicated from Antonio Vespoli original https://aws.amazon.com/blogs/big-data/real-time-time-series-anomaly-detection-for-streaming-applications-on-amazon-kinesis-data-analytics/

Detecting anomalies in real time from high-throughput streams is key to making timely decisions and adapting to unexpected scenarios. Stream processing frameworks such as Apache Flink empower users to design systems that can ingest and process continuous flows of data at scale. In this post, we present a streaming time series anomaly detection algorithm based on matrix profiles and left-discords, inspired by Lu et al., 2022, implemented with Apache Flink, and provide a working example that will help you get started on a managed Apache Flink solution using Amazon Kinesis Data Analytics.

Challenges of anomaly detection

Anomaly detection plays a key role in a variety of real-world applications, such as fraud detection, sales analysis, cybersecurity, predictive maintenance, and fault detection. The majority of these use cases require actions to be taken in near real-time. For instance, card payment networks must be able to identify and reject potentially fraudulent transactions before processing them. This raises the challenge of designing near-real-time anomaly detection systems that can scale to ultra-fast-arriving data streams.

Another key challenge that anomaly detection systems face is concept drift. The ever-changing nature of some use cases requires models to dynamically adapt to new scenarios. For instance, in a predictive maintenance scenario, you could use several Internet of Things (IoT) devices to monitor the vibrations produced by an electric motor with the objective of detecting anomalies and preventing unrecoverable damage. Sounds emitted by the vibrations of the motor can vary significantly over time due to different environmental conditions such as temperature variations, and this shift in pattern can invalidate the model. This class of scenarios creates the necessity for online learning—the ability of the model to continuously learn from new data.

Time series anomaly detection

Time series are a particular class of data that incorporates time in its structure. The data points that characterize a time series are recorded sequentially and are chronological in nature. This class of data is present in every industry and often sits at the core of business requirements and key performance indicators (KPIs). Natural sources of time series data include credit card transactions, sales, sensor measurements, machine logs, and user analytics.

In the time series domain, an anomaly can be defined as a deviation from the expected patterns that characterize the time series. For instance, a time series can be characterized by its expected ranges, trends, seasonal, or cyclic patterns. Any significant alteration of this normal flow of data points is considered an anomaly.

Detecting anomalies can be more or less challenging depending on the domain. For instance, a threshold-based approach might be suitable for time series with well-known expected ranges, such as the working temperature of a machine or CPU utilization. On the other hand, applications such as fraud detection, cybersecurity, and predictive maintenance can’t be handled with simple rule-based approaches and require a more fine-grained mechanism to capture unexpected observations. Thanks to their parallelizable and event-driven setup, streaming engines such as Apache Flink provide an excellent environment for scaling real-time anomaly detection to fast-arriving data streams.

Solution overview

Apache Flink is a distributed processing engine for stateful computations over streams. A Flink program can be implemented in Java, Scala, or Python. It supports ingestion, manipulation, and delivery of data to the desired destinations. Kinesis Data Analytics allows you to run Flink applications in a fully managed environment on AWS.

Distance-based anomaly detection is a popular approach where a model is characterized by a number of internally stored data points that are used for comparison against the new incoming data points. At inference time, these methods compute distances and classify new data points according to how dissimilar they are from the past observations. In spite of the plethora of algorithms in the literature, there is increasing evidence that distance-based anomaly detection algorithms are still competitive with the state of the art (Nakamura et al., 2020).

In this post, we present a streaming version of a distance-based unsupervised anomaly detection algorithm called time series discords, and explore some of the optimizations introduced by the Discord Aware Matrix Profile (DAMP) algorithm (Lu et al., 2022), which further develops the discords method to scale to trillions of data points.

Understanding the algorithm

A left-discord is a subsequence that is significantly dissimilar from all the subsequences that precede it. In this post, we demonstrate how to use the concept of left-discords to identify time series anomalies in streams using Kinesis Data Analytics for Apache Flink.

Let’s consider an unbounded stream and all its subsequences of length n. The m most recent subsequences will be stored and used for inference. When a new data point arrives, a new subsequence that includes the new event is formed. The algorithm compares this latest subsequence (the query) to the m subsequences retained by the model, excluding the latest n subsequences because they overlap with the query and would therefore constitute self-matches. After computing these distances, the algorithm classifies the query as an anomaly if the distance to its closest non-self-matching subsequence is above a certain moving threshold.
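To make the comparison step concrete, the following is a minimal, self-contained Java sketch of the left-discord check. It is not the repository code: the class and method names, and the plain Euclidean distance, are illustrative assumptions; the actual logic lives in the MPProcessFunction and TimeSeries classes described later in this post.

import java.util.List;

public class LeftDiscordSketch {

    // Euclidean distance between two equal-length subsequences
    static double subsequenceDistance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    /*
     * Classifies the query as an anomaly if its distance to the closest retained
     * subsequence (excluding the most recent, overlapping self-matches) exceeds
     * the current moving threshold.
     */
    static boolean isAnomaly(double[] query, List<double[]> retained,
                             int selfMatchExclusion, double threshold) {
        double minDistance = Double.MAX_VALUE;
        int lastComparable = retained.size() - selfMatchExclusion;
        for (int i = 0; i < lastComparable; i++) {
            minDistance = Math.min(minDistance, subsequenceDistance(query, retained.get(i)));
        }
        return minDistance > threshold;
    }
}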

For this post, we use a Kinesis data stream to ingest the input data, a Kinesis Data Analytics application to run the Flink anomaly detection program, and another Kinesis data stream to ingest the output produced by your application. For visualization purposes, we consume from the output stream using Kinesis Data Analytics Studio, which provides an Apache Zeppelin Notebook that we use to visualize and interact with the data in real time.

Implementation details

The Java application code for this example is available on GitHub. To download the application code, complete the following steps:

  1. Clone the remote repository using the following command:
    git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-java-examples

  2. Navigate to the amazon-kinesis-data-analytics-java-examples/AnomalyDetection/LeftDiscords directory.

Let’s walk through the code step by step.

The MPStreamingJob class defines the data flow of the application, and the MPProcessFunction class defines the logic of the function that detects anomalies.

The implementation is best described by three core components:

  • The Kinesis data stream source, used to read from the input stream
  • The anomaly detection process function
  • The Kinesis data stream sink, used to deliver the output into the output stream

The anomaly detection function is implemented as a ProcessFunction<String, OutputWithLabel>. Its method MPProcessFunction#processElement is called for every data point:

@Override
public void processElement(String dataPoint, ProcessFunction<String, OutputWithLabel>.Context context,
                            Collector<OutputWithLabel> collector) {

   Double record = Double.parseDouble(dataPoint);

   int currentIndex = timeSeriesData.add(record);

   Double minDistance = 0.0;
   String anomalyTag = "INITIALISING";

   if (timeSeriesData.readyToCompute()) {
       minDistance = timeSeriesData.computeNearestNeighbourDistance();
       threshold.update(minDistance);
   }

   /*
   * The algorithm waits for initializationPeriods * sequenceLength data points before it
   * starts emitting anomaly labels; the matrix profile itself is computed as soon as
   * readyToCompute() returns true.
   */
   if (timeSeriesData.readyToInfer()) {
       anomalyTag = minDistance > threshold.getThreshold() ? "IS_ANOMALY" : "IS_NOT_ANOMALY";
   }

   OutputWithLabel output = new OutputWithLabel(currentIndex, record, minDistance, anomalyTag);

   collector.collect(output);
}

For every incoming data point, the anomaly detection algorithm takes the following actions:

  1. Adds the record to the timeSeriesData.
  2. If it has observed at least 2 * sequenceLength data points, starts computing the matrix profile.
  3. If it has observed at least initializationPeriods * sequenceLength data points, starts outputting anomaly labels.

Following these actions, the MPProcessFunction outputs an OutputWithLabel object with four attributes:

  • index – The index of the data point in the time series
  • input – The input data without any transformation (identity function)
  • mp – The distance to the closest non-self-matching subsequence for the subsequence ending in index
  • anomalyTag – A binary label that indicates whether the subsequence is an anomaly

In the provided implementation, the threshold is learned online by fitting a normal distribution to the matrix profile data:

/*
 * Computes the threshold as two standard deviations away from the mean (p = 0.02)
 *
 * @return an estimated threshold
 */
public Double getThreshold() {
   Double mean = sum/counter;

   return mean + 2 * Math.sqrt(squaredSum/counter - mean*mean);
}

In this example, the algorithm classifies as anomalies those subsequences whose distance from their nearest neighbor deviates significantly from the average minimum distance (more than two standard deviations away from the mean).
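The threshold.update method called in processElement is not shown in the post. The following is a hedged sketch of how such an online estimator could accumulate the running statistics that getThreshold consumes; the field and method names mirror the snippet above, but the rest is an assumption rather than the repository code.

public class MovingThreshold {

    private double sum = 0.0;
    private double squaredSum = 0.0;
    private long counter = 0;

    // Accumulates running statistics of the observed nearest-neighbor distances
    public void update(double minDistance) {
        sum += minDistance;
        squaredSum += minDistance * minDistance;
        counter++;
    }

    // Mean plus two standard deviations of the observed distances
    public double getThreshold() {
        double mean = sum / counter;
        return mean + 2 * Math.sqrt(squaredSum / counter - mean * mean);
    }
}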

The TimeSeries class implements the data structure that retains the context window, namely, the internally stored records that are used for comparison against the new incoming records. In the provided implementation, the n most recent records are retained, and when the TimeSeries object is at capacity, the oldest records are overwritten.
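As a rough illustration of that retention policy, here is a minimal fixed-capacity ring buffer in Java. The real TimeSeries class also manages subsequences and the distance computations, so treat this only as a sketch of how the oldest records get overwritten.

public class RingBuffer {

    private final double[] values;
    private long nextIndex = 0; // total number of records observed so far

    public RingBuffer(int capacity) {
        this.values = new double[capacity];
    }

    // Stores a record, overwriting the oldest one once at capacity,
    // and returns the record's global index in the stream
    public long add(double record) {
        values[(int) (nextIndex % values.length)] = record;
        return nextIndex++;
    }

    public boolean atCapacity() {
        return nextIndex >= values.length;
    }
}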

Prerequisites

Before you create a Kinesis Data Analytics application for this exercise, create two Kinesis data streams: InputStream and OutputStream in us-east-1. The Flink application will use these streams as its respective source and destination streams. To create these resources, launch the following AWS CloudFormation stack:

Launch Stack

Alternatively, follow the instructions in Creating and Updating Data Streams.

Create the application

To create your application, complete the following steps:

  1. Clone the remote repository using the following command:
    git clone https://github.com/aws-samples/amazon-kinesis-data-analytics-java-examples 

  2. Navigate to the amazon-kinesis-data-analytics-java-examples/AnomalyDetection/LeftDiscords/core directory.
    cd amazon-kinesis-data-analytics-java-examples/AnomalyDetection/LeftDiscords/core

  3. Create your JAR file by running the following Maven command in the core directory, which contains the pom.xml file:
    mvn package -Dflink.version=1.15.4
  4. Create an Amazon Simple Storage Service (Amazon S3) bucket and upload the file target/left-discords-1.0.0.jar.
  5. Create and run a Kinesis Data Analytics application as described in Create and Run the Kinesis Data Analytics Application:
    1. Use the target/left-discords-1.0.0.jar.
    2. Note that the input and output streams are called InputStream and OutputStream, respectively.
    3. The provided example is set up to run in us-east-1.

Populate the input stream

You can populate InputStream by running the script.py file from the cloned repository, using the command python script.py. By editing the last two lines, you can populate the stream with synthetic data or with real data from a CSV dataset.
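If you prefer to stay in Java, the following is a minimal sketch of a producer that writes synthetic data points to InputStream using the AWS SDK for Java v2. The sine-plus-noise signal is only an illustrative stand-in for the data generated by script.py; the stream name and Region match the setup described above.

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.PutRecordRequest;

public class SyntheticProducer {

    public static void main(String[] args) throws InterruptedException {
        try (KinesisClient kinesis = KinesisClient.builder().region(Region.US_EAST_1).build()) {
            for (int i = 0; ; i++) {
                // Sine wave with a little noise as a stand-in for real measurements;
                // the Flink application parses each record as a plain numeric string
                double value = Math.sin(i / 20.0) + 0.1 * Math.random();
                kinesis.putRecord(PutRecordRequest.builder()
                        .streamName("InputStream")
                        .partitionKey("synthetic")
                        .data(SdkBytes.fromUtf8String(Double.toString(value)))
                        .build());
                Thread.sleep(100L);
            }
        }
    }
}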

Visualize data on Kinesis Data Analytics Studio

Kinesis Data Analytics Studio provides the perfect setup for observing data in real time. The following screenshot shows sample visualizations. The first plot shows the incoming time series data, the second plot shows the matrix profile, and the third plot shows which data points have been classified as anomalies.

To visualize the data, complete the following steps:

  1. Create a notebook.
  2. Add the following paragraphs to the Zeppelin note:

Create a table and define the shape of the records generated by the application:

%flink.ssql

CREATE TABLE data (
index INT,
input VARCHAR(6),
mp VARCHAR(6),
anomalyTag VARCHAR(20)
)
PARTITIONED BY (index)
WITH (
'connector' = 'kinesis',
'stream' = 'OutputStream',
'aws.region' = 'us-east-1',
'scan.stream.initpos' = 'LATEST',
'format' = 'json',
'json.timestamp-format.standard' = 'ISO-8601'
)

Visualize the input data (choose Line Chart from the visualization options):

%flink.ssql(type=update)

SELECT index, input FROM data;

Visualize the output matrix profile data (choose Scatter Chart from the visualization options):

%flink.ssql(type=update)

SELECT index, mp FROM data;

Visualize the labeled data (choose Scatter Chart from the visualization options):

%flink.ssql(type=update)

SELECT index, anomalyTag FROM data;

Clean up

To delete all the resources that you created, follow the instructions in Clean Up AWS Resources.

Future developments

In this section, we discuss future developments for this solution.

Optimize for speed

The online time series discords algorithm is further developed and optimized for speed in Lu et al., 2022. The proposed optimizations include:

  • Early stopping – If the algorithm finds a subsequence that is similar enough (below the threshold), it stops searching and marks the query as a non-anomaly. A sketch of this check follows this list.
  • Look-ahead windowing – Look at some amount of data in the future and compare it to the current query to cheaply discover and prune future subsequences that cannot be left-discords. Note that this introduces some delay. This pruning improves performance because data points that are close in time are more likely to be similar than data points that are distant in time.
  • Use of MASS – The MASS (Mueen’s Algorithm for Similarity Search) search algorithm is designed for efficiently discovering the most similar subsequence in the past.
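As a rough illustration of the first optimization, the nearest-neighbor scan can exit as soon as any retained subsequence is already within the threshold, and the inner distance loop can abandon a candidate once its partial sum exceeds the squared threshold. The names below are illustrative assumptions, not the DAMP reference implementation.

import java.util.List;

public class EarlyStoppingSketch {

    // Returns true if the query is a left-discord (anomaly); exits the scan as soon
    // as a sufficiently similar (below-threshold) neighbor is found.
    static boolean isAnomaly(double[] query, List<double[]> retained,
                             int selfMatchExclusion, double threshold) {
        int lastComparable = retained.size() - selfMatchExclusion;
        double thresholdSquared = threshold * threshold;
        for (int i = 0; i < lastComparable; i++) {
            double[] candidate = retained.get(i);
            double sum = 0.0;
            // Early abandoning: stop accumulating once the partial sum already
            // exceeds the squared threshold
            for (int j = 0; j < query.length && sum <= thresholdSquared; j++) {
                double d = query[j] - candidate[j];
                sum += d * d;
            }
            if (sum <= thresholdSquared) {
                return false; // a close enough neighbor exists, so this is not a left-discord
            }
        }
        return true;
    }
}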

Parallelization

The algorithm above operates with parallelism 1, which means it can be used directly whenever a single worker is enough to handle the data stream throughput. For high-throughput scenarios, the design can be enhanced with further distribution logic. To parallelize this algorithm, you may need to design a partitioner operator that ensures the anomaly detection operators have the relevant past data points at their disposal. Each operator can maintain a set of the most recent records against which it compares the query, and the efficiency and accuracy trade-offs of such approximate solutions are interesting to explore. Since the best way to parallelize the algorithm depends largely on the nature of the data, we recommend experimenting with various approaches using your domain-specific knowledge.
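When the workload contains many independent time series (for example, one per sensor or device), the simplest way to scale out is to key the stream by a series identifier so that each parallel operator instance maintains its own model and retained window. The Measurement POJO and seriesId field below are hypothetical; partitioning a single high-volume series is harder and requires the kind of custom partitioner discussed above.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class KeyedAnomalyDetectionSketch {

    // Hypothetical record carrying a series identifier alongside the measurement
    public static class Measurement {
        public String seriesId;
        public double value;
    }

    public static DataStream<String> detect(DataStream<Measurement> input) {
        return input
                // Each key (series) is handled by a single operator instance, so
                // per-series state such as the retained window stays local to it
                .keyBy(m -> m.seriesId)
                .process(new KeyedProcessFunction<String, Measurement, String>() {
                    @Override
                    public void processElement(Measurement m, Context ctx, Collector<String> out) {
                        // Per-series anomaly detection logic (as in MPProcessFunction) goes here
                        out.collect(m.seriesId + "," + m.value);
                    }
                });
    }
}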

Conclusion

In this post, we presented a streaming version of an anomaly detection algorithm based on left-discords. By implementing this solution, you learned how to deploy an Apache Flink-based anomaly detection solution on Kinesis Data Analytics, and you explored the potential of Kinesis Data Analytics Studio for visualizing and interacting with streaming data in real time. For more details on how to implement anomaly detection solutions in Apache Flink, refer to the GitHub repository that accompanies this post. To learn more about Kinesis Data Analytics and Apache Flink, explore the Amazon Kinesis Data Analytics Developer Guide.

Give it a try and share your feedback in the comments section.


About the Authors

Antonio Vespoli is a Software Development Engineer in AWS. He works on Amazon Kinesis Data Analytics, the managed offering for running Apache Flink applications on AWS.

Samuel Siebenmann is a Software Development Engineer in AWS. He works on Amazon Kinesis Data Analytics, the managed offering for running Apache Flink applications on AWS.

Nuno Afonso is a Software Development Engineer in AWS. He works on Amazon Kinesis Data Analytics, the managed offering for running Apache Flink applications on AWS.

Credible Handwriting Machine

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/credible-handwriting-machine.html

In case you don’t have enough to worry about, someone has built a credible handwriting machine:

This is still a work in progress, but the project seeks to solve one of the biggest problems with other homework machines, such as this one that I covered a few months ago after it blew up on social media. The problem with most homework machines is that they’re too perfect. Not only is their content output too well-written for most students, but they also have perfect grammar and punctuation—something even we professional writers fail to consistently achieve. Most importantly, the machine’s “handwriting” is too consistent. Humans always include small variations in their writing, no matter how honed their penmanship.

Devadath is on a quest to fix the issue with perfect penmanship by making his machine mimic human handwriting. Even better, it will reflect the handwriting of its specific user so that AI-written submissions match those written by the student themselves.

Like other machines, this starts with asking ChatGPT to write an essay based on the assignment prompt. That generates a chunk of text, which would normally be stylized with a script-style font and then output as g-code for a pen plotter. But instead, Devadath created custom software that records examples of the user’s own handwriting. The software then uses that as a font, with small random variations, to create a document image that looks like it was actually handwritten.

Watch the video.

My guess is that this is another detection/detection avoidance arms race.

Stronger together: Highlights from RSA Conference 2023

Post Syndicated from Anne Grahn original https://aws.amazon.com/blogs/security/stronger-together-highlights-from-rsa-conference-2023/

Golden Gate bridge

RSA Conference 2023 brought thousands of cybersecurity professionals to the Moscone Center in San Francisco, California from April 24 through 27.

The keynote lineup was eclectic, with more than 30 presentations across two stages featuring speakers ranging from renowned theoretical physicist and futurist Dr. Michio Kaku to Grammy-winning musician Chris Stapleton. Topics aligned with this year’s conference theme, “Stronger Together,” and focused on actions that can be taken by everyone, from the C-suite to those of us on the front lines of security, to strengthen collaboration, establish new best practices, and make our defenses more diverse and effective.

With over 400 sessions and 500 exhibitors discussing the latest trends and technologies, it’s impossible to recap every highlight. Now that the dust has settled and we’ve had time to reflect, here’s a glimpse of what caught our attention.

Noteworthy announcements

Hundreds of companies — including Amazon Web Services (AWS) — made new product and service announcements during the conference.

We announced three new capabilities for our Amazon GuardDuty threat detection service to help customers secure container, database, and serverless workloads. These include GuardDuty Elastic Kubernetes Service (EKS) Runtime Monitoring, GuardDuty RDS Protection for data stored in Amazon Aurora, and GuardDuty Lambda Protection for serverless applications. The new capabilities are designed to provide actionable, contextual, and timely security findings with resource-specific details.

Artificial intelligence

It was hard to find a single keynote, session, or conversation that didn’t touch on the impact of artificial intelligence (AI).

In “AI: Law, Policy and Common Sense Suggestions on How to Stay Out of Trouble,” privacy and gaming attorney Behnam Dayanim highlighted ambiguity around the definition of AI. Referencing a quote from University of Washington School of Law’s Ryan Calo, Dayanim pointed out that AI may be best described as “…a set of techniques aimed at approximating some aspect of cognition,” and should therefore be thought of differently than a discrete “thing” or industry sector.

Dayanim noted examples of skepticism around the benefits of AI. A recent Monmouth University poll, for example, found that 73% of Americans believe AI will make jobs less available and harm the economy, and a surprising 55% believe AI may one day threaten humanity’s existence.

Equally skeptical, he noted, is a joint statement made by the Federal Trade Commission (FTC) and three other federal agencies during the conference reminding the public that enforcement authority applies to AI. The statement takes a pessimistic view, saying that AI is “…often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices,” but has the potential to produce negative outcomes.

Dayanim covered existing and upcoming legal frameworks around the world that are aimed at addressing AI-related risks related to intellectual property (IP), misinformation, and bias, and how organizations can design AI governance mechanisms to promote fairness, competence, transparency, and accountability.

Many other discussions focused on the immense potential of AI to automate and improve security practices. RSA Security CEO Rohit Ghai explored the intersection of progress in AI with human identity in his keynote. “Access management and identity management are now table stakes features”, he said. In the AI era, we need an identity security solution that will secure the entire identity lifecycle—not just access. To be successful, he believes, the next generation of identity technology needs to be powered by AI, open and integrated at the data layer, and pursue a security-first approach. “Without good AI,” he said, “zero trust has zero chance.”

Mark Ryland, director at the Office of the CISO at AWS, spoke with Infosecurity about improving threat detection with generative AI.

“We’re very focused on meaningful data and minimizing false positives. And the only way to do that effectively is with machine learning (ML), so that’s been a core part of our security services,” he noted.

We recently announced several new innovations—including Amazon Bedrock, the Amazon Titan foundation model, the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Trn1n instances powered by AWS Trainium, Amazon EC2 Inf2 instances powered by AWS Inferentia2, and the general availability of Amazon CodeWhisperer—that will make it practical for customers to use generative AI in their businesses.

“Machine learning and artificial intelligence will add a critical layer of automation to cloud security. AI/ML will help augment developers’ workstreams, helping them create more reliable code and drive continuous security improvement.” — CJ Moses, CISO and VP of security engineering at AWS

The human element

Dozens of sessions focused on the human element of security, with topics ranging from the psychology of DevSecOps to the NIST Phish Scale. In “How to Create a Breach-Deterrent Culture of Cybersecurity, from Board Down,” Andrzej Cetnarski, founder, chairman, and CEO of Cyber Nation Central and Marcus Sachs, deputy director for research at Auburn University, made a data-driven case for CEOs, boards, and business leaders to set a tone of security in their organizations, so they can address “cyber insecure behaviors that lead to social engineering” and keep up with the pace of cybercrime.

Lisa Plaggemier, executive director of the National Cybersecurity Alliance, and Jenny Brinkley, director of Amazon Security, stressed the importance of compelling security awareness training in “Engagement Through Entertainment: How To Make Security Behaviors Stick.” Education is critical to building a strong security posture, but as Plaggemier and Brinkley pointed out, we’re “living through an epidemic of boringness” in cybersecurity training.

According to a recent report, just 28% of employees say security awareness training is engaging, and only 36% say they pay full attention during such training.

Citing a United Airlines preflight safety video and Amazon’s Protect and Connect public service announcement (PSA) as examples, they emphasized the need to make emotional connections with users through humor and unexpected elements in order to create memorable training that drives behavioral change.

Plaggemier and Brinkley detailed five actionable steps for security teams to improve their awareness training:

  • Brainstorm with staff throughout the company (not just the security people)
  • Find ideas and inspiration from everywhere else (TV episodes, movies… anywhere but existing security training)
  • Be relatable, and include insights that are relevant to your company and teams
  • Start small; you don’t need a large budget to add interest to your training
  • Don’t let naysayers deter you — change often prompts resistance
“You’ve got to make people care. And so you’ve got to find out what their personal motivators are, and how to develop the type of content that can make them care to click through the training and…remember things as they’re walking through an office.” — Jenny Brinkley, director of Amazon Security

Cloud security

Cloud security was another popular topic. In “Architecting Security for Regulated Workloads in Hybrid Cloud,” Mark Buckwell, cloud security architect at IBM, discussed the architectural thinking practices—including zero trust—required to integrate security and compliance into regulated workloads in a hybrid cloud environment.

Mitiga co-founder and CTO Ofer Maor told real-world stories of SaaS attacks and incident response in “It’s Getting Real & Hitting the Fan 2023 Edition.”

Maor highlighted common tactics focused on identity theft, including MFA push fatigue, phishing, business email compromise, and adversary-in-the-middle attacks. After detailing techniques that are used to establish persistence in SaaS environments and deliver ransomware, Maor emphasized the importance of forensic investigation and threat hunting in gaining the knowledge needed to reduce the impact of SaaS security incidents.

Sarah Currey, security practice manager, and Anna McAbee, senior solutions architect at AWS, provided complementary guidance in “Top 10 Ways to Evolve Cloud Native Incident Response Maturity.” Currey and McAbee highlighted best practices for addressing incident response (IR) challenges in the cloud — no matter who your provider is:

  1. Define roles and responsibilities in your IR plan
  2. Train staff on AWS (or your provider)
  3. Develop cloud incident response playbooks
  4. Develop account structure and tagging strategy
  5. Run simulations (red team, purple team, tabletop)
  6. Prepare access
  7. Select and set up logs
  8. Enable managed detection services in all available AWS Regions
  9. Determine containment strategy for resource types
  10. Develop cloud forensics capabilities

Speaking to BizTech, Clarke Rodgers, director of enterprise strategy at AWS, noted that tools and services such as Amazon GuardDuty and AWS Key Management Service (AWS KMS) are available to help advance security in the cloud. When organizations take advantage of these services and use partners to augment security programs, they can gain the confidence they need to take more risks, and accelerate digital transformation and product development.

Security takes a village

There are more highlights than we can mention on a variety of other topics, including post-quantum cryptography, data privacy, and diversity, equity, and inclusion. We’ve barely scratched the surface of RSA Conference 2023. If there is one key takeaway, it is that no single organization or individual can address cybersecurity challenges alone. By working together and sharing best practices as an industry, we can develop more effective security solutions and stay ahead of emerging threats.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Anne Grahn

Anne Grahn

Anne is a Senior Worldwide Security GTM Specialist at AWS based in Chicago. She has more than a decade of experience in the security industry, and focuses on effectively communicating cybersecurity risk. She maintains a Certified Information Systems Security Professional (CISSP) certification.

Danielle Ruderman

Danielle Ruderman

Danielle is a Senior Manager for the AWS Worldwide Security Specialist Organization, where she leads a team that enables global CISOs and security leaders to better secure their cloud environments. Danielle is passionate about improving security by building company security culture that starts with employee engagement.

Amazon SageMaker Geospatial Capabilities Now Generally Available with Security Updates and More Use Case Samples

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/amazon-sagemaker-geospatial-capabilities-now-generally-available-with-security-updates-and-more-use-case-samples/

At AWS re:Invent 2022, we previewed Amazon SageMaker geospatial capabilities, allowing data scientists and machine learning (ML) engineers to build, train, and deploy ML models using geospatial data. Geospatial ML with Amazon SageMaker provides access to readily available geospatial data, purpose-built processing operations and open-source libraries, pre-trained ML models, and built-in visualization tools.

During the preview, we had lots of interest and great feedback from customers. Today, Amazon SageMaker geospatial capabilities are generally available with new security updates and additional sample use cases.

Introducing Geospatial ML features with SageMaker Studio
To get started, use the quick setup to launch Amazon SageMaker Studio in the US West (Oregon) Region. Make sure to use the default Jupyter Lab 3 version when you create a new user in the Studio. Now you can navigate to the homepage in SageMaker Studio. Then select the Data menu and click on Geospatial.

Here is an overview of three key Amazon SageMaker geospatial capabilities:

  • Earth Observation jobs – Acquire, transform, and visualize satellite imagery data using purpose-built geospatial operations or pre-trained ML models to make predictions and get useful insights.
  • Vector Enrichment jobs – Enrich your data with operations, such as converting geographical coordinates to readable addresses.
  • Map Visualization – Visualize satellite images or map data uploaded from a CSV, JSON, or GeoJSON file.

You can create all Earth Observation Jobs (EOJ) in the SageMaker Studio notebook to process satellite data using purpose-built geospatial operations. Here is a list of purpose-built geospatial operations that are supported by the SageMaker Studio notebook:

  • Band Stacking – Combine multiple spectral properties to create a single image.
  • Cloud Masking – Identify cloud and cloud-free pixels to get improved and accurate satellite imagery.
  • Cloud Removal – Remove pixels containing parts of a cloud from satellite imagery.
  • Geomosaic – Combine multiple images for greater fidelity.
  • Land Cover Segmentation – Identify land cover types such as vegetation and water in satellite imagery.
  • Resampling – Scale images to different resolutions.
  • Spectral Index – Obtain a combination of spectral bands that indicate the abundance of features of interest.
  • Temporal Statistics – Calculate statistics through time for multiple GeoTIFFs in the same area.
  • Zonal Statistics – Calculate statistics on user-defined regions.

A Vector Enrichment Job (VEJ) enriches your location data through purpose-built operations for reverse geocoding and map matching. While you need to use a SageMaker Studio notebook to execute a VEJ, you can view all the jobs you create using the user interface. To use the visualization in the notebook, you first need to export your output to your Amazon S3 bucket.

  • Reverse Geocoding – Convert coordinates (latitude and longitude) to human-readable addresses.
  • Map Matching – Snap inaccurate GPS coordinates to road segments.

Using the Map Visualization, you can visualize geospatial data, the inputs to your EOJ or VEJ jobs as well as the outputs exported from your Amazon Simple Storage Service (Amazon S3) bucket.

Security Updates
At GA, we have two major security updates: support for customer managed AWS Key Management Service (AWS KMS) keys, and support for running geospatial operations in your Amazon Virtual Private Cloud (Amazon VPC) environment.

AWS KMS customer managed keys offer increased flexibility and control by enabling customers to use their own keys to encrypt geospatial workloads.

You can use KmsKeyId to specify your own key in StartEarthObservationJob and StartVectorEnrichmentJob as an optional parameter. If the customer doesn’t provide KmsKeyId, a service owned key will be used to encrypt the customer content. To learn more, see SageMaker geospatial capabilities AWS KMS Support in the AWS documentation.

Using Amazon VPC, you have full control over your network environment and can more securely connect to your geospatial workloads on AWS. You can use SageMaker Studio or notebooks in your Amazon VPC environment for SageMaker geospatial operations and execute SageMaker geospatial API calls through an interface VPC endpoint.

To get started with Amazon VPC support, configure Amazon VPC on SageMaker Studio Domain and create a SageMaker geospatial VPC endpoint in your VPC in the Amazon VPC console. Choose the service name as com.amazonaws.us-west-2.sagemaker-geospatial and select the VPC in which to create the VPC endpoint.

All Amazon S3 resources that are used for input or output in EOJ and VEJ operations should have internet access enabled. If you have no direct access to those Amazon S3 resources via the internet, you can grant the SageMaker geospatial VPC endpoint access to them by changing the corresponding S3 bucket policy. To learn more, see SageMaker geospatial capabilities Amazon VPC Support in the AWS documentation.

Example Use Case for Geospatial ML
Customers across various industries use Amazon SageMaker geospatial capabilities for real-world applications.

Maximize Harvest Yield and Food Security
Digital farming consists of applying digital solutions, advanced analytics, and machine learning to help farmers optimize crop production. Digital farming applications require working with geospatial data, including satellite imagery of the areas where farmers’ fields are located.

You can use SageMaker to identify farm field boundaries in satellite imagery through pre-trained models for land cover classification. Learn about How Xarvio accelerated pipelines of spatial data for digital farming with Amazon SageMaker Geospatial in the AWS Machine Learning Blog. You can find an end-to-end digital farming example notebook via the GitHub repository.

Damage Assessment
As the frequency and severity of natural disasters increase, it’s important that we equip decision-makers and first responders with fast and accurate damage assessment. You can use geospatial imagery to predict natural disaster damage and geospatial data in the immediate aftermath of a natural disaster to rapidly identify damage to buildings, roads, or other critical infrastructure.

Using an example notebook, you can train and deploy a model to predict natural disaster damage from the floods in Rochester, Australia, in mid-October 2022. We use images from before and after the disaster as input to the trained ML model. The results of the segmentation mask for the Rochester floods are shown in the following images. Here we can see that the model has identified locations within the flooded region as likely damaged.

You can train and deploy a geospatial segmentation model to assess wildfire damages using multi-temporal Sentinel-2 satellite data via GitHub repository. The area of interest for this example is located in Northern California, from a region that was affected by the Dixie Wildfire in 2021.

Monitor Climate Change
Climate change increases the risk of drought as the planet warms. In the Lake Mead example, you can see how to use SageMaker geospatial capabilities to acquire data, perform analysis, and visualize how climate change is shrinking the shoreline of Lake Mead, the largest reservoir in the US.

Lake Mead surface area animation

You can find the notebook code for this example in the GitHub repository.

Predict Retail Demand
The new notebook example demonstrates how to use SageMaker geospatial capabilities to perform a vector-based map-matching operation and visualize the results. Map matching allows you to snap noisy GPS coordinates to road segments. With Amazon SageMaker geospatial capabilities, it is possible to perform a VEJ for map matching. This type of job takes a CSV file with route information (such as longitude, latitude, and timestamps of GPS measurements) as input and produces a GeoJSON file that contains the predicted route.

Support Sustainable Urban Development
Arup, one of our customers, uses digital technologies like machine learning to explore the impact of heat on urban areas and the factors that influence local temperatures, with the goal of delivering better designs and supporting sustainable outcomes. Urban heat islands, and the risks and discomfort associated with them, are among the biggest challenges cities face today.

Using Amazon SageMaker geospatial capabilities, Arup identifies and measures urban heat factors with Earth observation data, which has significantly accelerated its ability to counsel clients. The capabilities enabled Arup’s engineering teams to carry out analytics that weren’t previously possible by providing access to larger volumes, more varied types, and deeper analysis of geospatial datasets. To learn more, see Facilitating Sustainable City Design Using Amazon SageMaker with Arup in AWS customer stories.

Now Available
Amazon SageMaker geospatial capabilities are now generally available in the US West (Oregon) Region. As part of the AWS Free Tier, you can get started with SageMaker geospatial capabilities for free. The Free Tier lasts 30 days and includes 10 free ml.geospatial.interactive compute hours, up to 10 GB of free storage, and no $150 monthly user fee.

After the 30-day free trial period is complete, or if you exceed the Free Tier limits defined above, you pay for the components outlined on the pricing page.

To learn more, see Amazon SageMaker geospatial capabilities and the Developer Guide. Give it a try and send feedback to AWS re:Post for Amazon SageMaker or through your usual AWS support contacts.

Channy

Ted Chiang on the Risks of AI

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/ted-chiang-on-the-risks-of-ai.html

Ted Chiang has an excellent essay in the New Yorker: “Will A.I. Become the New McKinsey?”

The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as “capital’s willing executioners”? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people’s lives worse? Suppose you’ve built a semi-autonomous A.I. that’s entirely obedient to humans­—one that repeatedly checks to make sure it hasn’t misinterpreted the instructions it has received. This is the dream of many A.I. researchers. Yet such software could easily still cause as much harm as McKinsey has.

Note that you cannot simply say that you will build A.I. that only offers pro-social solutions to the problems you ask it to solve. That’s the equivalent of saying that you can defuse the threat of McKinsey by starting a consulting firm that only offers such solutions. The reality is that Fortune 100 companies will hire McKinsey instead of your pro-social firm, because McKinsey’s solutions will increase shareholder value more than your firm’s solutions will. It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.

EDITED TO ADD: Ted Chiang’s previous essay, “ChatGPT Is a Blurry JPEG of the Web” is also worth reading.

Building Trustworthy AI

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/05/building-trustworthy-ai.html

We will all soon get into the habit of using AI tools for help with everyday problems and tasks. We should get in the habit of questioning the motives, incentives, and capabilities behind them, too.

Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?

For AI to truly be our assistant, it needs to be trustworthy. For it to be trustworthy, it must be under our control; it can’t be working behind the scenes for some tech monopoly. This means, at a minimum, the technology needs to be transparent. And we all need to understand how it works, at least a little bit.

Amid the myriad warnings about creepy risks to well-being, threats to democracy, and even existential doom that have accompanied stunning recent developments in artificial intelligence (AI)—and large language models (LLMs) like ChatGPT and GPT-4—one optimistic vision is abundantly clear: this technology is useful. It can help you find information, express your thoughts, correct errors in your writing, and much more. If we can navigate the pitfalls, its assistive benefit to humanity could be epoch-defining. But we’re not there yet.

Let’s pause for a moment and imagine the possibilities of a trusted AI assistant. It could write the first draft of anything: emails, reports, essays, even wedding vows. You would have to give it background information and edit its output, of course, but that draft would be written by a model trained on your personal beliefs, knowledge, and style. It could act as your tutor, answering questions interactively on topics you want to learn about—in the manner that suits you best and taking into account what you already know. It could assist you in planning, organizing, and communicating: again, based on your personal preferences. It could advocate on your behalf with third parties: either other humans or other bots. And it could moderate conversations on social media for you, flagging misinformation, removing hate or trolling, translating for speakers of different languages, and keeping discussions on topic; or even mediate conversations in physical spaces, interacting through speech recognition and synthesis capabilities.

Today’s AIs aren’t up for the task. The problem isn’t the technology—that’s advancing faster than even the experts had guessed—it’s who owns it. Today’s AIs are primarily created and run by large technology companies, for their benefit and profit. Sometimes we are permitted to interact with the chatbots, but they’re never truly ours. That’s a conflict of interest, and one that destroys trust.

The transition from awe and eager utilization to suspicion to disillusionment is a well-worn one in the technology sector. Twenty years ago, Google’s search engine rapidly rose to monopolistic dominance because of its transformative information retrieval capability. Over time, the company’s dependence on revenue from search advertising led it to degrade that capability. Today, many observers look forward to the death of the search paradigm entirely. Amazon has walked the same path, from honest marketplace to one riddled with lousy products whose vendors have paid to have the company show them to you. We can do better than this. If each of us is going to have an AI assistant helping us with essential activities daily and even advocating on our behalf, we each need to know that it has our interests in mind. Building trustworthy AI will require systemic change.

First, a trustworthy AI system must be controllable by the user. That means the model should be able to run on devices the user owns (perhaps in a simplified form) or within a cloud service that the user controls. It should show the user how it builds its responses: when it queries the web or external services, when it directs other software to act on the user’s behalf (such as sending an email), and when it modifies the user’s prompts to better express what the company that made it thinks the user wants. It should be able to explain its reasoning to users and cite its sources. These requirements are all well within the technical capabilities of AI systems.
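None of this is speculative. Below is a minimal sketch of what "under the user's control" can look like today: a small open-weight model (gpt2 is used here only because it is tiny and freely downloadable) running locally through the Hugging Face transformers library, with every action written to an audit log the user can read. The `log_action` and `draft_reply` helpers and the log format are illustrative inventions, not features of any existing product.

```python
# A minimal sketch of user-controlled AI: a small open-weight model running
# locally, with every action logged where the user can inspect it.
# Assumptions: the transformers library is installed and the gpt2 weights can
# be downloaded once; log_action() and its format are purely illustrative.
import datetime

from transformers import pipeline

# The model runs on hardware the user owns; prompts never leave the machine.
generator = pipeline("text-generation", model="gpt2")


def log_action(kind: str, detail: str) -> None:
    """Append a human-readable record of everything the assistant does."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open("assistant_audit.log", "a") as f:
        f.write(f"{stamp}\t{kind}\t{detail}\n")


def draft_reply(prompt: str) -> str:
    """Generate a draft locally, recording the request in the audit log."""
    log_action("generate", f"prompt={prompt!r}")
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    return result[0]["generated_text"]


if __name__ == "__main__":
    print(draft_reply("A polite email declining a meeting:"))
```

The point is not that everyone should run gpt2 on a laptop; it is that local execution plus a visible audit trail is already cheap to build, so opacity is a choice, not a technical necessity.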

Furthermore, users should be in control of the data used to train and fine-tune the AI system. When modern LLMs are built, they are first trained on massive, generic corpora of text typically sourced from across the Internet. Many systems go a step further, fine-tuning on more specific datasets purpose-built for a narrow application, such as speaking in the language of a medical doctor or mimicking the manner and style of an individual user. In the near future, corporate AIs will be routinely fed your data, probably without your awareness or your consent. Any trustworthy AI system should transparently allow users to control what data it uses.

Many of us would welcome an AI-assisted writing application fine-tuned with knowledge of which edits we have accepted in the past and which we have rejected. We would be more skeptical of a chatbot that knows which of our search results led to purchases and which did not.
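To make "control over fine-tuning data" concrete, here is a hypothetical sketch of the first case: assembling a fine-tuning set only from edit records the user has both accepted and explicitly opted in to sharing. The record layout, the `consented` flag, the `build_finetune_set` helper, and the `my_edits.jsonl` output file are all invented for illustration.

```python
# Hypothetical sketch: build a fine-tuning dataset only from edit records the
# user has explicitly agreed to share. The record fields, the "consented"
# flag, and the output format are invented for illustration.
import json


def build_finetune_set(edit_records: list[dict], out_path: str) -> int:
    """Write prompt/completion pairs for consented, accepted edits only."""
    kept = 0
    with open(out_path, "w") as f:
        for rec in edit_records:
            if not rec.get("consented", False):
                continue  # the user never opted this record in; skip it
            if not rec.get("accepted", False):
                continue  # only learn from edits the user actually kept
            pair = {"prompt": rec["original_text"],
                    "completion": rec["edited_text"]}
            f.write(json.dumps(pair) + "\n")
            kept += 1
    return kept


records = [
    {"original_text": "Thanks for you note.",
     "edited_text": "Thanks for your note.",
     "accepted": True, "consented": True},
    {"original_text": "Regards,", "edited_text": "Best regards,",
     "accepted": False, "consented": True},
]
print(build_finetune_set(records, "my_edits.jsonl"), "record(s) written")
```

The essential design choice is that consent is checked per record, in code the user can audit, rather than buried in a terms-of-service document.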

You should also be informed of what an AI system can do on your behalf. Can it access other apps on your phone, and the data stored with them? Can it retrieve information from external sources, mixing your inputs with details from other places you may or may not trust? Can it send a message in your name (hopefully based on your input)? Weighing these types of risks and benefits will become an inherent part of our daily lives as AI-assistive tools become integrated with everything we do.
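One way to make those risks legible is a capability manifest: the assistant declares up front what it can do, and anything the user has not granted is simply refused. The sketch below is hypothetical; the capability names, the `CAPABILITIES` manifest, and the `request` gate are invented for illustration.

```python
# Hypothetical sketch: an assistant declares its capabilities up front, and
# anything not explicitly granted by the user is refused. The capability
# names and the manifest format are invented for illustration.
CAPABILITIES = {
    "read_contacts": False,   # access other apps' data on the phone
    "fetch_web": True,        # pull information from external sources
    "send_message": False,    # act in the user's name
}


def request(capability: str) -> bool:
    """Return True only if the user has granted this capability."""
    allowed = CAPABILITIES.get(capability, False)
    if not allowed:
        print(f"Blocked: '{capability}' was not granted by the user.")
    return allowed


# The assistant must check before acting, not apologize afterwards.
if request("send_message"):
    pass  # would hand the drafted message to the messaging app here
```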

Realistically, we should all be preparing for a world where AI is not trustworthy. Because AI tools can be so incredibly useful, they will increasingly pervade our lives, whether we trust them or not. Being a digital citizen of the next quarter of the twenty-first century will require learning the basic ins and outs of LLMs so that you can assess their risks and limitations for a given use case. This will better prepare you to take advantage of AI tools, rather than be taken advantage of by them.

In the world’s first few months of widespread use of models like ChatGPT, we’ve learned a lot about how AI creates risks for users. Everyone has heard by now that LLMs “hallucinate,” meaning that they make up “facts” in their outputs, because their predictive text-generation systems are not constrained to fact-check their own emanations. Many users learned in March that information they submit as prompts to systems like ChatGPT may not be kept private after a bug revealed users’ chats. Your chat histories are stored in systems that may be insecure.

Researchers have found numerous clever ways to trick chatbots into breaking their safety controls; these work largely because many of the “rules” applied to these systems are soft, like instructions given to a person, rather than hard, like coded limitations on a product’s functions. It’s as if we are trying to keep AI safe by asking it nicely to drive carefully, a hopeful instruction, rather than taking away its keys and placing definite constraints on its abilities.
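The difference between soft and hard rules is easy to see in code. The hypothetical sketch below contrasts a "rule" that is just text the model is asked to follow (and that a crafted prompt can talk it out of) with a constraint enforced in ordinary code outside the model. `SOFT_RULE`, `ALLOWED_TOOLS`, `execute_tool_call`, and the tool names are invented for illustration.

```python
# Hypothetical sketch of "soft" versus "hard" rules. The soft rule is just
# text the model is asked to follow, and a jailbreak can talk it out of it;
# the hard rule is enforced in ordinary code the model cannot negotiate with.
# Tool names and the allowlist are invented for illustration.

SOFT_RULE = "You must never send email without asking the user first."

ALLOWED_TOOLS = {"search_web", "read_calendar"}  # hard rule: a fixed allowlist


def execute_tool_call(tool_name: str, argument: str) -> str:
    """Run a tool only if it is on the allowlist, regardless of what the model asks for."""
    if tool_name not in ALLOWED_TOOLS:
        return f"Refused: '{tool_name}' is not on the allowlist."
    return f"Ran {tool_name}({argument!r})"


# Even if a jailbreak convinces the model to ignore SOFT_RULE and request
# "send_email", the surrounding code still refuses to carry it out.
print(execute_tool_call("send_email", "to=boss@example.com"))
print(execute_tool_call("search_web", "local hiking trails"))
```

Taking away the keys, in other words, means putting the limit in the code that surrounds the model, not in the prompt that pleads with it.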

These risks will grow as companies grant chatbot systems more capabilities. OpenAI is providing developers wide access to build tools on top of GPT: tools that give their AI systems access to your email, to your personal account information on websites, and to computer code. While OpenAI is applying safety protocols to these integrations, it’s not hard to imagine those being relaxed in a drive to make the tools more useful. It seems likewise inevitable that other companies will come along with less bashful strategies for securing AI market share.

Just as with any human, building trust with an AI will be hard-won through interaction over time. We will need to test these systems in different contexts, observe their behavior, and build a mental model for how they will respond to our actions. Building trust in that way is only possible if these systems are transparent about their capabilities, what inputs they use and when they will share them, and whose interests they are evolving to represent.

This essay was written with Nathan Sanders, and previously appeared on Gizmodo.com.