Automate Document Processing in Logistics using AI

Post Syndicated from Manikanth Pasumarti original https://aws.amazon.com/blogs/architecture/automate-document-processing-in-logistics-using-ai/

Multi-modal transportation is one of the biggest developments in the logistics industry. Transportation partners in supply chain freight forwarding have collaborated successfully for many decades, but there is still considerable overhead in processing the paperwork for each leg of a trip. Tens of billions of documents are processed in ocean freight forwarding alone. Using manual labor to process these documents (purchase orders, invoices, bills of lading, delivery receipts, and more) is both expensive and error-prone.

In this blog post, we’ll address how to automate document processing in the logistics industry. We’ll also show you how to integrate it with centralized workflow management.

Automated document processing architecture

Figure 1. Architecture of document processing workflow


The solution workflow shown in Figure 1 is as follows:

  1. Documents that belong to the same transaction are collected in an S3 bucket
  2. The document processing workflow is initiated
  3. The workflow orchestration is as follows:
    • Document is processed via automation
    • Relevant entities are extracted
    • Extracted data is reviewed
    • Order data is consolidated

This architecture uses Amazon Simple Storage Service (S3) for document storage and Amazon Simple Queue Service (SQS) for workflow initiation. Amazon Textract is used for text extraction, Amazon Comprehend for entity extraction, and Amazon Augmented AI (A2I) for human review. The human review step ensures correct results when predictions have low confidence.

We use AWS Step Functions to orchestrate the document processing workflow. Step Functions also helps improve application resiliency with less code.

AWS Lambda functions are used to:

  • Detect if all required documents for a given transaction are available in Amazon S3
  • Kick off the process by creating an Amazon SQS message
  • Detect a new processing job from a generated SQS message
  • Extract text from PDFs using a Step Function
  • Extract entities from generated text using a Step Function
  • Control data completeness and accuracy
  • Initiate a human loop when needed using a Step Function
  • Consolidate the data collected from documents
  • Store the data into the database
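As a rough illustration of the first two responsibilities above, the following minimal Python sketch checks whether both required document types for a transaction are present in Amazon S3 and, if so, enqueues a processing job in Amazon SQS. The bucket layout, document type names, and queue URL are assumptions for illustration, not details from the original solution.

```python
import json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

REQUIRED_DOC_TYPES = {"invoice", "customs_authorization"}  # assumed document types
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/doc-processing"  # placeholder


def lambda_handler(event, context):
    """Triggered on S3 object creation; enqueues a job once all documents are present."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    transaction_id = key.split("/")[0]  # assumes keys like <transaction-id>/<doc-type>.pdf

    listing = s3.list_objects_v2(Bucket=bucket, Prefix=f"{transaction_id}/")
    present = {obj["Key"].split("/")[-1].rsplit(".", 1)[0]
               for obj in listing.get("Contents", [])}

    if REQUIRED_DOC_TYPES.issubset(present):
        # All documents for this transaction are available; kick off processing.
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"transaction_id": transaction_id, "bucket": bucket}),
        )
    return {"complete": REQUIRED_DOC_TYPES.issubset(present)}
```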

Document ingestion and classification

There are several data ingestion options available, such as AWS Transfer Family, AWS DataSync, and Amazon Kinesis Data Firehose. Choose the appropriate ingestion blueprint based on the type of data source. Typical real-time ingestion blueprints include AWS Lambda processing and an Amazon CloudWatch event. A batch pipeline can use AWS Step Functions to orchestrate the Lambda function that initiates the document processing workflow.

Here are some things to consider when building your document ingestion and storage solution:

  • Choose your bucket strategy. Amazon S3 is an object store. Analyze your data ingestion pipeline carefully and choose the correct S3 bucket strategy for each document type (bills, supplier invoices, and others).
  • Organize your data. The data is organized in S3 buckets by layer: Raw, Staging, and Processed. Each has its own bucket policy and access control.
  • Build a creation tool. This is an automated tool that creates the data lake bucket/folder structure based on your data ingestion requirements. You can use this same structure for user-created data.
  • Define data security requirements. Do this before you begin the ingestion process. Before ingesting new or current data sources into AWS, secure access to the data.
  • Review the security credentials needed for access. After copying these credentials into AWS Systems Manager (SSM), apply an AWS Key Management Service (KMS) key to encrypt them. The encrypted string is stored in SSM for use during authentication.

Document processing workflow

Overview

The workflow checks the input buckets until it detects all the document types necessary for a complete dataset. In our case, these are the invoice document and the customs authorization form. Once both are detected, it generates a job request as a message in Amazon SQS. A Lambda function then processes the message and kicks off the Step Functions flow (see Figure 2). The state machine then initiates the document processing, text extraction, and optional human review steps. AWS Step Functions is well suited to our use case because of its ability to manage long-running workflows.
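A minimal sketch of the Lambda function that consumes the SQS message and starts the state machine might look like the following. The state machine ARN is a placeholder, and the message body shape is an assumption for illustration.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN of the document processing state machine.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:doc-processing"


def lambda_handler(event, context):
    """Triggered by SQS; starts one workflow execution per job message."""
    for record in event["Records"]:
        job = json.loads(record["body"])
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            name=f"doc-processing-{job['transaction_id']}",
            input=json.dumps(job),
        )
```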

Figure 2. Visual workflow of document processing in AWS Step Functions


Entity extraction

For each document, entities are extracted using Amazon Textract and Amazon Comprehend. These entities can include date, company, address, bill of materials, total cost, and invoice number.

Following is a sample invoice document that is fed to Amazon Textract, which extracts the form data and creates key-value pairs.

Figure 3. Highlighted different entities in the sample invoice document


See Figure 4 for an example of the key-value pairs extracted for the sample invoice. The keys here represent the form labels (“SHIP TO”) and the values represent form values (shipping address).

Figure 4. Key-value pairs of the invoice data, extracted by Amazon Textract


Amazon Textract also generates a raw text output that contains the entire text, as shown in Figure 5 following.

Figure 5. Raw text output of the invoice data extracted by Amazon Textract

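As a hedged sketch of the text extraction step (not the exact code behind this workflow), the following call asks Amazon Textract to analyze a document with the FORMS feature and collects the raw lines shown above. Multi-page PDFs would use the asynchronous StartDocumentAnalysis API instead, and the bucket and object names here are placeholders.

```python
import boto3

textract = boto3.client("textract")

# Synchronous analysis of a single-page document stored in S3 (placeholder names).
response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "doc-bucket", "Name": "invoice.png"}},
    FeatureTypes=["FORMS"],
)

# Raw text output: concatenate all LINE blocks.
raw_text = "\n".join(
    block["Text"] for block in response["Blocks"] if block["BlockType"] == "LINE"
)

# Key-value pairs are returned as KEY_VALUE_SET blocks; resolving the key and
# value text requires walking the Relationships graph (omitted here for brevity).
kv_blocks = [b for b in response["Blocks"] if b["BlockType"] == "KEY_VALUE_SET"]
```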

To achieve a higher degree of confidence, Amazon Comprehend is used to identify and extract custom entities. Amazon Comprehend is a natural language processing (NLP) service that uses machine learning (ML) to identify and extract insights and entities from text data. You can train Amazon Comprehend to identify entities relevant to your organization, such as product names, part numbers, or department names. You can also train Amazon Comprehend to categorize documents or assign relevant labels to text.

An Amazon Comprehend entity recognizer comes with a set of pre-built entity types, and you can add custom entity types to match specific business needs. Some of the entities we want to identify are addresses and company names, so we trained a custom recognizer to detect company names and addresses (see Figure 6).

Figure 6. Training details of custom entity recognizer


Figure 7 shows the resulting output from Amazon Comprehend:

Figure 7. Amazon Comprehend entity recognition output


The sample invoice in Figure 3 is processed top down and left to right, so the first company and address belong to the billing company, and the second set belongs to the shipment recipient. Along with detecting custom entities, Amazon Comprehend also outputs a confidence score for each extracted result.

Confidence scores can vary depending on how close the training data is to the actual data. In the preceding example, the first company entity came back with a score of 0.941. Let’s assume that we have set a minimum confidence score of 0.95; anything below that threshold should be reviewed by a human, as described in the next section.
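To make the threshold check concrete, here is a hedged sketch that calls a custom entity recognizer through a Comprehend real-time endpoint and flags low-confidence entities for review. The endpoint ARN, input text variable, and threshold value are illustrative assumptions.

```python
import boto3

comprehend = boto3.client("comprehend")

# Placeholder ARN of the trained custom entity recognizer endpoint.
ENDPOINT_ARN = "arn:aws:comprehend:us-east-1:123456789012:entity-recognizer-endpoint/invoice-er"
CONFIDENCE_THRESHOLD = 0.95

raw_text = "..."  # raw text produced by the Textract step

response = comprehend.detect_entities(Text=raw_text, EndpointArn=ENDPOINT_ARN)

# Entities below the threshold are routed to the human review step described next.
needs_review = [
    entity for entity in response["Entities"]
    if entity["Score"] < CONFIDENCE_THRESHOLD
]
```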

Human review

Amazon Augmented AI (A2I) allows you to create and manage human loops. A human loop is a manual review task that gets assigned to a workforce. The workforce can be public, such as Amazon Mechanical Turk, or private, such as an internal team or a paid contractor. In our example, we created a private workforce to review the entities we were not confident about. Figure 8 shows an example of the user interface that reviewers use to assign entities to the proper text sections.

Figure 8. Manual review interface of Amazon A2I


Review tasks can be automatically submitted to the workforce based on dynamic criteria after the AI steps complete. A human loop can review the text detected by Amazon Textract when key data elements are missing (such as order amount or quantity), or review entities after invoking Amazon Comprehend.
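When the confidence check fails, the workflow can start a human loop programmatically. The following is a minimal sketch; the flow definition ARN and the input payload shape are assumptions for illustration.

```python
import json
import uuid
import boto3

a2i = boto3.client("sagemaker-a2i-runtime")

# Placeholder ARN of the A2I flow definition backed by the private workforce.
FLOW_DEFINITION_ARN = "arn:aws:sagemaker:us-east-1:123456789012:flow-definition/invoice-review"


def start_review(document_text, low_confidence_entities):
    """Send low-confidence entities to the reviewers."""
    a2i.start_human_loop(
        HumanLoopName=f"invoice-review-{uuid.uuid4()}",
        FlowDefinitionArn=FLOW_DEFINITION_ARN,
        HumanLoopInput={
            "InputContent": json.dumps({
                "taskObject": document_text,
                "entities": low_confidence_entities,
            })
        },
    )
```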

Figure 9. Consolidated dataset of processed invoice and customs authorization data


After the manual review step, the data can be consolidated (as shown in Figure 9) and stored in a relational database. It can also be shared with other business units, such as Accounting or Customer Services. You can apply the same process to other document types linked to the same transaction, such as custom forms. This allows us to process and combine information that comes from disparate paper sources more efficiently.

Conclusion

This post demonstrated how business document processing can be automated using Amazon Textract, Amazon Comprehend, and Amazon Augmented AI.

Deploying an automated solution in the logistics industry takes away the undifferentiated heavy lifting involved in manual document processing. This helps cut down delivery delays and track missed deliveries. By providing a comprehensive view of the shipment, it increases the efficiency of back-office processing. It can also simplify data collection for audit purposes.


“The kernel report” online, August 26

Post Syndicated from original https://lwn.net/Articles/866701/rss

As part of the ramp-up to the 2021 Linux Plumbers Conference, LWN editor Jonathan Corbet will be presenting a version of “The kernel report” at 9:00AM US/Mountain time (15:00 UTC) on Thursday, August 26. Registration for LPC is not required; all are welcome for an update on the state of kernel development and a perspective on 30 years of the Linux kernel. Please come for an interesting discussion and to help the LPC crew stress-test the 2021 infrastructure.

The talk will be happening at meet.lpc.events; the more the merrier.

[$] PEP 649 revisited

Post Syndicated from original https://lwn.net/Articles/866613/rss

Back in June, we looked at a change to Python annotations, which provide a way to associate metadata, such as type information, with functions. That change was planned for the upcoming Python 3.10 release, but was deferred due to questions about it and its impact on run-time uses of the feature. The Python steering council felt that more time was needed to consider all of the different aspects of the problem before deciding on the right approach; the feature freeze for Python 3.10 was only around two weeks off when the decision was announced on April 20. But now, there is most of a year before another feature freeze, which gives the council (and the greater Python development community) some time to discuss it at a more leisurely pace.

Query SAP HANA using Athena Federated Query and join with data in your Amazon S3 data lake

Post Syndicated from Navnit Shukla original https://aws.amazon.com/blogs/big-data/query-sap-hana-using-athena-federated-query-and-join-with-data-in-your-amazon-s3-data-lake/

If you use data lakes in Amazon Simple Storage Service (Amazon S3) and SAP HANA as your transactional data store, you may need to join the data in your data lake with SAP HANA in the cloud, SAP HANA running on Amazon Elastic Compute Cloud (Amazon EC2), or an on-premises SAP HANA instance, for example, to build a dashboard or create consolidated reporting.

In such use cases, Amazon Athena Federated Query allows you to seamlessly access data from the SAP HANA database without building ETL pipelines to copy or unload it to the S3 data lake. This removes the overhead of creating additional extract, transform, and load (ETL) processes and shortens the development cycle.

In this post, we walk you through a step-by-step configuration to set up Amazon Athena Federated Query using AWS Lambda to access data in a SAP HANA database running on AWS.

For this post, we use the SAP HANA Athena Federated Query connector developed by Trianz, which is available in the AWS Serverless Application Repository.

Let’s start by discussing the solution and then detail the steps involved.

Solution overview

Data federation is the capability to integrate data in another data store using a single interface (Athena). The following diagram depicts how Athena federation works by using Lambda to integrate with a federated data source.

Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. If you have data in sources other than Amazon S3, you can use Athena Federated Query to query the data in place or build pipelines to extract data from multiple data sources and store them in Amazon S3. With Athena Federated Query, you can run SQL queries across data stored in relational, non-relational, object, and custom data sources.

When a federated query is run, Athena identifies the parts of the query that should be routed to the data source connector and runs them with Lambda. The data source connector makes the connection to the source, runs the query, and returns the results to Athena. If the data doesn’t fit into the Lambda function’s memory, the connector spills it to Amazon S3, where Athena accesses it later.

Athena uses data source connectors which internally use Lambda to run federated queries. Data source connectors are pre-built and can be deployed from the Athena console or from the Serverless Application Repository. Based on the user submitting the query, connectors can provide or restrict access to specific data elements.

To implement this solution, we complete the following steps:

  1. Create a secret for the SAP HANA instance using AWS Secrets Manager.
  2. Create an S3 bucket and subfolder for Lambda to use.
  3. Configure Athena federation with the SAP HANA instance.
  4. Run federated queries with Athena.

Prerequisites

Before getting started, make sure you have a SAP HANA database up and running on AWS.

Create a secret for the SAP HANA instance

Our first step is to create a secret for the SAP HANA instance with a username and password using Secrets Manager.

  1. On the Secrets Manager console, choose Secrets.
  2. Choose Store a new secret.
  3. Select Other types of secrets.
  4. Set the credentials as key-value pairs (username, password) for your SAP HANA instance.
  5. For Secret name, enter a name for your secret. Use the prefix SAPHANAAFQ so it’s easy to find.
  6. Leave the remaining fields at their defaults and choose Next.
  7. Complete your secret creation.
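If you prefer the AWS SDK over the console, a rough equivalent of these steps in Python might look like the following; the secret name and credential values are placeholders.

```python
import json
import boto3

secretsmanager = boto3.client("secretsmanager")

secretsmanager.create_secret(
    Name="SAPHANAAFQ-credentials",  # placeholder; keep the prefix so it's easy to find
    SecretString=json.dumps({
        "username": "hana_user",      # placeholder
        "password": "hana_password",  # placeholder
    }),
)
```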

Setting up your S3 bucket for Lambda

On the Amazon S3 console, create a new S3 bucket and subfolder for Lambda to use.

For this post, we use the S3 bucket and folder athena-accelerator/saphana.

Configure Athena federation with the SAP HANA instance

To configure Athena federation with your SAP HANA instance, complete the following steps:

  1. On the AWS Serverless Application Repository console, choose Available applications.
  2. In the search field, enter TrianzSAPHANAAthenaJDBC.

In the Application settings section, provide the following details:

  3. For Application name, enter TrianzSAPHANAAthenaJDBC.
  4. For SecretNamePrefix, enter trianz-saphana-athena-jdbc.
  5. For SpillBucket, enter athena-accelerator/saphana.

For JDBCConnectorConfig, use the format saphana://jdbc:sap://{saphana_instance_url}/?${secretname}.

  6. For DisableSpillEncryption, choose False.
  7. For LambdaFunctionName, enter trsaphana.
  8. For SecurityGroupID, enter the ID of a security group that allows Lambda to connect to the SAP HANA instance.

Make sure to apply valid inbound and outbound rules based on your connection.

  9. For SpillPrefix, create a folder under the S3 bucket you created and specify the name (for example, athena-spill).
  10. For SubnetIds, enter the subnets (comma-separated) that allow Lambda to connect to the SAP HANA instance.

Make sure the subnet is in a VPC and has a NAT gateway and internet gateway attached.

  11. Select the I acknowledge check box.
  12. Choose Deploy.

Make sure that the AWS Identity and Access Management (IAM) roles have permissions to access AWS Serverless Application Repository, AWS CloudFormation, Amazon S3, Amazon CloudWatch, Amazon CloudTrail, Secrets Manager, Lambda, and Athena. For more information, see Example IAM Permissions Policies to Allow Athena Federated Query.

Run federated queries with Athena

Run your federated queries using lambda:trsaphana against tables in the SAP HANA database. trsaphana is the name of the Lambda function we created in step 7 of the previous section.

lambda:trsaphana references the data source connector Lambda function, using the format lambda:MyLambdaFunctionName. For more information, see Writing Federated Queries.
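For illustration only, a federated join issued through the Athena API from Python could look like the following sketch. The SAP HANA schema and table names, the data lake table, and the query result location are assumptions, not details from the original post.

```python
import boto3

athena = boto3.client("athena")

# Joins a SAP HANA table (via the lambda:trsaphana connector) with a table in the
# S3 data lake registered in the AWS Glue Data Catalog (names are illustrative).
query = """
SELECT o.order_id, o.order_value, c.customer_name
FROM "lambda:trsaphana".sales.orders AS o
JOIN awsdatacatalog.datalake.customers AS c
  ON o.customer_id = c.customer_id
LIMIT 10
"""

athena.start_query_execution(
    QueryString=query,
    ResultConfiguration={"OutputLocation": "s3://athena-accelerator/results/"},  # placeholder
)
```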

The following screenshot demonstrates joining the dataset between SAP HANA and the data lake.

Key performance best practice considerations

If you’re considering Athena federation with a SAP HANA database, we recommend the following best practices:

  • Athena federation works great for queries with predicate filtering because the predicates are pushed down to the SAP HANA database. Use filter and limited-range scans in your queries to avoid full table scans.
  • If your SQL query requires returning a large volume of data from the SAP HANA database to Athena (which could lead to query timeouts or slow performance), unload the large tables in your query from SAP HANA to your S3 data lake.
  • Star schema is a commonly used data model in SAP HANA databases. In the star schema model, unload your large fact tables into your S3 data lake and leave the dimension tables in your SAP HANA database. If large dimension tables are contributing to slow performance or query timeouts, unload those tables to your S3 data lake.
  • When you run federated queries, Athena spins up multiple Lambda functions, which causes a spike in database connections. It’s important to monitor the SAP HANA database’s workload and connection limits to ensure queries aren’t queuing. If concurrent connections become a bottleneck, consider scaling the SAP HANA instance to handle them.

Conclusion

In this post, you learned how to configure and use Athena Federated query with SAP HANA using Lambda. Now you don’t need to wait for all the data in your SAP HANA data warehouse to be unloaded to Amazon S3 and maintained on a day-to-day basis to run your queries.

You can use the best practice considerations outlined in this post to help minimize the data transferred from SAP HANA for better performance. When queries are written with federation in mind, the performance penalty is negligible.

For more information, see the Athena User Guide and Using Amazon Athena Federated Query.


About the Author

Navnit Shukla is an AWS Specialist Solutions Architect in Analytics. He is passionate about helping customers uncover insights from their data and has been building solutions to help organizations make data-driven decisions.

[Security Nation] Daniel Crowley on Running a Cybersecurity Internship

Post Syndicated from Rapid7 original https://blog.rapid7.com/2021/08/18/security-nation-daniel-crowley/


On the latest episode of Security Nation, we’re joined by Daniel Crowley, IBM X-Force Red’s Research Director — aka Global Research Baron (a title that delights Jen Ellis’s British sensibilities). Daniel tells Jen and Tod all about his team’s security research internship program, which gets undergrad and grad students involved in pentesting and other forms of research in real-world environments through a series of bootcamps. He also divulges some research project ideas for those looking to uncover vulnerabilities in hidden places — including your calendar invites.

Stick around for the Rapid Rundown, where Jen and Tod talk about DEF CON highlights, the Cyber Symposium non-findings, and — you guessed it — ransomware.

Daniel Crowley


Daniel is the primary author of the Magical Code Injection Rainbow, a configurable vulnerability testbed, and FeatherDuster, an automated cryptanalysis tool. In the security industry since 2004, he is a frequent speaker at conferences like Black Hat, DEF CON, Shmoocon and SOURCE. Daniel also holds the noble title of Baron in the Principality of Sealand.


Apple’s NeuralHash Algorithm Has Been Reverse-Engineered

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/apples-neuralhash-algorithm-has-been-reverse-engineered.html

Apple’s NeuralHash algorithm — the one it’s using for client-side scanning on the iPhone — has been reverse-engineered.

Turns out it was already in iOS 14.3, and someone noticed:

Early tests show that it can tolerate image resizing and compression, but not cropping or rotations.

We also have the first collision: two images that hash to the same value.

The next step is to generate innocuous images that NeuralHash classifies as prohibited content.

This was a bad idea from the start, and Apple never seemed to consider the adversarial context of the system as a whole, and not just the cryptography.

How to run massively multiplayer games with EC2 Spot using Aurora Serverless

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/how-to-run-massively-multiplayer-games-with-ec2-spot-using-aurora-serverless/

This post is written by Yahav Biran, Principal Solutions Architect, and Pritam Pal, Sr. EC2 Spot Specialist SA

Massively multiplayer online (MMO) game servers must dynamically scale their compute and storage to create world-scale persistence simulations with millions of dynamic objects, such as complex AR/VR synthetic environments that match real-world fidelity. Amazon Elastic Kubernetes Service (EKS) powered by Amazon EC2 Spot Instances, together with Amazon Aurora Serverless, lets customers create world-scale persistence simulations processed by a range of cost-effective compute options, such as ARM, x86, or NVIDIA GPU instances, and persist them on demand with automatically scaling open-source database engines like MySQL or PostgreSQL, without managing any database capacity. This post proposes a fully open-sourced option to build, deploy, and manage MMOs on AWS. We use a Python-based game server to demonstrate the approach.

Challenge

Increasing competition in the gaming industry has driven game developers to architect cost-effective game servers that can scale up to meet player demand and scale down to meet cost goals. AWS enables MMO developers to scale their game beyond the limits of a single server. The game state (world) can be spatially partitioned across many Regions based on the requested number of sessions or simulations using Amazon EC2 compute power. As the game progresses over many sessions, you must track the simulation’s global state. Amazon Aurora maintains the global game state to manage complex interactions, such as hand-offs across instances and Regions. Amazon Aurora Serverless powers all of this for PostgreSQL or MySQL.

This blog shows how to use commodity servers with ephemeral disks by decoupling the game state from the game server. We store the game state in Aurora for PostgreSQL, but you can also use DynamoDB or Amazon Keyspaces for the NoSQL case.

 

Game overview

We use a Minecraft clone to demonstrate a distributed persistence simulation. The game server is Python-based and deployed on Agones, an open-source multiplayer dedicated game-server platform on Kubernetes. The Kubernetes cluster is powered by EC2 Spot Instances and configured to auto scale, expanding and shrinking compute seamlessly upon game-server allocation. We add a GitOps-based continuous delivery system that stores the game-server binaries and configuration in a Git repository and deploys the game to clusters in one or more Regions to allow global compute scale. The following image is a diagram of the architecture.

The game server persists every object in an Aurora Serverless PostgreSQL-compatible edition database. The serverless database configuration handles automatic start-up and scales capacity up or down based on player demand. The world is divided into 32×32 block chunks in the XYZ plane (Y is up). This allows it to be “infinite” (PostgreSQL bigint type) and eases data management. Only visible chunks must be queried from the database.

The central database table is named “block” and has the columns p, q, x, y, z, w. (p, q) identifies the chunk, (x, y, z) identifies the block position, and (w) identifies the block type. 0 represents an empty block (air).

In the game, the chunks store their blocks in a hash map. An (x, y, z) key maps to a (w) value.

The y positions of blocks are limited to 0 <= y < 256. The upper limit is essentially an artificial limitation that prevents users from building tall structures. Users cannot destroy blocks at y = 0 to avoid falling underneath the world.
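A hedged sketch of how a game server might load one visible chunk from the block table follows; it uses psycopg2 with placeholder connection settings, and the actual repository may structure this differently.

```python
import psycopg2

# Placeholder connection details for the Aurora Serverless PostgreSQL endpoint.
conn = psycopg2.connect(
    host="aurora-endpoint", dbname="craft", user="craft", password="change-me"
)


def load_chunk(p, q):
    """Return the blocks of chunk (p, q) as an {(x, y, z): w} hash map."""
    with conn.cursor() as cur:
        cur.execute("SELECT x, y, z, w FROM block WHERE p = %s AND q = %s", (p, q))
        return {(x, y, z): w for x, y, z, w in cur.fetchall()}
```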

Solution overview

Kubernetes allows dedicated game-server scaling and orchestration without limiting the compute platform, spanning many Regions and staying closer to players. For simplicity, we use EKS to reduce operational overhead.

Amazon EC2 runs the compute simulation, which might require different EC2 instance types. These include compute-optimized instances for compute-bound applications that benefit from high-performance processors, and accelerated-compute (GPU) instances that use hardware accelerators to perform functions like graphics processing or data pattern matching. In addition, EC2 Auto Scaling runs the game servers on Amazon EC2 Spot Instances, which offer discounts of up to 90% compared to On-Demand Instance prices. However, Amazon EC2 can interrupt your Spot Instance when the demand for Spot Instances rises, when the supply of Spot Instances decreases, or when the Spot price exceeds your maximum price.

The following two mechanisms minimize the impact of EC2 reclaiming compute capacity:

  1. Pull interruption notifications and notify the game-server to replicate the session to another game server.
  2. Prioritize compute capacity based on availability.

Two Auto Scaling groups are deployed for the second method. The first Auto Scaling group uses latest-generation Spot Instances (C5, C5a, C5n, M5, M5a, and M5n), and the second uses older-generation x86-based instances (C4 and M4). We configure the cluster-autoscaler that controls the Auto Scaling group size with the priority expander option in order to favor the latest-generation Spot Auto Scaling group. The priority should be a positive value, and the highest value wins. For each priority value, a list of regular expressions should be given. The following example assumes an EKS cluster craft-us-west-2 and two Auto Scaling groups. The craft-us-west-2-nodegroup-spot5 Auto Scaling group wins the priority, so new instances are spawned from that EC2 Spot Auto Scaling group.

.…
- --expander=priority
- --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/craft-us-west-2
….
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander
  namespace: kube-system
data:
  priorities: |-
    10:
      - .*craft-us-west-2-nodegroup-spot4*
    50:
      - .*craft-us-west-2-nodegroup-spot5*
---

The complete working spec is available in https://github.com/aws-samples/spotable-game-server/blob/master/specs/cluster_autoscaler.yml.

We propose the following two options to minimize player impact during interruption. The first is based on two-minute interruption notification, and the second on rebalance recommendations.

  1. Notify that the instance will be shut down within two minutes.
  2. Notify when a Spot Instance is at elevated risk of interruption; this signal can arrive sooner than the two-minute Spot Instance interruption notice.

Choosing between the two depends on the transition you want for the player. Both cases deploy a DaemonSet that listens for the notification and notifies every game server running on the EC2 instance. It also prevents new game servers from being scheduled on this instance.

The DaemonSet polls the instance metadata every five seconds (denoted by POLL_INTERVAL) as follows:

while http_status=$(curl -o /dev/null -w '%{http_code}' -sL ${NOTICE_URL}); [ ${http_status} -ne 200 ]; do
  echo $(date): ${http_status}
  sleep ${POLL_INTERVAL}
done

where NOTICE_URL can be either

NOTICE_URL="http://169.254.169.254/latest/meta-data/spot/termination-time"

Or, for the second option:

NOTICE_URL="http://169.254.169.254/latest/meta-data/events/recommendations/rebalance"

The command that notifies all the game-servers about the interruption is:

kubectl drain ${NODE_NAME} --force --ignore-daemonsets --delete-local-data

From that point, every game server that runs on the instance gets notified by the Unix signal SIGTERM.

In our example server.py, we register a handler for the signal on the Python process and implement the sig_handler function. The example sends a message to every connected player about the incoming interruption.

def sig_handler(signum, frame):
  log('Signal handler called with signal',signum)
  model.send_talk("WARN game server maintenance is pending - your universe is saved")

def main():
    ..
    signal.signal(signal.SIGTERM,sig_handler)

 

Why Agones?

Agones orchestrates game servers via declarative configuration in order to manage groups of ready game servers for play. It also offers an integrated SDK for managing game-server lifecycle, health, and configuration. Finally, it runs on Kubernetes, so it is an all-up open-source platform that runs anywhere. The Agones SDK is easily implemented. Furthermore, combining the AWS compute platform and Agones offers a secure, resilient, scalable, and cost-effective method for running an MMO.

 

In our example, we implemented the /health call in agones_health and the /allocate call in agones_allocate. agones_health() is called upon server init to indicate that the server is ready to be assigned new players.

def agones_allocate(model):
  url="http://localhost:"+agones_port+"/allocate"
  req = urllib2.Request(url)
  req.add_header('Content-Type','application/json')
  req.add_data('')
  r = urllib2.urlopen(req)
  resp=r.getcode()
  log('agones- Response code from agones allocate was:',resp)
  model.send_talk("new player joined - reporting to agones the server is allocated") 

agones_health() uses the native health call and relays the server’s health periodically to keep the game server in a viable state.

def agones_health(model):
  url="http://localhost:"+agones_port+"/health"
  req = urllib2.Request(url)
  req.add_header('Content-Type','application/json')
  req.add_data('')
  while True:
    model.ishealthy()
    r = urllib2.urlopen(req)
    resp=r.getcode()
    log('agones- Response code from agones health was:',resp)
    time.sleep(10)

The main() function forks a new process that reports health. Agones manages the port allocation and maintains the game-server lifecycle state, e.g., Allocated, Scheduled, Shutdown, Creating, and Unhealthy.

def main():
    …
    server = Server((host, port), Handler)
    server.model = model
    newpid=os.fork()
    if newpid ==0:
      log('agones-in child process about to call agones_health()')
      agones_health(model)
      log('agones-in child process called agones_health()')
    else:
      pids = (os.getpid(), newpid)
      log('agones server pid and health pid',pids)
    log('SERV', host, port)
    server.serve_forever()

 

Other ways than Agones on EKS?

Configuring an Agones group of ready game servers behind a load balancer is difficult. Agones game-server endpoints must be published so that players’ clients can connect and play. Agones creates game-server endpoints that are an IP and port pair. The IP is the public IP of the EC2 instance, and the port results from PortPolicy, which generates a non-predictable port number. This makes it impossible to use with a load balancer such as Amazon Network Load Balancer (NLB).

Suppose you want to use a load balancer to route players via a predictable endpoint. You could use the Kubernetes Deployment construct and configure a Kubernetes Service with an NLB; then there is no need to implement an additional SDK or install any additional components on your EKS cluster.

The following example defines a service craft-svc that creates an NLB that will route TCP connections to Pod targets carrying the selector craft and listening on port 4080.

apiVersion: v1
kind: Service
metadata:
  name: craft-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  selector:
    app: craft
  ports:
    - protocol: TCP
      port: 4080
      targetPort: 4080
  type: LoadBalancer

The game-server Deployment sets the metadata label matched by the Service load balancer, along with the port.

Furthermore, the Kubernetes readinessProbe and livenessProbe offer features similar to the Agones SDK /allocate and /health calls implemented earlier, bringing the Deployment option to parity with the Agones option.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: craft
  name: craft
spec:
…
    metadata:
      labels:
        app: craft
    spec:
      …
        readinessProbe:
          tcpSocket:
            port: 4080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 4080
          initialDelaySeconds: 5
          periodSeconds: 10

Overview of the Database

The compute layer running the game could be reclaimed at any time within minutes, so it is imperative to continuously store the game state as players progress without impacting the experience. Furthermore, it is essential to read the game state quickly from scratch upon session recovery from an unexpected interruption. Therefore, the game requires fast reads and lazy writes. Other considerations are consistency and isolation. Developers can handle inconsistency in the game logic in order to relax hard consistency requirements. As for isolation, in our Craft example players can build a structure without other players seeing it, and publish it globally only when they choose.

Choosing the proper database for the MMO depends on the game-state structure, e.g., a set of related objects such as our Craft example, or a single denormalized table. The former fits the relational model used with RDS or Aurora open-source databases such as MySQL or PostgreSQL, while the latter can be used with Amazon Keyspaces, a managed Apache Cassandra-compatible service, or a key-value store such as DynamoDB. Our example includes two options to store the game state: Aurora Serverless for PostgreSQL or DynamoDB. We chose those because of their ACID support. PostgreSQL offers four isolation levels: read uncommitted, read committed, repeatable read, and serializable. DynamoDB offers two isolation levels: serializable and read-committed. Both database options allow the game developer to implement the best player experience and avoid additional implementation effort.

Moreover, both engines offer RESTful connection methods to the database. Aurora uses the Data API, which doesn’t require a persistent database cluster connection. Instead, it provides a secure HTTP endpoint and integration with AWS SDKs. Game developers can run SQL statements via the endpoint without managing connections. DynamoDB only supports a RESTful connection. Finally, Aurora Serverless and DynamoDB scale compute and storage to reduce operational overhead, so you pay only for what was played.
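As an illustration of the Data API style of access, a lazy write of a single block could be issued over HTTP without holding a database connection. The cluster ARN, secret ARN, and the unique constraint assumed by the upsert are placeholders and assumptions, not details taken from the sample repository.

```python
import boto3

rds_data = boto3.client("rds-data")

rds_data.execute_statement(
    resourceArn="arn:aws:rds:us-west-2:123456789012:cluster:craft",          # placeholder
    secretArn="arn:aws:secretsmanager:us-west-2:123456789012:secret:craft",  # placeholder
    database="craft",
    # Assumes a unique constraint on (p, q, x, y, z) so the write is an upsert.
    sql="INSERT INTO block (p, q, x, y, z, w) VALUES (:p, :q, :x, :y, :z, :w) "
        "ON CONFLICT (p, q, x, y, z) DO UPDATE SET w = EXCLUDED.w",
    parameters=[
        {"name": "p", "value": {"longValue": 0}},
        {"name": "q", "value": {"longValue": 0}},
        {"name": "x", "value": {"longValue": 1}},
        {"name": "y", "value": {"longValue": 64}},
        {"name": "z", "value": {"longValue": 1}},
        {"name": "w", "value": {"longValue": 5}},
    ],
)
```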

 

Conclusion

MMOs are unique because they require both the infrastructure features of other game types, like FPS or casual games, and reliable persistent storage for the game state. Unfortunately, this leads to expensive choices that make monetizing the game difficult. Therefore, we proposed an option with a fun game to help you, the developer, analyze what is best for your MMO. Our option is built upon open-source projects that allow you to build it anywhere, but we also show that AWS offers the most cost-effective and scalable option. We encourage you to read recent announcements about these topics, including several at AWS re:Invent 2021.

 

Security updates for Wednesday

Post Syndicated from original https://lwn.net/Articles/866669/rss

Security updates have been issued by Debian (haproxy), Fedora (c-ares, hivex, kernel, libtpms, newsflash, python-django, rust-gettext-rs, and rust-gettext-sys), openSUSE (c-ares and libsndfile), Scientific Linux (cloud-init, edk2, exiv2, firefox, kernel, kpatch-patch, microcode_ctl, sssd, and thunderbird), SUSE (c-ares, fetchmail, haproxy, kernel, libmspack, libsndfile, rubygem-puma, spice-vdagent, and webkit2gtk3), and Ubuntu (exiv2, haproxy, linux, linux-aws, linux-aws-5.4, linux-azure, linux-azure-5.4, linux-gcp, linux-gcp-5.4, linux-gke, linux-gke-5.4, linux-gkeop, linux-gkeop-5.4, linux-hwe-5.4, linux-kvm, linux-oracle, linux-oracle-5.4, linux-raspi, linux, linux-aws, linux-aws-hwe, linux-azure, linux-azure-4.15, linux-gcp, linux-gcp-4.15, linux-hwe, linux-kvm, linux-oracle, linux-raspi2, linux-snapdragon, and linux, linux-aws, linux-azure, linux-gcp, linux-hwe-5.11, linux-oracle, linux-raspi).

Field Notes: Three Steps to Port Your Containerized Application to AWS Lambda

Post Syndicated from Arthi Jaganathan original https://aws.amazon.com/blogs/architecture/field-notes-three-steps-to-port-your-containerized-application-to-aws-lambda/

AWS Lambda support for container images allows porting containerized web applications to run in a serverless environment. This gives you automatic scaling, built-in high availability, and a pay-for-value billing model so you don’t pay for over-provisioned resources. If you are currently using containers, container image support for AWS Lambda means you can use these benefits without additional upfront engineering efforts to adopt new tooling or development workflows. You can continue to use your team’s familiarity with containers while gaining the benefits from the operational simplicity and cost effectiveness of Serverless computing.

This blog post describes the steps to take a containerized web application and run it on Lambda with only minor changes to the development, packaging, and deployment process. We use Ruby as an example, but the steps are the same for any programming language.

Overview of solution

The following sample application is a containerized web service that generates PDF invoices. We will migrate this application to run the business logic in a Lambda function, and use Amazon API Gateway to provide a Serverless RESTful web API. API Gateway is a managed service to create and run API operations at scale.

Figure 1: Generating PDF invoice with Lambda


Walkthrough

In this blog post, you will learn how to port the containerized web application to run in a serverless environment using Lambda.

At a high level, you are going to:

  1. Get the containerized application running locally for testing
  2. Port the application to run on Lambda
    1. Create a Lambda function handler
    2. Modify the container image for Lambda
    3. Test the containerized Lambda function locally
  3. Deploy and test on Amazon Web Services (AWS)

Prerequisites

For this walkthrough, you need the following:

1. Get the containerized application running locally for testing

The sample code for this application is available on GitHub. Clone the repository to follow along.

```bash
git clone https://github.com/aws-samples/aws-lambda-containerized-custom-runtime-blog.git
```

1.1. Build the Docker image

Review the Dockerfile in the root of the cloned repository. The Dockerfile uses Bitnami’s Ruby 3.0 image from the Amazon ECR Public Gallery as the base. It follows security best practices by running the application as a non-root user and exposes the invoice generator service on port 4567. Open your terminal and navigate to the folder where you cloned the GitHub repository. Build the Docker image using the following command.

```bash
docker build -t ruby-invoice-generator .
```

1.2 Test locally

Run the application locally using the Docker run command.

```bash
docker run -p 4567:4567 ruby-invoice-generator
```

In a real-world scenario, the order and customer details for the invoice would be passed as a POST request body or as GET query string parameters. To keep things simple, we randomly select from a few hard-coded values inside lib/utils.rb. Open another terminal and test invoice creation using the following command.

```bash
curl "http://localhost:4567/invoice" \
  --output invoice.pdf \
  --header 'Accept: application/pdf'
```

This command creates the file invoice.pdf in the folder where you ran the curl command. You can also test the URL directly in a browser. Press Ctrl+C to stop the container. At this point, we know our application works and we are ready to port it to run on Lambda as a container.

2. Port the application to run on Lambda

There is no change to the Lambda operational model and request plane: the function handler is still the entry point to application logic when you package a Lambda function as a container image. Also, by moving our business logic to a Lambda function, we separate two concerns and replace the web server code in the container image with an HTTP API powered by API Gateway. You can focus on the business logic in the container, with API Gateway acting as a proxy that routes requests.

2.1. Create the Lambda function handler

The code for our Lambda function is defined in function.rb; the handler function from that file is shown following. The main difference to note between the original Sinatra-powered code and our Lambda handler version is the need to base64-encode the PDF. This is a requirement for returning binary media from an API Gateway Lambda proxy integration. API Gateway automatically decodes this to return the PDF file to the client.

```ruby
def self.process(event:, context:)
  self.logger.debug(JSON.generate(event))
  invoice_pdf = Base64.encode64(Invoice.new.generate)
  {
    'headers' => { 'Content-Type': 'application/pdf' },
    'statusCode' => 200,
    'body' => invoice_pdf,
    'isBase64Encoded' => true
  }
end
```

If you need a reminder on the basics of Lambda function handlers, review the documentation on writing a Lambda handler in Ruby. This completes the new addition to the development workflow—creating a Lambda function handler as the wrapper for the business logic.

2.2 Modify the container image for Lambda

AWS provides open source base images for Lambda. At this time, these base images are only available for Ruby runtime versions 2.5 and 2.7. But, you can bring any version of your preferred runtime (Ruby 3.0 in this case) by packaging it with your Docker image. We will use Bitnami’s Ruby 3.0 image from the Amazon ECR Public Gallery as the base. Amazon ECR is a fully managed container registry, and it is important to note that Lambda only supports running container images that are stored in Amazon ECR; you can’t upload an arbitrary container image to Lambda directly.

Because the function handler is the entry point to business logic, the Dockerfile CMD must be set to the function handler instead of starting the web server. In our case, because we are using our own base image to bring a custom runtime, there is an additional change we must make. Custom images need the runtime interface client to manage the interaction between the Lambda service and our function’s code.

The runtime interface client is an open-source lightweight interface that receives requests from Lambda, passes the requests to the function handler, and returns the runtime results back to the Lambda service. The following are the relevant changes to the Dockerfile.

```bash
ENTRYPOINT ["aws_lambda_ric"]
CMD [ "function.Billing::InvoiceGenerator.process" ]
```

The Docker command that is executed when the container runs is: aws_lambda_ric function.Billing::InvoiceGenerator.process.

Refer to Dockerfile.lambda in the cloned repository for the complete code. This image follows best practices for optimizing Lambda container images by using a multi-stage build. The final image uses the tag named 3.0-prod as its base, which does not include development dependencies and helps keep the image size down. Create the Lambda-compatible container image using the following command.

```bash
docker build -f Dockerfile.lambda -t lambda-ruby-invoice-generator .
```

This concludes changes to the Dockerfile. We have introduced a new dependency on the runtime interface client and used it as our container’s entrypoint.

2.3. Test the containerized Lambda function locally

The runtime interface client expects requests from the Lambda Runtime API. But when we test on our local development workstation, we don’t run the Lambda service. So, we need a way to proxy the Runtime API for local testing. Because local testing is an integral part of most development workflows, AWS provides the Lambda runtime interface emulator. The emulator is a lightweight web server running on port 8080 that converts HTTP requests to Lambda-compatible JSON events. The flow for local testing is shown in Figure 2.

Figure 2: Testing the Lambda container image locally


When we want to perform local testing, the runtime interface emulator becomes the entrypoint. Consequently, the Docker command that is executed when the container runs locally is: aws-lambda-rie aws_lambda_ric function.Billing::InvoiceGenerator.process.

You can package the emulator with the image. The Lambda container image support launch blog documents steps for this approach. However, to keep the image as slim as possible, we recommend that you install it locally, and mount it while running the container instead. Here is the installation command for Linux platforms.

```bash
mkdir -p ~/.aws-lambda-rie && \
  curl -Lo ~/.aws-lambda-rie/aws-lambda-rie \
    https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie && \
  chmod +x ~/.aws-lambda-rie/aws-lambda-rie
```

Use the Docker run command with the appropriate overrides to entrypoint and cmd to start the Lambda container. The emulator is mapped to local port 9000.

```bash
docker run \
  -v ~/.aws-lambda-rie:/aws-lambda \
  -p 9000:8080 \
  -e LOG_LEVEL=DEBUG \
  --entrypoint /aws-lambda/aws-lambda-rie \
  lambda-ruby-invoice-generator \
  /opt/bitnami/ruby/bin/aws_lambda_ric function.Billing::InvoiceGenerator.process
```

Open another terminal and run the curl command below to simulate a Lambda request.

```bash
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
```
You will see JSON output with the base64-encoded value of the invoice as the body and the isBase64Encoded property set to true. After Lambda is integrated with API Gateway, the API endpoint uses this flag to decode the text before returning the response to the caller. The client receives the decoded PDF invoice. Press Ctrl+C to stop the container. This concludes the changes to the local testing workflow.

3. Deploy and test on AWS

The final step is to deploy the invoice generator service. Set your AWS Region and AWS Account ID as environment variables.

```bash
export AWS_REGION=Region
export AWS_ACCOUNT_ID=account
```

3.1. Push the Docker image to Amazon ECR

Create an Amazon ECR repository for the image.

```bash
ECR_REPOSITORY=`aws ecr create-repository \
  --region $AWS_REGION \
  --repository-name lambda-ruby-invoice-generator \
  --tags Key=Project,Value=lambda-ruby-invoice-generator-blog \
  --query "repository.repositoryName" \
  --output text`
```

Login to Amazon ECR, and push the container image to the newly-created repository.

```bash
aws ecr get-login-password \
  --region $AWS_REGION | docker login \
  --username AWS \
  --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

docker tag \
  lambda-ruby-invoice-generator:latest \
  $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$ECR_REPOSITORY:latest

docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$ECR_REPOSITORY:latest
```

3.2. Create the Lambda function

After the push succeeds, you can create the Lambda function. You need to create a Lambda execution role first and attach the managed IAM policy named AWSLambdaBasicExecutionRole. This gives the function access to Amazon CloudWatch for logging and monitoring.

```bash
LAMBDA_ROLE=`aws iam create-role \
  --region $AWS_REGION \
  --role-name ruby-invoice-generator-lambda-role \
  --assume-role-policy-document file://lambda-role-trust-policy.json \
  --tags Key=Project,Value=lambda-ruby-invoice-generator-blog \
  --query "Role.Arn" \
  --output text`

aws iam attach-role-policy \
  --region $AWS_REGION \
  --role-name ruby-invoice-generator-lambda-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

LAMBDA_ARN=`aws lambda create-function \
  --region $AWS_REGION \
  --function-name ruby-invoice-generator \
  --description "[AWS Blog] Lambda Ruby Invoice Generator" \
  --role $LAMBDA_ROLE \
  --package-type Image \
  --code ImageUri=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$ECR_REPOSITORY:latest \
  --timeout 15 \
  --memory-size 256 \
  --tags Project=lambda-ruby-invoice-generator-blog \
  --query "FunctionArn" \
  --output text`
```

Wait for the function to be ready. Use the following command to verify that the function state is set to Active.

```bash
aws lambda get-function \
  --region $AWS_REGION \
  --function-name ruby-invoice-generator \
  --query "Configuration.State"
```

3.3. Integrate with API Gateway

API Gateway offers two options to create RESTful APIs: HTTP APIs and REST APIs. We use an HTTP API because it offers lower cost and latency compared to a REST API, and REST APIs provide additional features that we don’t need for this demo.

```bash
aws apigatewayv2 create-api \
  --region $AWS_REGION \
  --name invoice-generator-api \
  --protocol-type HTTP \
  --target $LAMBDA_ARN \
  --route-key "GET /invoice" \
  --tags Key=Project,Value=lambda-ruby-invoice-generator-blog \
  --query "{ApiEndpoint: ApiEndpoint, ApiId: ApiId}" \
  --output json
```

Record the ApiEndpoint and ApiId from the earlier command, and substitute them for the placeholders in the following command. You need to update the Lambda resource policy to allow HTTP API to invoke it.

```bash
export API_ENDPOINT="<ApiEndpoint>"
export API_ID="<ApiId>"

aws lambda add-permission \
  --region $AWS_REGION \
  --statement-id invoice-generator-api \
  --action lambda:InvokeFunction \
  --function-name $LAMBDA_ARN \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:$AWS_REGION:$AWS_ACCOUNT_ID:$API_ID/*/*/invoice"
```

3.4. Verify the AWS deployment

Use the curl command to generate a PDF invoice.

```bash

curl "$API_ENDPOINT/invoice" \
  --output lambda-invoice.pdf \
  --header 'Accept: application/pdf'

```

This creates the file lambda-invoice.pdf in the local folder. You can also test directly from a browser.

This concludes the final step for porting the containerized invoice web service to Lambda. This deployment workflow is very similar to that of any other containerized application: you first build the image and then create or update your application with the new image as part of deployment. Only the actual deploy command changes because we are deploying to Lambda instead of a container platform.
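For instance, redeploying a newly pushed image tag could be a single API call. The following is a hedged sketch using boto3 (the post itself uses the AWS CLI), with a placeholder account, Region, and tag.

```python
import boto3

lambda_client = boto3.client("lambda")

# Point the existing function at the newly pushed container image.
lambda_client.update_function_code(
    FunctionName="ruby-invoice-generator",
    ImageUri="123456789012.dkr.ecr.us-east-1.amazonaws.com/lambda-ruby-invoice-generator:latest",
)
```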

Cleaning up

To avoid incurring future charges, delete the resources. Follow cleanup instructions in the README file on the GitHub repo.

Conclusion

In this blog post, we learned how to port an existing containerized application to Lambda with only minor changes to development, packaging, and testing workflows. For teams with limited time, this accelerates the adoption of Serverless by using container domain knowledge and promoting reuse of existing tooling. We also saw how you can easily bring your own runtime by packaging it in your image and how you can simplify your application by eliminating the web application framework.

For more Serverless learning resources, visit https://serverlessland.com.

 

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

 

The First Three Winners from Cloudflare’s Project Jengo 2 Share $20,000

Post Syndicated from Ethan Park original https://blog.cloudflare.com/project-jengo-2-first-three-winners/


This past April we announced the revival of Project Jengo in response to a patent troll called Sable Networks that sued Cloudflare even though our technology and products are nothing like what’s described in Sable’s patents. This is only one part of Sable’s larger campaign against innovative technology companies — Sable sued five other technology companies earlier this year, and had sued seven other technology companies under the same patents last year.

Just as we have done in the past, we decided to fight back rather than feed the troll — which would only make it stronger. You see, unlike Cloudflare and other operating companies that were sued, Sable Networks isn’t in the business of providing products and services to the market. Rather, it exists to extract settlements out of productive companies that are creating value for society.

Project Jengo is a prior art search contest where we ask the Cloudflare community for help in finding evidence (“prior art”) that shows Sable’s patents are invalid because they claim something that was already known at the time the patent application was filed. We committed $100,000 in cash prizes to be shared by the winners who were successful in finding such prior art.

The first chapter of this contest has now ended, and we received almost 400 prior art references on Sable’s ten patents. And over 80% of those references were submitted to kill off the four patents that Sable asserted against us. Let me first say thank you to everyone who submitted these! Here at Cloudflare, we are constantly energized and motivated by the support from our community, and we are heartened by our community’s participation in Project Jengo.

Three winners = $20,000 in total cash prizes (so far)

We reviewed every eligible submission and scored them based on the strength of the prior art reference(s), difficulty, the story provided by the entrant, and the clarity and thoroughness of the explanation. Today, we are announcing the selection of three great submissions as cash prize winners in this round. The first winner will receive $10,000, and the other two will each receive $5,000.

Keep ’em coming

As you’ll recall, Sable is using a set of patents from the early 2000’s related to a flow-based router called Apeiro, which was never widely adopted and eventually failed in the marketplace over a decade ago. Sable is now stretching those patents way beyond what they were meant to cover. Sable’s flawed reading of these patents extends infringement to basic routing and switching functionality known long before any alleged invention date, including “conventional routers” from the 2000s that routed each packet independently without any regard to flows. The way Sable is interpreting its patents so broadly and the fact that Sable has gone after numerous technology companies selling a wide range of different products worry us — by its current standard, Sable could target anyone using a router, and that would include anyone with a WiFi router in their home. You can help stop this madness by participating in this contest and finding prior art on any of Sable’s ten patents that we’ve identified — not just on the four patents that Sable asserted against us.

The contest is ongoing, and we still have $80,000 to give out. The sooner you send us quality prior-art references, the more helpful they will be in invalidating Sable’s patents, so please make your submissions here as soon as possible. Already made a submission, but you aren’t one of the three winners today? It isn’t over — we will consider your submission again when we announce our next round of winners in November (and you can, of course, also enter a new and better submission). We have many more rounds to go as long as Sable’s case is pending against us, which means earlier submissions benefit from being considered in multiple rounds of this contest, so please don’t delay and make your submission now.

If this were a boxing match, then we would still be in the first round, sparring in the ring, with Sable up against the ropes. So jump in, win some cash, and help us KO Sable Networks.


Matthew M. “can’t stand blockers to true American ingenuity” and now has $10,000

IETF RFC 2702 (Requirements for Traffic Engineering Over MPLS)


This document is one of the 17 prior-art references sent to us by Matthew M. We are highlighting the Internet Engineering Task Force (“IETF”) Request for Comments (“RFC”) 2702. An RFC is something like a step in an industry-wide brainstorming session, where an idea or ideas are floated to others in the industry with the aim of building something great everyone can use. The RFC found by Matthew M. is a good example of that process: engineers from MCI Worldcom built on the work of engineers from companies like Cisco, IBM, Juniper, and Ascend. All of these companies were well aware of the concepts Sable is now trying to claim its patents cover, including “micro-flows,” label-switched paths, and the basic concept of choosing a path through the network based on QoS information in a packet. The IETF’s records of these industry-wide brainstorming sessions are great evidence of Sable’s overreaching interpretations of its patents.

As for the winner, Matthew has a degree in Computer Networking & Cybersecurity, and manages a small team of developers at a FinTech company. Like most Project Jengo participants, he wasn’t motivated solely by the money. He told us, “I quickly learned just how vague you can make a patent and it’s quite disgusting.” We agree! Matthew shared that he thinks patent trolls “provide no value to society and only exist to strip true business men and women of their hard-earned craft.” He underscored this point when we reached out to congratulate him on the $10,000 prize:

I can’t stand blockers to true American ingenuity and patent trolls stand to destroy hard-earned work using minuscule technicalities in our broken justice system. I am happy to do my part in helping Cloudflare get the upper hand in this lawsuit and hopefully flip it on its head and take down Sable Networks altogether.

Matthew also highlighted the tremendous damage that patent trolls can do to companies and innovation:

I think patent trolls are one of the biggest stiflers of innovation. Companies like Cloudflare could be spending all their time working on new products that provide value but instead, they must allocate resources to fight off cowardly lawsuits like this one.

This submission will be put to good use in our fight against Sable, and we were happy to award Matthew with his prize money. We smiled when Matthew exclaimed he was “speechless” after finding out that he won. Congratulations, Matthew!

Pedro S. enjoyed a trip down memory lane and now gets to enjoy $5,000

Ascend/Cascade products and software implementing IETF RFC 1953 (Ipsilon Flow Management Protocol Specification for IPv4)


This winning submission also included an RFC from the IETF. These records from the important work of the IETF members ensure Sable won’t be able to take credit for things it simply didn’t invent. The neat thing about RFC 1953 is that the submitter connected it with actual products from companies called Ipsilon Networks, Inc. and Ascend Communications. There appear to be similarities between those products and the technology Sable is now claiming to have made, like the inventions of the ’919 patent. The problem for Sable is that Ipsilon and Ascend appear to have done it first! Ipsilon and Ascend were acquired by other companies so finding details about their innovative products and technology may be difficult, but we’re going to try. It may be that the submitter or someone reading this post can help.

The submission came from Pedro S., who heard about Project Jengo on the Security Now podcast. Pedro has spent 20 years in various technical roles, including a position at Ascend Communications in the 1990s where he became familiar with their products. Today he works at a cybersecurity company in the Bay Area. When asked why he chose to participate, he bluntly responded with “I hate patent trolls — I also enjoyed the trip down memory lane as some of the stuff I submitted comes from my days at Lucent.” We are happy to hear he enjoyed participating in the fight! We’re enjoying it too, thanks to the participation of folks like Pedro.

Congratulations, Pedro!

Stephen “learned a LOT” and earned $5,000

United States Patent No. 7,107,356 (Translator for enabling logical partitioning of a network switch)


This submission pointed us to U.S. Patent No. 7,107,356 (“’356 patent”) for its relevance to Sable’s U.S. Patent No. 7,630,358 (Mechanism for implementing multiple logical routers within a single physical router). The ’358 patent hasn’t been asserted against Cloudflare (at least not yet), and this is a good example of the enthusiasm of our community to comprehensively search for prior art relevant to both Sable’s currently asserted patents and those it may use to sue other companies down the road. The ’356 patent found by our final winner in this round of Project Jengo carefully describes the guts of a router, and this relatively straightforward patent makes short work of Sable’s overreaching claims of invention. Rest assured, if Sable is looking to assert the ’358 patent against Cloudflare or some other company down the road, our community is well-prepared to meet the challenge.

The submission comes from Stephen F., a web developer at a managed IT company working on custom websites. When his IT coworker shared the Project Jengo announcement in the office chat, he decided to participate. Stephen had never done a prior art search before, but he was eager to take part and spent an entire day looking for prior art:

I’m submitting because I love Cloudflare and how it has made my life easier as a web developer … I’ve had to learn a lot more about routing and patent law today, and spent several hours looking at documents, scholarly journals, etc.

Even though this was his first time researching something like this, he understood the importance of beating Sable Networks:

I stand firmly against patent trolls like Sable, and decided to spend my day looking into prior art for the sake of Cloudflare’s continued success.

Not only did Stephen understand the importance of beating Sable, he also understood the challenges of doing a prior art search, but that didn’t stop him! We admire his tenacity, and we’re impressed with his efforts and the obstacles he overcame during his search:

I needed to find where the patents could have been stolen/read from each other and what the offending points of the patents would be. During my research I learned a LOT about patent law, which was pretty challenging!

Congratulations, Stephen!

All the references we’ve received are now available

We received almost 400 prior art references thus far, and as promised we are making them public. As a reminder, we are collecting prior art on any of the patents owned by Sable, not just the ones asserted against Cloudflare. You can go to the webpage we’ve set up to see them, and we hope this information is of use to anyone who is sued by Sable down the road.

The fight continues

Project Jengo continues to capture the attention of tech media. It’s important to us to keep the narrative alive — the more awareness we can spread about the true harm of patent trolls, the more likely we are to inspire other well-resourced companies to refuse to capitulate. If we do this together, we’re far more likely to set a new standard, and ultimately, find the antidote to the ever-growing number of patent trolls that plague productive companies. If we enter the battle alone we are strong, but if we fight together we are unstoppable! Here’s what we’ve seen since our initial blog post was published (news outlets appear to be just as enthusiastic as we are!):

  • “Cloudflare offers $100,000 for prior art to nuke networking patents a troll has accused it of ripping off,” – The Register
  • “Instead of just saying it wouldn’t settle, Cloudflare set out to completely destroy the patent troll who sued it,” – Techdirt
  • “The idea is to deal a big enough blow to Sable that not only is its case against Cloudflare hobbled but also future cases against other entities.” – TechCrunch

Litigation update — where are we five months in?

It has been five months since Sable Networks sued us in Waco, Texas, and we want to share a quick update on the litigation front. Since then, we have filed four petitions with the U.S. Patent and Trademark Office for inter partes review (“IPR”), seeking to invalidate the four asserted patents. As we previously explained, IPR is a procedure for challenging the validity of a patent before the U.S. Patent and Trademark Office, and it is supposed to be faster and cheaper than litigating before a U.S. district court. Of course, faster and cheaper are relative — it still takes about 18 months to invalidate a patent using this procedure, and the filing fees alone for the four petitions were over $200,000. It is easy to see how the exorbitant cost involved in patent litigation allows patent trolls to flourish and why so many companies are willing to quickly settle.

Aside from the IPRs, our litigation is moving forward in district court. Sable served us with its preliminary infringement contentions, which are supposed to explain why it believes we infringe its four asserted patents, and we will be serving our preliminary invalidity contentions to Sable soon.

Sable went after six companies this year, and two of them have already settled and dropped out of the fight. We will continue to fight against Sable, and we hope to see our peers continue fighting against this patent troll with us. Please stay tuned for our next update in three months!

Zero Trust controls for your SaaS applications

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/access-saas-integrations/


Most teams start their Zero Trust journey by moving the applications that lived on their private networks into this model. Instead of a private network where any user on the network is assumed to be trusted, the applications that use Cloudflare Access now check every attempt against the rules you create. For your end users, these applications feel just like regular SaaS apps, while your security teams have full control and logs.

However, we kept hearing from teams that wanted to use their Access control plane to apply consistent security controls to their SaaS apps as well, and to consolidate logs from self-hosted and SaaS applications in one place.

We’re excited to give your team the tools to solve that challenge. With Access in front of your SaaS applications, you can build Zero Trust rules that determine who can reach your SaaS applications in the same place where your rules for self-hosted applications and network access live. To make that easier, we are launching guided integrations with the Amazon Web Services (AWS) management console, Zendesk, and Salesforce. In just a few minutes, your team can apply a Zero Trust layer over every resource you use and ensure your logs never miss a request.

How it works

Cloudflare Access secures applications that you host by becoming the authoritative DNS for the application itself. All DNS queries, and subsequent HTTP requests, hit Cloudflare’s network first. Once there, Cloudflare can apply the types of identity-aware and context-driven rules that make it possible to move to a Zero Trust model. Enforcing these rules in our network means your application doesn’t need to change. You can secure it on Cloudflare, integrate your single sign-on (SSO) provider and other systems like Crowdstrike and Tanium, and begin building rules.


SaaS applications pose a different type of challenge. You do not control where your SaaS applications are hosted — and that’s a big part of the value. You don’t need to worry about maintaining the hardware or software of the application.

However, that also means that your team cannot control how users reach those resources. In most cases, any user on the Internet can attempt to log in. Even if you incorporate SSO authentication or IP-based allowlisting, you might not have the ability to add location or device rules. You also have no way to centrally capture logs of user behavior on a per-request basis. Logging and permissions vary across SaaS applications — some are quite granular while others have non-existent controls and logging.

Cloudflare Access for SaaS solves that problem by injecting Zero Trust checks into the SSO flow for any application that supports SAML authentication. When users visit your SaaS application and attempt to log in, they are redirected through Cloudflare and then to your identity provider. They authenticate with your identity provider and are sent back to Cloudflare, where we layer on additional rules like device posture, multi-factor method, and country of login. If the user meets all the requirements, Cloudflare converts the user’s authentication with your identity provider into a SAML assertion that we send to the SaaS application.


We built support for SaaS applications by using Workers to take the JWT and convert its content into SAML assertions that are sent to the SaaS application. The application thinks that Cloudflare Access is the identity provider, even though we’re just aggregating identity signals from your SSO provider and other sources into the JWT, and sending that summary to the app via SAML. All of this leverages Cloudflare’s global network and ensures users do not see a performance penalty.
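As a rough illustration of that hand-off (not Cloudflare’s actual implementation, which runs as JavaScript inside Workers), the sketch below decodes a JWT and maps a couple of its claims into a bare-bones SAML assertion. The claim names, the issuer URL, and the absence of XML signing are all simplifications for illustration only:

# Conceptual sketch only: a real SAML assertion must be XML-signed, scoped to an
# audience, and time-limited. Claim names and the issuer URL here are hypothetical.
import datetime
import uuid

import jwt  # PyJWT


def jwt_to_saml_assertion(access_jwt: str) -> str:
    # Signature verification is skipped purely for illustration; in practice the
    # token would be verified against the issuer's published public keys.
    claims = jwt.decode(access_jwt, options={"verify_signature": False})
    email = claims["email"]  # identity signal aggregated into the JWT
    issue_instant = datetime.datetime.now(datetime.timezone.utc).isoformat()
    return f"""<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_{uuid.uuid4()}" Version="2.0" IssueInstant="{issue_instant}">
  <saml:Issuer>https://your-team.cloudflareaccess.com</saml:Issuer>
  <saml:Subject>
    <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">{email}</saml:NameID>
  </saml:Subject>
</saml:Assertion>"""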

Enforcing managed devices and Gateway for SaaS applications

COVID-19 made it commonplace for employees to work from anywhere and, more concerning, from any device. Many SaaS applications contain sensitive data that should only be accessed with a corporately managed device. A benefit of SaaS tools is that they’re readily available from any device, so it’s up to security administrators to enforce which devices can be used to log in.

Once Access for SaaS is configured as the SSO provider for your SaaS applications, you can add policies that verify the device. You can then lock a tool like Salesforce down to only users with a device that has a known serial number, a hard authentication key plugged in, an up-to-date operating system, and much more.


Cloudflare Gateway keeps your users and data safe from threats on the Internet by filtering Internet-bound connections that leave laptops and offices. Gateway gives administrators the ability to block, allow, or log every connection and request to SaaS applications.

However, users are connecting from personal devices and home WiFi networks, potentially bypassing Internet security filtering available on corporate networks. If users have their password and MFA token, they can bypass security requirements and reach into SaaS applications from their own, unprotected devices at home.


To ensure traffic to your SaaS apps only connects over Gateway-protected devices, Cloudflare Access will add a new rule type that requires Gateway when users log in to your SaaS applications. Once enabled, users will only be able to connect to your SaaS applications when they use Cloudflare Gateway. Gateway will log those connections and provide visibility into every action within SaaS apps and the Internet.

Getting started and what’s next

It’s easy to get started with setting up Access for SaaS applications. Visit the Cloudflare for Teams Dashboard and follow one of our published guides.

We will make it easier to protect SaaS applications and will soon support configuration via metadata files. We will also continue to publish SaaS app-specific integration guides. Are there specific applications you’ve been trying to integrate? Let us know in the community!

Capturing Purpose Justification in Cloudflare Access

Post Syndicated from Molly Cinnamon original https://blog.cloudflare.com/access-purpose-justification/


The digital world often takes its cues from the real world. For example, there’s a standard question every guard or agent asks when you cross a border—whether it’s a building, a neighborhood, or a country: “What’s the purpose of your visit?” It’s a logical question: sure, the guard knows some information—like who you are (thanks to your ID) and when you’ve arrived—but the context of “why” is equally important. It can set expectations around behavior during your visit, as well as what spaces you should or should not have access to.

The purpose justification prompt appears upon login, asking users to specify their use case before hitting submit and proceeding.

Digital access follows suit. Recent data protection regulations, such as the GDPR, have formalized concepts of purpose limitation and data proportionality: people should only access data necessary for a specific stated reason. System owners know people need access to do their job, but especially for particularly sensitive applications, knowing why a login was needed is just as vital as knowing who, when, and how.

Starting today, Cloudflare for Teams administrators can prompt users to enter a justification for accessing an application prior to login. Administrators can add this prompt to any existing or new Access application with just two clicks, giving them the ability to:

  • Log and review employee justifications for accessing sensitive applications
  • Add additional layers of security to applications they deem sensitive
  • Customize modal text to communicate data use & sharing principles
  • Help meet regulatory requirements for data access control (such as GDPR)

Starting with Zero Trust access control

Cloudflare Access has been built with access management at its core: rather than trusting anyone on a private network, Access checks for identity, context and device posture every time someone attempts to reach an application or resource.

Behind the scenes, administrators build rules to decide who should be able to reach the tools protected by Access. When users need to connect to those tools, they are prompted to authenticate with one of the identity provider options. Cloudflare Access checks their login against the list of allowed users and, if permitted, allows the request to proceed.

Some applications and workflows contain data so sensitive that the user should have to prove who they are and why they need to reach that service. In this next phase of Zero Trust security, access to data should be limited to specific business use cases or needs, rather than generic all-or-nothing access.

Deploying Zero Trust purpose justification

We created this functionality because we, too, wanted to make sure we had these provisions in place at Cloudflare. We have sensitive internal tools that help our team members serve our customers, and we’ve written before about how we use Cloudflare Access to lock down those tools in a Zero Trust manner.

However, we were not satisfied with just restricting access in the least privileged model. We are accountable to the trust our customers put in our services, and we feel it is important to always have an explicit business reason when connecting to some data sets or tools.

We built purpose justification capture in Cloudflare Access to solve that problem. When team members connect to certain resources, Access prompts them to justify why. Cloudflare’s network logs that rationale and allows the user to proceed.

Purpose justification capture in Access helps fulfill policy requirements, but even for enterprises who don’t need to comply with specific regulations, it also enables a thoughtful privacy and security framework for access controls. Prompting employees to justify their use case helps solve the data management challenge of balancing transparency with security — helping to ensure that sensitive data is used the right way.

Purpose justification capture adds an additional layer of context for enterprise administrators.

Distinguishing Sensitive Domains

So how do you decide whether something is sensitive? There are two main categories of applications that may be considered “sensitive.” First, does the application contain personally identifiable information or sensitive financial data? Second, do all the employees who have access actually need it? The flexibility of Access policy configuration helps distinguish sensitive domains for specific user groups.

Purpose justification in Cloudflare Access enables Teams administrators to configure the language of the prompt itself by domain. This is a helpful place to remind employees of the sensitivity of the data, such as, “This application contains PII. Please be mindful of company policies and provide a justification for access,” or “Please enter the case number corresponding to your need for access.” The language can proactively ensure that employees with access to an internal tool are using it as intended.

Additionally, Access identity management allows Teams customers to configure purpose capture only for specific, more sensitive employee groups. For example, some employees need daily access to an application and can be considered “trusted.” Other employees may still have access but should only rarely need to use the tool; security teams or data protection officers may view their access as higher risk. The policies enable flexible logical constructions that equate to actions such as “ask everyone but the following employees for a purpose.”

This distinction between sensitive applications and “trusted” employees adds friction where it benefits data protection, rather than costing employees efficiency.

Purpose justification is configurable as an Access policy, allowing for maximum flexibility in configuring and layering rules to protect sensitive applications.

Auditing justification records

As a Teams administrator, enterprise data protection officer, or security analyst, you can view purpose justification logs for a specific application to better understand how it has been accessed and used. Auditing the logs can reveal insights about security threats, the need for improved data classification training, or even potential application development to more appropriately address employees’ use cases.

The justifications are seamlessly integrated with other Access audit logs — they are viewable in the Teams dashboard as an additional column in the table of login events, and exportable to a SIEM for further data analysis.

Teams administrators can review the purpose justifications submitted upon application login by their employees.

Getting started

You can start adding purpose justification prompts to your application access policies in Cloudflare Access today. The purpose justification feature is available in all plans, and with the Cloudflare for Teams free plan, you can use it for up to 50 users at no cost.

We’re excited to continue adding new features that give you more flexibility over purpose justification in Access. Have feedback for us? Let us know in this community post.

IoT gets a machine learning boost, from edge to cloud

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/iot-gets-a-machine-learning-boost-from-edge-to-cloud/

Today, it’s easy to run Edge Impulse machine learning on any operating system, like Raspberry Pi OS, and on every cloud, like Microsoft’s Azure IoT. Evan Rust, Technology Ambassador for Edge Impulse, walks us through it.

Building enterprise-grade IoT solutions takes a lot of practical effort and a healthy dose of imagination. As a foundation, you start with a highly secure and reliable communication between your IoT application and the devices it manages. We picked our favorite integration, the Microsoft Azure IoT Hub, which provides us with a cloud-hosted solution backend to connect virtually any device. For our hardware, we selected the ubiquitous Raspberry Pi 4, and of course Edge Impulse, which will connect to both platforms and extend our showcased solution from cloud to edge, including device authentication, out-of-box device management, and model provisioning.

From edge to cloud – getting started 

Edge machine learning devices fall into two categories: some can only run very simple models locally, while others are more powerful and add cloud connectivity. The second group is often expensive to develop and maintain, as training and deploying models can be an arduous process. That’s where Edge Impulse comes in: it simplifies the pipeline, since data can be gathered remotely, used to train models with little effort, downloaded to devices directly from the Azure IoT Hub, and then run – fast.

This reference project will serve as a guide for quickly getting started with Edge Impulse on the Raspberry Pi 4 and Azure IoT: we’ll train a model that detects lug nuts on a wheel and sends alerts to the cloud.

Setting up the hardware

Raspberry Pi 4 forms the base for the Edge Impulse machine learning setup

To begin, you’ll need a Raspberry Pi 4 with an up-to-date Raspberry Pi OS image which can be found here. After flashing this image to an SD card and adding a file named wpa_supplicant.conf

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=<Insert 2 letter ISO 3166-1 country code here>

network={
	ssid="<Name of your wireless LAN>"
	psk="<Password for your wireless LAN>"
}

along with an empty file named ssh (both within the /boot directory), you can go ahead and power up the board. Once you’ve successfully SSH’d into the device with 

$ ssh pi@<IP_ADDRESS>

and the password raspberry, it’s time to install the dependencies for the Edge Impulse Linux SDK. Simply run the next three commands to set up the NodeJS environment and everything else that’s required for the edge-impulse-linux wizard:

$ curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
$ sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps
$ npm config set user root && sudo npm install edge-impulse-linux -g --unsafe-perm

Since this project deals with images, we’ll need some way to capture them. The wizard supports both the Pi Camera modules and standard USB webcams, so make sure to enable the camera module first with 

$ sudo raspi-config

if you plan on using one. With that completed, go to the Edge Impulse Studio and create a new project, then run the wizard with 

$ edge-impulse-linux

and make sure your device appears within the Edge Impulse Studio’s device section after logging in and selecting your project.


Capturing your data

Training an accurate machine learning model requires plenty of varied data, which means a lot of images. For this use case, I captured around 50 images of a wheel that had lug nuts on it. After I was done, I headed to the Labeling queue in the Data Acquisition page and added bounding boxes around each lug nut within every image, along with every wheel.


To add some test data, I went back to the main Dashboard page and clicked the Rebalance dataset button, which moves 20% of the training data to the test data bin. 

Training your models

Now that we have plenty of training data, it’s time to train a model. The first block in the impulse is an Image Data block, which scales each image to 320 by 320 pixels. Next, the image data is fed to the Image processing block, which takes the raw RGB data and derives features from it.


Finally, these features are sent to the Transfer Learning Object Detection model, which learns to recognize the objects. I set my model to train for 30 cycles at a learning rate of 0.15, but this can be adjusted to fine-tune the accuracy.

As you can see from the screenshot below, the model I trained was able to achieve an initial accuracy of 35.4%, but after some fine-tuning, it was able to correctly recognize objects at an accuracy of 73.5%.


Testing and deploying your models

In order to verify that the model works correctly in the real world, we’ll need to deploy it to our Raspberry Pi 4. This is a simple task thanks to the Edge Impulse CLI, as all we have to do is run 

$ edge-impulse-linux-runner

which downloads the model and creates a local webserver. From here, we can open a browser tab and visit the address listed after we run the command to see a live camera feed and any objects that are currently detected. 

Integrating your models with Microsoft Azure IoT 

With the model working locally on the device, let’s add an integration with an Azure IoT Hub that will allow our Raspberry Pi to send messages to the cloud. First, make sure you’ve installed the Azure CLI and have signed in using az login. Then get the name of the resource group you’ll be using for the project. If you don’t have one, you can follow this guide on how to create a new resource group. After that, return to the terminal and run the following commands to create a new IoT Hub and register a new device ID:

$ az iot hub create --resource-group <your resource group> --name <your IoT Hub name>
$ az extension add --name azure-iot
$ az iot hub device-identity create --hub-name <your IoT Hub name> --device-id <your device id>

Retrieve the connection string with 

$ az iot hub device-identity connection-string show --device-id <your device id> --hub-name <your IoT Hub name>

and set it as an environment variable with 

$ export IOTHUB_DEVICE_CONNECTION_STRING="<your connection string here>" 

in your Raspberry Pi’s SSH session, as well as 

$ pip install azure-iot-device

to add the necessary libraries. (Note: if you do not set the environment variable or pass it in as an argument, the program will not work!) The connection string contains the information required for the device to establish a connection with the IoT Hub service and communicate with it. You can then monitor output in the Hub with 

$ az iot hub monitor-events --hub-name <your IoT Hub name> --output table

 or in the Azure Portal.
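If you want a quick sanity check of the connection first, a minimal sketch with the azure-iot-device package installed above might look like this (the message text is arbitrary):

# Minimal connectivity check: send one message to the IoT Hub using the connection
# string exported as IOTHUB_DEVICE_CONNECTION_STRING above.
import os

from azure.iot.device import IoTHubDeviceClient, Message

conn_str = os.environ["IOTHUB_DEVICE_CONNECTION_STRING"]
client = IoTHubDeviceClient.create_from_connection_string(conn_str)
client.connect()
client.send_message(Message("test message from Raspberry Pi"))
client.disconnect()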

To confirm everything works, download and run this example and check that you can see the test message. For the second half of deployment, we’ll need a way to customize how our model is used within the code. Thankfully, Edge Impulse provides a Python SDK for this purpose. Install it with 

$ sudo apt-get install libatlas-base-dev libportaudio0 libportaudio2 libportaudiocpp0 portaudio19-dev
$ pip3 install edge_impulse_linux -i https://pypi.python.org/simple

There’s some simple code that can be found here on GitHub, and it works by setting up a connection to the Azure IoT Hub and then running the model.


Once you’ve either downloaded the zip file or cloned the repo into a folder, get the model file by running

$ edge-impulse-linux-runner --download modelfile.eim

inside of the folder you just created from the cloning process. This will download a file called modelfile.eim. Now, run the Python program with 

$ python lug_nut_counter.py ./modelfile.eim -c <LUG_NUT_COUNT>

where <LUG_NUT_COUNT> is the correct number of lug nuts that should be attached to the wheel (you might have to use python3 if both Python 2 and 3 are installed).

Now, whenever a wheel is detected, the number of lug nuts is counted. If that number falls short of the target, a message is sent to the Azure IoT Hub.

And by only sending messages when something is wrong, we avoid wasting bandwidth on empty payloads.
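For reference, a stripped-down sketch of that logic might look like the following. This is not the full lug_nut_counter.py from the repo: detect_objects() is a hypothetical stand-in for the Edge Impulse inference call, and the label names are assumptions.

import os

from azure.iot.device import IoTHubDeviceClient, Message

EXPECTED_LUG_NUTS = 5  # the real script takes this count as a command-line argument

client = IoTHubDeviceClient.create_from_connection_string(
    os.environ["IOTHUB_DEVICE_CONNECTION_STRING"])
client.connect()


def check_frame(detections):
    # detections: list of labels produced by the model for one camera frame,
    # e.g. ["wheel", "lug nut", "lug nut", ...] (label names are assumptions)
    if "wheel" not in detections:
        return  # nothing to check until a wheel is in view
    lug_nuts = detections.count("lug nut")
    if lug_nuts < EXPECTED_LUG_NUTS:
        # Only alert when something is wrong, keeping payloads (and bandwidth) minimal.
        client.send_message(Message(
            f"ALERT: {lug_nuts} of {EXPECTED_LUG_NUTS} lug nuts detected"))


# In the real script this runs for every frame from the camera:
# check_frame(detect_objects(frame))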

The possibilities are endless

Imagine using object detection for an industrial task such as quality control on an assembly line, identifying ripe fruit amongst rows of crops, detecting machinery malfunctions, or running inference on remote, battery-powered devices. Between Edge Impulse, hardware like Raspberry Pi, and the Microsoft Azure IoT Hub, you can design endless models and deploy them on every device, while authenticating each and every device with built-in security.

You can set up individual identities and credentials for each of your connected devices to help retain the confidentiality of both cloud-to-device and device-to-cloud messages, revoke access rights for specific devices, transmit code and services between the cloud and the edge, and benefit from advanced analytics on devices running offline or with intermittent connectivity. And if you’re really looking to scale your operation and enjoy a complete dashboard view of the device fleets you manage, it is also possible to receive IoT alerts in Microsoft’s Connected Field Service from Azure IoT Central – directly.

Feel free to take the code for this project hosted here on GitHub and create a fork or add to it.

The complete project is available here. Let us know your thoughts at [email protected]. There are no limits, just your imagination at work.

The post IoT gets a machine learning boost, from edge to cloud appeared first on Raspberry Pi.

[$] STARTTLS considered harmful

Post Syndicated from original https://lwn.net/Articles/866481/rss

The use of Transport Layer Security (TLS) encryption is ubiquitous on today’s internet, though that has largely happened over the last 20 years or so; the first public version of its predecessor, Secure Sockets Layer (SSL), appeared in 1995. Before then, internet protocols were generally not encrypted, thus providing fertile ground for various types of “meddler-in-the-middle” (MitM) attacks. Later on, the STARTTLS command was added to some protocols as a backward-compatible way to add TLS support, but the mechanism has suffered from a number of flaws and vulnerabilities over the years. Some recent research, going by the name “NO STARTTLS”, describes more, similar vulnerabilities and concludes that it is probably time to avoid using STARTTLS altogether.
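To make the distinction concrete, here is a minimal sketch (using a placeholder mail server hostname) contrasting opportunistic STARTTLS with implicit TLS in Python’s smtplib; the research’s conclusion is essentially that the second form should be preferred:

# Not from the LWN article: a minimal illustration of STARTTLS versus implicit TLS.
# The hostname is a placeholder; ports 587 and 465 are the conventional submission ports.
import smtplib
import ssl

context = ssl.create_default_context()

# STARTTLS: the connection begins in plaintext on port 587 and is then upgraded,
# which is the negotiation step a meddler-in-the-middle can tamper with.
with smtplib.SMTP("mail.example.com", 587) as conn:
    conn.starttls(context=context)
    conn.noop()

# Implicit TLS: the connection is encrypted from the first byte on port 465.
with smtplib.SMTP_SSL("mail.example.com", 465, context=context) as conn:
    conn.noop()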
