Google v. Oracle Explained: The Fight for Interoperable Software

Post Syndicated from Rina Diane Caballar original https://spectrum.ieee.org/tech-talk/computing/software/google-v-oracle-explained-supreme-court-news-apis-software

Application programming interfaces (APIs) are the building blocks of software interoperability. APIs provide the specifications for different software programs to communicate and interact with each other. For instance, when a travel aggregator website sends a request for flight information to an airline’s API, the API would send flight details back for the website to display.
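The request/response flow described above can be sketched in a few lines of code. The endpoint, query parameters, and response shape below are hypothetical, chosen only to illustrate how an API contract lets two programs interoperate:

```javascript
// Hypothetical sketch of a travel site calling an airline's flight API.
// The base URL, parameters, and JSON shape are invented for illustration.

// Build the request URL the aggregator would send to the airline's API.
function buildFlightQuery(baseUrl, { from, to, date }) {
  const params = new URLSearchParams({ from, to, date });
  return `${baseUrl}/flights?${params.toString()}`;
}

// Turn the (hypothetical) JSON response into the lines the site displays.
function summarizeFlights(responseJson) {
  return responseJson.flights.map(f => `${f.number}: ${f.depart} -> ${f.arrive}`);
}

const url = buildFlightQuery('https://api.example-airline.com', {
  from: 'SFO', to: 'JFK', date: '2020-03-01'
});
console.log(url);

const sample = { flights: [{ number: 'EA100', depart: '08:00', arrive: '16:30' }] };
console.log(summarizeFlights(sample));
```

The point is the separation of concerns: as long as both sides honor the documented request and response formats, neither needs to know anything about the other's internals.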

Keeping APIs open, meaning they’re publicly listed and available or shared through a partnership, enables developers to freely build applications that work together. That practice is the basis of how software works today. But a decade-long fight between Google and Oracle over API copyright and fair use could upend the status quo.

Peculiarities of the Bulgarian "Patriot"

Post Syndicated from Татяна Кристи original https://bivol.bg/patreoti-putinoidi.html

Wednesday, 19 February 2020


I use quotation marks because the Bulgarian patriot is not really a patriot, but often a very confused little man, weaned on primitive mantras, primitive media, primitive politicians, and very primitive notions of the world. The Bulgarian patriot very much wants Bulgaria to be independent of everyone, yet at the same time always wants someone to fix it, to save it, to liberate it "from the bad guys." The Bulgarian "patriot" is proud of having been a slave for an endlessly long time. It gives him masochistic pleasure to boast of his slavery everywhere and at every opportunity; he would not even lend his icons to the Louvre because of the Turkish yoke. He wants his slavery, and that's that! It gives him the strength to get yet another tattoo of Levski or Botev on his biceps, to blow the bagpipe, to wave the flag, and to wade into some body of water to dance the horo, usually drunk…

But the peculiarity of the greater part of the "patriots" in Bulgaria is that they belong to the Russophile-Putinoid breed. For reasons inexplicable to logic, they link Bulgaria's "independence" to its "dependence" on Russia, and for equally inexplicable reasons they believe that salvation will come from there: from that wretched territory, picked clean by the Russian political mafia, with its millions of destitute people and its misanthropic authoritarian regime. These people, for reasons inexplicable even to themselves, hate the rich and prosperous West and adore Russia, with all its poverty, feudalism, hopelessness, corruption, and political arbitrariness, now officially turned into a junta.

These same people wave the Bulgarian and Russian flags as equal emblems of "Bulgarianness." A recent survey showed that Bulgarians like Putin more than any other nation in the world does. The reason, of course, lies in the decades-long indoctrination of the population that it is "indebted to Russia": an indoctrination that has degenerated into a laughable pathology damaging the "patriot's" brain. These people claim that corruption in Bulgaria comes from the West and that Russia has nothing to do with it. Their stubbornness and ignorant refusal to see what is actually happening in the country, and that Russia is precisely part of the cause of the mega-corruption, are simply shocking.

I am writing these lines because I followed a discussion under an article on the Bivol website about the Russian state's occupation of the Kamchia complex near Varna. The naivety of many, combined with their aggression in defense of the Russian presence, is simply frightening. Right under that discussion, a debate unfolded with armies of Putinophiles who openly admit that they prefer Putin's regime to the prosperous Western world; that we need "exactly such wonderful bases, unlike NATO…"

For these individuals, democracy, alas, is an abstract and hated concept, because it does not pass through their gastrointestinal tract, much like computer programs for the readers of the newspaper Duma. And nostalgia for socialism still smolders in the souls of these poor participants in the failed transition, who do not realize that they themselves are to blame for that failure: with their lack of information, their delusions, and above all their criminal lack of critical thinking, which manifests itself at the ballot box.

These people, pumped full of poisonous Russian propaganda, cannot understand that the cause of the failure of the democratic reforms lies precisely in the numerous party offshoots of the BKP-DS in Bulgaria, i.e. of the BKP united. They cannot grasp that because of them, and above all because of the ruling GERB party, we have compromised national security and security services, with whose help Putin's emissaries feel right at home in Bulgaria.

And yet it is astounding, this self-destructive logic of the Putinophile who latches onto the poor, authoritarian, thieving Kremlin regime, which, apart from natural resources, has nothing to boast of and nothing innovative to offer the world except the export of subversion, saber-rattling, and a zombified population that walks around in World War II military uniforms all year round.

For many "patriots," the satrap Putin has turned out to be some kind of idol through which they live out their dubious masculinity. Besides the numerous people of advanced age, Putin has many admirers in Bulgaria's security structures: soldiers and police officers often leap up like aggressive hounds to lament that Putin "cannot fix Bulgaria." They, together with the young lumpen losers who have heard from their grandmothers "how good things were under Bai Tosho," are the first to pick a fight and literally sow threats of violence. They, plus the henna-dyed aunties photographed against a backdrop of shabby wallpaper and old oil stoves, alas, do not see the Kremlin's shadow in the actions of Bulgaria's corrupt politicians, nor that it is precisely the Kremlin that is to blame for Bulgaria being the poorest and most corrupt country in the EU, in last place by almost every measure.

This shows that the years-long propaganda of the pro-Russian paid political lobby in Bulgaria, as well as the Putin regime itself with its hybrid operations in our country, are doing their job well in zombifying the population with mantras from the past and lies about the present and the future. It shows how the brutalized, uninformed, deluded, and wretched population will not stop being saddled by every political charlatan, because these people are a terribly easy target for manipulation; and alas, because of them and their foolish voting at elections, Bulgaria cannot move forward: because of their confusion and their aggression, always aimed in the wrong direction, precisely at the people who want to help them draw closer to the civilized and prosperous world.

Photo: The welcoming of Putin's bikers, the "Night Wolves." Bulphoto ©

Court Orders Cloudflare to Prevent Access to Pirated Music or Face Fines or Prison

Post Syndicated from Andy original https://torrentfreak.com/court-orders-cloudflare-to-prevent-access-to-pirated-music-or-face-fines-or-prison-200219/

Earlier this week, Germany-focused music piracy site DDL-Music.to suddenly became inaccessible to the public. The site had been using the services of Cloudflare, but an unusual error message suggested that the US-based company had stepped in to disrupt the site’s activities.

‘Error HTTP 451’ is displayed by Cloudflare when a site is “Unavailable For Legal Reasons” and, at least as far as pirate sites are concerned, its appearance is very rare indeed. Cloudflare’s documentation notes that the message “should include an explanation in the response body with details of the legal demand.”

As the image above shows, no explanation was provided by Cloudflare but an investigation by Tarnkappe, details of which were shared with TorrentFreak, now reveals the unusual circumstances behind DDL-Music’s disconnection.

In early June 2019, Universal Music GmbH (Germany) reportedly sent a copyright infringement complaint to Cloudflare after finding links on DDL-Music to tracks from the album Herz Kraft Werke by German singer Sarah Connor. The tracks themselves were not hosted by DDL-Music but could be found on a third-party hosting site. Universal wanted the tracks to be rendered inaccessible within 24 hours, but Cloudflare didn’t immediately comply.

Universal Music reportedly followed up with a warning to Cloudflare on June 19, 2019, demanding information about DDL-Music and its operators. A day later, the CDN company responded by declaring that it’s not responsible for its customers’ activities and Universal should deal with the website’s operator and/or webhost. However, Cloudflare did provide Universal with an email address along with details of DDL-Music’s hosting provider, supposedly in Pakistan.

With an obvious dispute underway, a hearing took place at the Cologne District Court (Landgericht Köln) on December 5, 2019. Lars Sobiraj of Tarnkappe, who obtained documentation relating to the hearing, informs TF that the Court ultimately determined that Cloudflare could be held liable for infringement of Universal Music’s copyrights by facilitating access to the tracks via DDL-Music, if it failed to take action.

This “liability as a disturber” (Störerhaftung) comes into play when a service (in this case, Cloudflare) contributes to a third party’s infringement without the element of intent. Under German law, the service can nevertheless be held liable if it fails to take reasonable action to prevent future infringement.

On January 30, 2020, the Cologne District Court handed down a preliminary injunction against Cloudflare. This was received at the Hamburg offices of Cloudflare’s law firm Taylor Wessing on February 4, 2020. It informed Cloudflare that should it continue to facilitate access to the Universal Music content detailed above, it could be ordered to pay a fine of up to 250,000 euros ($270,000) or, in the alternative, the managing director of Cloudflare could serve up to six months in prison.

In the event, however, Cloudflare appears to have taken the decision to jettison DDL-Music completely, as indicated by the Error 451 message that appeared a few days ago. The district court’s decision can be appealed but whether Cloudflare will take that route is currently unknown. Despite requests from TF for comment, the company has remained silent.

Meanwhile, DDL-Music appears to be migrating to DDoS-Guard, a CDN and DDoS mitigation platform that according to its website is registered in Scotland but is most probably based in Russia. Or the Netherlands, if its Twitter account is to be believed.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Cook: security things in Linux v5.4

Post Syndicated from jake original https://lwn.net/Articles/812782/rss

A bit belatedly, Kees Cook looks at some security-relevant changes in Linux 5.4 in a blog post. He lists a small handful of changes, including:

After something on the order of 8 years, Linux can now draw a bright line between ‘ring 0’ (kernel memory) and ‘uid 0’ (highest privilege level in userspace). The ‘kernel lockdown’ feature, which has been an out-of-tree patch series in most Linux distros for almost as many years, attempts to enumerate all the intentional ways (i.e. interfaces not flaws) userspace might be able to read or modify kernel memory (or execute in kernel space), and disable them. While Matthew Garrett made the internal details fine-grained controllable, the basic lockdown LSM can be set to either disabled, ‘integrity’ (kernel memory can be read but not written), or ‘confidentiality’ (no kernel memory reads or writes). Beyond closing the many holes between userspace and the kernel, if new interfaces are added to the kernel that might violate kernel integrity or confidentiality, now there is a place to put the access control to make everyone happy and there doesn’t need to be a rehashing of the age old fight between ‘but root has full kernel access’ vs ‘not in some system configurations’.

Multi-Perspective Validation Improves Domain Validation Security

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org/2020/02/19/multi-perspective-validation.html

At Let’s Encrypt we’re always looking for ways to improve the security and integrity of the Web PKI. We’re proud to launch multi-perspective domain validation today because we believe it’s an important step forward for the domain validation process. To our knowledge we are the first CA to announce multi-perspective validation deployment at scale.

Domain validation is a process that all CAs use to ensure that a certificate applicant actually controls the domain they want a certificate for. Typically the domain validation process involves asking the applicant to place a particular file or token at a controlled location for the domain, such as a particular path or a DNS entry. Then the CA will check that the applicant was able to do so. Historically it looks something like this:
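The fetch-and-compare at the heart of that process can be sketched as follows. The path is modeled on ACME's HTTP-01 challenge, but the function names and token format here are illustrative, not Let's Encrypt's actual implementation:

```javascript
// Illustrative sketch of an HTTP-based domain validation check.

// The CA asks the applicant to serve `token` at a well-known path...
function challengePath(token) {
  return `/.well-known/acme-challenge/${token}`;
}

// ...then fetches that path and compares the body against the expected value.
// `fetchBody` is injected so the check itself can be exercised offline.
async function validateDomain(domain, token, expected, fetchBody) {
  const body = await fetchBody(`http://${domain}${challengePath(token)}`);
  return body.trim() === expected; // issue the certificate only on a match
}

// Example with a stubbed fetch standing in for the real HTTP request:
const stubFetch = async () => 'token123.accountThumbprint\n';
validateDomain('example.com', 'token123', 'token123.accountThumbprint', stubFetch)
  .then(ok => console.log('validated:', ok));
```

Only someone who controls the domain (or its traffic, which is the problem discussed next) can make the expected content appear at that location.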

System Architecture Diagram

A potential issue with this process is that if a network attacker can hijack or redirect network traffic along the validation path (for the challenge request, or associated DNS queries), then the attacker can trick a CA into incorrectly issuing a certificate. This is precisely what a research team from Princeton demonstrated can be done with an attack on BGP. Such attacks are rare today, but we are concerned that these attacks will become more numerous in the future.

The Border Gateway Protocol (BGP) and most deployments of it are not secure. While there are ongoing efforts to secure BGP, such as RPKI and BGPsec, it may be a long time until BGP hijacking is a thing of the past. We don’t want to wait until we can depend on BGP being secure, so we’ve worked with the research team from Princeton to devise a way to make such attacks more difficult. Instead of validating from one network perspective, we now validate from multiple perspectives as well as from our own data centers:

System Architecture Diagram

Today we are validating from multiple regions within a single cloud provider. We plan to diversify network perspectives to other cloud providers in the future.

This makes the kind of attack described earlier more difficult because an attacker must successfully compromise three different network paths at the same time (the primary path from our data center, and at least two of the three remote paths). It also increases the likelihood that such an attack will be detected by the Internet topology community.
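A minimal sketch of that quorum rule, assuming the thresholds described above (the primary data-center check must pass, and at least two of the three remote perspectives must agree); the function itself is illustrative, not Let's Encrypt's code:

```javascript
// Quorum rule sketch: primary perspective must succeed, plus at least
// `required` of the remote perspectives. Thresholds are from the text above.
function quorumPassed(primaryOk, remoteResults, required = 2) {
  if (!primaryOk) return false;
  const agreeing = remoteResults.filter(Boolean).length;
  return agreeing >= required;
}

// A BGP hijack near a single vantage point fails that one perspective, but
// under this rule it cannot by itself produce a fraudulent issuance.
console.log(quorumPassed(true, [true, true, false]));  // 2 of 3 agree
console.log(quorumPassed(true, [true, false, false])); // only 1 of 3 agrees
```

Requiring agreement across independent network paths forces the attacker to hijack several routes simultaneously, which is both harder to do and easier to spot.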

We’d like to thank the research groups of Prof. Prateek Mittal and Prof. Jennifer Rexford at Princeton University for their partnership in developing this work. We will continue to work with them to refine the effectiveness of our multiple perspective validation design and implementation. We’d also like to thank Open Technology Fund for supporting this work.

We depend on contributions from our community of users and supporters in order to provide our services. If your company or organization would like to sponsor Let’s Encrypt please email us at [email protected]. We ask that you make an individual contribution if it is within your means.

Deploy and publish to an Amazon MQ broker using AWS serverless

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/deploy-and-publish-to-an-amazon-mq-broker-using-aws-serverless/

If you’re managing a broker on premises or in the cloud alongside dependent existing infrastructure, Amazon MQ provides easily deployed, managed ActiveMQ brokers. These support a variety of messaging protocols and can offload operational overhead. That is useful when deploying a serverless application that communicates with one or more external applications that also communicate with each other.

This post walks through deploying a serverless backend and an Amazon MQ broker in one step using the AWS Serverless Application Model (AWS SAM). It shows you how to publish to a topic using AWS Lambda and then how to create a client application to consume messages from the topic, using a supported protocol. As a result, the AWS services and features supported by AWS Lambda can now be delivered to an external application connected to an Amazon MQ broker using STOMP, AMQP, MQTT, OpenWire, or WSS.

Although Amazon MQ supports many protocols, this walkthrough focuses on one. MQTT is a lightweight publish–subscribe messaging protocol. It is built for a small code footprint and is one of the best-supported messaging protocols across programming languages. The protocol also defines quality of service (QoS) levels to help ensure message delivery when a device goes offline. Using QoS features, you can limit failure states in an interdependent network of applications.
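The three standard QoS levels can be summarized in a short sketch. The level semantics below follow the MQTT specification; the helper itself is only illustrative:

```javascript
// Standard MQTT quality-of-service levels (per the MQTT specification).
const QOS = {
  0: 'at most once (fire and forget)',
  1: 'at least once (acknowledged, duplicates possible)',
  2: 'exactly once (four-way handshake)'
};

// Illustrative helper mapping a QoS level to its delivery guarantee.
function describeQos(level) {
  if (!(level in QOS)) throw new Error(`invalid QoS level: ${level}`);
  return QOS[level];
}

// With a client library such as mqtt.js, QoS is chosen per operation, e.g.:
// client.publish('some/topic', 'hello', { qos: 1 })
console.log(describeQos(1));
```

Higher levels trade extra round trips for stronger guarantees, which is why QoS 1 is a common default for application messaging.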

To simplify this configuration, I’ve provided an AWS Serverless Application Repository application that deploys AWS resources using AWS CloudFormation. Two resources are deployed: a single-instance Amazon MQ broker and a Lambda function. The Lambda function uses Node.js and an MQTT library to act as a producer and publish to a message topic on the Amazon MQ broker. A provided sample Node.js client app can act as an MQTT client and subscribe to the topic to receive messages.

Prerequisites

The following resources are required to complete the walkthrough:

Required steps

To complete the walkthrough, follow these steps:

  • Clone the aws-sar-lambda-publish-amazonmq GitHub repository.
  • Deploy the AWS Serverless Application Repository application.
  • Run a Node.js MQTT client application.
  • Send a test message from an AWS Lambda function.
  • Use composite destinations.

Clone the GitHub repository

Before beginning, clone or download the project repository from GitHub. It contains the sample Node.js client application used later in this walkthrough.

Deploy the AWS Serverless Application Repository application

  1. Navigate to the page for the lambda-publish-amazonmq AWS Serverless Application Repository application.
  2. In Application settings, fill the following fields:

    – AdminUsername
    – AdminPassword
    – ClientUsername
    – ClientPassword

    These are the credentials for the Amazon MQ broker. The admin credentials are assigned to environment variables used by the Lambda function to publish messages to the Amazon MQ broker. The client credentials are used in the Node.js client application.

  3. Choose Deploy.

Creation can take up to 10 minutes. When completed, proceed to the next section.

Run a Node.js MQTT client application

The Amazon MQ broker supports OpenWire, AMQP, STOMP, MQTT, and WSS connections. This allows any supported programming language to publish and consume messages from an Amazon MQ queue or topic.

To demonstrate this, you can deploy the sample Node.js MQTT client application included in the GitHub project for the AWS Serverless Application Repository app. The client credentials created in the previous section are used here.

  1. Open a terminal application and change to the client-app directory in the GitHub project folder by running the following command:
    cd ~/some-project-path/aws-sar-lambda-publish-amazonmq/client-app
  2. Install the Node.js dependencies for the client application:
    npm install
  3. The app requires a WSS endpoint to create an Amazon MQ broker MQTT WebSocket connection. This can be found on the broker page in the Amazon MQ console, under Connections.
  4. The node app takes four arguments separated by spaces. Provide the user name and password of the client created on deployment, followed by the WSS endpoint and a topic, some/topic.
    node app.js "username" "password" "wss://endpoint:port" "some/topic"
  5. After connected prints in the terminal, leave this app running, and proceed to the next section.

The code has three important components for subscribing to the topic and receiving messages:

  • Connecting to the MQTT broker.
  • Subscribing to the topic on a successful connection.
  • Creating a handler for any message events.

The following code example shows connecting to the MQTT broker.

const mqtt = require('mqtt')
const { v1: uuidv1 } = require('uuid')

const args = process.argv.slice(2)

let options = {
  username: args[0],
  password: args[1],
  clientId: 'mqttLambda_' + uuidv1()
}

let mqEndpoint = args[2]
let topic = args[3]

let client = mqtt.connect( mqEndpoint, options)

The following code example shows subscribing to the topic on a successful connection.

// When connected, subscribe to the topic

client.on('connect', function() {
  console.log("connected")

  client.subscribe(topic, function (err) {
    if(err) console.log(err)
  })
})

The following code example shows creating a handler for any message events.

// Log messages

client.on('message', function (topic, message) {
  console.log(`message received on ${topic}: ${message.toString()}`)
})

Send a test message from an AWS Lambda function

Now that the Amazon MQ broker, PublishMessage Lambda function, and the Node.js client application are running, you can test consuming messages from a serverless application.

  1. In the Lambda console, select the newly created PublishMessage Lambda function. Its name begins with the name given to the AWS Serverless Application Repository application on deployment.
  2. Choose Test.
  3. Give the new test event a name, and optionally modify the message. Choose Create.
  4. Choose Test to invoke the Lambda function with the test event.
  5. If the execution is successful, the message appears in the terminal where the Node.js client-app is running.

Using composite destinations

The Amazon MQ broker uses an XML configuration to enable and configure ActiveMQ features. One of these features, composite destinations, makes one-to-many relationships on a single destination possible. This means that a queue or topic can be configured to forward to another queue, topic, or combination.

This is useful when fanning out to a number of clients, some of whom are consuming queues while others are consuming topics. The following steps demonstrate how you can easily modify the broker configuration and define multiple destinations for a topic.

  1. On the Amazon MQ Configurations page, select the matching configuration from the list. It has the same stack name prefix as your broker.
  2. Choose Edit configuration.
  3. After the broker tag, add the following code example. It creates a new composite destination in which messages published to “some/topic” are forwarded to both a queue, “A.Queue”, and a topic, “foo.”
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <broker schedulePeriodForDestinationPurge="10000" xmlns="http://activemq.apache.org/schema/core">
      
      <destinationInterceptors>
        <virtualDestinationInterceptor>
          <virtualDestinations>
            <compositeTopic name="some.topic">
              <forwardTo>
                <queue physicalName="A.Queue"/>
                <topic physicalName="foo" />
              </forwardTo>
            </compositeTopic>
          </virtualDestinations>
        </virtualDestinationInterceptor>
      </destinationInterceptors>
      <destinationPolicy>
  4. Choose Save, add a description for this revision, and then choose Save.
  5. In the left navigation pane, choose Brokers, and select the broker with the stack name prefix.
  6. Under Details, choose Edit.
  7. Under Configuration, select the latest configuration revision that you just created.
  8. Choose Schedule modifications, Immediately, Apply.

After the reboot is complete, run another test of the Lambda function. Then, open and log in to the ActiveMQ broker web console, which can be found under Connections on the broker page. To log in, use the admin credentials created on deployment.

On the Queues page, a new queue “A.Queue” was generated because you published to some/topic, which has a composite destination configured.

Conclusion

It can be difficult to tackle architecting a solution with multiple client destinations and networked applications. Although there are many ways to go about solving this problem, this post showed you how to deploy a robust solution using ActiveMQ with a serverless workflow. The workflow publishes messages to a client application using MQTT, a well-supported and lightweight messaging protocol.

To accomplish this, you deployed a serverless application and an Amazon MQ broker in one step using the AWS Serverless Application Repository. You also ran a Node.js MQTT client application authenticated as a registered user in the Amazon MQ broker. You then used Lambda to test publishing a message to a topic on the Amazon MQ broker. Finally, you extended functionality by modifying the broker configuration to support a virtual composite destination, allowing delivery to multiple topic and queue destinations.

With the completion of this project, you can take things further by integrating other AWS services and third-party or custom client applications. Amazon MQ provides multiple protocol endpoints that are widely used across the software and platform landscape. Using serverless as an in-between, you can deliver features from services like Amazon EventBridge to your external applications, wherever they might be. You can also explore how to invoke a Lambda function from Amazon MQ.


Creating a Seamless Handoff Between Amazon Pinpoint and Amazon Connect

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/creating-a-seamless-handoff-between-amazon-pinpoint-and-amazon-connect/

Note: This post was written by Ilya Pupko, Senior Consultant for the AWS Digital User Engagement team.


Time to read: 5 minutes
Learning level: Intermediate (200)
Services used: Amazon Pinpoint, Amazon SNS, AWS Lambda, Amazon Lex, Amazon Connect

Your customers deserve to have helpful communications with your brand, regardless of the channel that you use to interact with them. There are many situations in which you might have to move customers from one channel to another—for example, when a customer is interacting with a chatbot over SMS, but their needs suddenly change to require voice assistance. To create a great customer experience, your communications with your customers should be seamless across all communication channels.

Welcome aboard Customer Obsessed Airlines

In this post, we look at a scenario that involves our fictitious airline, Customer Obsessed Airlines. Severe storms in one area of the country have caused Customer Obsessed Airlines to cancel a large number of flights. Customer Obsessed Airlines has to notify all of the affected customers of the cancellations right away. But most importantly, to keep customers as happy as possible in this unfortunate and unavoidable situation, Customer Obsessed Airlines has to make it easy for customers to rebook their flights.

Fortunately, Customer Obsessed Airlines has implemented the solution that’s outlined later in this post. This solution uses Amazon Pinpoint to send messages to a targeted segment of customers—in this case, the specific customers who were booked on the affected flights. Some of these customers might have straightforward travel itineraries that can simply be rebooked through interactions with a chatbot. Other customers who have more complex itineraries, or those who simply prefer to interact with a human over the phone, can be handed off to an agent in your call center.

About the solution

The solution that we’ll build to handle this scenario can be deployed in under an hour. The following diagram illustrates the interactions in this solution.

At a high level, this solution uses the following workflow:

  1. An event occurs. Automated impact analysis systems trigger the creation of custom segments—in this case, all passengers whose flights were cancelled.
  2. Amazon Pinpoint sends a message to the affected passengers through their preferred channels. Amazon Pinpoint supports the email, SMS, push, and voice channels, but in this example, we focus exclusively on SMS.
  3. Passengers who receive the message can respond. When they do, they interact with a chatbot that helps them book a different flight.
  4. If a passenger requests a live agent, or if their situation can’t be handled by a chatbot, then Amazon Pinpoint passes information about the customer’s situation and communication history to Amazon Connect. The passenger is entered into a queue. When the passenger reaches the front of the queue, they receive a phone call from an agent.
  5. After being rebooked, the passenger receives a written confirmation of the changes to their itinerary through their preferred channel. Passengers are also given the option of providing feedback on their interaction when the process is complete.

To build this solution, we use Amazon Pinpoint to segment our customers based on their attributes (such as which flight they’ve booked), and to deliver messages to those segments.

We also use Amazon Connect to manage the voice calling part of the solution, and Amazon Lex to power the chatbot. Finally, we connect these services using logic that’s defined in AWS Lambda functions.

Setting up the solution

Step 1: Set up Amazon Pinpoint and link it with Amazon Lex

The first step in setting up this solution is to create a new Amazon Pinpoint project and configure the SMS channel. When that’s done, you can create an Amazon Lex chatbot and link it to the Amazon Pinpoint project.

We described this process in detail in an earlier blog post. Complete the procedures in Create an SMS Chatbot with Amazon Pinpoint and Amazon Lex, and then proceed to step 2.

Step 2: Set up Amazon Connect and link it with your Amazon Lex chatbot

By completing step 1, we’ve created a system that can send messages to our passengers and receive messages from them. The next step is to create a way for passengers to communicate with our call center.

The Amazon Connect Administrator Guide provides instructions for linking an Amazon Lex bot to an Amazon Connect instance. For complete procedures, see Add an Amazon Lex Bot.

When you complete these procedures, link your Amazon Connect instance to the same Amazon Lex bot that you created in step 1. This step is intended to provide customers with a consistent, cohesive experience across channels.

Step 3: Set up an Amazon Connect callback queue and use Amazon Pinpoint keyword logic to trigger it

Now that we’ve configured Amazon Pinpoint and Amazon Connect, we can connect them.

Linking the two services makes it possible for passengers to request additional assistance. Traditionally, passengers in this situation would have to call a call center themselves and then wait on hold for an agent to become available. However, in this solution, our call center calls the passenger directly as soon as an agent is available. When the agent calls the passenger, the agent has all of the information about the passenger’s issue, as well as a transcript of the passenger’s interactions with your chatbot.

To implement an automatic callback mechanism, use the Amazon Pinpoint Connect Callback Requestor, which is available on the AWS GitHub page.
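As an illustrative sketch only (the linked callback requestor has its own implementation), a Lambda function could build the parameters for Amazon Connect’s StartOutboundVoiceContact API along these lines; the attribute names are assumptions:

```javascript
// Hypothetical sketch: parameters a Lambda function might pass to Amazon
// Connect's StartOutboundVoiceContact API to call a passenger back. The
// attribute names are assumptions, not the callback requestor's actual code.

function buildCallbackParams({ phoneNumber, instanceId, contactFlowId, chatTranscript }) {
  return {
    DestinationPhoneNumber: phoneNumber,
    InstanceId: instanceId,
    ContactFlowId: contactFlowId,
    Attributes: {
      // Contact attributes give the agent context from the chatbot session.
      transcript: chatTranscript
    }
  };
}

// With the AWS SDK for JavaScript, the call itself would be roughly:
// new AWS.Connect().startOutboundVoiceContact(buildCallbackParams(input)).promise()
const params = buildCallbackParams({
  phoneNumber: '+15555550100',
  instanceId: 'your-connect-instance-id',
  contactFlowId: 'your-contact-flow-id',
  chatTranscript: 'Passenger asked to rebook a cancelled flight.'
});
console.log(params.DestinationPhoneNumber);
```

Passing the chatbot transcript as a contact attribute is what lets the agent pick up the conversation without asking the passenger to repeat themselves.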

Next steps

By completing the preceding three steps, you can send messages to a subset of your users based on the criteria you choose and the type of message you want to send. Your customers can interact with your message by replying with questions. When they do, a chatbot responds intelligently and appropriately.

You can add to this solution by expanding it to cover other communication channels, such as push notifications. You can also automate the initial communication by integrating the solution with your systems of record.

We’re excited to see what you build using the solution that we outlined in this post. Let us know of your ideas and your successes in the comments.

ISP Questions Rightscorp’s Credibility and Objectivity Ahead of Piracy Trial

Post Syndicated from Ernesto original https://torrentfreak.com/isp-questions-rightscorps-credibility-and-objectivity-ahead-of-piracy-trial-200218/

A group of major record labels is running a legal campaign against Internet providers which they accuse of not doing enough to deter persistent copyright infringers.

This has already resulted in a massive windfall in their case against Cox, where a jury awarded a billion dollars in damages. In a few weeks, the music companies will be hoping for the same outcome following the trial against ISP Grande Communications.

Similar to the Cox case, the music companies – including Capitol Records, Warner Bros, and Sony Music – argue that the Internet provider willingly turned a blind eye to pirating customers. As such, it should be held accountable for copyright infringements allegedly committed by its users.

In preparation for the trial, both sides have submitted requests to keep information away from the jury members. These motions in limine, as they’re called, can be used to prevent misleading or prejudicial information from influencing the jury.

The record labels, for example, asked to exclude certain evidence regarding Rightscorp, the company that sent the anti-piracy notices to Grande. These notices are essential evidence in the case, as Grande is accused of not properly responding to them.

Specifically, the music companies asked the court to exclude “irrelevant or unfairly prejudicial” evidence or arguments about Rightscorp’s business practices, the company’s finances, or the allegation that the anti-piracy firm destroyed evidence.

A few days ago Grande responded to this request. According to the ISP, it would be unfair to exclude these broad categories, especially because the information is directly relevant to the reliability of key witnesses.

As we have documented here in the past, Rightscorp's financial situation is far from healthy. The company survives largely on financial help from the record companies, a point not lost on Grande.

The ISP questions whether the music companies' trial witnesses, Rightscorp's Gregory Boswell and Christopher Sabec, are still credible given the circumstances.

“In assessing the credibility of Mr. Boswell and Mr. Sabec, the jury should be permitted to consider not only Rightscorp’s financial relationship with Plaintiffs, but also evidence regarding Rightscorp’s dire financial condition,” Grande notes.

“In short, Rightscorp’s relationship with Plaintiffs is the only thing keeping Rightscorp’s business afloat, and the jury should know that when evaluating testimony from Mr. Boswell and Mr. Sabec regarding the reliability of the Rightscorp system and the evidence it generates.”

Grande concedes that Rightscorp technically has no direct financial interest in the outcome of the lawsuit. However, it notes that the company certainly has a strong interest in proving that its notices are reliable.

In addition to the financial situation, Grande also questions the ethics of Rightscorp's business practices.

The piracy tracking outfit made a name for itself by demanding settlements from hundreds of thousands of alleged pirates. This business model is one that the music companies were aware of and frowned upon, Grande argues.

The ISP points to emails it obtained from the music companies through discovery which reference an article that describes Rightscorp’s call center script as “terrifying extortion.”

In addition, Grande points out an email from Sony where the music company notes that it wants to keep its distance from Rightscorp, describing it as “publishers using 3rd parties to milk consumers.” Despite these comments, its lawsuit now relies on evidence provided by the same company.

“Now, however, having purchased evidence from Rightscorp, Plaintiffs want to present Rightscorp’s notices as legitimate evidence of infringement and intend to argue that Rightscorp is a credible business with a reliable system,” Grande notes.

The ISP believes that the jury should know about Rightscorp’s financial situation and business practices, including the call center script. This should allow it to make a better assessment of the Rightscorp witnesses’ credibility.

The music companies disagree and, at the same time, submitted several responses to Grande’s requests to have information excluded from the trial.

For example, Grande asked the court to exclude evidence which shows that the company terminated customers for non-payment. However, the music companies argue that this information is crucial, as it shows that terminations were taking place.

“It is understandable that Grande wants to keep from the jury evidence that it terminated customers for non-payment. Such evidence completely eviscerates an argument Grande is likely to make: that because of the importance of internet access, termination of service is a drastic measure that should be used sparingly, if at all.”

The music companies feel that it’s important to highlight that terminations were not a problem when the ISP itself was affected.

“Moreover, evidence that Grande terminated customers when its property or services were being stolen, but refused to do so when others’ property was being stolen, is independently admissible as it is highly probative of Grande’s willfulness,” the music companies add.

It is now up to the court to decide on these and various other motions to determine what evidence can be discussed at trial. Later this month the jury will be selected. As reported earlier, prospective jurors will first be asked several screening questions, including whether they read TorrentFreak articles.

A copy of Grande’s response to the music companies’ motion in limine is available here (pdf), and the music companies’ opposition can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

96-Core Processor Made of Chiplets

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/semiconductors/processors/core-processor-chiplets-isscc-news

For decades, the trend was for more and more of a computer’s systems to be integrated onto a single chip. Today’s system-on-chips, which power smartphones and servers alike, are the result. But complexity and cost are starting to erode the idea that everything should be on a single slice of silicon.

Already, some of the most advanced processors, such as AMD’s Zen 2 processor family, are actually collections of chiplets bound together by high-bandwidth connections within a single package. This week at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, French research organization CEA-Leti showed how far this scheme can go, creating a 96-core processor out of six chiplets.

What’s in Store for 5G This Year

Post Syndicated from Kathy Pretz original https://spectrum.ieee.org/the-institute/ieee-news/whats-in-store-for-5g-this-year

THE INSTITUTE Although 5G is poised to replace LTE for cellular communications, its base stations require about three times as much power. That is one challenge identified by members of the IEEE Future Networks Initiative in “7 Experts Forecast What’s Coming for 5G in 2020,” a roundup of predictions for the technology. The initiative, an IEEE Future Directions program, is helping to pave the way for 5G.

“As more operators push 5G from demonstration sites into wider deployment, 2020 is going to be the year that power efficiency moves to the center of the conversation,” says IEEE Fellow Earl McCune, cochair of the initiative’s hardware working group.

“To operate profitably, the 5G industry requires a sea change in transmitter radio frequency efficiency.” McCune is chief technology officer for Eridan, a company that builds extremely efficient radio hardware for 5G, based in Mountain View, Calif.

“Later this year could see the deployment of 5G-enhanced mobile broadband networks for portable devices used in malls, convention centers, and sports arenas,” says IEEE Senior Member David Witkowski, cochair of the initiative’s deployment working group.

Witkowski is the founder and CEO of Oku Solutions and serves as executive director of the Wireless Communications Initiative at the nonprofit Joint Venture Silicon Valley, both based in San Jose, Calif.

IEEE Fellow Rod Waterhouse, cochair of the initiative’s publication working group, says areas of interest this year include the role satellites will play, vehicle-to-everything communication, and virtual medical care. Waterhouse is the CTO of Octane Wireless, in Hanover, Md.

Witkowski and Waterhouse both predict debates will continue over whether 5G will have harmful effects on humans and the environment. Overcoming such fears will require a deliberate response from industry, governments, and medical academia, Witkowski says.

5G Hybrid Beamforming Design

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/5g_hybrid_beamforming_design

5G systems will deploy large-scale antenna arrays and massive MIMO techniques to boost system capacity. Among the MIMO techniques considered for 5G systems, hybrid beamforming has emerged as a scalable and economical choice. In this webinar, we cover the elements of an end-to-end 5G hybrid beamforming design, including:

  • 5G waveform generation
  • Channel modeling, spatial multiplexing, and precoding
  • Antenna array design and visualization
  • Approaches to hybrid beamforming
  • Design of RF front ends and matching networks
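The core idea behind hybrid beamforming, a small digital baseband precoder feeding a phase-shifter-based analog stage, can be illustrated with a short NumPy sketch. The array sizes here are arbitrary illustrative choices, not values from the webinar:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

num_antennas = 64   # transmit array elements
num_rf_chains = 4   # analog RF chains (the scarce, expensive resource)
num_streams = 4     # parallel data streams

# Analog precoder W_rf: implemented with phase shifters only, so each
# entry is constrained to unit magnitude (normalized here by array size).
phases = rng.uniform(0.0, 2.0 * np.pi, size=(num_antennas, num_rf_chains))
W_rf = np.exp(1j * phases) / np.sqrt(num_antennas)

# Digital baseband precoder W_bb: unconstrained complex weights, but only
# num_rf_chains x num_streams in size -- far smaller than a fully digital
# num_antennas x num_streams precoder would be.
W_bb = (rng.standard_normal((num_rf_chains, num_streams))
        + 1j * rng.standard_normal((num_rf_chains, num_streams)))

# The effective hybrid precoder is the cascade of the two stages.
W = W_rf @ W_bb

# Precode one vector of transmit symbols across the array.
symbols = (rng.standard_normal(num_streams)
           + 1j * rng.standard_normal(num_streams))
tx = W @ symbols
```

The economy comes from the shapes: the expensive fully digital weights shrink to a 4×4 matrix, while the 64-element array is driven by cheap constant-modulus phase shifts.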

NextGen Healthcare: Build and Deployment Pipelines with AWS

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/nextgen-healthcare-build-and-deployment-pipelines-with-aws/

Owen Zacharias, Vice President of Application Delivery at NextGen Healthcare, explains to AWS Solutions Architect Andrea Sabet how his company developed a series of build and deployment pipelines using native AWS services in the highly regulated healthcare sector.

Learn how native AWS services can be used to build and deploy infrastructure and application code. Discover how AWS resources can be rapidly created and updated as part of a CI/CD pipeline while ensuring HIPAA compliance through approved, vetted AWS Identity and Access Management (IAM) roles that AWS CloudFormation is permitted to assume.
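A minimal sketch of that pattern: the pipeline passes a pre-approved role to CloudFormation via the RoleARN parameter, so the stack operation runs with the vetted role's permissions rather than the caller's. The function and stack names below are hypothetical, not from NextGen Healthcare's pipelines:

```python
def stack_parameters(stack_name, template_body, role_arn):
    """Build the arguments for a CloudFormation create/update call,
    pinning the deployment to a pre-approved IAM role via RoleARN."""
    return {
        "StackName": stack_name,
        "TemplateBody": template_body,
        "RoleARN": role_arn,  # CloudFormation assumes this vetted role
        "Capabilities": ["CAPABILITY_NAMED_IAM"],
    }


def deploy_stack(stack_name, template_body, role_arn):
    """Create the stack, or update it if it already exists."""
    import boto3  # imported lazily so the sketch loads without the AWS SDK

    cfn = boto3.client("cloudformation")
    params = stack_parameters(stack_name, template_body, role_arn)
    try:
        return cfn.create_stack(**params)
    except cfn.exceptions.AlreadyExistsException:
        return cfn.update_stack(**params)
```

Because CloudFormation keeps using the supplied role for later stack operations, the pipeline's own credentials never need broad resource permissions.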

February’s AWS Architecture Monthly magazine is all about healthcare. Check it out on Kindle Newsstand, download the PDF, or see it on Flipboard.

Check out more of the This Is My Architecture video series.

Go Language Tops List of In-Demand Software Skills

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/at-work/tech-careers/go-language-tops-list-of-indemand-software-skills

Engineers love Python, JavaScript, and Java. Employers, on the other hand, shine their light on Go.

That’s the takeaway of the Hottest Coding Languages section of job site Hired’s annual State of Software Engineers report. Engineers experienced with Go received an average of 9.2 interview requests, making it the most in-demand language. Worldwide, Go’s popularity among employers was followed by Scala and Ruby. That’s not great news for engineers, who ranked Ruby number one in least loved languages, followed by PHP and Objective-C.

There are regional differences in employer interest. In the San Francisco Bay Area and Toronto, Scala rules; in London, it’s TypeScript. A roundup of regional favorites, along with the worldwide rankings, is in the chart below.

(To compile its data, Hired reviewed 400,000 interview requests from 10,000 companies made to 98,000 job seekers throughout 2019.)

Security updates for Tuesday

Post Syndicated from ris original https://lwn.net/Articles/812763/rss

Security updates have been issued by Arch Linux (systemd and thunderbird), Debian (clamav, libgd2, php7.3, spamassassin, and webkit2gtk), Fedora (kernel, kernel-headers, and sway), Mageia (firefox, kernel-linus, mutt, python-pillow, sphinx, thunderbird, and webkit2), openSUSE (firefox, nextcloud, and thunderbird), Oracle (firefox and ksh), Red Hat (curl, java-1.7.0-openjdk, kernel, and ruby), Scientific Linux (firefox and ksh), SUSE (sudo and xen), and Ubuntu (clamav, php5, php7.0, php7.2, php7.3, postgresql-10, postgresql-11, and webkit2gtk).
