
AWS Bill Simplification – Consolidated CloudWatch Charges

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-bill-simplification-consolidated-cloudwatch-charges/

The bill that you receive for your use of AWS in July will include a change in the way that Amazon CloudWatch charges are presented. The CloudWatch team made this change in order to make your bill simpler and easier to understand.

Consolidating Charges
In the past, charges for your usage of CloudWatch were split between two sections of your bill. For historical reasons, the charges for CloudWatch Alarms, CloudWatch Metrics, and calls to the CloudWatch API were reported in the Elastic Compute Cloud (EC2) detail section, while charges for CloudWatch Logs and CloudWatch Dashboards were reported in the CloudWatch detail section.

We have received feedback that splitting the charges across two sections of the bill made it difficult to locate and understand the entire set of monitoring charges. In order to address this issue, we are moving the charges that were formerly listed in the Elastic Compute Cloud (EC2) detail section to the CloudWatch detail section. We are making the same change to the detailed billing report, moving the affected charges from the AmazonEC2 product code to the AmazonCloudWatch product code and changing the product name to AmazonCloudWatch accordingly. This change does not affect your overall bill; it simply consolidates all of the charges for the use of CloudWatch in one section.

Billing Metric
The CloudWatch billing metric named Estimated Charges can be viewed as a Total Estimated Charge, or broken down By Service.

The total will not change. However, as noted above, the charges that formerly had AmazonEC2 as the ServiceName dimension will now have it set to AmazonCloudWatch.

You may need to adjust the thresholds on your billing alarms as a result.
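If you manage your billing alarms programmatically, the adjustment can be a single PutMetricAlarm call pointed at the new dimension value. Here is a minimal sketch using the AWS SDK for .NET (any SDK or the console works equally well); the alarm name, threshold, and notification topic are placeholders, not prescriptions:

using System.Collections.Generic;
using Amazon;
using Amazon.CloudWatch;
using Amazon.CloudWatch.Model;

// Billing metrics are published in US East (Northern Virginia)
var cloudWatch = new AmazonCloudWatchClient(RegionEndpoint.USEast1);

cloudWatch.PutMetricAlarmAsync(new PutMetricAlarmRequest
{
    AlarmName = "cloudwatch-estimated-charges",   // placeholder name
    Namespace = "AWS/Billing",
    MetricName = "EstimatedCharges",
    Dimensions = new List<Dimension>
    {
        // Charges formerly reported under AmazonEC2 now carry this value
        new Dimension { Name = "ServiceName", Value = "AmazonCloudWatch" },
        new Dimension { Name = "Currency", Value = "USD" }
    },
    Statistic = Statistic.Maximum,
    Period = 21600,                               // six hours, in seconds
    EvaluationPeriods = 1,
    Threshold = 50.0,                             // placeholder dollar amount
    ComparisonOperator = ComparisonOperator.GreaterThanOrEqualToThreshold,
    AlarmActions = new List<string> { "<arn of your notification topic>" }
}).Wait();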

Once again, your total AWS bill will not change. You will begin to see the consolidated charges for CloudWatch in your AWS bill for July 2017.

Jeff;

 

From Idea to Launch: Getting Your First Customers

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-get-your-first-customers/


After deciding to build an unlimited backup service and developing our own storage platform, the next step was to get customers and feedback. Not all customers are created equal. Let’s talk about the types, and when and how to attract them.

How to Get Your First Customers

First Step – Don’t Launch Publicly
Launch when you’re ready for the judgments of people who don’t know you at all. Until then, don’t launch. Sign up users and customers that you know, that you can trust to cut you some slack (while providing you feedback), or, at minimum, for whom you can set expectations. For months the Backblaze website was a single page with no ability to get the product and minimal info on what it would be. This is not to counter the Lean Startup ‘iterate quickly with customer feedback’ advice. Rather, this is an acknowledgement that there are different types of feedback required based on your development stage.

Sign Up Your Friends
We knew all of our first customers; they were friends, family, and previous co-workers. Many knew what we were up to and were excited to help us. No magic marketing or tech savviness was required to reach them – we just asked that they try the service. We asked them to provide us feedback on their experience and collected it through email and conversations. While the feedback wasn’t unbiased, it was nonetheless wide-ranging, real, and often insightful. These people were willing to spend time carefully thinking about their feedback and delving deeper into the conversations.

Broaden to Beta
Unless you’re famous or your service costs $1 million per customer, you’ll probably need to expand quickly beyond your friends to build a business – and to get broader feedback. Our next step was to broaden the customer base to beta users.

Opening up the service in beta provides three benefits:

  1. Air cover for the early warts. There are going to be issues, bugs, unnecessarily complicated user flows, and poorly worded text. Beta tells people, “We don’t consider the product ‘done’ and you should expect some of these issues. Please be patient with us.”
  2. A request for feedback. Some people always provide feedback, but beta communicates that you want it.
  3. An awareness opportunity. Opening up in beta provides an early (but not only) opportunity to have an announcement and build awareness.

Pitching Beta to Press
Not all press cares about, or is even willing to cover, beta products. Much of the mainstream press wants to write about services that are fully live, have scale, and are important in the marketplace. However, there are a number of sites that like to cover the leading edge – and that means covering betas. TechCrunch, Ars Technica, and SimpleHelp covered our initial private beta launch. I’ll go into the details of how to work with the press to cover your announcements in a post next month.

Private vs. Public Beta
Both private and public beta provide all three of the benefits above. The difference between the two is that private betas are much more controlled, whereas public ones bring in more users. But this isn’t an either/or – I recommend doing both.

Private Beta
For our original beta in 2008, we decided that we were comfortable with about 1,000 users subscribing to our service. That would provide us with a healthy amount of feedback and get some early adoption, while not overwhelming us or our server capacity, and, equally important, not causing cash flow issues from having to buy more equipment. So we limited sign-ups to the first 1,000 people; after that, we would shut off sign-ups for a while.

But how do you even get 1,000 people to sign up for your service? In our case, get some major publications to write about our beta. (Note: In a future post I’ll explain exactly how to find and reach out to writers. Sign up to receive all of the entrepreneurial posts in this series.)

Public Beta
For our original service (computer backup), we did not have a public beta; but when we launched Backblaze B2, we had a private and then a public beta. The private beta allowed us to work out early kinks, while the public beta brought us a more varied set of use cases. In public beta, there is no cap on the number of users that may try the service.

While this is a first-class problem to have, if your service is flooded and stops working, it’s still a problem. Think through what you will do if that happens. In our early days, when our system could get overwhelmed by volume, we had a static web page hosted with a different registrar that wouldn’t let customers sign up but would tell them when our service would be open again. When we reached a critical volume level we would redirect to it in order to at least provide status for when we could accept more customers.

Collect Feedback
Since one of the goals of betas is to get feedback, we made sure that we had our email addresses clearly presented on the site so users could send us thoughts. We were most interested in broad qualitative feedback on users’ experience, so all emails went to an internal mailing list that would be read by everyone at Backblaze.

For our B2 public and private betas, we also added an optional short survey to the sign-up process. In order to be considered for the private beta you had to fill the survey out, though we found that 80% of users continued to fill out the survey even when it was not required. This survey had both closed-ended questions (“how much data do you have?”) and open-ended ones (“what do you want to use cloud storage for?”).

BTW, despite us getting a lot of feedback now via our support team, Twitter, and marketing surveys, we are always open to more – you can email me directly at gleb.budman {at} backblaze.com.

Don’t Throw Away Users
Initially our backup service was available only on Windows, but we had an email sign-up list for people who wanted it for their Mac. This provided us with a sense of market demand and a ready list of folks who could be beta users and early adopters when we had a Mac version. Have a service targeted at doctors but lawyers are expressing interest? Capture that.

Product Launch

When
The first question is “when” to launch. Presuming your service is in ‘public beta’, what is the advantage of moving out of beta and into a “version 1.0”, “gold”, or “public availability”? That depends on your service and customer base. Some services fly through public beta. Gmail, on the other hand, was (in)famous for being in beta for 5 years, despite having over 100 million users.

The term beta says to users, “give us some leeway, but feel free to use the service”. That’s fine for many consumer apps and will have near zero impact on them. However, services aimed at businesses and government will often not be adopted with a beta label as the enterprise customers want to know the company feels the service is ‘ready’. While Backblaze started out as a purely consumer service, because it was a data backup service, it was important for customers to trust that the service was ready.

No product is bug-free. But from a product readiness perspective, the nomenclature should also be a reflection of the quality of the product. You can launch a product with one feature that works well out of beta. But a product with fifty features on which half the users will bump into problems should likely stay in beta. The customer feedback, surveys, and your own internal testing should guide you in determining this quality during the beta. Be careful about “we’ve only seen that one time” or “I haven’t been able to reproduce that on my machine”; those issues are likely to scale with customers when you launch.

How
Launching out of beta can be as simple as removing the beta label from the website/product. However, this can be a great time to reach out to press, write a blog post, and send an email announcement to your customers.

Consider thanking your beta testers somehow; can they get some feature turned on for free, an extension of their trial, or premium support? If nothing else, remember to thank them for their feedback. Users that signed up during your beta are likely the ones who will propel your service. They had the need and interest to both be early adopters and deal with bugs. They are likely the key to getting 1,000 true fans.

The Beginning
The title of this post was “Getting your first customers”, because getting to launch may feel like the peak of your journey when you’re pre-launch, but it really is just the beginning. It’s a step along the journey of building your business. If your launch is wildly successful, enjoy it, work to build on the momentum, but don’t lose track of building your business. If your launch is a dud, go out for a coffee with your team, say “well that sucks”, and then get back to building your business. You can learn a tremendous amount from your early customers, and they can become your biggest fans, but the success of your business will depend on what you continue to do the months and years after your launch.


Kim Dotcom Opposes US’s “Fugitive” Claims at Supreme Court

Post Syndicated from Ernesto original https://torrentfreak.com/kim-dotcom-opposes-uss-fugitive-claims-supreme-court-170622/

When Megaupload and Kim Dotcom were raided five years ago, the authorities seized millions of dollars in cash and other property.

The US government claimed the assets were obtained through copyright crimes, so it went after the bank accounts, cars, and other seized possessions of the Megaupload defendants.

Kim Dotcom and his colleagues were branded as “fugitives” and the Government won its case. Dotcom’s legal team quickly appealed this verdict, but lost once more at the Fourth Circuit appeals court.

A few weeks ago Dotcom and his former colleagues petitioned the Supreme Court to take on the case.

They don’t see themselves as “fugitives” and want the assets returned. The US Government opposed the request, but according to a new reply filed by Megaupload’s legal team, the US Government ignores critical questions.

The Government has a “vested financial stake” in maintaining the current situation, they write, which allows the authorities to use their “fugitive” claims as an offensive weapon.

“Far from being directed towards persons who have fled or avoided our country while claiming assets in it, fugitive disentitlement is being used offensively to strip foreigners of their assets abroad,” the reply brief (pdf) reads.

According to Dotcom’s lawyers, there are several conflicting opinions from lower courts which should be clarified by the Supreme Court. That Dotcom and his colleagues have decided to fight their extradition in New Zealand doesn’t warrant the seizure of their assets.

“Absent review, forfeiture of tens of millions of dollars will be a fait accompli without the merits being reached,” they write, adding that this is all the more concerning because the US Government’s criminal case may not be as strong as claimed.

“This is especially disconcerting because the Government’s criminal case is so dubious. When the Government characterizes Petitioners as ‘designing and profiting from a system that facilitated wide-scale copyright infringement,’ it continues to paint a portrait of secondary copyright infringement, which is not a crime.”

The defense team cites several issues that warrant review and urges the Supreme Court to hear the case. If not, the Government will effectively be able to use asset seizures as a pressure tool to urge foreign defendants to come to the US.

“If this stands, the Government can weaponize fugitive disentitlement in order to claim assets abroad,” the reply brief reads.

“It is time for the Court to speak to the Questions Presented. Over the past two decades it has never had a better vehicle to do so, nor is any such vehicle elsewhere in sight,” Dotcom’s lawyers add.

Whether the Supreme Court accepts or denies the case will likely be decided in the weeks to come.


Security updates for Thursday

Post Syndicated from jake original https://lwn.net/Articles/726234/rss

Security updates have been issued by Arch Linux (lxterminal, lxterminal-gtk3, openvpn, and pcmanfm), CentOS (thunderbird), Debian (jython, spip, tomcat7, and tomcat8), openSUSE (openvpn), Oracle (thunderbird), Slackware (openvpn), SUSE (openvpn), and Ubuntu (kernel, linux-lts-trusty, nss, and valgrind).

HiveMQ 3.2.5 released

Post Syndicated from The HiveMQ Team original http://www.hivemq.com/blog/hivemq-3-2-5-released/

The HiveMQ team is pleased to announce the availability of HiveMQ 3.2.5. This is a maintenance release for the 3.2 series and brings the following improvements:

  • Fixed an issue that caused cluster nodes to not be operational for a long time after start up
  • Fixed an issue that could cause wildcard (+ operator) subscriptions to get lost
  • Fixed an issue that could cause QoS=1 messages to get lost when using cleanSession=false and shared subscription groups
  • Fixed an issue that could cause the current session count metric to be incorrect
  • Fixed an issue that could lead to QoS=0 messages being resent incorrectly when using shared subscriptions
  • Fixed various issues that could cause spurious exceptions to be logged
  • Fixed an issue that could lead to an increase in memory usage when using retained messages
  • Improved documentation
  • Improved logging
  • Performance improvements

You can download the new HiveMQ version here.

We strongly recommend upgrading if you are a HiveMQ 3.2.x user.

Have a great day,
The HiveMQ Team

Vranken: The OpenVPN post-audit bug bonanza

Post Syndicated from corbet original https://lwn.net/Articles/726157/rss

Guido Vranken describes his efforts to fuzz-test OpenVPN and the bug reports that resulted. “Most of these issues were found through fuzzing. I hate admitting it, but my chops in the arcane art of reviewing code manually, acquired through grueling practice, are dwarfed by the fuzzer in one fell swoop; the mortal’s mind can only retain and comprehend so much information at a time, and for programs that perform long cycles of complex, deeply nested operations it is simply not feasible to expect a human to perform an encompassing and reliable verification.”

DynamoDB Accelerator (DAX) Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/dynamodb-accelerator-dax-now-generally-available/

Earlier this year I told you about Amazon DynamoDB Accelerator (DAX), a fully-managed caching service that sits in front of (logically speaking) your Amazon DynamoDB tables. DAX returns cached responses in microseconds, making it a great fit for eventually-consistent read-intensive workloads. DAX supports the DynamoDB API, and is seamless and easy to use. As a managed service, you simply create your DAX cluster and use it as the target for your existing reads and writes. You don’t have to worry about patching, cluster maintenance, replication, or fault management.

Now Generally Available
Today I am pleased to announce that DAX is now generally available. We have expanded DAX into additional AWS Regions and used the preview time to fine-tune performance and availability:

Now in Five Regions – DAX is now available in the US East (Northern Virginia), EU (Ireland), US West (Oregon), Asia Pacific (Tokyo), and US West (Northern California) Regions.

In Production – Our preview customers report that they are using DAX in production, that they love how easy it was to add DAX to their applications, and that their apps now run 10x faster.

Getting Started with DAX
As I outlined in my earlier post, it is easy to use DAX to accelerate your existing DynamoDB applications. You simply create a DAX cluster in the desired region, update your application to reference the DAX SDK for Java (the calls are the same; this is a drop-in replacement), and configure the SDK to use the endpoint of your cluster. As a read-through/write-through cache, DAX seamlessly handles all of the DynamoDB read/write APIs.

We are working on SDK support for other languages, and I will share additional information as it becomes available.

DAX Pricing
You pay for each node in the cluster (see the DynamoDB Pricing page for more information) on a per-hour basis, with prices starting at $0.269 per hour in the US East (Northern Virginia) and US West (Oregon) regions. With DAX, each of the nodes in your cluster serves as a read target and as a failover target for high availability. The DAX SDK is cluster aware and will issue round-robin requests to all nodes in the cluster so that you get to make full use of the cluster’s cache resources.

Because DAX can easily handle sudden spikes in read traffic, you may be able to reduce the amount of provisioned throughput for your tables, resulting in an overall cost savings while still returning results in microseconds.

Jeff;

 

Is Continuing to Patch Windows XP a Mistake?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/is_continuing_t.html

Last week, Microsoft issued a security patch for Windows XP, a 16-year-old operating system that Microsoft officially no longer supports. Last month, Microsoft issued a Windows XP patch for the vulnerability used in WannaCry.

Is this a good idea? This 2014 essay argues that it’s not:

The zero-day flaw and its exploitation is unfortunate, and Microsoft is likely smarting from government calls for people to stop using Internet Explorer. The company had three ways it could respond. It could have done nothing — stuck to its guns, maintained that the end of support means the end of support, and encouraged people to move to a different platform. It could also have relented entirely, extended Windows XP’s support life cycle for another few years and waited for attrition to shrink Windows XP’s userbase to irrelevant levels. Or it could have claimed that this case is somehow “special,” releasing a patch while still claiming that Windows XP isn’t supported.

None of these options is perfect. A hard-line approach to the end-of-life means that there are people being exploited that Microsoft refuses to help. A complete about-turn means that Windows XP will take even longer to flush out of the market, making it a continued headache for developers and administrators alike.

But the option Microsoft took is the worst of all worlds. It undermines efforts by IT staff to ditch the ancient operating system and undermines Microsoft’s assertion that Windows XP isn’t supported, while doing nothing to meaningfully improve the security of Windows XP users. The upside? It buys those users at best a few extra days of improved security. It’s hard to say how that was possibly worth it.

This is a hard trade-off, and it’s going to get much worse with the Internet of Things. Here’s me:

The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn’t true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.

At least Microsoft has security engineers on staff that can write a patch for Windows XP. There will be no one able to write patches for your 16-year-old thermostat and refrigerator, even assuming those devices can accept security patches.

A Stack Clash disclosure post-mortem

Post Syndicated from corbet original https://lwn.net/Articles/726137/rss

For those who are curious about how the community deals with a serious
vulnerability, Solar Designer’s description of the embargo process around
the “Stack Clash” issue (and his unhappiness with it) is worth
a read. “Qualys first informed the distros list about this upcoming set of issues
on May 3. This initial notification didn’t say Stack Clash nor anything
like that, but merely expressed intent to disclose the issues and
concern that the list’s maximum embargo duration of 14 to 19 days might
not be sufficient in this case. In the resulting discussion, I agreed
to consider extending the embargo beyond list policy should there be
convincing reasons for that. In retrospect, I think I shouldn’t have
agreed to that.”

MPAA & RIAA Demand Tough Copyright Standards in NAFTA Negotiations

Post Syndicated from Andy original https://torrentfreak.com/mpaa-riaa-demand-tough-copyright-standards-in-nafta-negotiations-170621/

The North American Free Trade Agreement (NAFTA) between the United States, Canada, and Mexico was negotiated more than 25 years ago. With a quarter of a century of developments to contend with, the United States wants to modernize.

“While our economy and U.S. businesses have changed considerably over that period, NAFTA has not,” the government says.

With this in mind, the US requested comments from interested parties seeking direction for negotiation points. With those comments now in, groups like the MPAA and RIAA have been making their positions known. It’s no surprise that intellectual property enforcement is high on the agenda.

“Copyright is the lifeblood of the U.S. motion picture and television industry. As such, MPAA places high priority on securing strong protection and enforcement disciplines in the intellectual property chapters of trade agreements,” the MPAA writes in its submission.

“Strong IPR protection and enforcement are critical trade priorities for the music industry. With IPR, we can create good jobs, make significant contributions to U.S. economic growth and security, invest in artists and their creativity, and drive technological innovation,” the RIAA notes.

While both groups have numerous demands, it’s clear that each seeks an environment where not only infringers but also Internet platforms and services can be held liable.

For the RIAA, there is a big focus on the so-called ‘Value Gap’, a phenomenon found on user-uploaded content sites like YouTube that are able to offer infringing content while avoiding liability due to Section 512 of the DMCA.

“Today, user-uploaded content services, which have developed sophisticated on-demand music platforms, use this as a shield to avoid licensing music on fair terms like other digital services, claiming they are not legally responsible for the music they distribute on their site,” the RIAA writes.

“Services such as Apple Music, TIDAL, Amazon, and Spotify are forced to compete with services that claim they are not liable for the music they distribute.”

But if sites like YouTube are exercising their rights while acting legally under current US law, how can partners Canada and Mexico do any better? For the RIAA, that can be achieved by holding them to standards envisioned by the group when the DMCA was passed, not how things have panned out since.

Demanding that negotiators “protect the original intent” of safe harbor, the RIAA asks that a “high-level and high-standard service provider liability provision” is pursued. This, the music group says, should only be available to “passive intermediaries without requisite knowledge of the infringement on their platforms, and inapplicable to services actively engaged in communicating to the public.”

In other words, make sure that YouTube and similar sites won’t enjoy the same level of safe harbor protection as they do today.

The RIAA also requires any negotiated safe harbor provisions in NAFTA to be flexible in the event that the DMCA is tightened up in response to the ongoing safe harbor rules study.

In any event, NAFTA should not “support interpretations that no longer reflect today’s digital economy and threaten the future of legitimate and sustainable digital trade,” the RIAA states.

For the MPAA, Section 512 is also perceived as a problem. While noting that the original intent was to foster a system of shared responsibility between copyright owners and service providers, the MPAA says courts have subsequently let copyright holders down. Like the RIAA, the MPAA also suggests that Canada and Mexico can be held to higher standards.

“We recommend a new approach to this important trade policy provision by moving to high-level language that establishes intermediary liability and appropriate limitations on liability. This would be fully consistent with U.S. law and avoid the same misinterpretations by policymakers and courts overseas,” the MPAA writes.

“In so doing, a modernized NAFTA would be consistent with Trade Promotion Authority’s negotiating objective of ‘ensuring that standards of protection and enforcement keep pace with technological developments’.”

The MPAA also has some specific problems with Mexico, including unauthorized camcording. The Hollywood group says that 85 illicit audio and video recordings of films were linked to Mexican theaters in 2016. However, recording is not currently a criminal offense in Mexico.

Another issue for the MPAA is that criminal sanctions for commercial scale infringement are only available if the infringement is for profit.

“This has hampered enforcement against the above-discussed camcording problem but also against online infringement, such as peer-to-peer piracy, that may be on a scale that is immensely harmful to U.S. rightsholders but nonetheless occur without profit by the infringer,” the MPAA writes.

“The modernized NAFTA like other U.S. bilateral free trade agreements must provide for criminal sanctions against commercial scale infringements without proof of profit motive.”

Also of interest are the MPAA’s complaints against Mexico’s telecoms laws. Unlike in the US and many countries in Europe, Mexico’s ISPs are forbidden to hand out their customers’ personal details to rights holders looking to sue. This, the MPAA says, needs to change.

The submissions from the RIAA and MPAA can be found here and here (pdf).


Building Loosely Coupled, Scalable, C# Applications with Amazon SQS and Amazon SNS

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/building-loosely-coupled-scalable-c-applications-with-amazon-sqs-and-amazon-sns/

 
Stephen Liedig, Solutions Architect

 

One of the many challenges professional software architects and developers face is how to make cloud-native applications scalable, fault-tolerant, and highly available.

Fundamental to your project success is understanding the importance of making systems highly cohesive and loosely coupled. That means considering the multi-dimensional facets of system coupling to support the distributed nature of the applications that you are building for the cloud.

By that, I mean addressing not only the application-level coupling (managing incoming and outgoing dependencies), but also considering the impacts of platform, spatial, and temporal coupling of your systems. Platform coupling relates to the interoperability, or lack thereof, of heterogeneous system components. Spatial coupling deals with managing components at a network topology or protocol level. Temporal, or runtime, coupling refers to the ability of a component within your system to do any kind of meaningful work while it is performing a synchronous, blocking operation.

The AWS messaging services, Amazon SQS and Amazon SNS, help you deal with these forms of coupling by providing mechanisms for:

  • Reliable, durable, and fault-tolerant delivery of messages between application components
  • Logical decomposition of systems and increased autonomy of components
  • Creating unidirectional, non-blocking operations, temporarily decoupling system components at runtime
  • Decreasing the dependencies that components have on each other through standard communication and network channels

Following on the recent topic, Building Scalable Applications and Microservices: Adding Messaging to Your Toolbox, in this post, I look at some of the ways you can introduce SQS and SNS into your architectures to decouple your components, and show how you can implement them using C#.

Walkthrough

To illustrate some of these concepts, consider a web application that processes customer orders. As good architects and developers, you have followed best practices and made your application scalable and highly available. Your solution included implementing load balancing, dynamic scaling across multiple Availability Zones, and persisting orders in a Multi-AZ Amazon RDS database instance, as in the following diagram.


In this example, the application is responsible for handling and persisting the order data, as well as dealing with increases in traffic for popular items.

One potential point of vulnerability in the order processing workflow is in saving the order in the database. The business expects that every order will be persisted into the database. However, any potential deadlock, race condition, or network issue could cause the persistence of the order to fail. Then the order is lost with no recourse to restore it.

With good logging capability, you may be able to identify when an error occurred and which customer’s order failed. But that wouldn’t allow you to “restore” the transaction, and by that stage, your customer is no longer your customer.

As illustrated in the following diagram, introducing an SQS queue helps improve your ordering application. Using the queue isolates the processing logic into its own component and runs it in a separate process from the web application. This, in turn, allows the system to be more resilient to spikes in traffic, while allowing work to be performed only as fast as necessary in order to manage costs.


In addition, you now have a mechanism for persisting orders as messages (with the queue acting as a temporary database), and have moved the scope of your transaction with your database further down the stack. In the event of an application exception or transaction failure, this ensures that the order processing can be retried or redirected to the Amazon SQS Dead Letter Queue (DLQ), for re-processing at a later stage. (See the recent post, Using Amazon SQS Dead-Letter Queues to Control Message Failure, for more information on dead-letter queues.)

Scaling the order processing nodes

This change allows you now to scale the web application frontend independently from the processing nodes. The frontend application can continue to scale based on metrics such as CPU usage, or the number of requests hitting the load balancer. Processing nodes can scale based on the number of orders in the queue. Here is an example of scale-in and scale-out alarms that you would associate with the scaling policy.

Scale-out Alarm

aws cloudwatch put-metric-alarm --alarm-name AddCapacityToCustomerOrderQueue \
  --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
  --statistic Average --period 300 --threshold 3 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --dimensions Name=QueueName,Value=customer-orders \
  --evaluation-periods 2 --alarm-actions <arn of the scale-out autoscaling policy>

Scale-in Alarm

aws cloudwatch put-metric-alarm --alarm-name RemoveCapacityFromCustomerOrderQueue \
  --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
  --statistic Average --period 300 --threshold 1 \
  --comparison-operator LessThanOrEqualToThreshold \
  --dimensions Name=QueueName,Value=customer-orders \
  --evaluation-periods 2 --alarm-actions <arn of the scale-in autoscaling policy>

In the above example, use the ApproximateNumberOfMessagesVisible metric to discover the queue length and drive the scaling policy of the Auto Scaling group. Another useful metric is ApproximateAgeOfOldestMessage, when applications have time-sensitive messages and developers need to ensure that messages are processed within a specific time period.

Scaling the order processing implementation

On top of scaling at an infrastructure level using Auto Scaling, make sure to take advantage of the processing power of your Amazon EC2 instances by using as many of the available threads as possible. There are several ways to implement this. In this post, we build a Windows service that uses the BackgroundWorker class to process the messages from the queue.

Here’s a closer look at the implementation. In the first section of the consuming application, use a loop to continually poll the queue for new messages, and construct a ReceiveMessageRequest variable.

public static void PollQueue()
{
    while (_running)
    {
        Task<ReceiveMessageResponse> receiveMessageResponse;

        // Pull messages off the queue
        using (var sqs = new AmazonSQSClient())
        {
            const int maxMessages = 10;  // 1-10

            //Receiving a message
            var receiveMessageRequest = new ReceiveMessageRequest
            {
                // Get URL from Configuration
                QueueUrl = _queueUrl, 
                // The maximum number of messages to return. 
                // Fewer messages might be returned. 
                MaxNumberOfMessages = maxMessages, 
                // A list of attributes that need to be returned with message.
                AttributeNames = new List<string> { "All" },
                // Enable long polling. 
                // Time to wait for message to arrive on queue.
                WaitTimeSeconds = 5 
            };

            receiveMessageResponse = sqs.ReceiveMessageAsync(receiveMessageRequest);
        }

The WaitTimeSeconds property of the ReceiveMessageRequest specifies the duration (in seconds) that the call waits for a message to arrive in the queue before returning a response to the calling application. There are a few benefits to using long polling:

  • It reduces the number of empty responses by allowing SQS to wait until a message is available in the queue before sending a response.
  • It eliminates false empty responses by querying all (rather than a limited number) of the servers.
  • It returns messages as soon as any message becomes available.

For more information, see Amazon SQS Long Polling.

After you have returned messages from the queue, you can start to process them by looping through each message in the response and invoking a new BackgroundWorker thread.

// Process messages
if (receiveMessageResponse.Result.Messages != null)
{
    foreach (var message in receiveMessageResponse.Result.Messages)
    {
        Console.WriteLine("Received SQS message, starting worker thread");

        // Create background worker to process message
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += (obj, e) => ProcessMessage(message);
        worker.RunWorkerAsync();
    }
}
else
{
    Console.WriteLine("No messages on queue");
}

The event handler, ProcessMessage, is where you implement business logic for processing orders. It is important to have a good understanding of how long a typical transaction takes, so you can set a message VisibilityTimeout that is long enough to complete your operation. If order processing takes longer than the specified timeout period, the message becomes visible on the queue again. Other nodes may then pick it up and process the same order twice, leading to unintended consequences.
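If a particular order legitimately needs more time, one option (not shown in the code above) is to extend the timeout for just that message via the SQS ChangeMessageVisibility API. Here is a minimal sketch, reusing _queueUrl and message from the surrounding example, with 120 seconds as an assumed value:

using (var sqs = new AmazonSQSClient())
{
    // Extend the visibility timeout for this one message so other nodes
    // do not receive it while processing is still in progress.
    sqs.ChangeMessageVisibilityAsync(new ChangeMessageVisibilityRequest
    {
        QueueUrl = _queueUrl,
        ReceiptHandle = message.ReceiptHandle,
        VisibilityTimeout = 120 // seconds; pick a value above your worst-case processing time
    });
}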

Handling Duplicate Messages

In order to manage duplicate messages, seek to make your processing application idempotent. In mathematics, idempotent describes a function that produces the same result if it is applied to itself:

f(x) = f(f(x))

No matter how many times you process the same message, the end result is the same (definition from Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, Hohpe and Wolf, 2004).

There are several strategies you could apply to achieve this:

  • Create messages that have inherent idempotent characteristics. That is, they are non-transactional in nature and are unique at a specified point in time. Rather than saying “place new order for Customer A,” which adds a duplicate order to the customer, use “place order <orderid> on <timestamp> for Customer A,” which creates a single order no matter how often it is persisted.
  • Deliver your messages via an Amazon SQS FIFO queue, which provides the benefits of message sequencing, but also mechanisms for content-based deduplication. You can deduplicate using the MessageDeduplicationId property on the SendMessage request or by enabling content-based deduplication on the queue, which generates a hash for MessageDeduplicationId, based on the content of the message, not the attributes.
var sendMessageRequest = new SendMessageRequest
{
    QueueUrl = _queueUrl,
    MessageBody = JsonConvert.SerializeObject(order),
    // Messages sharing a group ID are delivered in order; a fresh GUID
    // per message means no ordering relationship between orders.
    MessageGroupId = Guid.NewGuid().ToString("N"),
    // Explicit deduplication ID; can be omitted if the queue has
    // content-based deduplication enabled.
    MessageDeduplicationId = Guid.NewGuid().ToString("N")
};
  • If using SQS FIFO queues is not an option, keep a log of the attributes of all messages processed for a specified period of time, as an alternative to message deduplication on the receiving end (see the sketch after this list). Verifying the existence of the message in the log before processing it adds computational overhead. This can be minimized through low-latency persistence solutions such as Amazon DynamoDB. Bear in mind that this solution is dependent on the successful, distributed transaction of the message and the message log.
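As one possible shape for such a message log, here is a minimal sketch that records each message ID in DynamoDB with a conditional write, so the duplicate check and the record happen as one atomic operation. The ProcessedMessages table name (with MessageId as its partition key) is an assumption for illustration:

// Requires: using Amazon.DynamoDBv2; using Amazon.DynamoDBv2.Model;
private static bool TryRecordMessage(Message message)
{
    using (var dynamo = new AmazonDynamoDBClient())
    {
        var putRequest = new PutItemRequest
        {
            TableName = "ProcessedMessages", // assumed table, MessageId as partition key
            Item = new Dictionary<string, AttributeValue>
            {
                { "MessageId", new AttributeValue { S = message.MessageId } },
                { "ProcessedAt", new AttributeValue { S = DateTime.UtcNow.ToString("o") } }
            },
            // The write is rejected if this message ID was already logged
            ConditionExpression = "attribute_not_exists(MessageId)"
        };

        try
        {
            dynamo.PutItemAsync(putRequest).Wait();
            return true;  // first delivery; safe to process
        }
        catch (AggregateException ex) when (ex.InnerException is ConditionalCheckFailedException)
        {
            return false; // duplicate delivery; skip processing
        }
    }
}

You could call TryRecordMessage at the top of ProcessMessage and return early when it reports a duplicate.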

Handling exceptions

Because of the distributed nature of SQS queues, SQS does not automatically delete a message after you receive it. Therefore, you must explicitly delete the message from the queue after processing it, using the message ReceiptHandle property (see the following code example).

However, if at any stage you have an exception, avoid handling it as you normally would. The intention is to make sure that the message ends back on the queue, so that you can gracefully deal with intermittent failures. Instead, log the exception to capture diagnostic information, and swallow it.

By not explicitly deleting the message from the queue, you can take advantage of the VisibilityTimeout behavior described earlier. Gracefully handle the message processing failure and make the unprocessed message available to other nodes to process.

In the event that subsequent retries fail, SQS automatically moves the message to the configured DLQ after the configured number of receives has been reached. You can further investigate why the order process failed. Most importantly, the order has not been lost, and your customer is still your customer.

private static void ProcessMessage(Message message)
{
    using (var sqs = new AmazonSQSClient())
    {
        try
        {
            Console.WriteLine("Processing message id: {0}", message.MessageId);

            // Implement messaging processing here
            // Ensure no downstream resource contention (parallel processing)
            // <your order processing logic in here…>
            Console.WriteLine("{0} Thread {1}: {2}", DateTime.Now.ToString("s"), Thread.CurrentThread.ManagedThreadId, message.MessageId);
            
            // Delete the message off the queue. 
            // Receipt handle is the identifier you must provide 
            // when deleting the message.
            var deleteRequest = new DeleteMessageRequest(_queueUrl, message.ReceiptHandle);
            sqs.DeleteMessageAsync(deleteRequest);
            Console.WriteLine("Processed message id: {0}", message.MessageId);

        }
        catch (Exception ex)
        {
            // Do nothing.
            // Swallow exception, message will return to the queue when 
            // visibility timeout has been exceeded.
            Console.WriteLine("Could not process message due to error. Exception: {0}", ex.Message);
        }
    }
}

Using SQS to adapt to changing business requirements

One of the benefits of introducing a message queue is that you can accommodate new business requirements without dramatically affecting your application.

If, for example, the business decided that all orders placed over $5000 are to be handled as a priority, you could introduce a new “priority order” queue. The way the orders are processed does not change. The only significant change to the processing application is to ensure that messages from the “priority order” queue are processed before the “standard order” queue.

The following diagram shows how this logic could be isolated in an “order dispatcher,” whose only purpose is to route order messages to the appropriate queue based on whether the order exceeds $5000. Nothing on the web application or the processing nodes changes other than the target queue to which the order is sent. The rates at which orders are processed can then be tuned by modifying the poll rates and scalability settings that I have already discussed.
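A sketch of that dispatcher logic might look like the following. The Order type, its Total property, and the two queue URL fields are assumptions for illustration; the send call itself mirrors the FIFO example earlier in this post:

private static void DispatchOrder(Order order)
{
    // Orders over $5000 are routed to the priority queue (assumed field:
    // _priorityOrderQueueUrl); everything else goes to the standard queue.
    var targetQueueUrl = order.Total > 5000m
        ? _priorityOrderQueueUrl
        : _standardOrderQueueUrl;

    using (var sqs = new AmazonSQSClient())
    {
        sqs.SendMessageAsync(new SendMessageRequest
        {
            QueueUrl = targetQueueUrl,
            MessageBody = JsonConvert.SerializeObject(order)
        });
    }
}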

Extending the design pattern with Amazon SNS

Amazon SNS supports reliable publish-subscribe (pub-sub) scenarios and push notifications to known endpoints across a wide variety of protocols. It eliminates the need to periodically check or poll for new information and updates. SNS supports:

  • Reliable storage of messages for immediate or delayed processing
  • Publish / subscribe – direct, broadcast, targeted “push” messaging
  • Multiple subscriber protocols – Amazon SQS, HTTP, HTTPS, email, SMS, mobile push, AWS Lambda

With these capabilities, you can provide parallel asynchronous processing of orders in the system and extend it to support any number of different business use cases without affecting the production environment. This is commonly referred to as a “fanout” scenario.

Rather than your web application pushing orders to a queue for processing, send a notification via SNS. The SNS messages are sent to a topic and then replicated and pushed to multiple SQS queues and Lambda functions for processing.
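For the producing side, publishing to a topic is nearly identical to sending to a queue. Here is a minimal sketch using the SNS client from the AWS SDK for .NET; the _orderTopicArn field is an assumption:

// Requires: using Amazon.SimpleNotificationService; using Amazon.SimpleNotificationService.Model;
using (var sns = new AmazonSimpleNotificationServiceClient())
{
    // Publish once; SNS replicates the message to every subscribed
    // SQS queue and Lambda function (the "fanout").
    sns.PublishAsync(new PublishRequest
    {
        TopicArn = _orderTopicArn, // assumed ARN of the orders topic
        Subject = "NewOrder",
        Message = JsonConvert.SerializeObject(order)
    });
}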

As the diagram above shows, you have the development team consuming “live” data as they work on the next version of the processing application, or potentially using the messages to troubleshoot issues in production.

Marketing is consuming all order information, via a Lambda function that has subscribed to the SNS topic, inserting the records into an Amazon Redshift warehouse for analysis.

All of this, of course, is happening without affecting your order processing application.

Summary

While I haven’t dived deep into the specifics of each service, I have discussed how these services can be applied at an architectural level to build loosely coupled systems that facilitate multiple business use cases. I’ve also shown you how to use infrastructure and application-level scaling techniques, so you can get the most out of your EC2 instances.

One of the many benefits of using these managed services is how quickly and easily you can implement powerful messaging capabilities in your systems, and lower the capital and operational costs of managing your own messaging middleware.

Using Amazon SQS and Amazon SNS together can provide you with a powerful mechanism for decoupling application components. This should be part of design considerations as you architect for the cloud.

For more information, see the Amazon SQS Developer Guide and Amazon SNS Developer Guide. You’ll find tutorials on all the concepts covered in this post, and more. To get started, visit the AWS Management Console or pick up the SDK of your choice.

Happy messaging!

US Embassy Threatens to Close Domain Registry Over ‘Pirate Bay’ Domain

Post Syndicated from Andy original https://torrentfreak.com/us-embassy-threatens-to-close-domain-registry-over-pirate-bay-domain-170620/

Domains have become an integral part of the piracy wars and no one knows this better than The Pirate Bay.

The site has burned through numerous domains over the years, with copyright holders and authorities successfully pressurizing registries to destabilize the site.

The latest news on this front comes from the Central American country of Costa Rica, where the local domain registry is having problems with the United States government.

The drama is detailed in a letter to ICANN penned by Dr. Pedro León Azofeifa, President of the Costa Rican Academy of Science, which operates NIC Costa Rica, the registry in charge of local .CR domain names.

Azofeifa’s letter is addressed to ICANN board member Thomas Schneider and pulls no punches. It claims that for the past two years the United States Embassy in Costa Rica has been pressuring NIC Costa Rica to take action against a particular domain.

“Since 2015, the United Estates Embassy in Costa Rica, who represents the interests of the United States Department of Commerce, has frequently contacted our organization regarding the domain name thepiratebay.cr,” the letter to ICANN reads.

“These interactions with the United States Embassy have escalated with time and include great pressure since 2016 that is exemplified by several phone calls, emails, and meetings urging our ccTLD to take down the domain, even though this would go against our domain name policies.”

The letter states that following pressure from the US, the Costa Rican Ministry of Commerce carried out an investigation which concluded that not taking down the domain was in line with best practices that only require suspensions following a local court order. That didn’t satisfy the United States though, far from it.

“The representative of the United States Embassy, Mr. Kevin Ludeke, Economic Specialist, who claims to represent the interests of the US Department of Commerce, has mentioned threats to close our registry, with repeated harassment regarding our practices and operation policies,” the letter to ICANN reads.

Ludeke is indeed listed on the US Embassy site for Costa Rica. He’s also referenced in a 2008 diplomatic cable leaked previously by Wikileaks. Contacted via email, Ludeke did not immediately respond to TorrentFreak’s request for comment.

Extract from the letter to ICANN

Surprisingly, Azofeifa says the US representative then got personal, making negative comments towards his Executive Director, “based on no clear evidence or statistical data to support his claims, as a way to pressure our organization to take down the domain name without following our current policies.”

Citing the Tunis Agenda for the Information Society of 2005, Azofeifa asserts that “policy authority for Internet-related public policy issues is the sovereign right of the States,” which in Costa Rica’s case means that there must be “a final judgment from the Courts of Justice of the Republic of Costa Rica” before the registry will suspend a domain.

But it seems legal action was not the preferred route of the US Embassy. Demanding that NIC Costa Rica take unilateral action, Mr. Ludeke continued with “pressure and harassment to take down the domain name without its proper process and local court order.”

Azofeifa’s letter to ICANN, which is cc’d to Stafford Fitzgerald Haney, United States Ambassador to Costa Rica and various people in the Costa Rican Ministry of Commerce, concludes with a request for suggestions on how to deal with the matter.

While the response should prove very interesting, none of the parties involved appear to have noticed that ThePirateBay.cr isn’t officially connected to The Pirate Bay.

The domain and associated site appeared in the wake of the December 2014 shutdown of The Pirate Bay, claiming to be the real deal and even going as far as making fake accounts in the names of famous ‘pirate’ groups including ettv and YIFY.

Today it acts as an unofficial and unaffiliated reverse proxy to The Pirate Bay while presenting the site’s content as its own. It’s also affiliated with a fake KickassTorrents site, Kickass.cd, which to this day claims that it’s a reincarnation of the defunct torrent giant.

But perhaps the most glaring issue in this worrying case is the apparent willingness of the United States to call out Costa Rica for not doing anything about a .CR domain run by third parties, when the real Pirate Bay’s .org domain is under United States’ jurisdiction.

Registered by the Public Interest Registry in Reston, Virginia, ThePirateBay.org is the famous site’s main domain. TorrentFreak asked PIR if anyone from the US government had ever requested action against the domain but at the time of publication, we had received no response.


Security updates for Tuesday

Post Syndicated from ris original https://lwn.net/Articles/725989/rss

Security updates have been issued by Arch Linux (glibc and lib32-glibc), CentOS (glibc and kernel), Debian (eglibc, kernel, and libffi), openSUSE (exim, freeradius-server, libxml2, Mozilla based packages, and xorg-x11-server), Oracle (glibc and kernel), Scientific Linux (glibc and kernel), SUSE (glibc, kernel, and openvpn), and Ubuntu (eglibc, glibc, exim4, libnl3, linux, linux-meta, linux-aws, linux-meta-aws, linux-gke, linux-meta-gke, linux-hwe, linux-meta-hwe, linux-lts-xenial, linux-meta-lts-xenial, linux-meta-raspi2, linux-raspi2, linux-meta-snapdragon, and linux-snapdragon).

Shelfchecker Smart Shelf: build a home library system

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/smart-shelf-home-library/

Are you tired of friends borrowing your books and never returning them? Maybe you’re sure you own 1984 but can’t seem to locate it? Do you find a strange satisfaction in using the supermarket self-checkout simply because of the barcode beep? With the ShelfChecker smart shelf from maker Annelynn described on Instructables, you can be your own librarian and never misplace your books again! Beep!


Harry Potter and the Aesthetically Pleasing Smart Shelf

The ShelfChecker smart shelf

Annelynn built her smart shelf utilising a barcode scanner, LDR light sensors, a Raspberry Pi, plus a few other peripherals and some Python scripts. She has created a fully integrated library checkout system with accompanying NeoPixel location notification for your favourite books.

This build allows you to issue your book-borrowing friends their own IDs and catalogue their usage of your treasured library. On top of that, you’ll be able to use LED NeoPixels to highlight your favourite books, registering their removal and return via light sensor tracking.

Using light sensors for book cataloguing

Once Annelynn had built the shelf, she drilled holes to fit the eight LDRs that would guard her favourite books, and separated them with corner brackets to prevent confusion.


Corner brackets keep the books in place without confusion between their respective light sensors

Due to the eight-channel limit of the MCP3008 ADC chip, the smart shelf can only keep track of eight of your favourite books. But this limitation won’t stop you from cataloguing your entire home library; it simply means you get to pick your ultimate favourites that will occupy the prime real estate on your wall.

Obviously, the light sensors sense light. So when you remove or insert a book, light floods or is blocked from that book’s sensor. The sensor sends this information to the Raspberry Pi. In response, an Arduino controls the NeoPixel strip along the ‘favourites’ shelf to indicate the book’s status.


The book you are looking for is temporarily unavailable

Code your own library

While keeping a close eye on your favourite books, the system also allows creation of a complete library catalogue system with the help of a MySQL database. Users of the library can log into the system with a barcode scanner, and take out or return books recorded in the database guided by an LCD screen attached to the Pi.


Beep!

I won’t go into an extensive how-to on creating MySQL databases here on the blog, because my glamorous assistant Janina has pulled up these MySQL tutorials to help you get started. Annelynn’s GitHub scripts are also packed with useful comments to keep you on track.

Raspberry Pi and books

We love books and libraries. And considering the growing number of Code Clubs and makerspaces moving into libraries across the world, and the host of book-based Pi builds we’ve come across, the love seems to be mutual.

We’ve seen the Raspberry Pi introduced into the Wordery bookseller warehouse, a Pi-powered page-by-page book scanner by Jonathon Duerig, and these brilliant text-to-speech and page turner projects that use our Pis!

Did I say we love books? In fact we love them so much that members of our team have even written a few.*

If you’ve set up any sort of digital making event in a library, have in some way incorporated Raspberry Pi into your own personal book collection, or even managed to recreate the events of your favourite story using digital making, make sure to let us know in the comments below.

* Shameless plug**

Fancy adding some Pi to your home library? Check out these publications from the Raspberry Pi staff:

A Beginner’s Guide to Coding by Marc Scott

Adventures in Raspberry Pi by Carrie Anne Philbin

Getting Started with Raspberry Pi by Matt Richardson

Raspberry Pi User Guide by Eben Upton

The MagPi Magazine, Essentials Guides and Project Books

Make Your Own Game and Build Your Own Website by CoderDojo

** Shameless Pug

 


Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/725822/rss

Security updates have been issued by Arch Linux (chromium, firefox, and thunderbird), Debian (exim4, expat, firefox-esr, glibc, gnutls28, irssi, jython, and kernel), Fedora (dolphin-emu, firefox, golang, mariadb, perl-File-Path, redis, and yara), Mageia (firefox, kodi, and thunderbird), openSUSE (chromium and lynis), and SUSE (mercurial).

Roku Sales Banned in Mexico Over Piracy Concerns

Post Syndicated from Ernesto original https://torrentfreak.com/roku-sales-banned-in-mexico-over-piracy-concerns-170619/

Online streaming piracy is on the rise and many people use dedicated media players to watch it through their regular TV.

While a lot of attention has been on Kodi, there are other players on the market that allow people to do the same. Roku, for example, has been doing very well too.

Like Kodi, Roku media players don’t offer any pirated content out of the box. In fact, they can be hooked up to a wide variety of legal streaming options including HBO Go, Hulu, and Netflix. Still, there is also a market for third-party pirate channels, outside the Roku Channel Store, which turn the boxes into pirate tools.

This pirate angle has now resulted in a ban on Roku sales in Mexico, according to a report in Milenio.

The ban was issued by the Superior Court of Justice of the City of Mexico, following a complaint from Cablevision. The order in question prohibits stores such as Amazon, Liverpool, El Palacio de Hierro, and Sears from importing and selling the devices.

In addition, the court also instructs banks including Banorte and BBVA Bancomer to stop processing payments from a long list of accounts linked to pirated services on Roku.

The main reason for the order is the availability of pirated content through Roku, but banning the device itself is a remarkably broad measure. It would be similar to banning all Android-based devices because certain apps allow users to stream copyrighted content without permission.


Roku has yet to release an official statement on the court order. TorrentFreak reached out to the company but hadn’t heard back at the time of publication.

It’s clear, however, that streaming players are among the top concerns for copyright holders. Motion Picture Association boss Stan McCoy recently characterized the use of streaming players to access infringing content as “Piracy 3.0.”

“If you think of old-fashioned peer-to-peer piracy as 1.0, and then online illegal streaming websites as 2.0, in the audio-visual sector, in particular, we now face challenge number 3.0, which is what I’ll call the challenge of illegal streaming devices,” McCoy said earlier this month.

Unlike the court order in Mexico, however, McCoy stressed that the devices themselves, and software such as Kodi, are ‘probably’ not illegal. However, copyright-infringing pirate add-ons have the capability to turn them into an unprecedented piracy threat.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

BPI Breaks Record After Sending 310 Million Google Takedowns

Post Syndicated from Andy original https://torrentfreak.com/bpi-breaks-record-after-sending-310-million-google-takedowns-170619/

A little over a year ago during March 2016, music industry group BPI reached an important milestone. After years of sending takedown notices to Google, the group burst through the 200 million URL barrier.

The fact that it took BPI several years to reach 200 million made surpassing the quarter-billion mark only a few months later even more remarkable. In October 2016, the group sent its 250 millionth takedown to Google, a figure that nearly doubled when accounting for notices sent to Microsoft’s Bing.

But despite the volumes, the battle hadn’t been won, let alone the war. The BPI’s takedown machine continued to run at a remarkable rate, churning out millions more notices per week.

As a result, yet another new milestone was reached this month when the BPI smashed through the 300 million URL barrier. Then, days later, a further 10 million were added, with the latter couple of million added during the time it took to put this piece together.

BPI takedown notices, as reported by Google

While demanding that Google places greater emphasis on its de-ranking of ‘pirate’ sites, the BPI has called again and again for a “notice and stay down” regime, to ensure that content taken down by the search engine doesn’t simply reappear under a new URL. It’s a position BPI maintains today.

“The battle would be a whole lot easier if intermediaries played fair,” a BPI spokesperson informs TF.

“They need to take more proactive responsibility to reduce infringing content that appears on their platform, and, where we expressly notify infringing content to them, to ensure that they do not only take it down, but also keep it down.”

The long-standing suggestion is that the volume of takedown notices sent would reduce if a “take down, stay down” regime was implemented. The BPI says it’s difficult to present a precise figure but infringing content has a tendency to reappear, both in search engines and on hosting sites.

“Google rejects repeat notices for the same URL. But illegal content reappears as it is re-indexed by Google. As to the sites that actually host the content, the vast majority of notices sent to them could be avoided if they implemented take-down & stay-down,” BPI says.

The fact that the BPI has added 60 million more takedowns since the quarter-billion milestone a few months ago is quite remarkable, particularly since there appears to be little slowdown from month to month. However, the numbers have grown so huge that 310 million now feels a lot like 250 million, with just a few added on top for good measure.

That an extra 60 million takedowns can almost be dismissed as a handful is an indication of just how massive the issue is online. While pirates always welcome an abundance of links to juicy content, it’s no surprise that groups like the BPI are seeking more comprehensive and sustainable solutions.

Previously, it was hoped that the Digital Economy Bill would provide some relief, hopefully via government intervention and the imposition of a search engine Code of Practice. In the event, however, all pressure on search engines was removed from the legislation after a separate voluntary agreement was reached.

All parties agreed that the voluntary code should come into effect two weeks ago on June 1 so it seems likely that some effects should be noticeable in the near future. But the BPI says it’s still early days and there’s more work to be done.

“BPI has been working productively with search engines since the voluntary code was agreed to understand how search engines approach the problem, but also what changes can and have been made and how results can be improved,” the group explains.

“The first stage is to benchmark where we are and to assess the impact of the changes search engines have made so far. This will hopefully be completed soon, then we will have better information of the current picture and from that we hope to work together to continue to improve search for rights owners and consumers.”

With more takedown notices in the pipeline not yet publicly reported by Google, the BPI informs TF that it has now notified the search giant of 315 million links to illegal content.

“That’s an astonishing number. More than 1 in 10 of the entire world’s notices to Google come from BPI. This year alone, one in every three notices sent to Google from BPI is for independent record label repertoire,” BPI concludes.

While groups like BPI have clearly developed systems to cope with the huge numbers of takedown notices required in today’s environment, few rightsholders are happy with the status quo. With that in mind, the fight will continue until search engines are forced into a compromise; considering the implications, that point may lie on a very distant horizon.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Comodo DNS Blocks TorrentFreak Over “Hacking and Warez”

Post Syndicated from Ernesto original https://torrentfreak.com/comodo-dns-blocks-torrentfreak-over-hacking-and-warez-170617/

Website blocking has become one of the go-to methods for reducing online copyright infringement.

In addition to court-ordered blockades, various commercial vendors also offer a broad range of blocking tools. This includes Comodo, which offers a free DNS service that keeps people away from dangerous sites.

The service labeled SecureDNS is part of the Comodo Internet Security bundle but can be used by the general public as well, without charge. Just change the DNS settings on your computer or any other device, and you’re ready to go.
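On a Linux machine, for instance, the switch can be as simple as pointing the resolver at Comodo’s advertised SecureDNS addresses (8.26.56.26 and 8.20.247.20 at the time of writing; do verify them against Comodo’s own documentation):

```
# /etc/resolv.conf — send DNS lookups to Comodo SecureDNS
nameserver 8.26.56.26
nameserver 8.20.247.20
```

Note that many distributions manage /etc/resolv.conf through NetworkManager or systemd-resolved, in which case the same addresses go into that tool’s settings instead.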

“As a leading provider of computer security solutions, Comodo is keenly aware of the dangers that plague the Internet today. SecureDNS helps users keep safe online with its malware domain filtering feature,” the company explains.

Aside from malware and spyware, Comodo also blocks access to sites that offer access to pirated content. Or put differently, they try to do this. But it’s easier said than done.

This week we were alerted to the fact that Comodo blocks direct access to TorrentFreak. Those who try to access our news site get an ominous warning instead, suggesting that we might share pirated content.

“This website has been blocked temporarily because of the following reason(s): Hacking/Warez: Site may offer illegal sharing of copyrighted software or media,” the warning reads, adding that several users also reported the site to be unsafe.

TorrentFreak blocked

People can still access the site by clicking on a big red cross, although that’s something Comodo doesn’t recommend. However, it is quite clear that new readers will be pretty spooked by the alarming message.

We assume that TorrentFreak was added to Comodo’s blocklist by mistake. And while mistakes can happen anywhere, this once again shows that overblocking is a serious concern.

We are lucky enough that readers alerted us to the problem, but in other cases, it could easily go unnoticed.

Interestingly, the ‘piracy’ blocklist is not as stringent as the above would suggest. While we replicated the issue, we also checked several other known ‘pirate’ sites including The Pirate Bay, RARBG, GoMovies, and Pubfilm. These could all be accessed through SecureDNS without any warning.

TorrentFreak contacted Comodo for a comment on their curious blocking efforts, but we have yet to hear back from the company. In the meantime, Comodo SecureDNS users may want to consider switching to a more open DNS provider.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.