Tag Archives: marketing

From Idea to Launch: Getting Your First Customers

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-get-your-first-customers/


After deciding to build an unlimited backup service and developing our own storage platform, the next step was to get customers and feedback. Not all customers are created equal. Let’s talk about the types, and when and how to attract them.

How to Get Your First Customers

First Step – Don’t Launch Publicly
Launch when you’re ready for the judgments of people who don’t know you at all. Until then, don’t launch. Sign up users and customers you know, people you can trust to cut you some slack (while providing you feedback), or at minimum people for whom you can set expectations. For months the Backblaze website was a single page with no ability to get the product and minimal info on what it would be. This is not to counter the Lean Startup ‘iterate quickly with customer feedback’ advice. Rather, it is an acknowledgement that different development stages call for different types of feedback.

Sign Up Your Friends
We knew all of our first customers; they were friends, family, and previous co-workers. Many knew what we were up to and were excited to help us. No magic marketing or tech savviness was required to reach them – we just asked that they try the service. We asked them to provide us feedback on their experience and collected it through email and conversations. While the feedback wasn’t unbiased, it was nonetheless wide-ranging, real, and often insightful. These people were willing to spend time carefully thinking about their feedback and delving deeper into the conversations.

Broaden to Beta
Unless you’re famous or your service costs $1 million per customer, you’ll probably need to expand quickly beyond your friends to build a business – and to get broader feedback. Our next step was to broaden the customer base to beta users.

Opening up the service in beta provides three benefits:

  1. Air cover for the early warts. There are going to be issues, bugs, unnecessarily complicated user flows, and poorly worded text. Beta tells people, “We don’t consider the product ‘done’ and you should expect some of these issues. Please be patient with us.”
  2. A request for feedback. Some people always provide feedback, but beta communicates that you want it.
  3. An awareness opportunity. Opening up in beta provides an early (but not only) opportunity to have an announcement and build awareness.

Pitching Beta to Press
Not all press cares about, or is even willing to cover, beta products. Much of the mainstream press wants to write about services that are fully live, have scale, and are important in the marketplace. However, there are a number of sites that like to cover the leading edge – and that means covering betas. TechCrunch, Ars Technica, and SimpleHelp covered our initial private beta launch. I’ll go into the details of how to work with the press to cover your announcements in a post next month.

Private vs. Public Beta
Both private and public beta provide all three of the benefits above. The difference between the two is that private betas are much more controlled, whereas public ones bring in more users. But this isn’t an either/or – I recommend doing both.

Private Beta
For our original beta in 2008, we decided that we were comfortable with about 1,000 users subscribing to our service. That would provide us with a healthy amount of feedback and get some early adoption, while not overwhelming us or our server capacity, and equally important not causing cash flow issues from having to buy more equipment. So we limited sign-ups to the first 1,000 people; after that, we would shut off sign-ups for a while.

But how do you even get 1,000 people to sign up for your service? In our case, get some major publications to write about our beta. (Note: In a future post I’ll explain exactly how to find and reach out to writers. Sign up to receive all of the entrepreneurial posts in this series.)

Public Beta
For our original service (computer backup), we did not have a public beta; but when we launched Backblaze B2, we had a private and then a public beta. The private beta allowed us to work out early kinks, while the public beta brought us a more varied set of use cases. In public beta, there is no cap on the number of users that may try the service.

While this is a first-class problem to have, if your service is flooded and stops working, it’s still a problem. Think through what you will do if that happens. In our early days, when our system could get overwhelmed by volume, we had a static web page hosted with a different registrar that wouldn’t let customers sign up but would tell them when our service would be open again. When we reached a critical volume level we would redirect to it in order to at least provide status for when we could accept more customers.

Collect Feedback
Since one of the goals of betas is to get feedback, we made sure that we had our email addresses clearly presented on the site so users could send us thoughts. We were most interested in broad qualitative feedback on users’ experience, so all emails went to an internal mailing list that would be read by everyone at Backblaze.

For our B2 public and private betas, we also added an optional short survey to the sign-up process. In order to be considered for the private beta you had to fill the survey out, though we found that 80% of users continued to fill out the survey even when it was not required. This survey had both closed-ended questions (“how much data do you have?”) and open-ended ones (“what do you want to use cloud storage for?”).

BTW, despite us getting a lot of feedback now via our support team, Twitter, and marketing surveys, we are always open to more – you can email me directly at gleb.budman {at} backblaze.com.

Don’t Throw Away Users
Initially our backup service was available only on Windows, but we had an email sign-up list for people who wanted it for their Mac. This provided us with a sense of market demand and a ready list of folks who could be beta users and early adopters when we had a Mac version. Have a service targeted at doctors but lawyers are expressing interest? Capture that.

Product Launch

When
The first question is “when” to launch. Presuming your service is in ‘public beta’, what is the advantage of moving out of beta and into a “version 1.0”, “gold”, or “public availability”? That depends on your service and customer base. Some services fly through public beta. Gmail, on the other hand, was (in)famous for being in beta for 5 years, despite having over 100 million users.

The term beta says to users, “give us some leeway, but feel free to use the service”. That’s fine for many consumer apps and will have near zero impact on them. However, services aimed at businesses and government will often not be adopted with a beta label as the enterprise customers want to know the company feels the service is ‘ready’. While Backblaze started out as a purely consumer service, because it was a data backup service, it was important for customers to trust that the service was ready.

No product is bug-free. But from a product readiness perspective, the nomenclature should also be a reflection of the quality of the product. You can launch a product with one feature that works well out of beta. But a product with fifty features on which half the users will bump into problems should likely stay in beta. The customer feedback, surveys, and your own internal testing should guide you in determining this quality during the beta. Be careful about “we’ve only seen that one time” or “I haven’t been able to reproduce that on my machine”; those issues are likely to scale with customers when you launch.

How
Launching out of beta can be as simple as removing the beta label from the website/product. However, this can be a great time to reach out to press, write a blog post, and send an email announcement to your customers.

Consider thanking your beta testers somehow; can they get some feature turned on for free, an extension of their trial, or premium support? If nothing else, remember to thank them for their feedback. Users that signed up during your beta are likely the ones who will propel your service. They had the need and interest to both be early adopters and deal with bugs. They are likely the key to getting 1,000 true fans.

The Beginning
The title of this post was “Getting your first customers”, because getting to launch may feel like the peak of your journey when you’re pre-launch, but it really is just the beginning. It’s a step along the journey of building your business. If your launch is wildly successful, enjoy it, work to build on the momentum, but don’t lose track of building your business. If your launch is a dud, go out for a coffee with your team, say “well that sucks”, and then get back to building your business. You can learn a tremendous amount from your early customers, and they can become your biggest fans, but the success of your business will depend on what you continue to do the months and years after your launch.

The post From Idea to Launch: Getting Your First Customers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Protect Web Sites & Services Using Rate-Based Rules for AWS WAF

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/protect-web-sites-services-using-rate-based-rules-for-aws-waf/

AWS WAF (Web Application Firewall) helps to protect your application from many different types of application-layer attacks that involve requests that are malicious or malformed. As I showed you when I first wrote about this service (New – AWS WAF), you can define rules that match cross-site scripting, IP address, SQL injection, size, or content constraints:

When incoming requests match rules, actions are invoked. Actions can either allow, block, or simply count matches.

The existing rule model is powerful and gives you the ability to detect and respond to many different types of attacks. It does not, however, allow you to respond to attacks that simply consist of a large number of otherwise valid requests from a particular IP address. These requests might be a web-layer DDoS attack, a brute-force login attempt, or even a partner integration gone awry.

New Rate-Based Rules
Today we are adding Rate-based Rules to WAF, giving you control of when IP addresses are added to and removed from a blacklist, along with the flexibility to handle exceptions and special cases:

Blacklisting IP Addresses – You can blacklist IP addresses that make requests at a rate that exceeds a configured threshold rate.

IP Address Tracking – You can see which IP addresses are currently blacklisted.

IP Address Removal – IP addresses that have been blacklisted are automatically removed when they no longer make requests at a rate above the configured threshold.

IP Address Exemption – You can exempt certain IP addresses from blacklisting by using an IP address whitelist inside of a rate-based rule. For example, you might want to allow trusted partners to access your site at a higher rate.

Monitoring & Alarming – You can watch and alarm on CloudWatch metrics that are published for each rule.

You can combine new Rate-based Rules with WAF Conditions to implement sophisticated rate-limiting strategies. For example, you could use a Rate-based Rule and a WAF Condition that matches your login pages. This would allow you to impose a modest threshold on your login pages (to avoid brute-force password attacks) and allow a more generous one on your marketing or system status pages.

Thresholds are defined in terms of the number of incoming requests from a single IP address within a 5 minute period. Once this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold.

Using Rate-Based Rules
Here’s how you would define a Rate-based Rule that protects the /login portion of your site. Start by defining a WAF condition that matches the desired string in the URI of the page:

Then use this condition to define a Rate-based Rule (the rate limit is expressed in terms of requests within a 5 minute interval, but the blacklisting goes into effect as soon as the limit is breached):

With the condition and the rule in place, create a Web ACL (ProtectLoginACL) to bring it all together and to attach it to the AWS resource (a CloudFront distribution in this case):

Then attach the rule (ProtectLogin) to the Web ACL:

The resource is now protected in accordance with the rule and the Web ACL. You can monitor the associated CloudWatch metrics (ProtectLogin and ProtectLoginACL in this case). You could even create CloudWatch Alarms and use them to fire Lambda functions when a protection threshold is breached. The code could examine the offending IP address and make a complex, business-driven decision, perhaps adding a whitelisting rule that gives an extra-generous allowance to a trusted partner or to a user with a special payment plan.
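
A minimal sketch of the same setup done programmatically with the AWS SDK for .NET (the classic Amazon.WAF client that CloudFront distributions use) might look like the following. The rule and metric names come from the walkthrough above; the 2,000-request limit is illustrative, and the string-match condition and Web ACL association would still be added separately (for example, with UpdateRateBasedRule and CreateWebACL).

using System;
using System.Threading.Tasks;
using Amazon.WAF;
using Amazon.WAF.Model;

class CreateRateBasedRuleExample
{
    static async Task Main()
    {
        // Classic (global) WAF client -- the one used for CloudFront distributions.
        var waf = new AmazonWAFClient();

        // Every classic WAF mutation requires a fresh change token.
        var changeToken = (await waf.GetChangeTokenAsync(new GetChangeTokenRequest())).ChangeToken;

        // Create a rate-based rule that tracks requests per source IP address
        // and blacklists any IP exceeding the limit within a 5 minute period.
        var response = await waf.CreateRateBasedRuleAsync(new CreateRateBasedRuleRequest
        {
            Name = "ProtectLogin",
            MetricName = "ProtectLogin",
            RateKey = RateKey.IP,      // rate is evaluated per source IP
            RateLimit = 2000,          // illustrative requests-per-5-minutes threshold
            ChangeToken = changeToken
        });

        Console.WriteLine("Created rate-based rule {0}", response.Rule.RuleId);
    }
}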

Available Now
The new Rate-based Rules are available now, and you can start using them today! Rate-based rules are priced the same as Regular rules; see the WAF Pricing page for more info.

Jeff;

Building Loosely Coupled, Scalable, C# Applications with Amazon SQS and Amazon SNS

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/building-loosely-coupled-scalable-c-applications-with-amazon-sqs-and-amazon-sns/

 
Stephen Liedig, Solutions Architect

 

One of the many challenges professional software architects and developers face is how to make cloud-native applications scalable, fault-tolerant, and highly available.

Fundamental to your project success is understanding the importance of making systems highly cohesive and loosely coupled. That means considering the multi-dimensional facets of system coupling to support the distributed nature of the applications that you are building for the cloud.

By that, I mean addressing not only the application-level coupling (managing incoming and outgoing dependencies), but also considering the impacts of platform, spatial, and temporal coupling of your systems. Platform coupling relates to the interoperability, or lack thereof, of heterogeneous systems components. Spatial coupling deals with managing components at a network topology level or protocol level. Temporal, or runtime coupling, refers to the ability of a component within your system to do any kind of meaningful work while it is performing a synchronous, blocking operation.

The AWS messaging services, Amazon SQS and Amazon SNS, help you deal with these forms of coupling by providing mechanisms for:

  • Reliable, durable, and fault-tolerant delivery of messages between application components
  • Logical decomposition of systems and increased autonomy of components
  • Creating unidirectional, non-blocking operations, temporarily decoupling system components at runtime
  • Decreasing the dependencies that components have on each other through standard communication and network channels

Following on the recent topic, Building Scalable Applications and Microservices: Adding Messaging to Your Toolbox, in this post, I look at some of the ways you can introduce SQS and SNS into your architectures to decouple your components, and show how you can implement them using C#.

Walkthrough

To illustrate some of these concepts, consider a web application that processes customer orders. As good architects and developers, you have followed best practices and made your application scalable and highly available. Your solution included implementing load balancing, dynamic scaling across multiple Availability Zones, and persisting orders in a Multi-AZ Amazon RDS database instance, as in the following diagram.


In this example, the application is responsible for handling and persisting the order data, as well as dealing with increases in traffic for popular items.

One potential point of vulnerability in the order processing workflow is in saving the order in the database. The business expects that every order has been persisted into the database. However, any potential deadlock, race condition, or network issue could cause the persistence of the order to fail. Then, the order is lost with no recourse to restore the order.

With good logging capability, you may be able to identify when an error occurred and which customer’s order failed. This wouldn’t allow you to “restore” the transaction, and by that stage, your customer is no longer your customer.

As illustrated in the following diagram, introducing an SQS queue helps improve your ordering application. Using the queue isolates the processing logic into its own component and runs it in a separate process from the web application. This, in turn, allows the system to be more resilient to spikes in traffic, while allowing work to be performed only as fast as necessary in order to manage costs.


In addition, you now have a mechanism for persisting orders as messages (with the queue acting as a temporary database), and have moved the scope of your transaction with your database further down the stack. In the event of an application exception or transaction failure, this ensures that the order processing can be retried or redirected to the Amazon SQS Dead Letter Queue (DLQ), for re-processing at a later stage. (See the recent post, Using Amazon SQS Dead-Letter Queues to Control Message Failure, for more information on dead-letter queues.)
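
To make the hand-off concrete, here is a minimal sketch of how the web application might enqueue an order as a message. The Order type, queue URL parameter, and method name are illustrative rather than taken from the original application, and the JSON serialization mirrors the Newtonsoft.Json usage shown later in this post.

using System;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;
using Newtonsoft.Json;

// Hypothetical order type used only for illustration.
public class Order
{
    public string OrderId { get; set; }
    public string CustomerId { get; set; }
    public decimal Total { get; set; }
}

public static class OrderQueueWriter
{
    // Persist the order as a message; the queue becomes the durable hand-off
    // point between the web tier and the processing nodes.
    public static async Task EnqueueOrderAsync(Order order, string queueUrl)
    {
        using (var sqs = new AmazonSQSClient())
        {
            var response = await sqs.SendMessageAsync(new SendMessageRequest
            {
                QueueUrl = queueUrl,
                MessageBody = JsonConvert.SerializeObject(order)
            });

            Console.WriteLine("Order {0} enqueued as message {1}", order.OrderId, response.MessageId);
        }
    }
}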

Scaling the order processing nodes

This change allows you now to scale the web application frontend independently from the processing nodes. The frontend application can continue to scale based on metrics such as CPU usage, or the number of requests hitting the load balancer. Processing nodes can scale based on the number of orders in the queue. Here is an example of scale-in and scale-out alarms that you would associate with the scaling policy.

Scale-out Alarm

aws cloudwatch put-metric-alarm --alarm-name AddCapacityToCustomerOrderQueue --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" 
--statistic Average --period 300 --threshold 3 --comparison-operator GreaterThanOrEqualToThreshold --dimensions Name=QueueName,Value=customer-orders
--evaluation-periods 2 --alarm-actions <arn of the scale-out autoscaling policy>

Scale-in Alarm

aws cloudwatch put-metric-alarm --alarm-name RemoveCapacityFromCustomerOrderQueue --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" 
 --statistic Average --period 300 --threshold 1 --comparison-operator LessThanOrEqualToThreshold --dimensions Name=QueueName,Value=customer-orders
 --evaluation-periods 2 --alarm-actions <arn of the scale-in autoscaling policy>

In the above example, use the ApproximateNumberOfMessagesVisible metric to discover the queue length and drive the scaling policy of the Auto Scaling group. Another useful metric is ApproximateAgeOfOldestMessage, when applications have time-sensitive messages and developers need to ensure that messages are processed within a specific time period.

Scaling the order processing implementation

On top of scaling at an infrastructure level using Auto Scaling, make sure to take advantage of the processing power of your Amazon EC2 instances by using as many of the available threads as possible. There are several ways to implement this. In this post, we build a Windows service that uses the BackgroundWorker class to process the messages from the queue.

Here’s a closer look at the implementation. In the first section of the consuming application, use a loop to continually poll the queue for new messages, and construct a ReceiveMessageRequest variable.

public static void PollQueue()
{
    while (_running)
    {
        Task<ReceiveMessageResponse> receiveMessageResponse;

        // Pull messages off the queue
        using (var sqs = new AmazonSQSClient())
        {
            const int maxMessages = 10;  // 1-10

            //Receiving a message
            var receiveMessageRequest = new ReceiveMessageRequest
            {
                // Get URL from Configuration
                QueueUrl = _queueUrl, 
                // The maximum number of messages to return. 
                // Fewer messages might be returned. 
                MaxNumberOfMessages = maxMessages, 
                // A list of attributes that need to be returned with message.
                AttributeNames = new List<string> { "All" },
                // Enable long polling. 
                // Time to wait for message to arrive on queue.
                WaitTimeSeconds = 5 
            };

            receiveMessageResponse = sqs.ReceiveMessageAsync(receiveMessageRequest);
        }

The WaitTimeSeconds property of the ReceiveMessageRequest specifies the duration (in seconds) that the call waits for a message to arrive in the queue before returning a response to the calling application. There are a few benefits to using long polling:

  • It reduces the number of empty responses by allowing SQS to wait until a message is available in the queue before sending a response.
  • It eliminates false empty responses by querying all (rather than a limited number) of the servers.
  • It returns messages as soon as any message becomes available.

For more information, see Amazon SQS Long Polling.

After you have returned messages from the queue, you can start to process them by looping through each message in the response and invoking a new BackgroundWorker thread.

// Process messages
if (receiveMessageResponse.Result.Messages != null)
{
    foreach (var message in receiveMessageResponse.Result.Messages)
    {
        Console.WriteLine("Received SQS message, starting worker thread");

        // Create background worker to process message
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += (obj, e) => ProcessMessage(message);
        worker.RunWorkerAsync();
    }
}
else
{
    Console.WriteLine("No messages on queue");
}

The event handler, ProcessMessage, is where you implement business logic for processing orders. It is important to have a good understanding of how long a typical transaction takes so you can set a message VisibilityTimeout that is long enough to complete your operation. If order processing takes longer than the specified timeout period, the message becomes visible on the queue again. Other nodes may pick it up and process the same order twice, leading to unintended consequences.
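
If a particular order legitimately needs more time than the configured timeout, one option (a sketch under assumed names, not part of the original walkthrough) is to extend the visibility timeout of that specific message while it is still being worked on:

using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public static class VisibilityExtender
{
    // Buy more time for a message we are still processing, so it does not
    // reappear on the queue and get picked up by another node.
    public static Task ExtendAsync(IAmazonSQS sqs, string queueUrl, Message message, int additionalSeconds)
    {
        return sqs.ChangeMessageVisibilityAsync(new ChangeMessageVisibilityRequest
        {
            QueueUrl = queueUrl,
            ReceiptHandle = message.ReceiptHandle,
            VisibilityTimeout = additionalSeconds   // seconds from now until the message becomes visible again
        });
    }
}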

Handling Duplicate Messages

In order to manage duplicate messages, seek to make your processing application idempotent. In mathematics, idempotent describes a function that produces the same result if it is applied to itself:

f(x) = f(f(x))

No matter how many times you process the same message, the end result is the same (definition from Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, Hohpe and Wolf, 2004).

There are several strategies you could apply to achieve this:

  • Create messages that have inherent idempotent characteristics. That is, they are non-transactional in nature and are unique at a specified point in time. Rather than saying “place new order for Customer A,” which adds a duplicate order to the customer, use “place order <orderid> on <timestamp> for Customer A,” which creates a single order no matter how often it is persisted.
  • Deliver your messages via an Amazon SQS FIFO queue, which provides the benefits of message sequencing, but also mechanisms for content-based deduplication. You can deduplicate using the MessageDeduplicationId property on the SendMessage request or by enabling content-based deduplication on the queue, which generates a hash for MessageDeduplicationId, based on the content of the message, not the attributes.
var sendMessageRequest = new SendMessageRequest
{
    QueueUrl = _queueUrl,
    MessageBody = JsonConvert.SerializeObject(order),
    MessageGroupId = Guid.NewGuid().ToString("N"),
    MessageDeduplicationId = Guid.NewGuid().ToString("N")
};
  • If using SQS FIFO queues is not an option, keep a log of all processed message attributes for a specified period of time, as an alternative to message deduplication on the receiving end. Verifying the existence of the message in the log before processing it adds computational overhead, which can be minimized through low-latency persistence solutions such as Amazon DynamoDB (a minimal sketch follows this list). Bear in mind that this solution is dependent on the successful, distributed transaction of the message and the message log.
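
Here is a minimal sketch of such a message log built on a DynamoDB conditional write. The table name, attribute names, and helper name are assumptions for illustration, with the table using MessageId as its partition key.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class MessageLog
{
    // Returns true if this is the first time the message has been seen,
    // false if a record already exists (i.e., a duplicate delivery).
    public static async Task<bool> TryRecordAsync(IAmazonDynamoDB dynamo, string tableName, string messageId)
    {
        try
        {
            await dynamo.PutItemAsync(new PutItemRequest
            {
                TableName = tableName,   // e.g. a "ProcessedMessages" table keyed on MessageId
                Item = new Dictionary<string, AttributeValue>
                {
                    ["MessageId"] = new AttributeValue { S = messageId },
                    ["ProcessedAt"] = new AttributeValue { S = DateTime.UtcNow.ToString("o") }
                },
                // Reject the write if the message has already been logged.
                ConditionExpression = "attribute_not_exists(MessageId)"
            });
            return true;
        }
        catch (ConditionalCheckFailedException)
        {
            return false;   // duplicate -- skip processing
        }
    }
}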

Handling exceptions

Because of the distributed nature of SQS, the service does not automatically delete a message after it has been received. Therefore, you must explicitly delete the message from the queue after processing it, using the message ReceiptHandle property (see the following code example).

However, if at any stage you have an exception, avoid handling it as you normally would. The intention is to make sure that the message ends up back on the queue, so that you can gracefully deal with intermittent failures. Instead, log the exception to capture diagnostic information, and swallow it.

By not explicitly deleting the message from the queue, you can take advantage of the VisibilityTimeout behavior described earlier. Gracefully handle the message processing failure and make the unprocessed message available to other nodes to process.

In the event that subsequent retries fail, SQS automatically moves the message to the configured DLQ after the configured number of receives has been reached. You can further investigate why the order process failed. Most importantly, the order has not been lost, and your customer is still your customer.

private static void ProcessMessage(Message message)
{
    using (var sqs = new AmazonSQSClient())
    {
        try
        {
            Console.WriteLine("Processing message id: {0}", message.MessageId);

            // Implement messaging processing here
            // Ensure no downstream resource contention (parallel processing)
            // <your order processing logic in here…>
            Console.WriteLine("{0} Thread {1}: {2}", DateTime.Now.ToString("s"), Thread.CurrentThread.ManagedThreadId, message.MessageId);
            
            // Delete the message off the queue. 
            // Receipt handle is the identifier you must provide 
            // when deleting the message.
            var deleteRequest = new DeleteMessageRequest(_queueUrl, message.ReceiptHandle);
            sqs.DeleteMessageAsync(deleteRequest);
            Console.WriteLine("Processed message id: {0}", message.MessageId);

        }
        catch (Exception ex)
        {
            // Do nothing.
            // Swallow exception, message will return to the queue when 
            // visibility timeout has been exceeded.
            Console.WriteLine("Could not process message due to error. Exception: {0}", ex.Message);
        }
    }
}

Using SQS to adapt to changing business requirements

One of the benefits of introducing a message queue is that you can accommodate new business requirements without dramatically affecting your application.

If, for example, the business decided that all orders placed over $5000 are to be handled as a priority, you could introduce a new “priority order” queue. The way the orders are processed does not change. The only significant change to the processing application is to ensure that messages from the “priority order” queue are processed before the “standard order” queue.

The following diagram shows how this logic could be isolated in an “order dispatcher,” whose only purpose is to route order messages to the appropriate queue based on whether the order exceeds $5000. Nothing on the web application or the processing nodes changes other than the target queue to which the order is sent. The desired order processing rates can then be achieved by modifying the poll rates and the scaling settings that I have already discussed.
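
An order dispatcher of this kind can stay very small. The following sketch routes a serialized order to either queue based on its value; the queue URLs and field names are illustrative rather than taken from the original application.

using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public class OrderDispatcher
{
    // Illustrative queue URLs -- in practice these would come from configuration.
    private readonly string _priorityQueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/priority-orders";
    private readonly string _standardQueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/customer-orders";
    private readonly IAmazonSQS _sqs = new AmazonSQSClient();

    // Route the serialized order based on its value; nothing else in the
    // processing pipeline needs to change.
    public Task DispatchAsync(decimal orderTotal, string orderJson)
    {
        var queueUrl = orderTotal > 5000m ? _priorityQueueUrl : _standardQueueUrl;

        return _sqs.SendMessageAsync(new SendMessageRequest
        {
            QueueUrl = queueUrl,
            MessageBody = orderJson
        });
    }
}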

Extending the design pattern with Amazon SNS

Amazon SNS supports reliable publish-subscribe (pub-sub) scenarios and push notifications to known endpoints across a wide variety of protocols. It eliminates the need to periodically check or poll for new information and updates. SNS supports:

  • Reliable storage of messages for immediate or delayed processing
  • Publish / subscribe – direct, broadcast, targeted “push” messaging
  • Multiple subscriber protocols: Amazon SQS, HTTP, HTTPS, email, SMS, mobile push, AWS Lambda

With these capabilities, you can provide parallel asynchronous processing of orders in the system and extend it to support any number of different business use cases without affecting the production environment. This is commonly referred to as a “fanout” scenario.

Rather than your web application pushing orders to a queue for processing, send a notification via SNS. The SNS messages are sent to a topic and then replicated and pushed to multiple SQS queues and Lambda functions for processing.
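
The change on the web application side is small: instead of sending a message to a queue, it publishes to a topic and lets SNS fan the message out. A minimal sketch, assuming an existing topic ARN:

using System.Threading.Tasks;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

public static class OrderPublisher
{
    // Publish the order once; SNS pushes copies to every subscribed SQS queue
    // and Lambda function (the "fanout").
    public static Task PublishOrderAsync(IAmazonSimpleNotificationService sns, string topicArn, string orderJson)
    {
        return sns.PublishAsync(new PublishRequest
        {
            TopicArn = topicArn,   // e.g. arn:aws:sns:us-east-1:123456789012:customer-orders (illustrative)
            Message = orderJson
        });
    }
}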

As the diagram above shows, you have the development team consuming “live” data as they work on the next version of the processing application, or potentially using the messages to troubleshoot issues in production.

Marketing is consuming all order information, via a Lambda function that has subscribed to the SNS topic, inserting the records into an Amazon Redshift warehouse for analysis.

All of this, of course, is happening without affecting your order processing application.

Summary

While I haven’t dived deep into the specifics of each service, I have discussed how these services can be applied at an architectural level to build loosely coupled systems that facilitate multiple business use cases. I’ve also shown you how to use infrastructure and application-level scaling techniques, so you can get the most out of your EC2 instances.

One of the many benefits of using these managed services is how quickly and easily you can implement powerful messaging capabilities in your systems, and lower the capital and operational costs of managing your own messaging middleware.

Using Amazon SQS and Amazon SNS together can provide you with a powerful mechanism for decoupling application components. This should be part of design considerations as you architect for the cloud.

For more information, see the Amazon SQS Developer Guide and Amazon SNS Developer Guide. You’ll find tutorials on all the concepts covered in this post, and more. To get started, visit the AWS Management Console or use the SDK of your choice.

Happy messaging!

The Pirate Bay Isn’t Affected By Adverse Court Rulings – Everyone Else Is

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-isnt-affected-by-adverse-court-rulings-everyone-else-is-170618/

For more than a decade The Pirate Bay has been the world’s most controversial site. Delivering huge quantities of copyrighted content to the masses, the platform is revered and reviled across the copyright spectrum.

Its reputation is one of a defiant Internet swashbuckler, but due to changes in how the site has been run in more recent times, its current philosophy is more difficult to gauge. What has never been in doubt, however, is the site’s original intent to be as provocative as possible.

Through endless publicity stunts, some real, some just for the ‘lulz’, The Pirate Bay managed to attract a massive audience, all while incurring the wrath of every major copyright holder in the world.

Make no mistake, they all queued up to strike back, but every subsequent rightsholder action was met by a Pirate Bay middle finger, two fingers, or chin flick, depending on the mood of the day. This only served to further delight the masses, who happily spread the word while keeping their torrents flowing.

This vicious circle of being targeted by the entertainment industries, mocking them, and then reaping the traffic benefits, developed into the cheapest long-term marketing campaign the Internet had ever seen. But nothing is ever truly for free and there have been consequences.

After taunting Hollywood and the music industry with its refusals to capitulate, endless legal action that the site would have ordinarily been forced to participate in largely took place without The Pirate Bay being present. It doesn’t take a law degree to work out what happened in each and every one of those cases, whatever complex route they took through the legal system. No defense, no win.

For example, the web-blocking phenomenon across the UK, Europe, Asia and Australia was driven by the site’s absolute resilience and although there would clearly have been other scapegoats had The Pirate Bay disappeared, the site was the ideal bogeyman the copyright lobby required to move forward.

Filing blocking lawsuits while bringing hosts, advertisers, and ISPs on board for anti-piracy initiatives were also made easier with the ‘evil’ Pirate Bay still online. Immune from every anti-piracy technique under the sun, the existence of the platform in the face of all onslaughts only strengthened the cases of those arguing for even more drastic measures.

Over a decade, this has meant a significant tightening of the sharing and streaming climate. Without any big legislative changes but plenty of case law against The Pirate Bay, web-blocking is now a walk in the park, ad hoc domain seizures are a fairly regular occurrence, and few companies want to host sharing sites. Advertisers and brands are also hesitant over where they place their ads. It’s a very different world to the one of 10 years ago.

While it would be wrong to attribute every tightening of the noose to the actions of The Pirate Bay, there’s little doubt that the site and its chaotic image played a huge role in where copyright enforcement is today. The platform set out to provoke and succeeded in every way possible, gaining supporters in their millions. It could also be argued it kicked a hole in a hornets’ nest, releasing the hell inside.

But perhaps the site’s most amazing achievement is the way it has managed to stay online, despite all the turmoil.

This week yet another ruling, this time from the powerful European Court of Justice, found that by offering links in the manner it does, The Pirate Bay and other sites are liable for communicating copyright works to the public. Of course, this prompted the usual swathe of articles claiming that this could be the final nail in the site’s coffin.

Wrong.

In common with every ruling, legal defeat, and legislative restriction put in place due to the site’s activities, this week’s decision from the ECJ will have zero effect on the Pirate Bay’s availability. For right or wrong, the site was breaking the law long before this ruling and will continue to do so until it decides otherwise.

What we have instead is a further tightened legal landscape that will have a lasting effect on everything BUT the site, including weaker torrent sites, Internet users, and user-uploaded content sites such as YouTube.

With The Pirate Bay carrying on regardless, that is nothing short of remarkable.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

How NAGRA Fights Kodi and IPTV Piracy

Post Syndicated from Andy original https://torrentfreak.com/how-nagra-fights-kodi-and-iptv-piracy-170603/

Nagravision or NAGRA is one of the best known companies operating in the digital cable and satellite television content security space. Due to successes spanning several decades, the company has often proven unpopular with pirates.

In particular, Nagravision encryption systems have regularly been a hot topic for discussion on cable and satellite hacking forums, frustrating those looking to receive pay TV services without paying the high prices associated with them. However, the rise of the Internet is now presenting new challenges.

NAGRA still protects traditional cable and satellite pay TV services in 2017; Virgin Media in the UK is a long-standing customer, for example. But the rise of Internet streaming means that pirate content can now be delivered to the home with ease, completely bypassing the entire pay TV provider infrastructure. And, by extension, NAGRA’s encryption.

This means that NAGRA has been required to spread its wings.

As reported in April, NAGRA is establishing a lab to monitor and detect unauthorized consumption of content via set-top boxes, websites and other streaming platforms. That covers the now omnipresent Kodi phenomenon, alongside premium illicit IPTV services. TorrentFreak caught up with the company this week to find out more.

“NAGRA has an automated monitoring platform that scans all live channels and VOD assets available on Kodi,” NAGRA’s Ivan Schnider informs TF.

“The service we offer to our customers automatically finds illegal distribution of their content on Kodi and removes infringing streams.”

In the first instance, NAGRA sends standard takedown notices to hosting services to terminate illicit streams. The company says that while some companies are very cooperative, others are less so. When meeting resistance, NAGRA switches to more coercive methods, described here by Christopher Schouten, NAGRA Senior Director Product Marketing.

“Takedowns are generally sent to streaming platforms and hosting servers. When those don’t work, Advanced Takedowns allow us to use both technical and legal means to get results,” Schouten says.

“Numerous stories in recent days show how for instance popular Kodi plug-ins have been removed by their authors because of the mere threat of legal actions like this.”

At the center of operations is NAGRA’s Piracy Intelligence Portal, which offers customers a real-time view of worldwide online piracy trends, information on the infrastructure behind illegal services, as well as statistics and status of takedown requests.

“We measure takedown compliance very carefully using our Piracy Intelligence Portal, so we can usually predict the results we will get. We work on a daily basis to improve relationships and interfaces with those who are less compliant,” Schouten says.

The Piracy Intelligence Portal

While persuasion is probably the best solution, some hosts inevitably refuse to cooperate. However, NAGRA also offers the NexGuard system, which is able to determine the original source of the content.

“Using forensic watermarking to trace the source of the leak, we will be able to completely shut down the ‘leak’ at the source, independently and within minutes of detection,” Schouten says.

Whatever route is taken, NAGRA says that the aim is to take down streams as quickly as possible, something which hopefully undermines confidence in pirate services and encourages users to re-enter the legal market. Interestingly, the company also says it uses “technical means” to degrade pirate services to the point that consumers lose faith in them.

But while augmented Kodi setups and illicit IPTV are certainly considered a major threat in 2017, they are not the only problem faced by content companies.

While the Apple platform is quite tight, the open nature of Android means that there are a rising number of apps that can be sideloaded from the web. These allow pirate content to be consumed quickly and conveniently within a glossy interface.

Apps like Showbox, MovieHD and Terrarium TV have the movie and TV show sector wrapped up, while the popular Mobdro achieves the same with live TV, including premium sports. Schnider says NAGRA can handle apps like these and other emerging threats in a variety of ways.

“In addition to Kodi-related anti-piracy activities, NAGRA offers a service that automatically finds illegal distribution of content on Android applications, fully loaded STBs, M3U playlist and other platforms that provide plug-and-play solutions for the big TV screen; this service also includes the removal of infringing streams,” he explains.

M3U playlist piracy doesn’t get a lot of press. An M3U file is a text file that specifies locations where content (such as streams) can be found online.

In its basic ‘free’ form, it’s simply a case of finding an M3U file on an indexing site or blog and loading it into VLC. It’s not as flashy as any of the above apps, and unless one knows where to get the free M3Us quickly, many channels may already be offline. Premium M3U files are widely available, however, and tend to be pretty reliable.

But while attacking sources of infringing content is clearly a big part of NAGRA’s mission, the company also deploys softer strategies for dealing with pirates.

“Beyond disrupting pirate streams, raising awareness amongst users that these services are illegal and helping service providers deliver competing legitimate services, are also key areas in the fight against premium IPTV piracy where NAGRA can help,” Schnider says.

“Converting users of such services to legitimate paying subscribers represents a significant opportunity for content owners and distributors.”

For this to succeed, Schouten says there needs to be an understanding of the different motivators that lead an individual to commit piracy.

“Is it price? Is it availability? Is it functionality?” he asks.

Interestingly, he also reveals that lots of people are spending large sums of money on IPTV services they believe are legal but are not. Rather than putting them off, the high prices actually add to the services’ air of legitimacy.

“These consumers can relatively easily be converted into paying subscribers if they can be convinced that pay-TV services offer superior quality, reliability, and convenience because let’s face it, most IPTV services are still a little dodgy to use,” he says.

“Education is also important; done through working with service providers to inform consumers through social media platforms of the risks linked to the use of illegitimate streaming devices / IPTV devices, e.g. purchasing boxes that may no longer work after a short period of time.”

And so the battle over content continues.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

6th RISC-V Workshop Proceedings

Post Syndicated from ris original https://lwn.net/Articles/724172/rss

The proceedings of the RISC-V workshop, held May 8-11 in Shanghai, China, are available with links to slides and videos.

This workshop was a four day event broken down as follows:

  • Monday May 8, 2017 – Introduction to RISC-V – this day-long session was held for those who were new to RISC-V and had yet to be exposed to the RISC-V ISA. The session consisted of presentations from the RISC-V Foundation, some of the original creators of the RISC-V ISA, and product presentations from vendors within the RISC-V community.
  • Tuesday and Wednesday May 9-10, 2017 – These two days followed our traditional two day format with presentations covering various RISC-V projects underway within the RISC-V community and included a poster / demo reception on Tuesday evening.
  • Thursday May 11, 2017 – The workshop week concluded with RISC-V Foundation meetings with attendance restricted to members of the RISC-V Foundation. The day consisted of Technical and Marketing Committee face to face meetings to progress the work currently underway within our various Task Groups.

Building a Competitive Moat: Turning Challenges Into Advantages

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/turning-challenges-into-advantages/

castle on top of a storage pod

In my previous post on how Backblaze got started, I mentioned that “just because we knew the right solution, didn’t mean that it was possible.” I’ll dig into that here. The right solution was to offer unlimited backup for $5 per month. The price of storage at the time, however, would have likely forced us to price our unlimited backup service at 2x – 5x that.

We were faced with a difficult challenge: compromise a fundamental feature of our product by removing the unlimited storage element; increase our price point to cover our costs, but likely limit our potential customer base; seek funding in order to run at a loss while we built market share, with a hope/prayer we could make a profit in the future; or find another way (a huge unknown that might not have a solution). Below I’ll dig into the options that were available, the paths we tried, and how this challenge completely transformed our company and ended up being our greatest technological advantage.

Available Options:

Use a Storage Service

Originally we intended to build the backup application, but leave the back-end storage to others; likely Amazon S3. This had many advantages:

  1. We would not have to worry about the storage at all
  2. It would scale up or down as we needed it
  3. We would pay only for what we used

Especially as a small, bootstrapped company with limited resources – these were incredible benefits.

There was just one problem. At S3’s then-current pricing ($0.15/GB/month), a customer storing just 33 GB would cost us 100% of the $5 per month we would collect. Additionally, we would need to pay S3 transaction and download charges, along with our engineering/support/marketing and other expenses. The conclusion: even if the average customer stored just 33 GB, it would cost us at least $10/month for a customer that we were charging just $5/month.

In 2007, when we were getting started, there were a few other storage services available. But all were more expensive. Despite the fantastic benefits of using such a service, it simply didn’t work for us.

Buy Storage Systems

Buying storage systems didn’t have all the benefits of using a storage service – we would have to forecast need, buy in big blocks up front, manage data centers, etc. – but it seemed the second-best option. Companies such as EMC, NetApp, Dell, and others sold hundreds of petabytes of storage systems where they provide the servers, software, and support.

Alas, there were two problems: One temporary, the other permanent (and fatal). The temporary problem was that these systems were hundreds of thousands of dollars just to get started. This was challenging for us from a cash-flow perspective, but it was just a question of coming up with the cash. The permanent problem was that these systems cost ~$1,000/TB of storage. Hard drives were selling for ~$100/TB, so there was a 10x markup for the storage system. That markup eliminated pursuing this path. What if the average customer had 100 GB to store? It would take us 20 months to pay off the purchase. We weren’t sure how much data the average customer would have, but the scenarios we were running made it seem like a $5/month price point was unsustainable.

Our Choices Were:

Don’t Offer the Right Solution

If it’s impossible to offer unlimited backup for $5/month, there are certainly choices. We could have raised the price to $10/month, not made the backup unlimited, or closed up shop altogether. All doable, none ideal.

Raise Funding

Plenty of companies raise funding before they can be self-sustaining, and it can work out great for everyone. We had raised funding for a previous company and believed we could have done it for Backblaze. And raising funding would have taken care of the cash-flow issue if we chose to buy storage systems.

However, it would have left us with a business with negative unit economics – we would lose money on every customer, and the faster we grew, the more money we would lose. VCs do fund these types of companies often (many of the delivery companies today fall in this realm) with the idea that, at scale, you improve your cost structure and possibly also charge more. But it’s a dangerous game since not only is the business not self-sustaining, it inevitably must be significantly altered in order to survive.

Find a Way to Store Data for Less

If there were some way to store data for less, significantly less, it could all work. We had a tiny glimmer of hope that it would be possible: Since hard drives only cost ~$100/TB, if we could somehow use those drives without adding much overhead, that would be quite affordable.

“we wanted to build a sustainable business from day one and build a culture that believes dollars come from customers.”

Our first decision was to not compromise our product by restricting the amount of storage. Although this would have been a much easier solution, it violated our core mission: Create a simple and inexpensive solution to backup all of your important data.

We had previously also decided not to raise funding to get started because we wanted to build a sustainable business from day one and build a culture that believes dollars come from customers. With those decisions made, we moved on to finding the best solution to fulfill our mission and create a viable company.

Experimentation

All we wanted was to attach hard drives to the Internet. If we could do that inexpensively, our backup application could store the data there and we could offer our unlimited backup service.

A hard drive needs to be connected to a server to be available on the Internet. It certainly wouldn’t be very cost effective to have one server for every hard drive, as the server costs would dominate the equation. Alternatively, trying to attach a lot of drives to a server resulted in needing expensive “enterprise” servers. The goal then became cost-efficiently attaching as many hard drives as possible to one server. According to its spec, USB is supposed to allow for 127 devices to be daisy-chained to a single port. We tried; it didn’t work. We considered Firewire, which could connect 63 devices, but the connectors are aimed at graphic designers and ended up too expensive on a unit-basis. Our next thought was to use small consumer-grade DAS (Direct-attached storage) devices and connect those to a server. We managed to attach 8 DAS devices with 4 drives each for a total of 32 hard drives connected to one server.

DAS units attached to a server
This worked well, but it was operationally challenging as none of these devices were meant to fit in a data center rack. Further complicating matters was that moving one of these setups required cabling 10 power cords, and separately moving 9 boxes. Fine at small scale, but very hard to scale up.

We realized that we didn’t need all the boxes, we just needed backplanes to connect the drives from the DAS boxes to the motherboard from the server. We found a different DAS box that supports port multipliers and took that backplane. How did we decide on that DAS box? Tim, co-founder & Chief Cloud Officer, remembers going to Fry’s and picking the box that looked “about right”.

That all laid the path for our eventual 45 drive design. The next thought was: If we could put all that in one box, it might be the solution we were looking for. The first iteration of this was a plywood box.

the first wooden storage pod

That eventually evolved into a steel server and what we refer to as a Storage Pod.

steel storage pod chassis

Building a Storage Platform

The Storage Pod became our key building block, but was just a tiny component of the ‘storage platform’. We had to write software that would run on each Storage Pod, software that would create redundancy between the Storage Pods, and central software and systems that would coordinate other aspects of the system to accept/load balance/validate/clean-up data. We had to find and train contract manufacturers to build the Storage Pods, find and negotiate data center space and bandwidth, set up processes to buy drives and track their reliability, hire people to maintain the systems, and set up the business processes to do all of this and more at scale.

All of this ended up taking tremendous technical effort, management engagement, and work from all corners of Backblaze. But it has also paid enormous dividends.

The Transformation

We started Backblaze thinking of ourselves as a backup company. In reality, we became a storage company with ‘backup’ as the first service we offered on our storage platform. Our backup service relies on the storage platform as, without the storage platform, we couldn’t offer unlimited backup. To enable the backup service, storage became the foundation of our company and is still what we live and breathe every day.

It didn’t just change how we built the service, it changed the fundamental DNA of the company.

Dividends

Creating our own storage platform was certainly hard. But it enabled us to offer our unlimited backup for a low price and do that while running a sustainable business.

“It didn’t just change how we built the service, it changed the fundamental DNA of the company.”

We felt that we had a service and price point that customers wanted, and we “unlocked” the way to let us build it. Having our storage platform also provides us with a deep connection to our customers and the storage community – we share how we build Storage Pods and how reliable hard drives in our environment have been. That content, in turn, helps bring awareness to Backblaze; the awareness helps establish the company as a tech leader; that reputation helps us recruit to our growing team and earns customers who are evaluating our solutions vs Storage Company X.

And after years of being a storage company with a backup service, and being asked all the time to just offer our storage directly, we launched our Backblaze B2 Cloud Storage service. We offer this raw storage at a price of $0.005/GB/month – that’s less than 1/4th of the price of S3.

If we had built our backup service on one of the existing storage services or storage systems, it would have been easier – but none of this would have been possible. This challenge, which we have spent a decade working to overcome, has also transformed our company and became our greatest technological advantage.

The post Building a Competitive Moat: Turning Challenges Into Advantages appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Build a Visualization and Monitoring Dashboard for IoT Data with Amazon Kinesis Analytics and Amazon QuickSight

Post Syndicated from Karan Desai original https://aws.amazon.com/blogs/big-data/build-a-visualization-and-monitoring-dashboard-for-iot-data-with-amazon-kinesis-analytics-and-amazon-quicksight/

Customers across the world are increasingly building innovative Internet of Things (IoT) workloads on AWS. With AWS, they can handle the constant stream of data coming from millions of new, internet-connected devices. This data can be a valuable source of information if it can be processed, analyzed, and visualized quickly in a scalable, cost-efficient manner. Engineers and developers can monitor performance and troubleshoot issues, while sales and marketing can track usage patterns and statistics on which to base business decisions.

In this post, I demonstrate a sample solution to build a quick and easy monitoring and visualization dashboard for your IoT data using AWS serverless and managed services. There’s no need for purchasing any additional software or hardware. If you are already using AWS IoT, you can build this dashboard to tap into your existing device data. If you are new to AWS IoT, you can be up and running in minutes using sample data. Later, you can customize it to your needs, as your business grows to millions of devices and messages.

Architecture

The following is a high-level architecture diagram showing the serverless setup to configure.

 

AWS service overview

AWS IoT is a managed cloud platform that lets connected devices interact easily and securely with cloud applications and other devices. AWS IoT can process and route billions of messages to AWS endpoints and to other devices reliably and securely.

Amazon Kinesis Firehose is the easiest way to capture, transform, and load streaming data continuously into AWS from thousands of data sources, such as IoT devices. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.

Amazon Kinesis Analytics allows you to process streaming data coming from IoT devices in real time with standard SQL, without having to learn new programming languages or processing frameworks, providing actionable insights promptly.

The processed data is fed into Amazon QuickSight, which is a fast, cloud-powered business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from the data.

The most popular way for Internet-connected devices to send data is using MQTT messages. The AWS IoT gateway receives these messages from registered IoT devices. The solution in this post uses device data from AWS Simple Beer Service (SBS), a series of internet-connected kegerators sending sensor outputs such as temperature, humidity, and sound levels in a JSON payload. You can use any existing IoT data source that you may have.

The AWS IoT rules engine allows selecting data from message payloads, processing it, and sending it to other services. You forward the data to a Firehose delivery stream to consolidate the continuous data stream into batches for further processing. The batched data is also stored temporarily in an Amazon S3 bucket for later retrieval and can be set for deletion after a specified time using S3 Lifecycle Management rules.
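As a point of reference, the following is a minimal boto3 sketch of such a lifecycle rule. The bucket name and prefix are placeholders matching the delivery streams configured later in Step 2, and the seven-day retention period is an arbitrary assumption rather than anything prescribed by this solution.

import boto3

s3 = boto3.client('s3')

# Hypothetical example: expire the temporarily batched raw objects after 7 days.
s3.put_bucket_lifecycle_configuration(
    Bucket='<your unique name>-kinesis',          # placeholder bucket from Step 2
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'expire-raw-iot-batches',
            'Filter': {'Prefix': 'source/'},      # raw data prefix from Step 2
            'Status': 'Enabled',
            'Expiration': {'Days': 7},            # assumed retention period
        }]
    },
)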

The incoming data from the Firehose delivery stream is fed into an Analytics application that provides an easy way to process the data in real time using standard SQL queries. Analytics allows writing standard SQL queries to extract specific components from the incoming data stream and perform real-time ETL on it. In this post, you use this feature to aggregate minimum and maximum temperature values from the sensors per minute. You load it in Amazon QuickSight to create a monitoring dashboard and check if the devices are over-heating or cooling down during use. You also extract every device’s location, parameters such as temperature, sound levels, humidity, and the time stamp in Analytics to use on the visualization dashboard.

The processed data from the two queries is fed into two Firehose delivery streams, both of which batch the data into CSV files every minute and store it in S3. The batching time interval is configurable between 1 and 15 minutes in 1-second intervals.

Finally, you use Amazon QuickSight to ingest the processed CSV files from S3 as a data source to build visualizations. Amazon QuickSight’s super-fast, parallel, in-memory, calculation engine (SPICE) parses the ingested data and allows you to create a variety of visualizations with different graph types. You can also use the Amazon QuickSight built-in Story feature to combine visualizations into business dashboards that can be shared in a secure manner.

Implementation

AWS IoT, Amazon Kinesis, and Amazon QuickSight are all fully managed services, which means you can complete the entire setup in just a few steps using the AWS Management Console. Don’t worry about setting up any underlying hardware or installing any additional software. Let’s get started.

Step 1. Set up your AWS IoT data source

Do you currently use AWS IoT? If you have an existing IoT thing set up and running on AWS IoT, you can skip to Step 2.

If you have an AWS IoT button or other IoT devices that can publish MQTT messages and would like to use that for the setup, follow the Getting Started with AWS IoT topic to connect your thing to AWS IoT. Continue to Step 2.

If you do not have an existing IoT device, you can generate simulated device data using a script on your local machine and have it publish to AWS IoT. The following script lets you set up your AWS IoT environment and publish simulated data that mimics device data from Simple Beer Service.

Generate sample data

Running the sbs.py Python script generates fictitious AWS IoT messages from multiple SBS devices. The IoT rule that you create in Step 3 forwards these messages to Firehose for further processing.

The script requires AWS CLI credentials and a boto3 installation on the machine running it. Download and run the following Python script:

https://github.com/awslabs/sbs-iot-data-generator/blob/master/sbs.py

The script generates random data that looks like the following:

{"deviceParameter": "Temperature", "deviceValue": 33, "deviceId": "SBS01", "dateTime": "2017-02-03 11:29:37"}
{"deviceParameter": "Sound", "deviceValue": 140, "deviceId": "SBS03", "dateTime": "2017-02-03 11:29:38"}
{"deviceParameter": "Humidity", "deviceValue": 63, "deviceId": "SBS01", "dateTime": "2017-02-03 11:29:39"}
{"deviceParameter": "Flow", "deviceValue": 80, "deviceId": "SBS04", "dateTime": "2017-02-03 11:29:41"}

Run the script and keep it running for the duration of the project to generate sufficient data.

Tip: If you encounter any issues running the script from your local machine, launch an EC2 instance and run the script there as a root user. Remember to assign an appropriate IAM role to your instance at the time of launch that allows it to access AWS IoT.
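If you only want to see the general shape of such a generator, here is a minimal, hypothetical sketch (not the actual sbs.py) that publishes simulated readings to AWS IoT with boto3. The device IDs, value ranges, topic, and publish interval are assumptions chosen to match the sample output above.

import json
import random
import time
from datetime import datetime

import boto3

# Uses the default AWS IoT data endpoint for your configured region; you may
# need to pass endpoint_url explicitly for your account's IoT endpoint.
iot = boto3.client('iot-data')

DEVICES = ['SBS01', 'SBS02', 'SBS03', 'SBS04']
PARAMETERS = {'Temperature': (20, 40), 'Sound': (80, 150), 'Humidity': (40, 90), 'Flow': (50, 100)}

while True:
    device = random.choice(DEVICES)
    parameter, (low, high) = random.choice(list(PARAMETERS.items()))
    message = {
        'deviceParameter': parameter,
        'deviceValue': random.randint(low, high),
        'deviceId': device,
        'dateTime': datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
    }
    # Publish to a topic that matches the rule's topic filter (/sbs/devicedata/#)
    iot.publish(topic='/sbs/devicedata/' + device, qos=0, payload=json.dumps(message))
    time.sleep(1)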

Step 2. Create three Firehose delivery streams

For this post, you require three Firehose delivery streams: one to batch raw data from AWS IoT, and two to batch the device data and aggregated data output by Analytics.

  1. In the console, choose Firehose.
  2. Create all three Firehose delivery streams using the following field values.

Delivery stream 1:

  • Name: IoT-Source-Stream
  • S3 bucket: <your unique name>-kinesis
  • S3 prefix: source/

Delivery stream 2:

  • Name: IoT-Destination-Data-Stream
  • S3 bucket: <your unique name>-kinesis
  • S3 prefix: data/

Delivery stream 3:

  • Name: IoT-Destination-Aggregate-Stream
  • S3 bucket: <your unique name>-kinesis
  • S3 prefix: aggregate/
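If you prefer to script this step rather than use the console, the following boto3 sketch creates the same three delivery streams. The S3 bucket ARN, the Firehose delivery IAM role ARN, and the buffering hints are placeholders and assumptions, not values taken from this post.

import boto3

firehose = boto3.client('firehose')

BUCKET_ARN = 'arn:aws:s3:::<your unique name>-kinesis'                 # placeholder
ROLE_ARN = 'arn:aws:iam::<account-id>:role/<firehose-delivery-role>'   # placeholder

# Stream name -> S3 prefix, matching the field values above
STREAMS = {
    'IoT-Source-Stream': 'source/',
    'IoT-Destination-Data-Stream': 'data/',
    'IoT-Destination-Aggregate-Stream': 'aggregate/',
}

for name, prefix in STREAMS.items():
    firehose.create_delivery_stream(
        DeliveryStreamName=name,
        ExtendedS3DestinationConfiguration={
            'RoleARN': ROLE_ARN,
            'BucketARN': BUCKET_ARN,
            'Prefix': prefix,
            # Batch roughly every minute or every 5 MB, whichever comes first (assumed)
            'BufferingHints': {'IntervalInSeconds': 60, 'SizeInMBs': 5},
        },
    )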

Step 3. Set up AWS IoT to receive and forward incoming data

  1. In the console, choose IoT.
  2. Create a new AWS IoT rule with the following field values.
  • Name: IoT_to_Firehose
  • Attribute: *
  • Topic filter: /sbs/devicedata/#
  • Add action: Send messages to an Amazon Kinesis Firehose stream (select IoT-Source-Stream from the dropdown)
  • Select separator: “\n (newline)”
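For completeness, here is a hedged boto3 sketch of an equivalent rule created outside the console; the IAM role ARN is a placeholder, and the SQL version is an assumption.

import boto3

iot = boto3.client('iot')

# Hypothetical equivalent of the console rule above: select everything published
# under /sbs/devicedata/ and forward it to the IoT-Source-Stream delivery stream.
iot.create_topic_rule(
    ruleName='IoT_to_Firehose',
    topicRulePayload={
        'sql': "SELECT * FROM '/sbs/devicedata/#'",
        'awsIotSqlVersion': '2016-03-23',   # assumed SQL version
        'actions': [{
            'firehose': {
                'roleArn': 'arn:aws:iam::<account-id>:role/<iot-to-firehose-role>',  # placeholder
                'deliveryStreamName': 'IoT-Source-Stream',
                'separator': '\n',          # newline separator, as selected above
            }
        }],
    },
)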

A quick check before proceeding further: make sure that you have run the script to generate simulated IoT data or that your IoT Thing is running and delivering data. If not, set it up now. The Amazon Kinesis Analytics application you set up in the next step needs the data to process it further.

Step 4: Create an Analytics application to process data

  1. In the console, choose Kinesis.
  2. Create a new application.
  3. Enter a name of your choice, for example, SBS-IoT-Data.
  4. For the source, choose IoT-Source-Stream.

Analytics auto-discovers the schema of the data by sampling records from the input stream. It also includes a built-in SQL editor that allows you to write standard SQL queries to transform the incoming data.

Tip: If Analytics is unable to discover your incoming data, it may be missing the appropriate IAM permissions. In the IAM console, select the role that you assigned to your IoT rule in Step 3. Make sure that it lists the ARN of the IoT-Source-Stream Firehose delivery stream in the firehose:PutRecord section.

Here is a sample SQL query that generates two output streams:

  • DESTINATION_SQL_BASIC_STREAM contains the device ID, device parameter, its value, and the time stamp from the incoming stream.
  • DESTINATION_SQL_AGGREGATE_STREAM aggregates the maximum and minimum values of temperatures from the sensors over a one-minute period from the incoming data.
-- Create an output stream with four columns, which is used to send IoT data to the destination
CREATE OR REPLACE STREAM "DESTINATION_SQL_BASIC_STREAM" (dateTime TIMESTAMP, deviceId VARCHAR(8), deviceParameter VARCHAR(16), deviceValue INTEGER);

-- Create a pump that continuously selects from the source stream and inserts it into the output data stream
CREATE OR REPLACE PUMP "STREAM_PUMP_1" AS INSERT INTO "DESTINATION_SQL_BASIC_STREAM"

-- Filter specific columns from the source stream
SELECT STREAM "dateTime", "deviceId", "deviceParameter", "deviceValue" FROM "SOURCE_SQL_STREAM_001";

-- Create a second output stream with three columns, which is used to send aggregated min/max data to the destination
CREATE OR REPLACE STREAM "DESTINATION_SQL_AGGREGATE_STREAM" (dateTime TIMESTAMP, highestTemp SMALLINT, lowestTemp SMALLINT);

-- Create a pump that continuously selects from a source stream 
CREATE OR REPLACE PUMP "STREAM_PUMP_2" AS INSERT INTO "DESTINATION_SQL_AGGREGATE_STREAM"

-- Extract time in minutes, plus the highest and lowest value of device temperature in that minute, into the destination aggregate stream, aggregated per minute
SELECT STREAM FLOOR("SOURCE_SQL_STREAM_001".ROWTIME TO MINUTE) AS "dateTime", MAX("deviceValue") AS "highestTemp", MIN("deviceValue") AS "lowestTemp" FROM "SOURCE_SQL_STREAM_001" WHERE "deviceParameter"='Temperature' GROUP BY FLOOR("SOURCE_SQL_STREAM_001".ROWTIME TO MINUTE);

Real-time analytics shows the results of the SQL query. If everything is working correctly, you see three streams listed, similar to the following screenshot.

Step 5: Connect the Analytics application to output Firehose delivery streams

You now connect the two in-application output streams from Step 4 to the two destination Firehose delivery streams that you created in Step 2. A single Analytics application can have multiple destinations defined; however, this needs to be set up using the AWS CLI, not from the console. If you do not already have it, install the AWS CLI on your local machine and configure it with your credentials.

Tip: If you are running the IoT script from an EC2 instance, it comes pre-installed with the AWS CLI.

Create the first destination delivery stream 

The AWS CLI command to create a new output Firehose delivery stream is as follows:

aws kinesisanalytics add-application-output --application-name <Name of Analytics Application> --current-application-version-id <number> --application-output 'Name=DESTINATION_SQL_BASIC_STREAM,KinesisFirehoseOutput={ResourceARN=<ARN of IoT-Destination-Data-Stream>,RoleARN=<ARN of the Analytics application role>,DestinationSchema={RecordFormatType=CSV}}'

Do not copy this into the CLI just yet! Before entering this command, make the following four changes to personalize it:

  • For Name of Analytics Application, enter the value from Step 4, or from the Analytics console.
  • For current-application-version-ID, run the following command:
aws kinesisanalytics describe-application --application-name <application name from above> | grep ApplicationVersionId
  • For ResourceARN, run the following command:
aws firehose describe-delivery-stream --delivery-stream-name IoT-Destination-Data-Stream | grep DeliveryStreamARN
  • For RoleARN, run the following command:
aws kinesisanalytics describe-application --application-name <application name from above> | grep RoleARN

Now, paste the complete command in the AWS CLI and press Enter. If there are any errors, the response provides details. If everything goes well, a new destination delivery stream is created to send the first query (DESTINATION_SQL_BASIC_STREAM) to IoT-Destination-Data-Stream.

Create the second destination delivery stream

Following similar steps as above, create a second destination Firehose delivery stream with the following changes:

  • For Name of Analytics Application, enter the same name as the first delivery stream.
  • For current-application-version-ID, increment by 1 from the previous value (unless you made other changes in between these steps). If unsure, run the same command as above to get it again.
  • For ResourceARN, get the value by running the following CLI command:
aws firehose describe-delivery-stream --delivery-stream-name IoT-Destination-Aggregate-Stream | grep DeliveryStreamARN
  • For RoleARN, enter the same value as for the first stream.

Run the aws kinesisanalytics CLI command, similar to the previous step but with the new parameters substituted. This creates the second output Firehose destination delivery stream.

Update the IAM role for Analytics to allow writing to both output streams.

  1. In the console, choose IAM, Roles.
  2. Select the role that you created with Analytics in Step 4.
  3. Choose Policy, JSON, and Edit.
  4. Find “Sid”: “WriteOutputFirehose” in the JSON document, go to the “Resource” section and make sure that it includes Resource ARNs of both streams that you found in the previous step.
  5. If it has only one ARN, add the second ARN and choose Save.

This completes the Amazon Kinesis setup. The incoming IoT data is processed by Analytics and delivered, using two output delivery streams, to two separate folders in your S3 bucket.

Step 6: Set up Amazon QuickSight to analyze the data

To build the visualization dashboard, ingest the processed CSV files from the S3 bucket into Amazon QuickSight.

  1. In the console, choose QuickSight.
  2. If this is your first time using Amazon QuickSight, you are asked to create a new account. Follow the prompts.
  3. When you are logged in to your account, choose New Analysis and enter a name of your choice.
  4. Choose New data set for the analysis or, if you have previously imported your data set, select one from the available data sets.
  5. You import two data sets: one with general device parameters information, and the other with aggregates of maximum and minimum temperatures for monitoring. For the first data set, choose S3 from the list of available data sources and enter a name, for example, IoT Device Data.
  6. The location of the S3 bucket and the objects to use are provided to Amazon QuickSight as a manifest file. Create a new manifest file following the supported formats for Amazon S3 manifest files.
  7. In the URIPrefixes section, provide your appropriate S3 bucket and folder location for the general device data. Hint: it should include <your unique name>-kinesis/data/.

Your manifest file should look similar to the following:

{
    "fileLocations": [
        {"URIPrefixes": ["https://s3.amazonaws.com/<YOUR_BUCKET_NAME>/data/<YEAR>/<MONTH>/<DATE>/<HOUR>/"]}
    ],
    "globalUploadSettings": {
        "format": "CSV",
        "delimiter": ","
    }
}

Amazon QuickSight imports and parses the data set, and provides available data fields that can be used for making graphs. The Edit/Preview data button allows you to format and transform the data, change data types, and filter or join your data. Make sure that the columns have the correct titles. If not, you can edit them and then save.

Tip: choose the downward arrow on the top right and unselect Files include headers to give each column appropriate headers. Choose Save. This takes you back to the data sets page.

Follow the same steps as above to import the second data set. This time, your manifest should include your aggregate data set folder on S3, which is named <your unique name>-kinesis/aggregate/. Update headers if necessary and choose Save & visualize.

Build an analysis

The visualization screen shows the data set that you last imported, which in this case is the aggregate data. To include the general device data as well, for Fields on the top left, choose Edit analysis data sets. Choose Add data set and select the other data set that you saved earlier.

Now both data sets are available on the analysis screen. For Visual Types at bottom left, select the type of graph to make. For Fields, select the fields to visualize. For example, drag Device ID, Device Parameter, and Value to Field wells, as shown in the screenshot below, to generate a visualization of average parameter values compared across devices.

You can create another visual by choosing +Add. This time, select a line graph to monitor the maximum temperature values of the sensors in any given minute, using the aggregate data set.

If you would like to create an interactive story to present to your team or organization, you can choose the Story option on the left panel. Create a dashboard with multiple visualizations, to save and share securely with the intended audience. An example of a story is shown below.

Conclusion

Data is valuable only when it can actually be put to use. In this post, you’ve seen how it’s possible to quickly build a simple Analytics application to ingest, process, and visualize IoT data in near real time entirely using AWS managed services. This solution is scalable and reliable, and costs a fraction of what other business intelligence solutions do. It is easy enough that anyone with an AWS account can build and use it without any special training.

If you have any questions or suggestions, please comment below.


About the Author

Karan Desai is a Solutions Architect with Amazon Web Services. He works with startups and small businesses in the US, helping them adopt cloud technology to build scalable and secure solutions using AWS. In his spare time, he likes to build personal IoT projects, travel to offbeat places and write about it.

 

 



Hiring a Content Director

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/hiring-content-director/


Backblaze is looking to hire a full time Content Director. This role is an essential piece of our team, reporting directly to our VP of Marketing. As the hiring manager, I’d like to tell you a little bit more about the role, how I’m thinking about the collaboration, and why I believe this to be a great opportunity.

A Little About Backblaze and the Role

Since 2007, Backblaze has earned a strong reputation as a leader in data storage. Our products are astonishingly easy to use and affordable to purchase. We have engaged customers and an involved community that helps drive our brand. Our audience numbers in the millions and our primary interaction point is the Backblaze blog. We publish content for engineers (data infrastructure, topics in the data storage world), consumers (how to’s, merits of backing up), and entrepreneurs (business insights). In all categories, our Content Director drives our earned position as a leader.

Backblaze has a culture focused on being fair and good (to each other and our customers). We have created a sustainable business that is profitable and growing. Our team places a premium on open communication, being cleverly unconventional, and helping each other out. The Content Director, specifically, balances our needs as a commercial enterprise (at the end of the day, we want to sell our products) with the custodianship of our blog (and the trust of our audience).

There’s a lot of ground to be covered at Backblaze. We have three discrete business lines:

  • Computer Backup -> a 10-year-old business focusing on backing up consumer computers.
  • B2 Cloud Storage -> Competing with Amazon, Google, and Microsoft… just at ¼ of the price (but with the same performance characteristics).
  • Business Backup -> Both Computer Backup and B2 Cloud Storage, but focused on SMBs and enterprise.

The Best Candidate Is…

An excellent writer – possessing a solid academic understanding of writing, the creative process, and delivering against deadlines. You know how to write with multiple voices for multiple audiences. We do not expect our Content Director to be a storage infrastructure expert; we do expect a facility with researching topics, accessing our engineering and infrastructure team for guidance, and generally translating the technical into something easy to understand. The best Content Director must be an active participant in the business, strategy, and editorial debates and then must execute with ruthless precision.

Our Content Director’s “day job” is making sure the blog is running smoothly and the sales team has compelling collateral (emails, case studies, white papers).

Specifically, the Perfect Content Director Excels at:

  • Creating well researched, elegantly constructed content on deadline. For example, each week, 2 articles should be published on our blog. Blog posts should rotate to address the constituencies for our 3 business lines – not all blog posts will appeal to everyone, but over the course of a month, we want multiple compelling pieces for each segment of our audience. Similarly, case studies (and outbound emails) should be tailored to our sales team’s proposed campaigns / audiences. The Content Director creates ~75% of all content but is responsible for editing 100%.
  • Understanding organic methods for weaving business needs into compelling content. The majority of our content (but not EVERY piece) must tie to some business strategy. We hate fluff and hold our promotional content to a standard of being worth someone’s time to read. To be effective, the Content Director must understand the target customer segments and use cases for our products.
  • Straddling both Consumer & SaaS mechanics. A key part of the job will be working to augment the collateral used by our sales team for both B2 Cloud Storage and Business Backup. This content should be compelling and optimized for converting leads. And our foundational business line, Computer Backup, deserves to be nurtured and grown.
  • Product marketing. The Content Director “owns” the blog, but also assists in writing case studies and white papers and creating collateral (email, trade show). Each of these has a variety of calls to action and audiences. Direct experience is a plus; experience that will plausibly translate to these areas is a requirement.
  • Articulating views on storage, backup, and cloud infrastructure. Not everyone has experience with this. That’s fine, but if you do, it’s strongly beneficial.

A Thursday In The Life:

  • Coordinate Collaborators – We are a deliverables-driven culture, not a meeting-driven one. We expect you to collaborate with internal blog authors and the occasional guest poster.
  • Collaborate with Design – Ensure imagery for upcoming posts / collateral are on track.
  • Augment Sales team – Lock content for next week’s outbound campaign.
  • Self-directed blog agenda – Feedback for next Tuesday’s post is addressed; next Thursday’s post is circulated to the marketing team for feedback & SEO polish.
  • Review the editorial calendar and make any changes.

Oh! And We Have Great Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive Option grants
  • Unlimited vacation days
  • Strong coffee & fully stocked Micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • Childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office – located near Caltrain and Highways 101 & 280.

Interested in Joining Our Team?

Send us an email to [email protected] with the subject “Content Director”. Please include your resume and 3 brief abstracts for content pieces.
Some hints for each of your three abstracts:

  • Create a compelling headline
  • Write clearly and concisely
  • Be brief – each abstract should be 100 words or less
  • Target each abstract to a different specific audience that is relevant to our business lines

Thank you for taking the time to read and consider all this. I hope it sounds like a great opportunity for you or someone you know. Principals only need apply.

The post Hiring a Content Director appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

City of Abbotsford Enters WordPress’ DMCA “Hall of Shame”

Post Syndicated from Ernesto original https://torrentfreak.com/city-of-abbotsford-enters-wordpress-dmca-hall-of-shame-170506/

As one of the leading blog platforms, WordPress.com receives thousands of DMCA takedown requests every year, but nearly half of these are rejected.

Parent company Automattic is known to inspect all notices carefully, and has a track record of defending its users against DMCA abuse. In addition, it occasionally highlights the worst offenders in its own “Hall of Shame.”

This week the company added a new entry for the first time in several months. The dubious honor goes to the City of Abbotsford, Canada, which tried to clean up its ‘image’ with a recent DMCA notice.

The “infringement” Abbotsford reported concerns an article written by a homeless blogger, who highlighted that city officials deliberately spread chicken manure on a camp for homeless people.

To illustrate this unfortunate event with a fitting image, the blogger posted a parody logo of the city, replacing the pine tree with a turd.

Abbotsford’s parody logo

Pretty innocent, one would think, but apparently the city of Abbotsford thought otherwise. Through a marketing company, Abbotsford city council sent a DMCA notice to Automattic, asking it to remove the offending image.

However, since there is a clear fair use case here, the company behind the WordPress blogging platform was not impressed.

“Pardon the pun. It was glaringly obvious that the addition of the hilariously large feces was for the purposes of parody, and tied directly to the criticisms laid out in the post,” Automattic writes.

“As a result, it seems hard to believe that the city council took fair use considerations into account before firing off their ill-advised notice, and trying to wipe up this mess,” the company adds.

Instead of taking the image offline, Automattic referred the takedown notice to the blogger in question. He decided to keep it online as well, adding a massive “parody” watermark just to avoid any further confusion.


So, instead of wiping the “crappy” logo from the Internet, the marketing firm actually managed to magnify the issue, entering WordPress’ DMCA Hall of Shame. Since the original article is nearly four years old, they would have been better off ignoring it, but some people have to learn that the hard way.

In its closing comments, Automattic stresses that their use of the ‘shitty’ logo also falls under fair use protection, urging the city council to refrain from sending them any additional takedown requests.

“Our use of the Abbotsford city logo in this post is also for the purposes of commentary or criticism, and therefore falls under fair use protections. If anybody on the council happens to be reading, please don’t send us another DMCA takedown.”

At TorrentFreak we would like to repeat Automattic’s argument, also adding a fair use exception for the purpose of news reporting.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS Hot Startups – April 2017

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-hot-startups-april-2017/

Spring is here, the flowers are blooming and Tina Barr is back with more great startups for you to check out!

-Ana


Welcome back to another month of hot AWS-powered startups! Today we have three exciting startups:

  • Beekeeper – simplifying employee communication in the workplace.
  • Betterment – making investing easier for everyone.
  • ClearSlide – a leading sales engagement platform.

Be sure to check out our March hot startups in case you missed them.

Beekeeper (Zurich, Switzerland)
Flavio Pfaffhauser and Christian Grossmann, both graduates of ETH Zurich, were passionate about building a technology that would connect and bring people together. What started as a student’s social community soon turned into Beekeeper – a communication platform for the workplace that allows employees to interact wherever they are. As Flavio and Christian learned how to build a social platform that engaged people properly, businesses began requesting a platform that could be adapted to their specific processes and needs. The platform started with the concept of helping people feel as if they are sitting right next to each other, whether they’re at a desk or in the field. Founded in 2012, Beekeeper is focused on improving information sharing, communication and peer collaboration, and the company strongly believes that listening to employees is crucial for organizations.

The “Mobile First, Desktop Friendly” platform has a simple and intuitive interface that easily integrates multiple operating systems into one ecosystem. The interface can be styled and customized to match a company’s brand and identity. Employees can connect with their colleagues anytime and anywhere with private and group chats, video and file sharing, and feedback surveys. With Beekeeper’s analytical dashboard leadership teams can identify trending topics of discussion and track employee engagement and app usage in real-time. Beekeeper is currently connecting users in 137 countries across industries including hospitality, construction, transportation, and more.

Beekeeper likes using AWS because it allows their engineers to focus on the things that really matter; solving customer issues. The company builds its infrastructure using services like Amazon EC2, Amazon S3, and Amazon RDS, all of which allow the technical teams to offload administrative tasks. Amazon Elastic Transcoder and Amazon QuickSight are used to build analytical dashboards and Amazon Redshift for data warehousing.

Check out the Beekeeper blog to keep up with their latest news!

Betterment (New York, NY)
Betterment is on a mission to make investing easier and more accessible for everyone, no matter their financial goal. In 2008, Jon Stein founded Betterment with the intent to reinvent the industry and save future investors from making the same common mistakes he had been making. At that time, most people only had a couple of options when it came to investing their money – either do it yourself or hire another person to do it for you. Unfortunately, financial advisors are sometimes paid to recommend certain investments even if it’s not what is best for their clients. Betterment only chooses investments that are in their customer’s best interest and align with their financial goals. Today, they are the largest, independent online investment advisor managing more than $8 billion in assets for over 240,000 customers.

Betterment uses technology to make investing easier and more efficient, while also helping to increase after-tax returns. They offer a wide range of financial planning services that are personalized to their customer’s life goals. To start an investment plan, customers can input their age, retirement status, and annual income and Betterment will recommend how much money to invest and which type of account is the right choice. They will invest and manage it in a way that many traditional investment services can’t at a lower cost.

The engineers at Betterment are constantly working to build industry-changing technology as quickly as possible to help customers maximize their money. AWS gives Betterment the flexibility to easily provision infrastructure and offload functions to various services that once required entire teams to manage. When they first started in the cloud, Betterment was using standard implementations of Amazon EC2, Amazon RDS, and Amazon S3. Since they’ve gone all in with AWS, they have been leveraging services like Amazon Redshift, AWS Lambda, AWS Database Migration Service, Amazon Kinesis, Amazon DynamoDB, and more. Today, they are using over 20 AWS services to develop, test, and deploy features and enhancements on a daily basis.

Learn more about Betterment here.

ClearSlide (San Francisco, CA)
ClearSlide is one of today’s leading sales engagement platforms, offering a complete and integrated tool that makes every customer interaction successful. Since their founding in 2009, ClearSlide has looked for ways to improve customer experiences and has developed numerous enablement tools for sales leaders and teams, marketing, customer support teams, and more. The platform puts content, communication channels, and insights at their customer’s fingertips to help drive better decisions and manage opportunities. ClearSlide serves thousands of companies including Comcast, the Sacramento Kings, The Economist, and so far their customers have generated over 750 million minutes of engagement!

ClearSlide offers a solution for all parts of the sales process. For sales leaders, ClearSlide provides engagement dashboards to improve deal visibility, coaching, and sales forecast accuracy. For marketing and sales enablement teams, they guide sellers to the right content, at the right time, in the right context, and provide insight to maximize content ROI. For sales reps, ClearSlide integrates communications, content, and analytics in a single platform experience. Communications can be made across email, in-person or online meetings, web, or social. Today, ClearSlide customers report a 10-20% increase in closed deals, 25% decrease in onboarding time for new reps, and a 50-80% reduction in selling costs.

ClearSlide uses a range of AWS services, but Amazon EC2 and Amazon RDS have made the biggest impact on their business. EC2 enables them to easily scale compute capacity, which is critical for a fast-growing startup. It also provides consistency during deployment – from development and integration to staging and production. RDS reduces overhead and allows ClearSlide to scale their database infrastructure. Since AWS takes care of time-consuming database management tasks, ClearSlide sees a reduction in operations costs and can focus on being more strategic with their customers.

Watch this video to learn how LiveIntent reduced sales cycles by 22% using ClearSlide. Get all the latest updates by following them on Twitter!

Thanks for checking out another month of awesome AWS-powered startups!

-Tina

 

Five Million Brits Go Crazy For Illegal Streaming

Post Syndicated from Andy original https://torrentfreak.com/five-million-brits-go-crazy-for-illegal-streaming-170425/

With Internet access almost universal across the UK and a young, tech-savvy audience fully clued-up on the wonders of on-demand content, little wonder that new ways to consume content are being gobbled up across the country.

Unfortunately for content providers, those methods aren’t always official. In fact, all signs point to an uptick in illegal content consumption from a range of relatively new devices which excel when it comes to ease of access.

We’re talking about software such as Kodi, the legal media player that can be super-charged to provide a full-on piracy experience. Or custom Android apps like Showbox, Terrarium TV and Mobdro, each capable of delivering premium movies, TV shows and sport, without the user ever paying a penny.

The popularity of illegal streaming has prompted market research firm YouGov to conduct a survey on the consumption habits of Brits, with the company concluding that such services pose a “major threat” to TV subscription brands in the UK.

According to its “Illegal Streaming” survey, just short of five million people in the UK are using pirate TV streaming services. YouGov says that around 10% of the adult population (4.9m people) currently have access to custom Kodi boxes, modified Fire TV sticks, and illegal streaming apps running on smartphones and tablets.

The part that will send shudders down the spines of companies like Sky, Virgin and BT – who are not only ISPs but major content providers too – is that pirate consumers are backing away from premium services.

YouGov says that one out of every seven ‘pirate’ streamers say that their use of unauthorized platforms has caused them to cancel “at least one subscription service.” The market researchers note that the actual number of subscriptions is likely to be lower due to the discrepancy between households and individuals but still, the numbers remain significant.

For now at least, there is a significant number of people who maintain both legal subscriptions and parallel pirate setups. However, YouGov reports that almost a third of pirates (31%) say they will cancel official subscriptions during the next 12 months.

There can be little doubt that the rise of so-called “living room” streaming can be attributed to word-of-mouth marketing, with people finding excitement in the range of content on offer for a negligible outlay. With that in mind, YouGov reports that six out of ten pirates say they intend to promote the availability of unauthorized services to both friends and family.

If the growth that promotion fuels meets YouGov’s expectations, the numbers of people getting into the piracy scene could be significant. The market research company reports that 2.6 million non-pirates say they’re preparing to start using illegal streaming platforms in the future, with 400,000 predicting involvement within the next 90 days.

And here’s the problem. YouGov reports that almost 90% of these non-pirates currently have a subscription to an official TV service but should they get involved, almost half are predicting that they will cancel them within 12 months of obtaining a pirate device.

Over in the United States, it was recently reported that more than half of all millennials regularly use pirate streaming services to watch TV shows or movies. The numbers in the UK aren’t so high, but they remain significant nonetheless. According to YouGov, 18 to 34-year-olds account for 37% of pirate users, around 1.8 million people.

The survey also touched on people’s attitudes towards unauthorized streaming, and it comes as no surprise that many feel the high prices of genuine products justifies the use of illicit alternatives.

Bundled packaging means that people in the UK are often required to spend large amounts of money on channels they don’t even watch, so when cheaper options are made available, few have sympathy for what are perceived to be rich content providers.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Game Pirates Celebrate Fall of Denuvo’s Brand New Protection

Post Syndicated from Andy original https://torrentfreak.com/game-pirates-celebrate-fall-of-denuvos-brand-new-protection-170414/

When file-sharing was first getting off the ground, groups like the RIAA and MPAA were public enemy number one. They’re not exactly popular now but neither receive the hatred liberally poured on Denuvo.

The brainchild of Austria-based Denuvo Software Solutions GmbH, Denuvo is an anti-tamper technology designed to protect underlying DRM products. It’s been successfully deployed on gaming titles but just recently its iron skin has been showing cracks.

After all previous versions were defeated, in January version three of Denuvo fell to pirates with the release of Resident Evil 7: Biohazard just five days after its street date. It was a landmark moment for a scene that had grown accustomed to Denuvo-protected games trickling down into the piracy scene months after their retail debut.

But while celebrations got underway, it seemed unlikely that Denuvo would simply sit back and take a beating. Indeed, within days of the crack, Denuvo marketing director Thomas Goebl told Eurogamer that improvements to Denuvo were underway.

“As always, we continue working to improve our solution to create security updates for upcoming Anti-Tamper versions. We will do the same with the learning from this bypass,” Goebl said.

With all eyes primed for a release of a game using the new technology (the cracking scene has labeled it Denuvo v4), earlier this month Mass Effect Andromeda was cracked by CPY, the group behind most of Denuvo’s recent pain. Despite some early claims, the title was actually protected by v3, so the big test was yet to arrive.

Yesterday it did so, in some style.

With its usual fanfare, cracking group CPY announced that it had defeated Denuvo v4 protection on 2Dark, a lesser-known stealth adventure game from the creator of Alone in the Dark.

As seen from the dates in the release notes above, the crack took a little over a month following 2Dark’s street date. Denuvo are still likely to claim that as a victory, since the first few weeks of sales were allowed to go ahead piracy-free. However, it’s worth keeping in mind that this is the new version of Denuvo which was supposed to put the anti-tamper company back out in front.

With celebrations now at fever pitch in game piracy land, there’s an interesting angle to the cracking of 2Dark. First of all, it’s apparent that the majority of people are more excited about Denuvo v4 being cracked than they are at the prospect of playing the game. However, the cracking of 2Dark is being seen as particularly sweet for other reasons.

About a month ago, a poster to Reddit’s /r/crackwatch highlighted that the developers of 2Dark had made some promises they later failed to keep.

It appears that during a 2014 crowdfunding campaign (French) for 2Dark, developer Gloomywood was asked whether there would be any DRM added to the game. For many game players this would be a deal-breaker, especially if they were the ones financing the game. Contributors were assured that it would not.

On the game’s Steam page, the truth later emerged with a note confirming that the title would incorporate “3rd-party DRM: Denuvo Antitamper.” According to a subsequent interview with Techraptor, that was a result of Gloomywood having to team up with publisher Bigben Interactive who insisted on the protection.

Now all eyes are turning to potential forthcoming releases from CPY, each protected by Denuvo v4. Will Nier Automata, Dead Rising 4, and Bulletstorm: Full Clip Edition fall as well? It probably won’t be long before we find out.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

President Trump Signs Internet Privacy Repeal Into Law

Post Syndicated from Andy original https://torrentfreak.com/president-trump-signs-internet-privacy-repeal-into-law-170404/

In a major setback to those who value their online privacy in the United States, last week the House of Representatives voted to grant Internet service providers permission to sell subscribers’ browsing histories to third parties.

The bill repeals broadband privacy rules adopted last year by the Federal Communications Commission, which required ISPs to obtain subscribers’ consent before using their browsing records for advertising or marketing purposes.

Soon after, the Trump Administration officially announced its support for the bill, noting that the President’s advisors would advise him to sign it, should it be presented. Yesterday, that’s exactly what happened.

To howls of disapproval from Internet users and privacy advocates alike, President Trump signed into law a resolution that seriously undermines the privacy of all citizens using ISPs to get online in the US. The bill removes protections that were approved by the FCC in the final days of the Obama administration but had not yet gone into effect.

The dawning reality is that telecoms giants including Comcast, AT&T, and Verizon, are now free to collect and leverage the browsing histories of subscribers – no matter how sensitive – in order to better target them with advertising and other marketing.

The White House says that the changes will simply create an “equal playing field” between ISPs and Internet platforms such as Google and Facebook, who are already able to collect data for advertising purposes.

The repeal has drawn criticism from all sides, with Mozilla’s Executive Director Mark Surman openly urging the public to fight back.

“The repeal should be a call to action. And not just to badger our lawmakers,” Surman said.

“It should be an impetus to take online privacy into our own hands.”

With the bill now signed into law, that’s the only real solution if people want to claw back their privacy. Surman has a few suggestions, including the use of Tor and encrypted messaging apps like Signal. But like so many others recently, he leads with the use of VPN technology.

As reported last week, Google searches for the term VPN reached unprecedented levels when the public realized that their data would soon be up for grabs.

That trend continued through the weekend, with many major VPN providers reporting increased interest in their products.

Only time will tell if interest from the mainstream will continue at similar levels. However, in broad terms, the recent public outcry over privacy is only likely to accelerate the uptake of security products and the use of encryption as a whole. It could even prove to be the wake-up call the Internet needed.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Blizzard Beats “Cheat” Maker, Wins $8.5 Million Copyright Damages

Post Syndicated from Ernesto original https://torrentfreak.com/blizzard-beats-cheat-maker-wins-85-million-copyright-damages-170403/

While most gamers do their best to win fair and square, there are always those who try to cheat themselves to victory.

With the growth of the gaming industry, the market for “cheats,” “hacks” and bots has also grown spectacularly. The German company Bossland is one of the frontrunners in this area.

Bossland created cheats and bots for several Blizzard games including World of Warcraft, Diablo 3, Heroes of the Storm, Hearthstone, and Overwatch, handing its users an unfair advantage over the competition. Blizzard is not happy with these and the two companies have been battling in court for quite some time, both in the US and Germany.

Last week a prominent US case came to a conclusion in the California District Court. Because Bossland decided not to represent itself, it was a relatively easy win for Blizzard, which was awarded several million dollars in copyright damages.

The court agreed that hacks developed by Bossland effectively bypassed Blizzard’s cheat protection technology “Warden,” violating the DMCA. By reverse engineering the games and allowing users to play modified versions, Bossland infringed Blizzard’s copyrights and allowed its users to do the same.

“Bossland materially contributes to infringement by creating the Bossland Hacks, making the Bossland Hacks available to the public, instructing users how to install and operate the Bossland Hacks, and enabling users to use the software to create derivative works,” the court’s order reads (pdf).

The WoW Honorbuddy

The infringing actions are damaging to the game maker as they render its anti-cheat protection ineffective. The cheaters, subsequently, ruin the gaming experience for other players who may lose interest, causing additional damage.

“Blizzard has established a showing of resulting damage or harm because Blizzard expends a substantial amount of money combating the use of the Bossland Hacks to ensure fair game play,” the court writes.

“Additionally, players of the Blizzard Games lodge complaints against cheating players, which has caused users to grow dissatisfied with the Blizzard Games and cease playing. Accordingly, the in-game cheating also harms Blizzard’s goodwill and reputation.”

As a result, the court grants the statutory copyright damages Blizzard requested for 42,818 violations within the United States, totaling $8,563,600. In addition, the game developer is entitled to $174,872 in attorneys’ fees.

To prevent further damage, Bossland is also prohibited from marketing or selling its cheats in the United States. This applies to hacks including “Honorbuddy,” “Demonbuddy,” “Stormbuddy,” “Hearthbuddy,” and “Watchover Tyrant,” as well as any other software designed to exploit Blizzard games.

While it’s a hefty judgment, the order doesn’t really come as a surprise given that the German cheat maker failed to defend itself.

Bossland CEO Zwetan Letschew previously informed TorrentFreak that his company would continue the legal battle after the issue of a default judgment. Whatever the outcome, the cheats will remain widely available outside of the US for now.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS Hot Startups – March 2017

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-hot-startups-march-2017/

As the madness of March rounds up, take a break from all the basketball and check out the cool startups Tina Barr brings you for this month!

-Ana


The arrival of spring brings five new startups this month:

  • Amino Apps – providing social networks for hundreds of thousands of communities.
  • Appboy – empowering brands to strengthen customer relationships.
  • Arterys – revolutionizing the medical imaging industry.
  • Protenus – protecting patient data for healthcare organizations.
  • Syapse – improving targeted cancer care with shared data from across the country.

In case you missed them, check out February’s hot startups here.

Amino Apps (New York, NY)
Amino Apps was founded on the belief that interest-based communities were underdeveloped and outdated, particularly when it came to mobile. CEO Ben Anderson and CTO Yin Wang created the app to give users access to hundreds of thousands of communities, each of them a complete social network dedicated to a single topic. Some of the largest communities have over 1 million members and are built around topics like popular TV shows, video games, sports, and an endless number of hobbies and other interests. Amino hosts communities from around the world and is currently available in six languages with many more on the way.

Navigating the Amino app is easy. Simply download the app (iOS or Android), sign up with a valid email address, choose a profile picture, and start exploring. Users can search for communities and join any that fit their interests. Each community has chatrooms, multimedia content, quizzes, and a seamless commenting system. If a community doesn’t exist yet, users can create it in minutes using the Amino Creator and Manager app (ACM). The largest user-generated communities are turned into their own apps, which gives communities their own piece of real estate on members’ phones, as well as in app stores.

Amino’s vast global network of hundreds of thousands of communities is run on AWS services. Every day users generate, share, and engage with an enormous amount of content across hundreds of mobile applications. By leveraging AWS services including Amazon EC2, Amazon RDS, Amazon S3, Amazon SQS, and Amazon CloudFront, Amino can continue to provide new features to their users while scaling their service capacity to keep up with user growth.

Interested in joining Amino? Check out their jobs page here.

Appboy (New York, NY)
In 2011, Bill Magnuson, Jon Hyman, and Mark Ghermezian saw a unique opportunity to strengthen and humanize relationships between brands and their customers through technology. The trio created Appboy to empower brands to build long-term relationships with their customers and today they are the leading lifecycle engagement platform for marketing, growth, and engagement teams. The team recognized that as rapid mobile growth became undeniable, many brands were becoming frustrated with the lack of compelling and seamless cross-channel experiences offered by existing marketing clouds. Many of today’s top mobile apps and enterprise companies trust Appboy to take their marketing to the next level. Appboy manages user profiles for nearly 700 million monthly active users, and is used to power more than 10 billion personalized messages monthly across a multitude of channels and devices.

Appboy creates a holistic user profile that offers a single view of each customer. That user profile in turn powers contextual cross-channel messaging, lifecycle engagement automation, and robust campaign insights and optimization opportunities. Appboy offers solutions that allow brands to create push notifications, targeted emails, in-app and in-browser messages, news feed cards, and webhooks to enhance the user experience and increase customer engagement. The company prides itself on its interoperability, connecting to a variety of complementary marketing tools and technologies so brands can build the perfect stack to enable their strategies and experiments in real time.

AWS makes it easy for Appboy to dynamically size all of their service components and automatically scale up and down as needed. They use an array of services including Elastic Load Balancing, AWS Lambda, Amazon CloudWatch, Auto Scaling groups, and Amazon S3 to help scale capacity and better deal with unpredictable customer loads.

To keep up with the latest marketing trends and tactics, visit the Appboy digital magazine, Relate. Appboy was also recently featured in the #StartupsOnAir video series where they gave insight into their AWS usage.

Arterys (San Francisco, CA)
Getting test results back from a physician can often be a time consuming and tedious process. Clinicians typically employ a variety of techniques to manually measure medical images and then make their assessments. Arterys founders Fabien Beckers, John Axerio-Cilies, Albert Hsiao, and Shreyas Vasanawala realized that much more computation and advanced analytics were needed to harness all of the valuable information in medical images, especially those generated by MRI and CT scanners. Clinicians were often skipping measurements and making assessments based mostly on qualitative data. Their solution was to start a cloud/AI software company focused on accelerating data-driven medicine with advanced software products for post-processing of medical images.

Arterys’ products provide timely, accurate, and consistent quantification of images, improve speed to results, and improve the quality of the information offered to the treating physician. This allows for much better tracking of a patient’s condition, and thus better decisions about their care. Advanced analytics, such as deep learning and distributed cloud computing, are used to process images. The first Arterys product can contour cardiac anatomy as accurately as experts, but takes only 15-20 seconds instead of the 45-60 minutes required to do it manually. Their computing cloud platform is also fully HIPAA compliant.

Arterys relies on a variety of AWS services to process their medical images. Using deep learning and other advanced analytic tools, Arterys is able to render images without latency over a web browser using AWS G2 instances. They use Amazon EC2 extensively for all of their compute needs, including inference and rendering, and Amazon S3 is used to archive images that aren’t needed immediately, as well as manage costs. Arterys also employs Amazon Route 53, AWS CloudTrail, and Amazon EC2 Container Service.

Check out this quick video about the technology that Arterys is creating. They were also recently featured in the #StartupsOnAir video series and offered a quick demo of their product.

Protenus (Baltimore, MD)
Protenus founders Nick Culbertson and Robert Lord were medical students at Johns Hopkins Medical School when they saw first-hand how Electronic Health Record (EHR) systems could be used to improve patient care and share clinical data more efficiently. With increased efficiency came a huge issue – an onslaught of serious security and privacy concerns. Over the past two years, 140 million medical records have been breached, meaning that approximately 1 in 3 Americans have had their health data compromised. Health records contain a repository of sensitive information and a breach of that data can cause major havoc in a patient’s life – namely identity theft, prescription fraud, Medicare/Medicaid fraud, and improper performance of medical procedures. Using their experience and knowledge from former careers in the intelligence community and involvement in a leading hedge fund, Nick and Robert developed the prototype and algorithms that launched Protenus.

Today, Protenus offers a number of solutions that detect breaches and misuse of patient data for healthcare organizations nationwide. Using advanced analytics and AI, Protenus’ health data insights platform understands appropriate vs. inappropriate use of patient data in the EHR. It also protects privacy, aids compliance with HIPAA regulations, and ensures trust for patients and providers alike.

Protenus built and operates its SaaS offering atop Amazon EC2, where Dedicated Hosts and encrypted Amazon EBS volumes are used to ensure compliance with HIPAA regulations for the storage of Protected Health Information. They use Elastic Load Balancing and Amazon Route 53 for DNS, enabling unique, secure, client-specific access points to their Protenus instance.

To learn more about threats to patient data, read Hospitals’ Biggest Threat to Patient Data is Hiding in Plain Sight on the Protenus blog. Also be sure to check out their recent video in the #StartupsOnAir series for more insight into their product.

Syapse (Palo Alto, CA)
Syapse provides a comprehensive software solution that enables clinicians to treat patients with precision medicine for targeted cancer therapies — treatments that are designed and chosen using genetic or molecular profiling. Existing hospital IT doesn’t support the robust infrastructure and clinical workflows required to treat patients with precision medicine at scale, but Syapse centralizes and organizes patient data and delivers it to clinicians at the point of care. Syapse offers a variety of solutions for oncologists that allow them to access the full scope of patient data longitudinally, view recommended treatments or clinical trials for similar patients, and track outcomes over time. These solutions are helping health systems across the country to improve patient outcomes by offering the most innovative care to cancer patients.

Leading health systems such as Stanford Health Care, Providence St. Joseph Health, and Intermountain Healthcare are using Syapse to improve patient outcomes, streamline clinical workflows, and scale their precision medicine programs. A group of experts known as the Molecular Tumor Board (MTB) reviews complex cases and evaluates patient data, documents notes, and disseminates treatment recommendations to the treating physician. Syapse also provides reports that give health system staff insight into their institution’s oncology care, which can be used toward quality improvement, business goals, and understanding variables in the oncology service line.

Syapse uses Amazon Virtual Private Cloud, Amazon EC2 Dedicated Instances, and Amazon Elastic Block Store to build a high-performance, scalable, and HIPAA-compliant data platform that enables health systems to make precision medicine part of routine cancer care for patients throughout the country.
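To make that architecture concrete, here is a hypothetical boto3 sketch (not Syapse’s actual code) of launching a Dedicated Instance into a VPC subnet with an additional encrypted EBS data volume. The AMI, subnet, and security group IDs are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")         # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder AMI
    InstanceType="m4.xlarge",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0abc1234",            # private subnet inside the VPC
    SecurityGroupIds=["sg-0abc1234"],
    Placement={"Tenancy": "dedicated"},    # Dedicated Instance
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sdf",          # encrypted data volume for PHI
        "Ebs": {"VolumeSize": 200, "VolumeType": "gp2", "Encrypted": True},
    }],
)
print(response["Instances"][0]["InstanceId"])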

Be sure to check out the Syapse blog to learn more, and watch their recent video in the #StartupsOnAir series, where they discuss their product, HIPAA compliance, and how they are using AWS.

Thank you for checking out another month of awesome hot startups!

-Tina Barr


VPN Searches Soar as Congress Votes to Repeal Broadband Privacy Rules

Post Syndicated from Andy original https://torrentfreak.com/vpn-searches-soar-as-congress-votes-to-repeal-broadband-privacy-rules-170329/

In a blow to privacy advocates across the United States, the House of Representatives voted Tuesday to grant Internet service providers permission to sell subscribers’ browsing histories to third parties.

The bill repeals broadband privacy rules adopted last year by the Federal Communications Commission under the Obama administration, which required ISPs to obtain consumer consent before using their data for advertising or marketing purposes.

The House of Representatives voted 215-205 in favor of overturning the regulations after the Senate voted to revoke the rules last week. President Donald Trump’s signature is needed before the repeal can become law, but with the White House giving its full support, that’s a given.

“The Administration strongly supports House passage of S.J.Res. 34, which would nullify the Federal Communications Commission’s final rule titled ‘Protecting the Privacy of Customers of Broadband and Other Telecommunication Services’,” the White House said in a statement yesterday.

“If S.J.Res. 34 were presented to the President, his advisors would recommend that he sign the bill into law.”

If that happens, the country’s Internet service providers will be freed up to compete in the online advertising market with platform giants such as Google and Facebook. Of course, that will come at the expense of subscribers, whose every browsing move online can be subjected to some level of scrutiny.

While supporters say that scrapping the regulations will mean that all Internet companies will operate on a level playing field when it comes to privacy protection, critics say that ISPs should be held to a higher level of accountability.

Whereas consumers have some choice over which information they share with individual websites, the browsing history an ISP can see is total, potentially exposing sensitive issues concerning health, finances, or even sexual preferences.

With this in mind, it’s no surprise that US Internet users are beginning to realize that everything they do online could soon be exposed to third parties intent on invading their privacy in the interests of commerce. Predictably, questions are being raised over what can be done to mitigate the threat.

Aside from cutting the cord entirely, there’s only one practical way to hinder ISPs, and that’s through the use of some form of encryption. Importantly, visitors to basic HTTP websites will have no browsing protection whatsoever. Those using HTTPS can assume that although ISPs will still know which domains they’ve visited (via DNS lookups and the TLS handshake), the full URLs and the content exchanged will be cloaked.

Of course, for those looking for a more workable solution, VPNs – Virtual Private Networks – can provide a much greater level of encrypted protection, especially among providers who promise to keep no logs.

As a result, various providers, including blackVPN, ExpressVPN, LiquidVPN, StrongVPN and Torguard, have weighed in on the debate via social media. NordVPN have also spoken out against the bill in the press, and Private Internet Access even took out a full page ad in the New York Times this week.

It’s now becoming clear that while it was once a somewhat niche activity, VPN use could now be about to hit the mainstream.

Taking a look at Google Trends results for the search term ‘VPN’, we can see that interest across the United States is now double what it was back in 2012. The significant surge to the right of the chart is likely attributable to the past few weeks of debate surrounding the repeal of broadband privacy rules.

While most VPN providers have been campaigning against the changes, there can be no doubt that the signing of the bill into law will be extremely good for business. As seen from the above, record numbers of people are learning about VPNs and there’s even encouragement coming in from people at the very top of Internet commerce.

Following the vote yesterday, Twitter general counsel Vijaya Gadde took to her company’s platform to suggest that citizens should take steps to protect their privacy.

Her tweet, which was later attributed to her own opinion and not company policy, was retweeted by Twitter Chief Executive Jack Dorsey.

It will be interesting to see how the new rules affect VPN uptake in the longer term, once the fuss around this month’s debate has died down. Nevertheless, there seems little doubt that VPN use will rise to some extent, and that could be bad news for copyright holders seeking to enforce their rights online.

In addition to stopping ISPs from spying on users’ browsing histories, a good VPN also prevents users from being monitored online when using BitTorrent. A further handy side effect is that VPNs render site-blocking efforts useless.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

‘Pirate’ Kodi Box Sellers Fail to Overturn Sales Ban in Canada

Post Syndicated from Andy original https://torrentfreak.com/pirate-kodi-box-sellers-fail-overturn-sales-ban-canada-170321/

From a niche hobbyist affair under its former name XBMC, Kodi is now grabbing international headlines on a daily basis. The media player is both benign and entirely legal in standard form, but boost it with special addons and it becomes a piracy powerhouse.

One of the main problems for the content industries arises from the software’s ability to run on cheap Android and similar hardware. Whether that’s a phone, tablet, set-top box or a device such as Amazon’s Fire Stick, these setups are now in millions of homes, delivering free content to the masses.

Authorities everywhere are now scrambling to deal with the problem and Canada is one of the areas where content producers and cable providers have resorted to legal action. Last year, Rogers Communications, Bell, Videotron and others targeted several retailers who supplied so-called “fully loaded” Android and Apple set-top boxes to the public.

The original defendants, including ITVBOX.NET, My Electronics, Android Bros Inc., WatchNSaveNow Inc and MTLFreeTV, all sold devices that came pre-configured to receive content that customers would otherwise have had to pay for.

Inquiries into the sales began in April 2015 and in the months that followed test purchases were made. The plaintiffs found that the devices not only provided access to their content for free but that the sellers advertised their products as a way to avoid paying bills.

In response, the TV and content companies went to the Federal Court with claims under the Copyright Act and Radiocommunication Act. Last June they were successful in obtaining an interlocutory injunction to stop the devices being made available for sale.

“The devices marketed, sold and programmed by the Defendants enable consumers to obtain unauthorized access to content for which the Plaintiffs own the copyright,” Judge Daniele Tremblay-Lamer wrote in her order.

“For the time being, I am satisfied that the Plaintiffs have established a strong prima facie case of copyright infringement and that an injunction would prevent irreparable harm without unduly inconveniencing the Defendants.”

While the majority of the defendants in the case have remained silent (the list has now grown to more than 50 sellers), WatchNSaveNow and MTLFreeTV decided to appeal the injunction, arguing that it was never established in court, ahead of a trial, that sales of the devices would hurt the plaintiffs’ business.

According to CBC, that argument failed to convince the Appeal Court, which yesterday upheld the Federal Court’s decision to hand down an injunction. Turning the box-sellers’ marketing material against them, the Court noted that they’d advertised their devices as providing a way to access free content and avoid paying cable bills.

One of the sellers to appeal, Vincent Wesley of MTLFreeTV, was the only box-seller to turn up at the original Federal Court hearing last year. Back then he said he had nothing to do with the development or maintenance of the software installed on the devices he sold. That argument didn’t appear to help at the time, and now the Appeal Court has also declined to find in the defendants’ favor.

“I’m actually very disappointed. We weren’t even given a fair shot,” Wesley said.

Unsurprisingly, the plaintiffs were rather pleased with the outcome, with both Bell and Rogers welcoming the decision to uphold the injunction.

“Today’s swift dismissal of the appeal of the Federal Court’s injunction speaks to what this case is all about — an obvious case of piracy,” Rogers spokesperson Sarah Schmidt told CBC.

A Bell spokesperson said the decision provided more confirmation that the devices are illegal and that those that sell them face “significant consequences.”

For Wesley, those consequences are already being felt in the shape of a $5,000 court costs bill, something which he says has left him “at the end of his finances.”

With no money left to fight, any trial will almost certainly go the way of the cable and TV companies. Certainly, the public hasn’t signaled any intention to come to the sellers’ rescue. A GoFundMe campaign set up by Wesley in June last year has seen just 10 people deposit $350 of a $30,000 target.

The legal assault on Kodi, Showbox, and Popcorn Time-enabled devices seems set to continue for some time, but one has to wonder what effect the endless flood of news articles is having in promoting the availability of free content through these platforms. Legal action is perhaps inevitable, but every case only serves to raise the profile of this new piracy phenomenon.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Pranksters gonna prank

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/03/pranksters-gonna-prank.html

So Alfa Bank (the bank whose DNS traffic linked it to trump-email.com) is back in the news with this press release about how, in the last month, hackers have spoofed traffic trying to make it look like there’s a tie with Trump. In other words, Alfa claims these packets are trying to frame them for a tie with Trump now, and thus (by extension) it must’ve been a frame last October too.

There is no conspiracy here: it’s just merry pranksters doing pranks (as this CNN article quotes me).

Indeed, I have been among the people doing the pranking (not the pranks mentioned by Alfa, but different ones). I ran a scan sending packets from my IP address to almost everyone on the Internet, and set the reverse lookup to “mail1.trumpemail.com”.

Sadly, my ISP doesn’t allow me to put hyphens in the name, so it’s not “trump-email.com” as it should be in order to prank well.

Geeks gonna geek and pranksters gonna prank. I can imagine all sorts of other fun pranks somebody might do in order to stir the pot. Since the original news reports of the AlfaBank/trump-email.com connection last year, we have to assume any further data is tainted by goofballs like me goofing off.

By the way, in my particular case, there’s a good lesson to be had here about the arbitrariness of IP addresses and names. There is no server located at my IP address of 209.216.230.75. No such machine exists. Instead, I run my scans from a nearby machine on the same network, and “spoof” that address with masscan:

$ masscan 0.0.0.0/0 -p80 --banners --spoof-ip 209.216.230.75

This sends a web request to every machine on the Internet from that IP address, despite no machine anywhere being configured with that IP address.

I point this out because people are confused by the meaning of an “IP address”, or a “server”, “domain”, and “domain name”. I can imagine the FBI looking into this and getting a FISA warrant for the server located at my IP address, and my ISP coming back and telling them that no such server exists, nor has a server existed at that IP address for many years.

In the case of last year’s story, there’s little reason to believe IP spoofing was happening, but the conspiracy theory still breaks down for the same reason: the association between these concepts is not what you think it is. Listrak, the owner of the server at the center of the conspiracy, still reverse resolves the IP address 66.216.133.29 as “mail1.trump-email.com”, either because they are lazy, or because they enjoy the lulz.
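You can check that claim yourself with a few lines of Python (a quick sketch; the record may well have changed by the time you run it). The PTR name is set by whoever controls the reverse zone for that IP block, not by the owner of the domain it points to:

import socket

try:
    hostname, aliases, addresses = socket.gethostbyaddr("66.216.133.29")
    print(hostname)   # reportedly "mail1.trump-email.com" at the time of writing
except socket.herror:
    print("no PTR record for that address")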

It’s absurd to think that anything sent by the server today is related to the Trump Organization, and it’s equally plausible that nothing the server sent last year was related to Trump either, especially since (as CNN reports) the Trump Organization had severed its ties with Cendyn (the marketing company that uses Listrak servers for email).


Also, as mentioned in a previous blog post, I set my home network’s domain to be “moscow.alfaintra.net”, which means that some of my DNS lookups at home are actually being sent to Alfa Bank. I should probably turn this off before the FBI comes knocking at my door.

Amazon WorkDocs Update – Commenting & Reviewing Enhancements and a New Activity Feed

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-workdocs-update-commenting-reviewing-enhancements-and-a-new-activity-feed/

As I have told you in the past, we like to drink our own Champagne at Amazon. Practically speaking, this means that we make use of our own services, tools, and applications as part of our jobs, and that we supply the development teams with feedback if we have an idea for an improvement or if we find something that does not work as expected.

I first talked about Amazon WorkDocs (which was originally called Zocalo) back in the middle of 2014, and have been using it ever since (at busy times I often have drafts of 7 or 8 posts circulating).

I upload drafts of every new blog post (usually as PDFs) to WorkDocs and then share them with the Product Manager, Product Marketing Manager, and other designated reviewers. The reviewers leave feedback for me, I update the draft, and I wait for more feedback. After a couple of iterations the draft settles down and I wait for the go-ahead to publish the post. The circle of reviewers often grows to include developers, senior management, and so forth. I simply share the document with them and look forward to even more feedback. My job is to read and to process all of the feedback (lots of suggestions, and the occasional question) as quickly as possible and to make sure that I did not miss anything.

Today I would like to tell you about some recent enhancements that make WorkDocs even more useful. We have added more commenting and reviewing features, along with an activity feed.

Enhanced Commenting
Over the course of a couple of revisions, some comments will spur a discussion. There might be a question about the applicability of a particular feature or the value of a particular image. In order to make it easier to start and to continue conversations, WorkDocs now supports threaded replies. I simply click on Reply and respond to a comment:

It is displayed like this:

If I click on Private, the comment is accessible only to the person who wrote the original.

In order to strengthen my message, I can also use simple formatting (bold, italic, and strikethrough) in my comments. Here’s how I specify each one:

And here’s the result:

Clicking on the ? displays a handy guide to formatting:

When the time for comments has passed, I can now disable feedback with a single click:

To learn more about these features, read Giving Feedback in the WorkDocs User Guide.
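The commenting features are also reachable from code. Here is a hypothetical boto3 sketch of posting a private, threaded reply; the token, document, version, and thread IDs are placeholders, and you should check the CreateComment documentation for your SDK version before relying on the exact parameters.

import boto3

workdocs = boto3.client("workdocs", region_name="us-west-2")

workdocs.create_comment(
    AuthenticationToken="user-auth-token",      # placeholder WorkDocs user token
    DocumentId="document-id",                   # placeholder
    VersionId="document-version-id",            # placeholder
    ThreadId="existing-comment-id",             # reply within an existing thread
    Text="Looks good - one question about the second screenshot.",
    Visibility="PRIVATE",                       # restricts who can see the comment
    NotifyCollaborators=False,
)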

Enhanced Reviewing
As the comments accumulate, I sometimes need to draw a reviewer’s attention to a particular comment. I can do this by entering an @ in the comment and then choosing their name from the popup menu:

The user will be notified by email in order to let them know that their feedback is needed.

From time to time, a potential reviewer will come into possession of a URL to a WorkDocs document but will not have access to the document itself. They can now request access like this:

The request will be routed to the owner of the document via email for approval.

Similarly, someone who has been granted Viewer-level access can now request Contributor-level access:

Again, the request will be routed to the owner of the document via email for approval:


Activity Feed
With multiple blog posts out for review at any given time, keeping track of what’s coming and going can be challenging. In order to give me a big-picture view, WorkDocs now includes an Activity Feed. The feed shows me what is going on with my own documents and with those that have been shared with me. I can watch as files and folders are created, changed, removed, and commented on. I can also see who is making the changes and track the times when they were made:

I can enter a search term to control what I see in the feed:

And I can further filter the updates by activity type or by date:
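The feed can also be pulled programmatically. Below is a hypothetical sketch, assuming your boto3 release exposes the WorkDocs DescribeActivities action; the authentication token is a placeholder and parameter support may vary by SDK version.

from datetime import datetime, timedelta
import boto3

workdocs = boto3.client("workdocs", region_name="us-west-2")

# Fetch the last 24 hours of activity on my documents and those shared with me.
activities = workdocs.describe_activities(
    AuthenticationToken="user-auth-token",      # placeholder
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Limit=25,
)
for activity in activities.get("UserActivities", []):
    print(activity.get("TimeStamp"), activity.get("Type"))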

Available Now
These features are available now and you can start using them today.

Jeff;