
Banning VPNs and Proxies is Dangerous, IT Experts Warn

Post Syndicated from Andy original https://torrentfreak.com/banning-vpns-and-proxies-is-dangerous-it-experts-warn-170623/

In April, draft legislation was developed to crack down on systems and software that allow Russian Internet users to bypass website blockades approved by telecoms watchdog Roskomnadzor.

Earlier this month the draft bill was submitted to the State Duma, the lower house of the Russian parliament. If passed, the law will make it illegal for services to circumvent web blockades by “routing traffic of Russian Internet users through foreign servers, anonymous proxy servers, virtual private networks and other means.”

As the plans currently stand, anonymization services that fail to restrict access to sites listed by telecoms watchdog Roskomnadzor face being blocked themselves. Sites offering circumvention software for download also face potential blacklisting.

This week the State Duma discussed the proposals with experts from the local Internet industry. In addition to the head of Roskomnadzor, representatives from service providers, search engines and even anonymization services were in attendance. Novaya Gazeta has published comments (Russian) from some of the key people at the meeting and it’s fair to say there’s not a lot of support.

VimpelCom, the sixth largest mobile network operator in the world with more than 240 million subscribers, sent along Director for Relations with Government, Sergey Malyanov. He wondered where all this blocking will end up.

“First we banned certain information. Then this information was blocked with the responsibility placed on both owners of resources and services. Now there are blocks on top of blocks – so we already have a triple effort,” he said.

“It is now possible that there will be a fourth iteration: the block on the block to block those that were not blocked. And with that, we have significantly complicated the law and the activities of all the people affected by it.”

Malyanov said that these kinds of actions have the potential to close down the entire Internet by ruining what was once an open network running standard protocols. But amid all of this, will it even be effective?

“The question is not even about the losses that will be incurred by network operators, the owners of the resources and the search engines. The question is whether this bill addresses the goal its creators have set for themselves. In my opinion, it will not.”

Group-IB, one of the world’s leading cyber-security and threat intelligence providers, was represented by CEO Ilya Sachkov. He told parliament that “ordinary respectable people” who use the Internet should always use a VPN for security. Nevertheless, he also believes that such services should be forced to filter sites deemed illegal by the state.

But in a warning about blocks in general, he warned that people who want to circumvent them will always be one step ahead.

“We have to understand that by the time the law is adopted the perpetrators will already find it very easy to circumvent,” he said.

Mobile operator giant MTS, which turns over billions of dollars and employs 50,000+ people, had their Vice-President of Corporate and Legal Affairs in attendance. Ruslan Ibragimov said that in dealing with a problem, the government should be cautious of not causing more problems, including disruption of a growing VPN market.

“We have an understanding that evil must be fought, but it’s not necessary to create a new evil, even more so – for those who are involved in this struggle,” he said.

“Broad wording of this law may pose a threat to our network, which could be affected by the new restrictive measures, as well as the VPN market, which we are currently developing, and whose potential market is estimated at 50 billion rubles a year.”

In its goal to maintain control of the Internet, it’s clear that Russia is determined to press ahead with legislative change. Unfortunately, it’s far from clear that there’s a technical solution to the problem, but if one is pursued regardless, there could be serious fallout.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Protect Web Sites & Services Using Rate-Based Rules for AWS WAF

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/protect-web-sites-services-using-rate-based-rules-for-aws-waf/

AWS WAF (Web Application Firewall) helps to protect your application from many different types of application-layer attacks that involve requests that are malicious or malformed. As I showed you when I first wrote about this service (New – AWS WAF), you can define rules that match cross-site scripting, IP address, SQL injection, size, or content constraints:

When incoming requests match rules, actions are invoked. Actions can either allow, block, or simply count matches.

The existing rule model is powerful and gives you the ability to detect and respond to many different types of attacks. It does not, however, allow you to respond to attacks that simply consist of a large number of otherwise valid requests from a particular IP address. These requests might be a web-layer DDoS attack, a brute-force login attempt, or even a partner integration gone awry.

New Rate-Based Rules
Today we are adding Rate-based Rules to WAF, giving you control of when IP addresses are added to and removed from a blacklist, along with the flexibility to handle exceptions and special cases:

Blacklisting IP Addresses – You can blacklist IP addresses that make requests at a rate that exceeds a configured threshold rate.

IP Address Tracking – You can see which IP addresses are currently blacklisted.

IP Address Removal – IP addresses that have been blacklisted are automatically removed when they no longer make requests at a rate above the configured threshold.

IP Address Exemption – You can exempt certain IP addresses from blacklisting by using an IP address whitelist inside of a rate-based rule. For example, you might want to allow trusted partners to access your site at a higher rate.

Monitoring & Alarming – You can watch and alarm on CloudWatch metrics that are published for each rule.

You can combine new Rate-based Rules with WAF Conditions to implement sophisticated rate-limiting strategies. For example, you could use a Rate-based Rule and a WAF Condition that matches your login pages. This would allow you to impose a modest threshold on your login pages (to avoid brute-force password attacks) and allow a more generous one on your marketing or system status pages.

Thresholds are defined in terms of the number of incoming requests from a single IP address within a 5 minute period. Once this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold.

Using Rate-Based Rules
Here’s how you would define a Rate-based Rule that protects the /login portion of your site. Start by defining a WAF condition that matches the desired string in the URI of the page:

Then use this condition to define a Rate-based Rule (the rate limit is expressed in terms of requests within a 5-minute interval, but the blacklisting goes into effect as soon as the limit is breached):

With the condition and the rule in place, create a Web ACL (ProtectLoginACL) to bring it all together and to attach it to the AWS resource (a CloudFront distribution in this case):

Then attach the rule (ProtectLogin) to the Web ACL:

The resource is now protected in accord with the rule and the web ACL. You can monitor the associated CloudWatch metrics (ProtectLogin and ProtectLoginACL in this case). You could even create CloudWatch Alarms and use them to fire Lambda functions when a protection threshold is breached. The code could examine the offending IP address and make a complex, business-driven decision, perhaps adding a whitelisting rule that gives an extra-generous allowance to a trusted partner or to a user with a special payment plan.
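The same setup can also be scripted. Below is a minimal sketch using the AWS SDK for .NET against the classic WAF API. The rule name ProtectLogin and the ProtectLoginACL Web ACL come from the walkthrough above, but the 2,000-requests-per-5-minutes limit, the async entry point, and the omitted ByteMatchSet/Web ACL wiring are illustrative assumptions rather than the exact values used in the console steps.

using System;
using System.Threading.Tasks;
using Amazon.WAF;
using Amazon.WAF.Model;

class ProtectLoginSetup
{
    static async Task Main()
    {
        using (var waf = new AmazonWAFClient())
        {
            // Every WAF classic mutation requires a fresh change token
            var token = (await waf.GetChangeTokenAsync(new GetChangeTokenRequest())).ChangeToken;

            // Create the rate-based rule; requests are counted per source IP
            // over a rolling 5-minute window
            var response = await waf.CreateRateBasedRuleAsync(new CreateRateBasedRuleRequest
            {
                Name = "ProtectLogin",
                MetricName = "ProtectLogin",
                RateKey = RateKey.IP,   // IP is currently the only supported rate key
                RateLimit = 2000,       // illustrative threshold
                ChangeToken = token
            });

            Console.WriteLine("Created rule {0}", response.Rule.RuleId);

            // A ByteMatchSet predicate matching the /login URI would then be attached
            // with UpdateRateBasedRule, and the rule added to the ProtectLoginACL
            // Web ACL via CreateWebACL/UpdateWebACL (not shown here).
        }
    }
}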

Available Now
The new Rate-based Rules are available now and you can start using them today! Rate-based rules are priced the same as Regular rules; see the WAF Pricing page for more info.

Jeff;

Building Loosely Coupled, Scalable, C# Applications with Amazon SQS and Amazon SNS

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/building-loosely-coupled-scalable-c-applications-with-amazon-sqs-and-amazon-sns/

Stephen Liedig, Solutions Architect

One of the many challenges professional software architects and developers face is how to make cloud-native applications scalable, fault-tolerant, and highly available.

Fundamental to your project success is understanding the importance of making systems highly cohesive and loosely coupled. That means considering the multi-dimensional facets of system coupling to support the distributed nature of the applications that you are building for the cloud.

By that, I mean addressing not only the application-level coupling (managing incoming and outgoing dependencies), but also considering the impacts of platform, spatial, and temporal coupling of your systems. Platform coupling relates to the interoperability, or lack thereof, of heterogeneous system components. Spatial coupling deals with managing components at a network topology level or protocol level. Temporal, or runtime, coupling refers to the ability of a component within your system to do any kind of meaningful work while it is performing a synchronous, blocking operation.

The AWS messaging services, Amazon SQS and Amazon SNS, help you deal with these forms of coupling by providing mechanisms for:

  • Reliable, durable, and fault-tolerant delivery of messages between application components
  • Logical decomposition of systems and increased autonomy of components
  • Creating unidirectional, non-blocking operations, temporarily decoupling system components at runtime
  • Decreasing the dependencies that components have on each other through standard communication and network channels

Following on the recent topic, Building Scalable Applications and Microservices: Adding Messaging to Your Toolbox, in this post, I look at some of the ways you can introduce SQS and SNS into your architectures to decouple your components, and show how you can implement them using C#.

Walkthrough

To illustrate some of these concepts, consider a web application that processes customer orders. As good architects and developers, you have followed best practices and made your application scalable and highly available. Your solution included implementing load balancing, dynamic scaling across multiple Availability Zones, and persisting orders in a Multi-AZ Amazon RDS database instance, as in the following diagram.


In this example, the application is responsible for handling and persisting the order data, as well as dealing with increases in traffic for popular items.

One potential point of vulnerability in the order processing workflow is in saving the order in the database. The business expects that every order has been persisted into the database. However, any potential deadlock, race condition, or network issue could cause the persistence of the order to fail. Then, the order is lost with no recourse to restore the order.

With good logging capability, you may be able to identify when an error occurred and which customer’s order failed. This wouldn’t allow you to “restore” the transaction, and by that stage, your customer is no longer your customer.

As illustrated in the following diagram, introducing an SQS queue helps improve your ordering application. Using the queue isolates the processing logic into its own component and runs it in a separate process from the web application. This, in turn, allows the system to be more resilient to spikes in traffic, while allowing work to be performed only as fast as necessary in order to manage costs.


In addition, you now have a mechanism for persisting orders as messages (with the queue acting as a temporary database), and have moved the scope of your transaction with your database further down the stack. In the event of an application exception or transaction failure, this ensures that the order processing can be retried or redirected to the Amazon SQS Dead Letter Queue (DLQ), for re-processing at a later stage. (See the recent post, Using Amazon SQS Dead-Letter Queues to Control Message Failure, for more information on dead-letter queues.)

Scaling the order processing nodes

This change now allows you to scale the web application frontend independently from the processing nodes. The frontend application can continue to scale based on metrics such as CPU usage, or the number of requests hitting the load balancer. Processing nodes can scale based on the number of orders in the queue. Here is an example of scale-in and scale-out alarms that you would associate with the scaling policy.

Scale-out Alarm

aws cloudwatch put-metric-alarm --alarm-name AddCapacityToCustomerOrderQueue \
  --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
  --statistic Average --period 300 --threshold 3 --comparison-operator GreaterThanOrEqualToThreshold \
  --dimensions Name=QueueName,Value=customer-orders \
  --evaluation-periods 2 --alarm-actions <arn of the scale-out autoscaling policy>

Scale-in Alarm

aws cloudwatch put-metric-alarm --alarm-name RemoveCapacityFromCustomerOrderQueue \
  --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
  --statistic Average --period 300 --threshold 1 --comparison-operator LessThanOrEqualToThreshold \
  --dimensions Name=QueueName,Value=customer-orders \
  --evaluation-periods 2 --alarm-actions <arn of the scale-in autoscaling policy>

In the above example, use the ApproximateNumberOfMessagesVisible metric to discover the queue length and drive the scaling policy of the Auto Scaling group. Another useful metric is ApproximateAgeOfOldestMessage, when applications have time-sensitive messages and developers need to ensure that messages are processed within a specific time period.

Scaling the order processing implementation

On top of scaling at an infrastructure level using Auto Scaling, make sure to take advantage of the processing power of your Amazon EC2 instances by using as many of the available threads as possible. There are several ways to implement this. In this post, we build a Windows service that uses the BackgroundWorker class to process the messages from the queue.

Here’s a closer look at the implementation. In the first section of the consuming application, use a loop to continually poll the queue for new messages, and construct a ReceiveMessageRequest variable.

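// The C# snippets below assume the following using directives:
// using System; using System.Collections.Generic; using System.ComponentModel;
// using System.Threading; using System.Threading.Tasks;
// using Amazon.SQS; using Amazon.SQS.Model; using Newtonsoft.Json;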
public static void PollQueue()
{
    while (_running)
    {
        Task<ReceiveMessageResponse> receiveMessageResponse;

        // Pull messages off the queue
        using (var sqs = new AmazonSQSClient())
        {
            const int maxMessages = 10;  // 1-10

            //Receiving a message
            var receiveMessageRequest = new ReceiveMessageRequest
            {
                // Get URL from Configuration
                QueueUrl = _queueUrl, 
                // The maximum number of messages to return. 
                // Fewer messages might be returned. 
                MaxNumberOfMessages = maxMessages, 
                // A list of attributes that need to be returned with message.
                AttributeNames = new List<string> { "All" },
                // Enable long polling. 
                // Time to wait for message to arrive on queue.
                WaitTimeSeconds = 5 
            };

            receiveMessageResponse = sqs.ReceiveMessageAsync(receiveMessageRequest);
            // Block until the call completes so the client is not disposed mid-request
            receiveMessageResponse.Wait();
        }

The WaitTimeSeconds property of the ReceiveMessageRequest specifies the duration (in seconds) that the call waits for a message to arrive in the queue before returning a response to the calling application. There are a few benefits to using long polling:

  • It reduces the number of empty responses by allowing SQS to wait until a message is available in the queue before sending a response.
  • It eliminates false empty responses by querying all (rather than a limited number) of the servers.
  • It returns messages as soon as any message becomes available.

For more information, see Amazon SQS Long Polling.

After you have returned messages from the queue, you can start to process them by looping through each message in the response and invoking a new BackgroundWorker thread.

// Process messages
if (receiveMessageResponse.Result.Messages.Count > 0)
{
    foreach (var message in receiveMessageResponse.Result.Messages)
    {
        Console.WriteLine("Received SQS message, starting worker thread");

        // Create background worker to process message
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += (obj, e) => ProcessMessage(message);
        worker.RunWorkerAsync();
    }
}
else
{
    Console.WriteLine("No messages on queue");
}

The event handler, ProcessMessage, is where you implement the business logic for processing orders. It is important to have a good understanding of how long a typical transaction takes so you can set a message VisibilityTimeout that is long enough to complete your operation. If order processing takes longer than the specified timeout period, the message becomes visible on the queue again. Other nodes may then pick it up and process the same order twice, leading to unintended consequences.
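If a particular order occasionally needs more time than the configured visibility timeout allows, the consumer can extend the timeout for just that in-flight message. The following is a minimal sketch, not part of the original post, that assumes the same using directives and _queueUrl field as the surrounding code; the 300-second extension is an illustrative value.

private static async Task ExtendVisibilityAsync(IAmazonSQS sqs, Message message)
{
    // Hide this specific message from other consumers for longer while processing continues
    await sqs.ChangeMessageVisibilityAsync(new ChangeMessageVisibilityRequest
    {
        QueueUrl = _queueUrl,                  // same queue the message was received from
        ReceiptHandle = message.ReceiptHandle, // identifies this particular receive of the message
        VisibilityTimeout = 300                // illustrative: keep it hidden for another 5 minutes
    });
}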

Handling Duplicate Messages

In order to manage duplicate messages, seek to make your processing application idempotent. In mathematics, idempotent describes a function that produces the same result if it is applied to itself:

f(x) = f(f(x))

No matter how many times you process the same message, the end result is the same (definition from Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, Hohpe and Wolf, 2004).

There are several strategies you could apply to achieve this:

  • Create messages that have inherent idempotent characteristics. That is, they are non-transactional in nature and are unique at a specified point in time. Rather than saying “place new order for Customer A,” which adds a duplicate order to the customer, use “place order <orderid> on <timestamp> for Customer A,” which creates a single order no matter how often it is persisted.
  • Deliver your messages via an Amazon SQS FIFO queue, which provides the benefits of message sequencing, but also mechanisms for content-based deduplication. You can deduplicate using the MessageDeduplicationId property on the SendMessage request or by enabling content-based deduplication on the queue, which generates a hash for MessageDeduplicationId, based on the content of the message, not the attributes.
var sendMessageRequest = new SendMessageRequest
{
    QueueUrl = _queueUrl,
    MessageBody = JsonConvert.SerializeObject(order),
    MessageGroupId = Guid.NewGuid().ToString("N"),
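    // Note: a fresh GUID per send defeats deduplication; in practice derive this
    // value from the order itself (e.g. the order ID) or rely on content-based deduplication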
    MessageDeduplicationId = Guid.NewGuid().ToString("N")
};
  • If using SQS FIFO queues is not an option, keep a log of all message attributes processed for a specified period of time, as an alternative to message deduplication on the receiving end. Verifying the existence of the message in the log before processing it adds computational overhead, which can be minimized through low-latency persistence solutions such as Amazon DynamoDB (a minimal sketch follows this list). Bear in mind that this approach depends on the message and the message log being updated together in a successful, distributed transaction.
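As a rough illustration of the message-log approach, the sketch below records each message ID with a conditional write so that a message seen before is skipped. The ProcessedMessages table name, its attribute names, and the Amazon.DynamoDBv2 / Amazon.DynamoDBv2.Model using directives are assumptions for illustration, not details from the original post.

// Returns true if the message has not been seen before and was recorded;
// returns false if it was already logged (the conditional write is rejected).
private static async Task<bool> TryRecordMessageAsync(IAmazonDynamoDB dynamoDb, Message message)
{
    try
    {
        await dynamoDb.PutItemAsync(new PutItemRequest
        {
            TableName = "ProcessedMessages", // hypothetical table keyed on MessageId
            Item = new Dictionary<string, AttributeValue>
            {
                ["MessageId"] = new AttributeValue { S = message.MessageId },
                ["ProcessedAt"] = new AttributeValue { S = DateTime.UtcNow.ToString("o") }
            },
            // Reject the write if an item with this MessageId already exists
            ConditionExpression = "attribute_not_exists(MessageId)"
        });
        return true;
    }
    catch (ConditionalCheckFailedException)
    {
        return false; // duplicate delivery: skip processing
    }
}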

Handling exceptions

Because of the distributed nature of SQS, the service does not automatically delete a message once it has been received. You must therefore explicitly delete the message from the queue after processing it, using its ReceiptHandle property (see the following code example).

However, if at any stage you have an exception, avoid handling it as you normally would. The intention is to make sure that the message ends up back on the queue, so that you can gracefully deal with intermittent failures. Instead, log the exception to capture diagnostic information, and swallow it.

By not explicitly deleting the message from the queue, you can take advantage of the VisibilityTimeout behavior described earlier. Gracefully handle the message processing failure and make the unprocessed message available to other nodes to process.

In the event that subsequent retries fail, SQS automatically moves the message to the configured DLQ after the configured number of receives has been reached. You can further investigate why the order process failed. Most importantly, the order has not been lost, and your customer is still your customer.

private static void ProcessMessage(Message message)
{
    using (var sqs = new AmazonSQSClient())
    {
        try
        {
            Console.WriteLine("Processing message id: {0}", message.MessageId);

            // Implement messaging processing here
            // Ensure no downstream resource contention (parallel processing)
            // <your order processing logic in here…>
            Console.WriteLine("{0} Thread {1}: {2}", DateTime.Now.ToString("s"), Thread.CurrentThread.ManagedThreadId, message.MessageId);
            
            // Delete the message off the queue. 
            // Receipt handle is the identifier you must provide 
            // when deleting the message.
            var deleteRequest = new DeleteMessageRequest(_queueUrl, message.ReceiptHandle);
            // Resolve the task before the using block disposes the client
            sqs.DeleteMessageAsync(deleteRequest).Wait();
            Console.WriteLine("Processed message id: {0}", message.MessageId);

        }
        catch (Exception ex)
        {
            // Do nothing.
            // Swallow exception, message will return to the queue when 
            // visibility timeout has been exceeded.
            Console.WriteLine("Could not process message due to error. Exception: {0}", ex.Message);
        }
    }
}

Using SQS to adapt to changing business requirements

One of the benefits of introducing a message queue is that you can accommodate new business requirements without dramatically affecting your application.

If, for example, the business decided that all orders placed over $5000 are to be handled as a priority, you could introduce a new “priority order” queue. The way the orders are processed does not change. The only significant change to the processing application is to ensure that messages from the “priority order” queue are processed before the “standard order” queue.

The following diagram shows how this logic could be isolated in an “order dispatcher,” whose only purpose is to route order messages to the appropriate queue based on whether the order exceeds $5000. Nothing on the web application or the processing nodes changes other than the target queue to which the order is sent. The required processing rates can then be achieved by adjusting the poll rates and scaling settings I have already discussed.
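A minimal sketch of such a dispatcher follows. The _priorityQueueUrl and _standardQueueUrl fields, the Order type with a Total property, and the exact threshold check are assumptions for illustration; none of them are defined in the original post.

// Route an order to the priority or standard queue based on its value;
// the message format and the processing nodes do not change.
private static async Task DispatchOrderAsync(IAmazonSQS sqs, Order order)
{
    var targetQueueUrl = order.Total > 5000m
        ? _priorityQueueUrl   // hypothetical queue for orders over $5000
        : _standardQueueUrl;  // hypothetical queue for everything else

    await sqs.SendMessageAsync(new SendMessageRequest
    {
        QueueUrl = targetQueueUrl,
        MessageBody = JsonConvert.SerializeObject(order)
    });
}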

Extending the design pattern with Amazon SNS

Amazon SNS supports reliable publish-subscribe (pub-sub) scenarios and push notifications to known endpoints across a wide variety of protocols. It eliminates the need to periodically check or poll for new information and updates. SNS supports:

  • Reliable storage of messages for immediate or delayed processing
  • Publish / subscribe – direct, broadcast, targeted “push” messaging
  • Multiple subscriber protocols
  • Amazon SQS, HTTP, HTTPS, email, SMS, mobile push, AWS Lambda

With these capabilities, you can provide parallel asynchronous processing of orders in the system and extend it to support any number of different business use cases without affecting the production environment. This is commonly referred to as a “fanout” scenario.

Rather than your web application pushing orders to a queue for processing, send a notification via SNS. The SNS messages are sent to a topic and then replicated and pushed to multiple SQS queues and Lambda functions for processing.
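In code, the only change on the web application side is to publish the order to an SNS topic rather than send it to a specific queue. A minimal sketch follows, assuming the Amazon.SimpleNotificationService and Amazon.SimpleNotificationService.Model using directives and a hypothetical _orderTopicArn field holding the topic ARN; the queue and Lambda subscriptions are configured on the topic rather than in this code.

// Publish the order once; SNS fans it out to every subscribed SQS queue
// and Lambda function (order processing, analytics, dev/test, and so on).
private static async Task PublishOrderAsync(IAmazonSimpleNotificationService sns, Order order)
{
    await sns.PublishAsync(new PublishRequest
    {
        TopicArn = _orderTopicArn, // hypothetical topic ARN
        Message = JsonConvert.SerializeObject(order)
    });
}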

As the diagram above shows, you have the development team consuming “live” data as they work on the next version of the processing application, or potentially using the messages to troubleshoot issues in production.

Marketing is consuming all order information, via a Lambda function that has subscribed to the SNS topic, inserting the records into an Amazon Redshift warehouse for analysis.

All of this, of course, is happening without affecting your order processing application.

Summary

While I haven’t dived deep into the specifics of each service, I have discussed how these services can be applied at an architectural level to build loosely coupled systems that facilitate multiple business use cases. I’ve also shown you how to use infrastructure and application-level scaling techniques, so you can get the most out of your EC2 instances.

One of the many benefits of using these managed services is how quickly and easily you can implement powerful messaging capabilities in your systems, and lower the capital and operational costs of managing your own messaging middleware.

Using Amazon SQS and Amazon SNS together can provide you with a powerful mechanism for decoupling application components. This should be part of design considerations as you architect for the cloud.

For more information, see the Amazon SQS Developer Guide and Amazon SNS Developer Guide. You’ll find tutorials on all the concepts covered in this post, and more. You can get started using the AWS Management Console or the SDK of your choice.

Happy messaging!

Weekly roundup: Successful juggling

Post Syndicated from Eevee original https://eev.ee/dev/2017/06/19/weekly-roundup-successful-juggling/

Despite flipping my sleep, as I seem to end up doing every month now, I’ve had a pretty solid week. We finally got our hands on a Switch, so I just played Zelda to stay up a ridiculously long time and restore my schedule pretty quickly.

  • potluck: I started building the potluck game in LÖVE, and it’s certainly come along much faster — I have map transitions, dialogue, and a couple moving platforms working. I still don’t quite know what this game is, but I’m starting to get some ideas.

    I also launched GAMES MADE QUICK??? 1½, a game jam for making a game while watching GDQ, instead of just plain watching GDQ. I intend to spend the week working on the potluck game, though I’m not sure whether I’ll finish it then.

  • fox flux: I started planning out a more interesting overworld and doodled a couple relevant tiles. Terrain is still hard. Also some more player frames.

  • art: I finally finished a glorious new banner, which now hangs proudly above my Twitter and Patreon. I did a bedtime slate doodle. I made and animated a low-poly Yoshi. I sketched Styx based on a photo.

    I keep wishing I have time to dedicate to painting experiments, but I guess this is pretty good output.

  • veekun: Wow! I touched veekun on three separate occasions. I have basic item data actually physically dumping now, I fixed some stuff with Pokémon, and I got evolutions working. Progress! Getting there! So close!

  • blog: Per request, I wrote about digital painting software, though it was hampered slightly by the fact that most of it doesn’t run on my operating system.

I seem to be maintaining tangible momentum on multiple big projects, which is fantastic. And there’s still 40% of the month left! I’m feeling pretty good about where I’m standing; if I can get potluck and veekun done soon, that’ll be a medium and a VERY LARGE weight off my shoulders.

BPI Breaks Record After Sending 310 Million Google Takedowns

Post Syndicated from Andy original https://torrentfreak.com/bpi-breaks-record-after-sending-310-million-google-takedowns-170619/

A little over a year ago during March 2016, music industry group BPI reached an important milestone. After years of sending takedown notices to Google, the group burst through the 200 million URL barrier.

The fact that it took BPI several years to reach its 200 million milestone made the surpassing of the quarter billion milestone a few months later even more remarkable. In October 2016, the group sent its 250 millionth takedown to Google, a figure that nearly doubled when accounting for notices sent to Microsoft’s Bing.

But despite the volumes, the battle hadn’t been won, let alone the war. The BPI’s takedown machine continued to run at a remarkable rate, churning out millions more notices per week.

As a result, yet another new milestone was reached this month when the BPI smashed through the 300 million URL barrier. Then, days later, a further 10 million were added, with the latter couple of million added during the time it took to put this piece together.

BPI takedown notices, as reported by Google

While demanding that Google places greater emphasis on its de-ranking of ‘pirate’ sites, the BPI has called again and again for a “notice and stay down” regime, to ensure that content taken down by the search engine doesn’t simply reappear under a new URL. It’s a position BPI maintains today.

“The battle would be a whole lot easier if intermediaries played fair,” a BPI spokesperson informs TF.

“They need to take more proactive responsibility to reduce infringing content that appears on their platform, and, where we expressly notify infringing content to them, to ensure that they do not only take it down, but also keep it down.”

The long-standing suggestion is that the volume of takedown notices sent would reduce if a “take down, stay down” regime was implemented. The BPI says it’s difficult to present a precise figure but infringing content has a tendency to reappear, both in search engines and on hosting sites.

“Google rejects repeat notices for the same URL. But illegal content reappears as it is re-indexed by Google. As to the sites that actually host the content, the vast majority of notices sent to them could be avoided if they implemented take-down & stay-down,” BPI says.

The fact that the BPI has added 60 million more takedowns since the quarter billion milestone a few months ago is quite remarkable, particularly since there appears to be little slowdown from month to month. However, the numbers have grown so huge that 310 million now feels a lot like 250 million, with just a few added on top for good measure.

That an extra 60 million takedowns can almost be dismissed as a handful is an indication of just how massive the issue is online. While pirates always welcome an abundance of links to juicy content, it’s no surprise that groups like the BPI are seeking more comprehensive and sustainable solutions.

Previously, it was hoped that the Digital Economy Bill would provide some relief, hopefully via government intervention and the imposition of a search engine Code of Practice. In the event, however, all pressure on search engines was removed from the legislation after a separate voluntary agreement was reached.

All parties agreed that the voluntary code should come into effect two weeks ago on June 1 so it seems likely that some effects should be noticeable in the near future. But the BPI says it’s still early days and there’s more work to be done.

“BPI has been working productively with search engines since the voluntary code was agreed to understand how search engines approach the problem, but also what changes can and have been made and how results can be improved,” the group explains.

“The first stage is to benchmark where we are and to assess the impact of the changes search engines have made so far. This will hopefully be completed soon, then we will have better information of the current picture and from that we hope to work together to continue to improve search for rights owners and consumers.”

With more takedown notices in the pipeline not yet publicly reported by Google, the BPI informs TF that it has now notified the search giant of 315 million links to illegal content.

“That’s an astonishing number. More than 1 in 10 of the entire world’s notices to Google come from BPI. This year alone, one in every three notices sent to Google from BPI is for independent record label repertoire,” BPI concludes.

While groups like BPI have clearly developed systems to cope with the huge numbers of takedown notices required in today’s environment, few rightsholders are happy with the status quo. With that in mind, the fight will continue until search engines are forced into compromise. Considering the implications, that could only appear on a very distant horizon.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Digital painter rundown

Post Syndicated from Eevee original https://eev.ee/blog/2017/06/17/digital-painter-rundown/

Another patron post! IndustrialRobot asks:

You should totally write about drawing/image manipulation programs! (Inspired by https://eev.ee/blog/2015/05/31/text-editor-rundown/)

This is a little trickier than a text editor comparison — while most text editors are cross-platform, quite a few digital art programs are not. So I’m effectively unable to even try a decent chunk of the offerings. I’m also still a relatively new artist, and image editors are much harder to briefly compare than text editors…

Right, now that your expectations have been suitably lowered:

Krita

I do all of my digital art in Krita. It’s pretty alright.

Okay so Krita grew out of Calligra, which used to be KOffice, which was an office suite designed for KDE (a Linux desktop environment). I bring this up because KDE has a certain… reputation. With KDE, there are at least three completely different ways to do anything, each of those ways has ludicrous amounts of customization and settings, and somehow it still can’t do what you want.

Krita inherits this aesthetic by attempting to do literally everything. It has 17 different brush engines, more than 70 layer blending modes, seven color picker dockers, and an ungodly number of colorspaces. It’s clearly intended primarily for drawing, but it also supports animation and vector layers and a pretty decent spread of raster editing tools. I just right now discovered that it has Photoshop-like “layer styles” (e.g. drop shadow), after a year and a half of using it.

In fairness, Krita manages all of this stuff well enough, and (apparently!) it manages to stay out of your way if you’re not using it. In less fairness, they managed to break erasing with a Wacom tablet pen for three months?

I don’t want to rag on it too hard; it’s an impressive piece of work, and I enjoy using it! The emotion it evokes isn’t so much frustration as… mystified bewilderment.

I once filed a ticket suggesting the addition of a brush size palette — a panel showing a grid of fixed brush sizes that makes it easy to switch between known sizes with a tablet pen (and increases the chances that you’ll be able to get a brush back to the right size again). It’s a prominent feature of Paint Tool SAI and Clip Studio Paint, and while I’ve never used either of those myself, I’ve seen a good few artists swear by it.

The developer response was that I could emulate the behavior by creating brush presets. But that’s flat-out wrong: getting the same effect would require creating a ton of brush presets for every brush I have, plus giving them all distinct icons so the size is obvious at a glance. Even then, it would be much more tedious to use and fill my presets with junk.

And that sort of response is what’s so mysterious to me. I’ve never even been able to use this feature myself, but a year of amateur painting with Krita has convinced me that it would be pretty useful. But a developer didn’t see the use and suggested an incredibly tedious alternative that only half-solves the problem and creates new ones. Meanwhile, of the 28 existing dockable panels, a quarter of them are different ways to choose colors.

What is Krita trying to be, then? What does Krita think it is? Who precisely is the target audience? I have no idea.


Anyway, I enjoy drawing in Krita well enough. It ships with a respectable set of brushes, and there are plenty more floating around. It has canvas rotation, canvas mirroring, perspective guide tools, and other art goodies. It doesn’t colordrop on right click by default, which is arguably a grave sin (it shows a customizable radial menu instead), but that’s easy to rebind. It understands having a background color beneath a bottom transparent layer, which is very nice. You can also toggle any brush between painting and erasing with the press of a button, and that turns out to be very useful.

It doesn’t support infinite canvases, though it does offer a one-click button to extend the canvas in a given direction. I’ve never used it (and didn’t even know what it did until just now), but would totally use an infinite canvas.

I haven’t used the animation support too much, but it’s pretty nice to have. Granted, the only other animation software I’ve used is Aseprite, so I don’t have many points of reference here. It’s a relatively new addition, too, so I assume it’ll improve over time.

The one annoyance I remember with animation was really an interaction with a larger annoyance, which is: working with selections kind of sucks. You can’t drag a selection around with the selection tool; you have to switch to the move tool. That would be fine if you could at least drag the selection ring around with the selection tool, but you can’t do that either; dragging just creates a new selection.

If you want to copy a selection, you have to explicitly copy it to the clipboard and paste it, which creates a new layer. Ctrl-drag with the move tool doesn’t work. So then you have to merge that layer down, which I think is where the problem with animation comes in: a new layer is non-animated by default, meaning it effectively appears in any frame, so simply merging it down will merge it onto every single frame of the layer below. And you won’t even notice until you switch frames or play back the animation. Not ideal.

This is another thing that makes me wonder about Krita’s sense of identity. It has a lot of fancy general-purpose raster editing features that even GIMP is still struggling to implement, like high color depth support and non-destructive filters, yet something as basic as working with selections is clumsy. (In fairness, GIMP is a bit clumsy here too, but it has a consistent notion of “floating selection” that’s easy enough to work with.)

I don’t know how well Krita would work as a general-purpose raster editor; I’ve never tried to use it that way. I can’t think of anything obvious that’s missing. The only real gotcha is that some things you might expect to be tools, like smudge or clone, are just types of brush in Krita.

GIMP

Ah, GIMP — open source’s answer to Photoshop.

It’s very obviously intended for raster editing, and I’m pretty familiar with it after half a lifetime of only using Linux. I even wrote a little Scheme script for it ages ago to automate some simple edits to a couple hundred files, back before I was aware of ImageMagick. I don’t know what to say about it, specifically; it’s fairly powerful and does a wide variety of things.

In fact I’d say it’s almost frustratingly intended for raster editing. I used GIMP in my first attempts at digital painting, before I’d heard of Krita. It was okay, but so much of it felt clunky and awkward. Painting is split between a pencil tool, a paintbrush tool, and an airbrush tool; I don’t really know why. The default brushes are largely uninteresting. Instead of brush presets, there are tool presets that can be saved for any tool; it’s a neat idea, but doesn’t feel like a real substitute for brush presets.

Much of the same functionality as Krita is there, but it’s all somehow more clunky. I’m sure it’s possible to fiddle with the interface to get something friendlier for painting, but I never really figured out how.

And then there’s the surprising stuff that’s missing. There’s no canvas rotation, for example. There’s only one type of brush, and it just stamps the same pattern along a path. I don’t think it’s possible to smear or blend or pick up color while painting. The only way to change the brush size is via the very sensitive slider on the tool options panel, which I remember being a little annoying with a tablet pen. Also, you have to specifically enable tablet support? It’s not difficult or anything, but I have no idea why the default is to ignore tablet pressure and treat it like a regular mouse cursor.

As I mentioned above, there’s also no support for high color depth or non-destructive editing, which is honestly a little embarrassing. Those are the major things Serious Professionals™ have been asking for for ages, and GIMP has been trying to provide them, but it’s taking a very long time. The first signs of GEGL, a new library intended to provide these features, appeared in GIMP 2.6… in 2008. The last major release was in 2012. GIMP has been working on this new plumbing for almost as long as Krita’s entire development history. (To be fair, Krita has also raised almost €90,000 from three Kickstarters to fund its development; I don’t know that GIMP is funded at all.)

I don’t know what’s up with GIMP nowadays. It’s still under active development, but the exact status and roadmap are a little unclear. I still use it for some general-purpose editing, but I don’t see any reason to use it to draw.

I do know that canvas rotation will be in the next release, and there was some experimentation with embedding MyPaint’s brush engine (though when I tried it it was basically unusable), so maybe GIMP is interested in wooing artists? I guess we’ll see.

MyPaint

Ah, MyPaint. I gave it a try once. Once.

It’s a shame, really. It sounds pretty great: specifically built for drawing, has very powerful brushes, supports an infinite canvas, supports canvas rotation, has a simple UI that gets out of your way. Perfect.

Or so it seems. But in MyPaint’s eagerness to shed unnecessary raster editing tools, it forgot a few of the more useful ones. Like selections.

MyPaint has no notion of a selection, nor of copy/paste. If you want to move a head to align better to a body, for example, the sanctioned approach is to duplicate the layer, erase the head from the old layer, erase everything but the head from the new layer, then move the new layer.

I can’t find anything that resembles HSL adjustment, either. I guess the workaround for that is to create H/S/L layers and floodfill them with different colors until you get what you want.

I can’t work seriously without these basic editing tools. I could see myself doodling in MyPaint, but Krita works just as well for doodling as for serious painting, so I’ve never gone back to it.

Drawpile

Drawpile is the modern equivalent to OpenCanvas, I suppose? It lets multiple people draw on the same canvas simultaneously. (I would not recommend it as a general-purpose raster editor.)

It’s a little clunky in places — I sometimes have bugs where keyboard focus gets stuck in the chat, or my tablet cursor becomes invisible — but the collaborative part works surprisingly well. It’s not a brush powerhouse or anything, and I don’t think it allows textured brushes, but it supports tablet pressure and canvas rotation and locked alpha and selections and whatnot.

I’ve used it a couple times, and it’s worked well enough that… well, other people made pretty decent drawings with it? I’m not sure I’ve managed yet. And I wouldn’t use it single-player. Still, it’s fun.

Aseprite

Aseprite is for pixel art so it doesn’t really belong here at all. But it’s very good at that and I like it a lot.

That’s all

I can’t name any other serious contender that exists for Linux.

I’m dimly aware of a thing called “Photo Shop” that’s more intended for photos but functions as a passable painter. More artists seem to swear by Paint Tool SAI and Clip Studio Paint. Also there’s Paint.NET, but I have no idea how well it’s actually suited for painting.

And that’s it! That’s all I’ve got. Krita for drawing, GIMP for editing, Drawpile for collaborative doodling.

Pirate Bay Ruling is Bad News For Google & YouTube, Experts Say

Post Syndicated from Andy original https://torrentfreak.com/pirate-bay-ruling-is-bad-news-for-google-youtube-experts-says-170615/

After years of legal wrangling, yesterday the European Court of Justice handed down a decision in the case between Dutch anti-piracy outfit BREIN and ISPs Ziggo and XS4ALL.

BREIN had demanded that the ISPs block The Pirate Bay, but both providers dug in their heels, forcing the case through the Supreme Court and eventually the ECJ.

For BREIN, yesterday’s decision will have been worth the wait. Although The Pirate Bay does not provide the content that’s ultimately downloaded and shared by its users, the ECJ said that it plays an important role in how that content is presented.

“Whilst it accepts that the works in question are placed online by the users, the Court highlights the fact that the operators of the platform play an essential role in making those works available,” the Court said.

With that established the all-important matter is whether by providing such a platform, the operators of The Pirate Bay are effectively engaging in a “communication to the public” of copyrighted works. According to the ECJ, that’s indeed the case.

“The Court holds that the making available and management of an online sharing platform must be considered to be an act of communication for the purposes of the directive,” the ECJ said.

Add into the mix that The Pirate Bay generates profit from its activities and there’s a potent case for copyright liability.

While the case was about The Pirate Bay, ECJ rulings tend to have an effect far beyond individual cases. That’s certainly the opinion of Enzo Mazza, chief at Italian anti-piracy group FIMI.

“The ruling will have a major impact on the way that entities like Google operate, because it will expose them to a greater and more direct responsibility,” Mazza told La Repubblica.

“So far, Google has worked against piracy by eliminating illegal content after it gets reported. But that is not enough. It is a fairly ineffective intervention.”

Mazza says that platforms like Google, YouTube, and thousands of similar sites that help to organize and curate user-uploaded content are somewhat similar to The Pirate Bay. In any event, they are not neutral intermediaries, he insists.

The conclusion that the decision is bad for platforms like YouTube is shared by Fulvio Sarzana, a lawyer with Sarzana and Partners, a law firm specializing in Internet and copyright disputes.

“In the ruling, the Court has in fact attributed, for the first time, secondary liability to sharing platforms due to the violation of copyrights carried out by the users of a platform,” Sarzana informs TF.

“This will have consequences for video-sharing platforms and user-generated content sites like YouTube, but it excludes responsibility for platforms that play a purely passive role, without affecting users’ content. This is the case with cyberlockers, for example.”

Sarzana says that “unfortunate judgments” like this should be expected, until the approval of a new European copyright law. Enzo Mazza, on the other hand, feels that the copyright reform debate should take account of this ruling when formulating legislation to stop platforms like YouTube exploiting copyright works without an appropriate license.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

[$] Assembling the history of Unix

Post Syndicated from corbet original https://lwn.net/Articles/725297/rss

The moment when an antique operating system that has not run in decades boots and presents a command prompt is thrilling for Warren Toomey, who founded the Unix Heritage Society to reconstruct the early history of the Unix operating system. Recently this historical code has become much more accessible: we can now browse it in an instant on GitHub, thanks to the efforts of a computer science professor at the Athens University of Economics and Business named Diomidis Spinellis.

Click below (subscribers only) for a look at the Unix Heritage Society and
what it has accomplished.

UK Police Claim Success in Keeping Gambling Ads off Pirate Sites

Post Syndicated from Andy original https://torrentfreak.com/uk-police-claim-success-in-keeping-gambling-ads-off-pirate-sites-170614/

Over the past several years, there has been a major effort by entertainment industry groups to cut off revenue streams to ‘pirate’ sites. The theory is that if sites cannot generate funds, their operators will eventually lose interest.

Since advertising is a key money earner for any website, significant resources have been expended trying to keep ads off sites that directly or indirectly profit from infringement. It’s been a multi-pronged affair, with agencies being encouraged to do the right thing and brands warned that their ads appearing on pirate sites does nothing for their image.

One sector that has trailed behind most is the gambling industry. Up until fairly recently, ads for some of the UK’s largest bookmakers have been a regular feature on many large pirate sites, either embedded in pages or, more often than not, appearing via popup or pop-under spreads. Now, however, a significant change is being reported.

According to the City of London Police’s Intellectual Property Crime Unit (PIPCU), over the past 12 months there has been an 87% drop in adverts for licensed gambling operators being displayed on infringing websites.

The research was carried out by whiteBULLET, a brand safety and advertising solutions company which helps advertisers to assess whether placing an advert on a particular URL will cause it to appear on a pirate site.

PIPCU says that licensed gambling operators have an obligation to “keep crime out of gambling” due to their commitments under the Gambling Act 2005. However, the Gambling Commission, the UK’s gambling regulatory body, has recently been taking additional steps to tackle the problem.

In September 2015, the Commission consulted on amendments (pdf) to licensing conditions that would compel licensees to ensure that advertisements “placed by themselves and others” do not appear on websites providing unauthorized access to copyrighted content.

After the consultation was published in May 2016 (pdf), all respondents agreed in principle that gambling operators should not advertise on pirate sites. A month later, the Commission said it would ban the placement of gambling ads on such platforms.

When the new rules came into play last October, 40 gambling companies (including Bet365, Coral and Sky Bet, who had previously been called out for displaying ads on pirate sites) were making use of PIPCU’s ‘Infringing Website List‘, a database of sites that police claim are actively involved in piracy.

Speaking yesterday, acting Detective Superintendent Peter Ratcliffe, Head of the Police Intellectual Property Crime Unit (PIPCU), welcomed the ensuing reduction in ad placement on ‘pirate’ domains.

“The success of a strong relationship built between PIPCU and The Gambling Commission can be seen by these figures. This is a fantastic example of a joint working initiative between police and an industry regulator,” Ratcliffe said.

“We commend the 40 gambling companies who are already using the Infringing Website List and encourage others to sign up. We will continue to encourage all UK advertisers to become a member of the Infringing Website List to ensure they’re not inadvertently funding criminal websites.”

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

More notes on US-CERTs IOCs

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/more-notes-on-us-certs-iocs.html

Yet another Russian attack against the power grid, and yet more bad IOCs from the DHS US-CERT.

IOCs are “indicators of compromise”, things you can look for in order to see if you, too, have been hacked by the same perpetrators. There are several types of IOCs, ranging from the highly specific to the uselessly generic.

A uselessly generic IOC would be like trying to identify bank robbers by the fact that their getaway car was “white” in color. It’s worth documenting, so that if the police ever show up in a suspected cabin in the woods, they can note that there’s a “white” car parked in front.

But if you work bank security, that doesn’t mean you should be on the lookout for “white” cars. That would be silly.

This is what happens with US-CERT’s IOCs. They list some potentially useful things, but they also list a lot of junk that wastes people’s time, with little ability to distinguish between the useful and the useless.

An example: a few months ago, US-CERT published the GRIZZLEYBEAR report. Among other things, it listed IP addresses used by hackers, but gave no indication of which IP addresses would be useful to watch for and which would be useless.

Some of these IP addresses were useful, pointing to servers the group has been using as command-and-control servers for a long time. Other IP addresses were more dubious, such as Tor exit nodes. You aren’t concerned about any specific Tor exit IP address, because it changes randomly and has no relationship to the attackers. If you cared about those Tor IP addresses, what you should be looking for instead is a list of Tor exit nodes that is updated daily.

And finally, they listed IP addresses of Yahoo, because attackers passed data through Yahoo servers. No, it wasn’t because those Yahoo servers had been compromised, it’s just that everyone passes things though them, like email.

A Vermont power-plant blindly dumped all those IP addresses into their sensors. As a consequence, the next morning when an employee checked their Yahoo email, the sensors triggered. This resulted in national headlines about the Russians hacking the Vermont power grid.

Today, the US-CERT made similar mistakes with CRASHOVERRIDE. They took a report from Dragos Security, then mutilated it. Dragos’s own IOCs focused on things like hostile strings and file hashes of the hostile files. They also included filenames, but for the same reason you’d note a white car — because it happened, not because you should be on the lookout for it. In context, there’s nothing wrong with noting the file name.

But the US-CERT pulled the filenames out of context. One of those filenames was, humorously, “svchost.exe”. It’s the name of an essential Windows service. Every Windows computer is running multiple copies of “svchost.exe”. It’s like saying “be on the lookout for Windows”.

Yes, it’s true that viruses use the same filenames as essential Windows files like “svchost.exe”. That’s, generally, something you should be aware of. But that CRASHOVERRIDE did this is wholly meaningless.

What Dragos Security was actually reporting was that a “svchost.exe” with the file hash of 79ca89711cdaedb16b0ccccfdcfbd6aa7e57120a was the virus — it’s the hash that’s the important IOC. Pulling the filename out of context is just silly.

Luckily, the DHS also provides some of the raw information provided by Dragos. But even then, there’s problems: they provide it in formatted form, for HTML, PDF, or Excel documents. This corrupts the original data so that it’s no longer machine readable. For example, from their webpage, they have the following:

import “pe”
import “hash”

Among the problems is the fact that the quote marks have been altered, probably by Word’s “smart quotes” feature. In other cases, I’ve seen PDF documents get confused by the number 0 and the letter O, as if the raw data had been scanned in from a printed document and OCRed.

If this were a “threat intel” company, we’d call this snake oil. The US-CERT is using Dragos Security’s reports to promote itself, but ultimately provides negative value, mutilating the content.

This, ultimately, causes a lot of harm. The press trusted their content. So does the network of downstream entities, like municipal power grids. There are tens of thousands of such consumers of these reports, often with less expertise than even US-CERT. There are sprinklings of smart people in these organizations, I meet them at hacker cons, and am fascinated by their stories. But institutionally, they are dumbed down to the same level as these US-CERT reports, with the smart people marginalized.

There are two solutions to this problem. The first is that when the stupidity of what you do causes everyone to laugh at you, stop doing it. The second is to value technical expertise, empowering those who know what they are doing. Examples of what not to do are giving power to people like Obama’s cyberczar, Michael Daniels, who once claimed his lack of technical knowledge was a bonus, because it allowed him to see the strategic picture instead of getting distracted by details.

Weekly roundup: Potpourri

Post Syndicated from Eevee original https://eev.ee/dev/2017/06/12/weekly-roundup-potpourri/

  • potluck: It’s about time I actually started on this! I spent a couple days squabbling with JavaScript frameworks, but in the end I gave up and decided to just build it on my existing LÖVE code. Trying out a new thing is nice, but maybe not when I have a somewhat complicated game in mind that I want to build as soon as possible.

  • fox flux: I made a font! Also drew a ton of player sprites and made my grass much prettier. I’m going to need to start thinking about environmental art soon, and I’m kinda dreading it because I have no idea what I’m doing there.

  • art: I did some (unfinished) modelling and painted some character art (warning: this pic is fine, but the rest of the account is NSFW).

  • blog: I wrote some thoughts about teaching technical subjects.

For the first time in possibly my entire life, I feel like I’m a little ahead of the game! The month isn’t even half over and I’ve already done some obligatory stuff, finished off one languishing task, and made some decent inroads into several other things. Nice.

Popular Release Group ShAaNiG Permanently Shuts Down

Post Syndicated from Andy original https://torrentfreak.com/popular-release-group-shaanig-permanently-shuts-down-170612/

While there are dozens of torrent release groups in operation today, some providing extremely high quality work, every few years a notable ‘brand’ group appears.

Two of the most famous from recent memory were aXXo and YIFY. Neither were known for historic individual releases or world-beating quality, but both were particularly consistent. An aXXo or YIFY label on an official torrent assured the potential downloader they would be getting a ‘McDonalds-quality’ product; never haute cuisine but just enough taste and in enough volume to fill people up.

As a result, these groups gained millions of followers, something that put anti-piracy targets on their backs. No surprise then that neither is around today, with YIFY subjected to legal action in New Zealand and aXXo… well, no one seems to know.

With those groups gone, there was a gap in the market for a similar product. Popular releases delivered to the masses in small file sizes is clearly a recipe for success and an existing group called ShAaNiG decided to step in to take up some of the slack.

What followed was thousands of ShAaNiG movie and TV show releases, which were uploaded to The Pirate Bay and direct download sites. They also took pride of place on the group’s forum at Shaanig.org, where they were neatly organized into relevant categories.

ShAaNiG’s release forum

But like aXXo and YIFY before it, something went wrong at ShAaNiG. After publishing a couple of releases on Saturday, including a Blu-ray rip of the movie Jawbone and an episode of the TV show Outcast, ShAaNiG unexpectedly threw in the towel. A notice on the group’s homepage gives no reason for the sudden shutdown but makes it clear that ShAaNiG won’t be coming back.

“ShAaNiG has shut down permanently,” it reads. “Every journey must come to an end, This is the end of our journey. Thank you for all your support.”

While there are only so many ways to say that a site has been shut down for good, the first sentence is identical to the one used by ExtraTorrent when it closed down last month.

Another potentially interesting similarity is that both ExtraTorrent and ShAaNiG had huge followings in India, with both sites indexing a lot of local content, especially movies.

More than 30% of ShAaNiG’s traffic came from India, with much of it driven from The Pirate Bay where more than a thousand releases live on today. When ExtraTorrent shut down, around 40% of the new traffic arriving at another popular platform came from India.

Whether the Indian connection is pure coincidence remains to be seen, but it seems possible if not likely that some kind of legal pressure played a part in the demise of both. However, if the situation plays out in the same manner, we’ll hear no more and like ExtraTorrent, ShAaNiG will simply fade away.

While that will come as a huge disappointment to ShAaNiG fans, other file-sharers are likely to have less sympathy. Like aXXo and YIFY before it, ShAaNiG was rarely (if ever) the source of the material it released, instead preferring to re-encode existing releases. For some pirates, that’s a red line that should never be crossed.

Whether a new group will rise phoenix-like from the ashes remains to be seen, but as these ‘brand’ groups have established time and again, there’s always a market for passable-quality movie releases, provided they come in a compact file size.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Teaching tech

Post Syndicated from Eevee original https://eev.ee/blog/2017/06/10/teaching-tech/

A sponsored post from Manishearth:

I would kinda like to hear about any thoughts you have on technical teaching or technical writing. Pedagogy is something I care about. But I don’t know how much you do, so feel free to ignore this suggestion 🙂

Good news: I care enough that I’m trying to write a sorta-kinda-teaching book!

Ironically, one of the biggest problems I’ve had with writing the introduction to that book is that I keep accidentally rambling on for pages about problems and difficulties with teaching technical subjects. So maybe this is a good chance to get it out of my system.

Phaser

I recently tried out a new thing. It was Phaser, but this isn’t a dig on them in particular, just a convenient example fresh in my mind. If anything, they’re better than most.

As you can see from Phaser’s website, it appears to have tons of documentation. Two of the six headings are “LEARN” and “EXAMPLES”, which seems very promising. And indeed, Phaser offers:

  • Several getting-started walkthroughs
  • Possibly hundreds of examples
  • A news feed that regularly links to third-party tutorials
  • Thorough API docs

Perfect. Beautiful. Surely, a dream.

Well, almost.

The examples are all microscopic, usually focused around a single tiny feature — many of them could be explained just as well with one line of code. There are a few example games, but they’re short aimless demos. None of them are complete games, and there’s no showcase either. Games sometimes pop up in the news feed, but most of them don’t include source code, so they’re not useful for learning from.

Likewise, the API docs are just API docs, leading to the sorts of problems you might imagine. For example, in a few places there’s a mention of a preUpdate stage that (naturally) happens before update. You might rightfully wonder what kinds of things happen in preUpdate — and more importantly, what should you put there, and why?

Let’s check the API docs for Phaser.Group.preUpdate:

The core preUpdate – as called by World.

Okay, that didn’t help too much, but let’s check what Phaser.World has to say:

The core preUpdate – as called by World.

Ah. Hm. It turns out World is a subclass of Group and inherits this method — and thus its unaltered docstring — from Group.

I did eventually find some brief docs attached to Phaser.Stage (but only by grepping the source code). It mentions what the framework uses preUpdate for, but not why, and not when I might want to use it too.


The trouble here is that there’s no narrative documentation — nothing explaining how the library is put together and how I’m supposed to use it. I get handed some brief primers and a massive reference, but nothing in between. It’s like buying an O’Reilly book and finding out it only has one chapter followed by a 500-page glossary.

API docs are great if you know specifically what you’re looking for, but they don’t explain the best way to approach higher-level problems, and they don’t offer much guidance on how to mesh nicely with the design of a framework or big library. Phaser does a decent chunk of stuff for you, off in the background somewhere, so it gives the strong impression that it expects you to build around it in a particular way… but it never tells you what that way is.

Tutorials

Ah, but this is what tutorials are for, right?

I confess I recoil whenever I hear the word “tutorial”. It conjures an image of a uniquely useless sort of post, which goes something like this:

  1. Look at this cool thing I made! I’ll teach you how to do it too.

  2. Press all of these buttons in this order. Here’s a screenshot, which looks nothing like what you have, because I’ve customized the hell out of everything.

  3. You did it!

The author is often less than forthcoming about why they made any of the decisions they did, where you might want to try something else, or what might go wrong (and how to fix it).

And this is to be expected! Writing out any of that stuff requires far more extensive knowledge than you need just to do the thing in the first place, and you need to do a good bit of introspection to sort out something coherent to say.

In other words, teaching is hard. It’s a skill, and it takes practice, and most people blogging are not experts at it. Including me!


With Phaser, I noticed that several of the third-party tutorials I tried to look at were 404s — sometimes less than a year after they were linked on the site. Pretty major downside to relying on the community for teaching resources.

But I also notice that… um…

Okay, look. I really am not trying to rag on this author. I’m not. They tried to share their knowledge with the world, and that’s a good thing, something worthy of praise. I’m glad they did it! I hope it helps someone.

But for the sake of example, here is the most recent entry in Phaser’s list of community tutorials. I have to link it, because it’s such a perfect example. Consider:

  • The post itself is a bulleted list of explanation followed by a single contiguous 250 lines of source code. (Not that there’s anything wrong with bulleted lists, mind you.) That code contains zero comments and zero blank lines.

  • This is only part two in what I think is a series aimed at beginners, yet the title and much of the prose focus on object pooling, a performance hack that’s easy to add later and that’s almost certainly unnecessary for a game this simple. There is no explanation of why this is done; the prose only says you’ll understand why it’s critical once you add a lot more game objects.

  • It turns out I only have two things to say here so I don’t know why I made this a bulleted list.

In short, it’s not really a guided explanation; it’s “look what I did”.

And that’s fine, and it can still be interesting. I’m not sure English is even this person’s first language, so I’m hardly going to criticize them for not writing a novel about platforming.

The trouble is that I doubt a beginner would walk away from this feeling very enlightened. They might be closer to having the game they wanted, so there’s still value in it, but it feels closer to having someone else do it for them. And an awful lot of tutorials I’ve seen — particularly of the “post on some blog” form (which I’m aware is the genre of thing I’m writing right now) — look similar.

This isn’t some huge social problem; it’s just people writing on their blog and contributing to the corpus of written knowledge. It does become a bit stickier when a large project relies on these community tutorials as its main set of teaching aids.


Again, I’m not ragging on Phaser here. I had a slightly frustrating experience with it, coming in knowing what I wanted but unable to find a description of the semantics anywhere, but I do sympathize. Teaching is hard, writing documentation is hard, and programmers would usually rather program than do either of those things. For free projects that run on volunteer work, and in an industry where anything other than programming is a little undervalued, getting good docs written can be tricky.

(Then again, Phaser sells books and plugins, so maybe they could hire a documentation writer. Or maybe the whole point is for you to buy the books?)

Some pretty good docs

Python has pretty good documentation. It introduces the language with a tutorial, then documents everything else in both a library and language reference.

This sounds an awful lot like Phaser’s setup, but there’s some considerable depth in the Python docs. The tutorial is highly narrative and walks through quite a few corners of the language, stopping to mention common pitfalls and possible use cases. I clicked an arbitrary heading and found a pleasant, informative read that somehow avoids being bewilderingly dense.

The API docs also take on a narrative tone — even something as humble as the collections module offers numerous examples, use cases, patterns, recipes, and hints of interesting ways you might extend the existing types.

I’m being a little vague and hand-wavey here, but it’s hard to give specific examples without just quoting two pages of Python documentation. Hopefully you can see right away what I mean if you just take a look at them. They’re good docs, Bront.

I’ve likewise always enjoyed the SQLAlchemy documentation, which follows much the same structure as the main Python documentation. SQLAlchemy is a database abstraction layer plus ORM, so it can do a lot of subtly intertwined stuff, and the complexity of the docs reflects this. Figuring out how to do very advanced things correctly, in particular, can be challenging. But for the most part it does a very thorough job of introducing you to a large library with a particular philosophy and how to best work alongside it.

I softly contrast this with, say, the Perl documentation.

It’s gotten better since I first learned Perl, but Perl’s docs are still a bit of a strange beast. They exist as a flat collection of manpage-like documents with terse names like perlootut. The documentation is certainly thorough, but much of it has a strange… allocation of detail.

For example, perllol — the explanation of how to make a list of lists, which somehow merits its own separate documentation — offers no fewer than nine similar variations of the same code for reading a file into a nested list of words on each line. Where Python offers examples for a variety of different problems, Perl shows you a lot of subtly different ways to do the same basic thing.
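For contrast, the task those nine variations keep re-deriving fits in a couple of lines of Python (the filename here is just a placeholder):

# Read a file into a nested list: one inner list of words per line.
with open("input.txt") as handle:
    words_per_line = [line.split() for line in handle]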

A similar problem is that Perl’s docs sometimes offer far too much context; consider the references tutorial, which starts by explaining that references are a powerful “new” feature in Perl 5 (first released in 1994). It then explains why you might want to nest data structures… from a Perl 4 perspective, thus explaining why Perl 5 is so much better.

Some stuff I’ve tried

I don’t claim to be a great teacher. I like to talk about stuff I find interesting, and I try to do it in ways that are accessible to people who aren’t lugging around the mountain of context I already have. This being just some blog, it’s hard to tell how well that works, but I do my best.

I also know that I learn best when I can understand what’s going on, rather than just seeing surface-level cause and effect. Of course, with complex subjects, it’s hard to develop an understanding before you’ve seen the cause and effect a few times, so there’s a balancing act between showing examples and trying to provide an explanation. Too many concrete examples feel like rote memorization; too much abstract theory feels disconnected from anything tangible.

The attempt I’m most pleased with is probably my post on Perlin noise. It covers a fairly specific subject, which made it much easier. It builds up one step at a time from scratch, with visualizations at every point. It offers some interpretations of what’s going on. It clearly explains some possible extensions to the idea, but distinguishes those from the core concept.

It is a little math-heavy, I grant you, but that was hard to avoid with a fundamentally mathematical topic. I had to be economical with the background information, so I let the math be a little dense in places.

But the best part about it by far is that I learned a lot about Perlin noise in the process of writing it. In several places I realized I couldn’t explain what was going on in a satisfying way, so I had to dig deeper into it before I could write about it. Perhaps there’s a good guideline hidden in there: don’t try to teach as much as you know?

I’m also fairly happy with my series on making Doom maps, though they meander into tangents a little more often. It’s hard to talk about something like Doom without meandering, since it’s a convoluted ecosystem that’s grown organically over the course of 24 years and has at least three ways of doing anything.


And finally there’s the book I’m trying to write, which is sort of about game development.

One of my biggest grievances with game development teaching in particular is how often it leaves out important touches. Very few guides will tell you how to make a title screen or menu, how to handle death, how to get a Mario-style variable jump height. They’ll show you how to build a clearly unfinished demo game, then leave you to your own devices.

I realized that the only reliable way to show how to build a game is to build a real game, then write about it. So the book is laid out as a narrative of how I wrote my first few games, complete with stumbling blocks and dead ends and tiny bits of polish.

I have no idea how well this will work, or whether recapping my own mistakes will be interesting or distracting for a beginner, but it ought to be an interesting experiment.

Nintendo & BREIN Target Seller of ‘Pirate’ Retro Gaming System

Post Syndicated from Andy original https://torrentfreak.com/nintendo-brein-target-seller-of-pirate-retro-gaming-system-170610/

As millions of often younger gamers immerse themselves in the latest 3D romp-fests from the world’s leading games developers, huge numbers of people are reliving their youth through the wonders of emulation.

The majority of old gaming systems can be emulated on a decent PC these days, opening up the possibility of reanimating thousands of the greatest games to ever grace the planet. While that’s a great prospect, the good news doesn’t stop there. The games are all free – if you don’t mind pirating them.

While many people go the do-it-yourself route by downloading emulators and ROMs (the games) from the Internet, increasingly people are saving time by buying systems ready-made online. Some of these are hugely impressive, housed in full-size arcade machine cabinets and packing many thousands of games. They also have sizeable price tags to match, running in some cases to thousands of dollars. But there are other options.

The rise of affordable compact computers has opened up emulation and retro gaming to a whole new audience, and inevitably some people have taken to selling these devices online with the games pre-bundled on SD cards. These systems can be obtained relatively cheaply, but despite the games being old, companies like Nintendo still take a dim view of their sale.

That’s also the case in the Netherlands, where Nintendo and other companies are taking action against people involved in the sale of what are effectively pirate gaming systems. In a recent case, Dutch anti-piracy outfit BREIN took action against the operator of the Retrospeler (Retro Player) site, an outlet selling a ready-made retro gaming system.

Retro Player site (translated from Dutch)

As seen from the image above, for a little under 110 euros the player can buy a games machine with classics like Super Mario, Street Fighter, and Final Fantasy pre-installed. Add a TV via an HDMI lead and a joypad or two, and yesteryear gaming becomes reality today. Unfortunately, the fun didn’t last long and it was soon “Game Over” for Retro Player.

Speaking with TorrentFreak, BREIN chief Tim Kuik says that the system sold by Retro Player was based on the popular Raspberry Pi single-board computer. Although small and relatively cheap, the Pi is easily capable of running retro games via software such as RetroPie, but it’s unclear which product was installed on the version sold by Retro Player.

What is clear is that the device came pre-installed with a lot of games. The now-defunct Retro Player site listed 6,500 titles for a wide range of classic gaming systems, including Gameboy, Super Nintendo, Nintendo 64, Megadrive and Playstation. Kuik didn’t provide precise numbers but said that the machine came packaged with “a couple of thousand” titles.

BREIN says in this particular case it was acting on behalf of Nintendo, among others. However, it doesn’t appear that the case will be going to court. Like many other cases handled by the anti-piracy group, BREIN says it has reached a settlement with the operator of the Retro Player site for an unspecified amount.

The debate and controversy surrounding retro gaming and emulation is one that has been running for years. The thriving community sees little wrong with reanimating games for long-dead systems and giving them new life among a new audience. On the other hand, copyright holders such as Nintendo view their titles as their property, to be exploited in a time, place and manner of their choosing.

While that friction will continue for a long time to come, there will be few if any legal problems for those choosing to pursue their emulation fantasies in the privacy of their own home. Retro gaming is here to stay and as long as computing power continues to increase, the experience is only likely to improve.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

NSA Document Outlining Russian Attempts to Hack Voter Rolls

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/nsa_document_ou.html

This week brought new public evidence about Russian interference in the 2016 election. On Monday, the Intercept published a top-secret National Security Agency document describing Russian hacking attempts against the US election system. While the attacks seem more exploratory than operational – and there’s no evidence that they had any actual effect – they further illustrate the real threats and vulnerabilities facing our elections, and they point to solutions.

The document describes how the GRU, Russia’s military intelligence agency, attacked a company called VR Systems that, according to its website, provides software to manage voter rolls in eight states. The August 2016 attack was successful, and the attackers used the information they stole from the company’s network to launch targeted attacks against 122 local election officials on October 27, 12 days before the election.

That is where the NSA’s analysis ends. We don’t know whether those 122 targeted attacks were successful, or what their effects were if so. We don’t know whether other election software companies besides VR Systems were targeted, or what the GRU’s overall plan was — if it had one. Certainly, there are ways to disrupt voting by interfering with the voter registration process or voter rolls. But there was no indication on Election Day that people found their names removed from the system, or their address changed, or anything else that would have had an effect — anywhere in the country, let alone in the eight states where VR Systems is deployed. (There were Election Day problems with the voting rolls in Durham, NC – one of the states that VR Systems supports – but they seem like conventional errors and not malicious action.)

And 12 days before the election (with early voting already well underway in many jurisdictions) seems far too late to start an operation like that. That is why these attacks feel exploratory to me, rather than part of an operational attack. The Russians were seeing how far they could get, and keeping those accesses in their pocket for potential future use.

Presumably, this document was intended for the Justice Department, including the FBI, which would be the proper agency to continue looking into these hacks. We don’t know what happened next, if anything. VR Systems isn’t commenting, and the names of the local election officials targeted did not appear in the NSA document.

So while this document isn’t much of a smoking gun, it’s yet more evidence of widespread Russian attempts to interfere last year.

The document was, allegedly, sent to the Intercept anonymously. An NSA contractor, Reality Leigh Winner, was arrested Saturday and charged with mishandling classified information. The speed with which the government identified her serves as a caution to anyone wanting to leak official US secrets.

The Intercept sent a scan of the document to another source during its reporting. That scan showed a crease in the original document, which implied that someone had printed the document and then carried it out of some secure location. The second source, according to the FBI’s affidavit against Winner, passed it on to the NSA. From there, NSA investigators were able to look at their records and determine that only six people had printed out the document. (The government may also have been able to track the printout through secret dots that identified the printer.) Winner was the only one of those six who had been in e-mail contact with the Intercept. It is unclear whether the e-mail evidence was from Winner’s NSA account or her personal account, but in either case, it’s incredibly sloppy tradecraft.

With President Trump’s election, the issue of Russian interference in last year’s campaign has become highly politicized. Reports like the one from the Office of the Director of National Intelligence in January have been criticized by partisan supporters of the White House. It’s interesting that this document was reported by the Intercept, which has been historically skeptical about claims of Russian interference. (I was quoted in their story, and they showed me a copy of the NSA document before it was published.) The leaker was even praised by WikiLeaks founder Julian Assange, who up until now has been traditionally critical of allegations of Russian election interference.

This demonstrates the power of source documents. It’s easy to discount a Justice Department official or a summary report. A detailed NSA document is much more convincing. Right now, there’s a federal suit to force the ODNI to release the entire January report, not just the unclassified summary. These efforts are vital.

This hack will certainly come up at the Senate hearing where former FBI director James B. Comey is scheduled to testify Thursday. Last year, there were several stories about voter databases being targeted by Russia. Last August, the FBI confirmed that the Russians successfully hacked voter databases in Illinois and Arizona. And a month later, an unnamed Department of Homeland Security official said that the Russians targeted voter databases in 20 states. Again, we don’t know of anything that came of these hacks, but expect Comey to be asked about them. Unfortunately, any details he does know are almost certainly classified, and won’t be revealed in open testimony.

But more important than any of this, we need to better secure our election systems going forward. We have significant vulnerabilities in our voting machines, our voter rolls and registration process, and the vote tabulation systems after the polls close. In January, DHS designated our voting systems as critical national infrastructure, but so far that has been entirely for show. In the United States, we don’t have a single integrated election. We have 50-plus individual elections, each with its own rules and its own regulatory authorities. Federal standards that mandate voter-verified paper ballots and post-election auditing would go a long way to secure our voting system. These attacks demonstrate that we need to secure the voter rolls, as well.

Democratic elections serve two purposes. The first is to elect the winner. But the second is to convince the loser. After the votes are all counted, everyone needs to trust that the election was fair and the results accurate. Attacks against our election system, even if they are ultimately ineffective, undermine that trust and – by extension – our democracy. Yes, fixing this will be expensive. Yes, it will require federal action in what’s historically been state-run systems. But as a country, we have no other option.

This essay previously appeared in the Washington Post.

Encased in amber: meet the epoxy-embedded Pi

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/epoxy-pi-resin-io/

The maker of one of our favourite projects from this year’s Maker Faire Bay Area took the idea of an ’embedded device’ and ran with it: Ronald McCollam has created a wireless, completely epoxy-encased Pi build – screen included!

Resin.io in resin epoxy-encased Raspberry Pi

*cue epic music theme* “Welcome…to resin in resin.”

Just encase…

Of course, this build is not meant to be a museum piece: Ronald embedded a Raspberry Pi 3 with built-in wireless LAN and Bluetooth to create a hands-on demonstration of the resin.io platform, for which he is a Solution Architect. Resin.io is useful for remotely controlling groups of Linux-based IoT devices. In this case, Ronald used it to connect to the encased Pi. And yes, he named his make Resin-in-resin – we salute you, sir!

resin.io in resin epoxy-encased Raspberry Pi

“Life uh…finds a way.”

Before he started the practical part of his project, he did his research to find a suitable resin. He found that epoxy types specifically designed for encasing electronics are very expensive. In the end, Ronald tried out a cheap type, usually employed to coat furniture, by encasing an LED. It worked perfectly, and he went ahead to use this resin for embedding the Pi.

Bubbleshooting epoxy

This was the first time Ronald had worked with resin, so he learned some essential things about casting. He advises other makers to mix the epoxy very, very slowly to minimize the formation of bubbles; to try their hand at some small-scale casting first; and to make sure they’re using a large enough mold. Another thing to keep in mind is that some components of the make will heat up and expand while the device is running.

His first version of an encased Pi was still connected to the outside world by its USB cable:

Ronald McCollam on Twitter

Updates don’t get more “hands off” than a Raspberry Pi encased in epoxy — @resin_io inside resin! Come ask me about it at @DockerCon!

Not satisfied with this, he went on to incorporate an inductive charging coil as a power source, so that the Pi could be totally insulated in epoxy. The Raspberry Pi Foundation’s Matt Richardson got a look at the finished project at Maker Faire Bay Area:

MattRichardson🏳️‍🌈 on Twitter

If you’re at @makerfaire, you must check out what @resin_io is showing. A @Raspberry_Pi completely enclosed in resin. Completely wireless. https://t.co/djVjoLz3hI

MAGNETS!

The charging coil delivers enough power to keep the Pi running for several hours, but it doesn’t allow secure booting. After some head-scratching, Ronald came up with a cool solution to this problem: he added a battery and a magnetic reed switch. He explains:

[The] boot process is to use the magnetic switch to turn off the Pi, put it on the charger for a few minutes to allow the battery to charge up, then remove the magnet so the Pi boots.

Pi in resin controlled by resin.io

“God help us, we’re in the hands of engineers.”

He talks about his build on the resin.io blog, and has provided a detailed project log on Hackaday. For those of you who want to recreate this project at home, Ronald has even put together an Adafruit wishlist of the necessary components.

Does this resin-ate with you?

What’s especially great about Ronald’s posts is that they’re full of helpful tips about getting started with using epoxy resin in your digital making projects. So whether you’re keen to build your own wireless Pi, or just generally interested in embedding electronic components in resin, you’ll find his write-ups useful.

If you have experience in working with epoxy and electronic devices and want to share what you’ve learned, please do so in the comments!

The post Encased in amber: meet the epoxy-embedded Pi appeared first on Raspberry Pi.

Tweetponic lavender: nourishing nature with the Twitter API

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/tweetponic-lavender/

In a Manhattan gallery, there is an art installation that uses a Raspberry Pi to control the lights, nourishing an underground field of lavender. The twist: the Pi syncs the intensity of the lights to the activity of a dozen or so Twitter accounts belonging to media personalities and members of the US government.

In May 2017 I cultivated a piece of land in Midtown Manhattan nurtured by tweets.


Turning tweets into cellulose

Artist Martin Roth has used the Raspberry Pi to access the accounts via the Twitter API, and to track their behaviour. This information is then relayed to the lights in real time. The more tweets, retweets, and likes there are on these accounts at a given moment, the brighter the lights become, and the better the lavender plants grow. Thus Twitter storms are converted into plant food, and ultimately into a pleasant lavender scent.
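The installation’s own code isn’t published, but the general shape of such a setup is simple. Here is a minimal sketch of the idea, assuming the tweepy library (with the 2017-era user_timeline API) and a PWM-dimmable light on a GPIO pin; the credentials, account names, and scaling factor are placeholders rather than details of Roth’s build.

import time
import tweepy
from gpiozero import PWMLED

# Placeholder Twitter credentials -- not from the installation itself.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

ACCOUNTS = ["example_account_1", "example_account_2"]  # the watched accounts
light = PWMLED(18)  # grow light dimmed via PWM on GPIO 18

def activity_score():
    # Sum recent tweets plus their retweets and likes across the accounts.
    score = 0
    for name in ACCOUNTS:
        for tweet in api.user_timeline(screen_name=name, count=20):
            score += 1 + tweet.retweet_count + tweet.favorite_count
    return score

while True:
    # Map activity to a brightness between 0.1 (idle) and 1.0 (Twitter storm).
    light.value = min(1.0, 0.1 + activity_score() / 1000.0)
    time.sleep(60)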

Until June 21st @ ACF (11 East 52nd Street)


Regarding his motivation to create the art installation, Martin Roth says:

[The] Twitter storm is something to be resisted. But I am using it in my exhibition as a force to create growth.

The piece, descriptively titled In May 2017 I cultivated a piece of land in Midtown Manhattan nurtured by tweets, is on show at the Austrian Cultural Forum, New York.

Using the Twitter API as part of digital making

We’ve seen a number of cool makes using the Twitter API. These often involve the posting of tweets in response to real-world inputs. Some of our favourites are the tweeting cat flap Flappy McFlapface, the tweeting dog Oliver Twitch, and of course Pi Towers resident Bert the plant. It’s interesting to see the concept turned on its head.

If you feel inspired by these projects, head on over to our resource introducing the Twitter API using Python. Or do you already have a project, in progress or finished, that uses the API? Let us know about it in the comments!

The post Tweetponic lavender: nourishing nature with the Twitter API appeared first on Raspberry Pi.

Online Platforms Should Collaborate to Ban Piracy and Terrorism, Report Suggests

Post Syndicated from Andy original https://torrentfreak.com/online-platforms-collaborate-ban-piracy-terrorism-report-suggests-170608/

With deep ties to the content industries, the Digital Citizens Alliance periodically produces reports on Internet piracy. It has published reports on cyberlockers and tried to blame Cloudflare for the spread of malware, for example.

One of the key themes pursued by DCA is that Internet piracy is inextricably linked to a whole bunch of other online evils and that tackling the former could deliver a much-needed body blow to the latter.

Its new report, titled ‘Trouble in Our Digital Midst’, takes this notion and runs with it, bundling piracy with everything from fake news to hacking, to malware and brand protection, to the sextortion of “young girls and boys” via their computer cameras.

The premise of the report is that cybercrime as a whole is undermining America’s trust in the Internet, noting that 64% of US citizens say that their trust in digital platforms has dropped in the last year. Given the topics under the spotlight, it doesn’t take long to see where this is going – Internet platforms like Google, Facebook and YouTube must tackle the problem.

“When asked, ‘In your opinion, are digital platforms doing enough to keep the Internet safe and trustworthy, or do they need to do more?’ a staggering 75 percent responded that they need to do more to keep the Internet safe,” the report notes.

It’s abundantly clear that the report is mostly about piracy but a lot of effort has been expended to ensure that people support its general call for the Internet to be cleaned up. By drawing attention to things that even most pirates might find offensive, it’s easy to find more people in agreement.

“Nearly three-quarters of respondents see the pairing of brand name advertising with offensive online content – like ISIS/terrorism recruiting videos – as a threat to the continued trust and integrity of the Internet,” the report notes.

Of course, this is an incredibly sensitive topic. When big brand ads turned up next to terrorist recruiting videos on YouTube, there was an almighty stink, and rightly so. However, at every turn, the DCA report manages to weave the issue of piracy into the equation, noting that the problem includes the “$200 million in advertising that shows up on illegal content theft websites often unbeknownst to the brands.”

The overriding theme is that platforms like Google, Facebook, and YouTube should be able to tackle all of these problems in the same way. Filtering out a terrorist video is the same as removing a pirate movie. And making sure that ads for big brands don’t appear alongside terrorist videos will be just as easy as starving pirates of revenue, the suggestion goes.

But if terrorism doesn’t grind your gears, what about fake news?

“64 percent of Americans say that the Fake News issue has made them less likely to trust the Internet as a source of information,” the report notes.

At this juncture, Facebook gets a gentle pat on the back for dealing with fake news and employing 3,000 people to monitor for violent videos being posted to the network. This shows that the company “takes seriously” the potential harm bad actors pose to Internet safety. But in keeping with the theme running throughout the report, it’s clear DCA are carefully easing in the thin end of the wedge.

“We are at only the beginning of thinking through other kinds of illicit and illegal activity happening on digital platforms right now that we must gain or re-gain control over,” DCA writes.

Quite. In the very next sentence, the group goes on to warn about the sale of drugs and stolen credit cards, adding that the sale of illicit streaming devices (modified Kodi boxes etc) is actually an “insidious yet effective delivery mechanism to infect computers with malware such as Remote Access Trojans.”

Both Amazon and Facebook receive praise in the report for their recent banning (1,2) of augmented Kodi devices but their actions are actually framed as the companies protecting their own reputations, rather than the interests of the media groups that have been putting them under pressure.

“And though this issue underscores the challenges faced by digital platforms – not all of which act with the same level of responsibility – it also highlights the fact digital platforms can and will step up when their own brands are at stake,” the report reads.

But pirate content and Remote Access Trojans through Kodi boxes are only the beginning. Pirate sites are playing a huge part as well, DCA claims, with one in three “content theft websites” exposing people to identity theft, ransomware, and sextortion via “the computer cameras of young girls and boys.”

Worse still, if that were possible, the lack of policing by online platforms means that people are able to “showcase live sexual assaults, murders, and other illegal conduct.”

DCA says that with all this in mind, Americans are looking for online digital platforms to help them. The group claims that citizens need proactive protection from these ills and want companies like Facebook to take similar steps to those taken when warning consumers about fake news and violent content.

So what can be done to stop this tsunami of illegality? According to DCA, platforms like Google, Facebook, YouTube, and Twitter need to up their game and tackle the problem together.

“While digital platforms collaborate on policy and technical issues, there is no evidence that they are sharing information about the bad actors themselves. That enables criminals and bad actors to move seamlessly from platform to platform,” DCA writes.

“There are numerous examples of industry working together to identify and share information about exploitive behavior. For example, casinos share information about card sharks and cheats, and for decades the retail industry has shared information about fraudulent credit cards. A similar model would enable digital platforms and law enforcement to more quickly identify and combat those seeking to leverage the platforms to harm consumers.”

How this kind of collaboration could take place in the real world is open to interpretation but the DCA has a few suggestions of its own. Again, it doesn’t shy away from pulling people on side with something extremely offensive (in this case child pornography) in order to push what is clearly an underlying anti-piracy agenda.

“With a little help from engineers, digital platforms could create fingerprints of unlawful conduct that is shared across platforms to proactively block such conduct, as is done in a limited capacity with child pornography,” DCA explains.

“If these and other newly developed measures were adopted, digital platforms would have the information to enable them to make decisions whether to de-list or demote websites offering illicit goods and services, and the ability to stop the spread of illegal behavior that victimizes its users.”

The careful framing of the DCA report means that there’s something for everyone. If you don’t agree with them on tackling piracy, then their malware, fake news, or child exploitation angles might do the trick. It’s quite a clever strategy but one that the likes of Google, Facebook, and YouTube will recognize immediately.

And they need to – because apparently, it’s their job to sort all of this out. Good luck with that.

The full report can be found here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

GameTale

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2060

Are you the parent of a child a few years old?

Do you want to teach your little one to love books, while all she or he wants is games?

There is now a way to have both!

Sure, there are a lot of gamebooks, but they are targeted at teenagers. Let me tell you about one that was written for children between three and nine years old.

It is the tale of Gremmy – the little gremlin who sets off on a big adventure. Gremmy might climb The Big Mountain, or travel down The Deep River; venture into The Enchanted Forest, or head with you into The Dark Cave; meet magical creatures and face ingenious choices…

It is a tale you can read to your kids. Lead them through a kingdom of magic and wonder, introduce them to its inhabitants, and let them make choices and see the funny and witty results. Nurture their curiosity and imagination, while also teaching them wise and important things.

The author – Nikola Raykov – is the youngest writer ever to win the most prestigious award for children’s literature in Bulgaria. The Bulgarian edition has sold more copies than is typical for a book by Stephen King or Paulo Coelho! It has since been published in Russian, Italian, and Latvian as well. And now you can have the English translation.

Most gamebooks have few illustrations, typically black-and-white ones. GameTale is full of excellent full-color illustrations, as a book for children should be. And it provides not only entertainment, but also value.

Don’t believe it? Take a look for yourself – the entire book is freely available on the author’s website, even before it is printed: to read and play, to download and enjoy, like all of its translations and the Bulgarian original. Yes, all those sales happened while the book was available to everybody. The readers’ ability to see what they are buying has been its best advertisement.

Here is what the writer says:

“I believe it would be cruel if children weren’t able to enjoy my books because their parents could not afford them, and children’s authors should not be cruel. They should be gentle, caring and loving. The values we write about should not be just words on paper. We should be the living and breathing examples of those values, because what we write HAS to be true. Every good author will tell you that you cannot lie to your readers (or little listeners). They will catch you in a second. When you read a book, you can actually feel if the author is being honest about his or her inner self.”

“I DO believe that people are inherently good. If you have poured your heart into something, if you have tried your best, people will feel that and give you their unconditional support. There is no need to hide your work: people are not thieves! If you share, they will care, they will follow you, they will nag you about when your next book comes out, and yes, they will gladly support you because they will know that their children’s favorite author actually believes in the values he’s writing about. The same things they believe in – friendship, love and freedom!”

Nikola has started a campaign on Kickstarter. Its goal is to fund the printing of 1,000 copies of the book in English. And in return for your donations, you get things your kid will love!

Years ago, when I read this book, I felt like a kid. And now I envy you a little for the joy you will get from it. 🙂 Do give it a try. There is nothing to lose, and a lot to win!

AWS Greengrass – Run AWS Lambda Functions on Connected Devices

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-greengrass-run-aws-lambda-functions-on-connected-devices/

I first told you about AWS Greengrass in the post that I published during re:Invent (AWS Greengrass – Ubiquitous Real-World Computing). We launched a limited preview of Greengrass at that time and invited you to sign up if you were interested.

As I noted at the time, many AWS customers want to collect and process data out in the field, where connectivity is often slow and sometimes either intermittent or unreliable. Greengrass allows them to extend the AWS programming model to small, simple, field-based devices. It builds on AWS IoT and AWS Lambda, and supports access to the ever-increasing variety of services that are available in the AWS Cloud.

Greengrass gives you access to compute, messaging, data caching, and syncing services that run in the field, and that do not depend on constant, high-bandwidth connectivity to an AWS Region. You can write Lambda functions in Python 2.7 and deploy them to your Greengrass devices from the cloud while using device shadows to maintain state. Your devices and peripherals can talk to each other using local messaging that does not pass through the cloud.

Now Generally Available
Today we are making Greengrass generally available in the US East (Northern Virginia) and US West (Oregon) Regions. During the preview, AWS customers were able to get hands-on experience with Greengrass and to start building applications and businesses around it. I’ll share a few of these early successes later in this post.

The Greengrass Core code runs on each device. It allows you to deploy and run Lambda applications on the device, supports local MQTT messaging across a secure network, and also ensures that conversations between devices and the cloud are made across secure connections. The Greengrass Core also supports secure, over-the-air software updates, including Lambda functions. It includes a message broker, a Lambda runtime, a Thing Shadows implementation, and a deployment agent. Greengrass Core and (optionally) other devices make up a Greengrass Group. The group includes configuration data, the list of devices and the identity of the Greengrass Core, a list of Lambda functions, and a set of subscriptions that define where the messages should go. All of this information is copied to the Greengrass core devices during the deployment process.

Your Lambda functions can use APIs in three distinct SDKs:

AWS SDK for Python – This SDK allows your code to interact with Amazon Simple Storage Service (S3), Amazon DynamoDB, Amazon Simple Queue Service (SQS), and other AWS services.

AWS IoT Device SDK – This SDK (available for Node.js, Python, Java, and C++) helps you to connect your hardware devices to AWS IoT. The C++ SDK has a few extra features including access to the Greengrass Discovery Service and support for root CA downloads.

AWS Greengrass Core SDK – This SDK provides APIs that let you invoke other Lambda functions locally, publish messages, and work with thing shadows.

You can run the Greengrass Core on x86 and ARM devices that have version 4.4.11 (or newer) of the Linux kernel, with the OverlayFS and user namespace features enabled. While most deployments of Greengrass will be targeted at specialized, industrial-grade hardware, you can also run the Greengrass Core on a Raspberry Pi or an EC2 instance for development and test purposes.

For this post, I used a Raspberry Pi attached to a BrickPi, connected to my home network via WiFi:

The Raspberry Pi, the BrickPi, the case, and all of the other parts are available in the BrickPi 3 Starter Kit. You will need some Linux command-line expertise and a decent amount of manual dexterity to put all of this together, but if I did it then you surely can.

Greengrass in Action
I can access Greengrass from the Console, API, or CLI. I’ll use the Console. The intro page of the Greengrass Console lets me define groups, add Greengrass Cores, and add devices to my groups:

I click on Get Started and then on Use easy creation:

Then I name my group:

And name my first Greengrass Core:

I’m ready to go, so I click on Create Group and Core:

This runs for a few seconds and then offers up my security resources (two keys and a certificate) for downloading, along with the Greengrass Core:

I download the security resources and put them in a safe place, and select and download the desired version of the Greengrass Core software (ARMv7l for my Raspberry Pi), and click on Finish.

Now I power up my Pi, and copy the security resources and the software to it (I put them in an S3 bucket and pulled them down with wget). Here’s my shell history at that point:

Following the directions in the user guide, I create a new user and group, run the rpi-update script, and install several packages including sqlite3 and openssl. After a couple of reboots, I am ready to proceed!

Next, still following the directions, I untar the Greengrass Core software and move the security resources to their final destination (/greengrass/configuration/certs), giving them generic names along the way. Here’s what the directory looks like:

The next step is to associate the core with an AWS IoT thing. I return to the Console, click through the group and the Greengrass Core, and find the Thing ARN:

I insert the names of the certificates and the Thing ARN into the config.json file, and also fill in the missing sections of the iotHost and ggHost:

I start the Greengrass demon (this was my second attempt; I had a typo in one of my path names the first time around):

After all of this pleasant time at the command line (taking me back to my Unix v7 and BSD 4.2 days), it is time to go visual once again! I visit my AWS IoT dashboard and see that my Greengrass Core is making connections to IoT:

I go to the Lambda Console and create a Lambda function using the Python 2.7 runtime (the IAM role does not matter here):

I publish the function in the usual way and then hop over to the Greengrass Console, click on my group, and choose to add a Lambda function:

Then I choose the version to deploy:

I also configure the function to be long-lived instead of on-demand:

My code will publish messages to AWS IoT, so I create a subscription by specifying the source and destination:

I set up a topic filter (hello/world) on the subscription as well:

I confirm my settings and save my subscription and I am just about ready to deploy my code. I revisit my group, click on Deployments, and choose Deploy from the Actions menu:

I choose Automatic detection to move forward:

Since this is my first deployment, I need to create a service-level role that gives Greengrass permission to access other AWS services. I simply click on Grant permission:

I can see the status of each deployment:

The code is now running on my Pi! It publishes messages to topic hello/world; I can see them by going to the IoT Console, clicking on Test, and subscribing to the topic:

And here are the messages:

With all of the setup work taken care of, I can do iterative development by uploading, publishing, and deploying new versions of my code. I plan to use the BrickPi to control some LEGO Technic motors and to publish data collected from some sensors. Stay tuned for that post!
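Outside of the console, the same hello/world messages could be watched programmatically with the AWS IoT Device SDK for Python mentioned earlier. A minimal sketch, with the endpoint, certificate paths, and client ID as placeholders:

import time
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

def on_message(client, userdata, message):
    # Print each message as it arrives on the subscribed topic.
    print("{}: {}".format(message.topic, message.payload))

mqtt = AWSIoTMQTTClient("hello-world-watcher")
mqtt.configureEndpoint("YOUR_ENDPOINT.iot.us-east-1.amazonaws.com", 8883)
mqtt.configureCredentials("root-ca.pem", "private.pem.key", "certificate.pem.crt")
mqtt.connect()
mqtt.subscribe("hello/world", 1, on_message)

while True:
    time.sleep(1)  # keep the client alive to receive messages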

Greengrass Pricing
You can run the Greengrass Core on three devices free for one year as part of the AWS Free Tier. At the next level (3 to 10,000 devices) two options are available:

  • Pay as You Go – $0.16 per month per device.
  • Annual Commitment – $1.49 per year per device, a savings of roughly 22% compared to paying monthly.

If you want to run the Greengrass Core on more than 10,000 devices or make a longer commitment, please get in touch with us; details on all pricing models are on the Greengrass Pricing page.

Jeff;