Tag Archives: math

Building Loosely Coupled, Scalable, C# Applications with Amazon SQS and Amazon SNS

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/building-loosely-coupled-scalable-c-applications-with-amazon-sqs-and-amazon-sns/

 
Stephen Liedig, Solutions Architect

 

One of the many challenges professional software architects and developers face is how to make cloud-native applications scalable, fault-tolerant, and highly available.

Fundamental to your project’s success is understanding the importance of making systems highly cohesive and loosely coupled. That means considering the multi-dimensional facets of system coupling to support the distributed nature of the applications that you are building for the cloud.

By that, I mean addressing not only the application-level coupling (managing incoming and outgoing dependencies), but also considering the impacts of platform, spatial, and temporal coupling on your systems. Platform coupling relates to the interoperability, or lack thereof, of heterogeneous system components. Spatial coupling deals with managing components at a network topology or protocol level. Temporal, or runtime, coupling refers to the ability of a component within your system to do any kind of meaningful work while it is performing a synchronous, blocking operation.

The AWS messaging services, Amazon SQS and Amazon SNS, help you deal with these forms of coupling by providing mechanisms for:

  • Reliable, durable, and fault-tolerant delivery of messages between application components
  • Logical decomposition of systems and increased autonomy of components
  • Creating unidirectional, non-blocking operations, temporally decoupling system components at runtime
  • Decreasing the dependencies that components have on each other through standard communication and network channels

Following on from the recent post, Building Scalable Applications and Microservices: Adding Messaging to Your Toolbox, in this post I look at some of the ways you can introduce SQS and SNS into your architectures to decouple your components, and show how you can implement them using C#.

Walkthrough

To illustrate some of these concepts, consider a web application that processes customer orders. As good architects and developers, you have followed best practices and made your application scalable and highly available. Your solution included implementing load balancing, dynamic scaling across multiple Availability Zones, and persisting orders in a Multi-AZ Amazon RDS database instance, as in the following diagram.


In this example, the application is responsible for handling and persisting the order data, as well as dealing with increases in traffic for popular items.

One potential point of vulnerability in the order processing workflow is saving the order in the database. The business expects every order to be persisted into the database. However, any potential deadlock, race condition, or network issue could cause the persistence of the order to fail. Then the order is lost, with no recourse to restore it.

With good logging capability, you may be able to identify when an error occurred and which customer’s order failed. This wouldn’t allow you to “restore” the transaction, and by that stage, your customer is no longer your customer.

As illustrated in the following diagram, introducing an SQS queue helps improve your ordering application. Using the queue isolates the processing logic into its own component and runs it in a separate process from the web application. This, in turn, allows the system to be more resilient to spikes in traffic, while allowing work to be performed only as fast as necessary in order to manage costs.


In addition, you now have a mechanism for persisting orders as messages (with the queue acting as a temporary database), and have moved the scope of your database transaction further down the stack. In the event of an application exception or transaction failure, the order processing can be retried, or the message redirected to the Amazon SQS Dead Letter Queue (DLQ), for re-processing at a later stage. (See the recent post, Using Amazon SQS Dead-Letter Queues to Control Message Failure, for more information on dead-letter queues.)

Scaling the order processing nodes

This change now allows you to scale the web application frontend independently of the processing nodes. The frontend application can continue to scale based on metrics such as CPU usage, or the number of requests hitting the load balancer. Processing nodes can scale based on the number of orders in the queue. Here is an example of the scale-out and scale-in alarms that you would associate with the scaling policy.

Scale-out Alarm

aws cloudwatch put-metric-alarm --alarm-name AddCapacityToCustomerOrderQueue --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
--statistic Average --period 300 --threshold 3 --comparison-operator GreaterThanOrEqualToThreshold --dimensions Name=QueueName,Value=customer-orders \
--evaluation-periods 2 --alarm-actions <arn of the scale-out autoscaling policy>

Scale-in Alarm

aws cloudwatch put-metric-alarm --alarm-name RemoveCapacityFromCustomerOrderQueue --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
--statistic Average --period 300 --threshold 1 --comparison-operator LessThanOrEqualToThreshold --dimensions Name=QueueName,Value=customer-orders \
--evaluation-periods 2 --alarm-actions <arn of the scale-in autoscaling policy>

The above example uses the ApproximateNumberOfMessagesVisible metric to determine the queue length and drive the scaling policy of the Auto Scaling group. Another useful metric is ApproximateAgeOfOldestMessage, for applications that have time-sensitive messages and need to ensure that they are processed within a specific time period.

Scaling the order processing implementation

On top of scaling at an infrastructure level using Auto Scaling, make sure to take advantage of the processing power of your Amazon EC2 instances by using as many of the available threads as possible. There are several ways to implement this. In this post, we build a Windows service that uses the BackgroundWorker class to process the messages from the queue.

Here’s a closer look at the implementation. In the first section of the consuming application, use a loop to continually poll the queue for new messages, and construct a ReceiveMessageRequest variable.

public static void PollQueue()
{
    while (_running)
    {
        Task<ReceiveMessageResponse> receiveMessageResponse;

        // Pull messages off the queue
        using (var sqs = new AmazonSQSClient())
        {
            const int maxMessages = 10;  // 1-10

            //Receiving a message
            var receiveMessageRequest = new ReceiveMessageRequest
            {
                // Get URL from Configuration
                QueueUrl = _queueUrl, 
                // The maximum number of messages to return. 
                // Fewer messages might be returned. 
                MaxNumberOfMessages = maxMessages, 
                // A list of attributes that need to be returned with message.
                AttributeNames = new List<string> { "All" },
                // Enable long polling. 
                // Time to wait for message to arrive on queue.
                WaitTimeSeconds = 5 
            };

            receiveMessageResponse = sqs.ReceiveMessageAsync(receiveMessageRequest);

            // Block until the response arrives, so the client is not
            // disposed while the request is still in flight.
            receiveMessageResponse.Wait();
        }

The WaitTimeSeconds property of the ReceiveMessageRequest specifies the duration (in seconds) that the call waits for a message to arrive in the queue before returning a response to the calling application. There are a few benefits to using long polling:

  • It reduces the number of empty responses by allowing SQS to wait until a message is available in the queue before sending a response.
  • It eliminates false empty responses by querying all (rather than a limited number) of the servers.
  • It returns messages as soon as any message becomes available.

For more information, see Amazon SQS Long Polling.

After messages have been returned from the queue, you can start to process them by looping through each message in the response and invoking a new BackgroundWorker thread.

// Process messages
if (receiveMessageResponse.Result.Messages.Count > 0)
{
    foreach (var message in receiveMessageResponse.Result.Messages)
    {
        Console.WriteLine("Received SQS message, starting worker thread");

        // Create background worker to process message
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += (obj, e) => ProcessMessage(message);
        worker.RunWorkerAsync();
    }
}
else
{
    Console.WriteLine("No messages on queue");
}

The event handler, ProcessMessage, is where you implement the business logic for processing orders. It is important to have a good understanding of how long a typical transaction takes, so you can set a message VisibilityTimeout that is long enough to complete your operation. If order processing takes longer than the specified timeout period, the message becomes visible on the queue again. Other nodes may then pick it up and process the same order twice, leading to unintended consequences.
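If a transaction occasionally runs long, one option is to extend the visibility of the message you are still working on, rather than setting a very large timeout up front. The following is a minimal sketch of that approach; the VisibilityHelper name and the 60-second value are illustrative assumptions, not part of the original design.

using Amazon.SQS;
using Amazon.SQS.Model;

public static class VisibilityHelper
{
    // Give the current consumer more time to finish processing a message
    // by resetting its visibility timeout. Other consumers will not
    // receive the message while it remains invisible.
    public static void ExtendVisibility(IAmazonSQS sqs, string queueUrl, Message message)
    {
        var request = new ChangeMessageVisibilityRequest
        {
            QueueUrl = queueUrl,
            ReceiptHandle = message.ReceiptHandle,
            VisibilityTimeout = 60  // seconds from now; illustrative value
        };

        sqs.ChangeMessageVisibilityAsync(request).Wait();
    }
}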

Handling Duplicate Messages

In order to manage duplicate messages, seek to make your processing application idempotent. In mathematics, idempotent describes a function that produces the same result if it is applied to itself:

f(x) = f(f(x))

No matter how many times you process the same message, the end result is the same (definition from Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, Hohpe and Wolf, 2004).

There are several strategies you could apply to achieve this:

  • Create messages that have inherent idempotent characteristics. That is, they are non-transactional in nature and are unique at a specified point in time. Rather than saying “place new order for Customer A,” which adds a duplicate order to the customer, use “place order <orderid> on <timestamp> for Customer A,” which creates a single order no matter how often it is persisted.
  • Deliver your messages via an Amazon SQS FIFO queue, which provides the benefits of message sequencing, but also mechanisms for content-based deduplication. You can deduplicate using the MessageDeduplicationId property on the SendMessage request or by enabling content-based deduplication on the queue, which generates a hash for MessageDeduplicationId, based on the content of the message, not the attributes.
var sendMessageRequest = new SendMessageRequest
{
    QueueUrl = _queueUrl,
    MessageBody = JsonConvert.SerializeObject(order),
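    // Note: a fresh MessageGroupId per message means FIFO ordering
    // applies only within each single-message group; use a stable
    // group ID if ordering across related messages matters.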
    MessageGroupId = Guid.NewGuid().ToString("N"),
    MessageDeduplicationId = Guid.NewGuid().ToString("N")
};
  • If using SQS FIFO queues is not an option, an alternative to deduplication on the receiving end is to keep a log of the attributes of all messages processed, for a specified period of time. Verifying the existence of a message in the log before processing it adds computational overhead, which can be minimized through low-latency persistence solutions such as Amazon DynamoDB (see the sketch below). Bear in mind that this solution depends on a successful, distributed transaction across the message and the message log.
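To illustrate that last option, here is a minimal sketch of a message log built on a DynamoDB conditional write. The MessageLog table name and its MessageId key are assumptions for this example, not part of the original design.

using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class MessageLog
{
    // Returns true if the message was recorded for the first time,
    // false if it has been seen before (a duplicate).
    public static bool TryRecord(IAmazonDynamoDB dynamo, string messageId)
    {
        var request = new PutItemRequest
        {
            TableName = "MessageLog",  // hypothetical table name
            Item = new Dictionary<string, AttributeValue>
            {
                { "MessageId", new AttributeValue { S = messageId } }
            },
            // Fail the write if this MessageId has already been logged.
            ConditionExpression = "attribute_not_exists(MessageId)"
        };

        try
        {
            dynamo.PutItemAsync(request).Wait();
            return true;
        }
        catch (AggregateException ex)
            when (ex.InnerException is ConditionalCheckFailedException)
        {
            return false;  // duplicate message
        }
    }
}

Calling TryRecord at the start of ProcessMessage, and skipping the business logic when it returns false, gives you deduplication at the cost of one conditional write per message.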

Handling exceptions

Because of the distributed nature of SQS queues, the service does not automatically delete a message after it has been received. Therefore, you must explicitly delete the message from the queue after processing it, using the message ReceiptHandle property (see the following code example).

However, if at any stage you have an exception, avoid handling it as you normally would. The intention is to make sure that the message ends back on the queue, so that you can gracefully deal with intermittent failures. Instead, log the exception to capture diagnostic information, and swallow it.

By not explicitly deleting the message from the queue, you can take advantage of the VisibilityTimeout behavior described earlier. Gracefully handle the message processing failure and make the unprocessed message available to other nodes to process.

In the event that subsequent retries fail, SQS automatically moves the message to the configured DLQ after the configured number of receives has been reached. You can further investigate why the order process failed. Most importantly, the order has not been lost, and your customer is still your customer.

private static void ProcessMessage(Message message)
{
    using (var sqs = new AmazonSQSClient())
    {
        try
        {
            Console.WriteLine("Processing message id: {0}", message.MessageId);

            // Implement messaging processing here
            // Ensure no downstream resource contention (parallel processing)
            // <your order processing logic in here…>
            Console.WriteLine("{0} Thread {1}: {2}", DateTime.Now.ToString("s"), Thread.CurrentThread.ManagedThreadId, message.MessageId);
            
            // Delete the message off the queue. 
            // Receipt handle is the identifier you must provide 
            // when deleting the message.
            var deleteRequest = new DeleteMessageRequest(_queueUrl, message.ReceiptHandle);
            sqs.DeleteMessageAsync(deleteRequest).Wait();
            Console.WriteLine("Processed message id: {0}", message.MessageId);

        }
        catch (Exception ex)
        {
            // Do nothing.
            // Swallow exception, message will return to the queue when 
            // visibility timeout has been exceeded.
            Console.WriteLine("Could not process message due to error. Exception: {0}", ex.Message);
        }
    }
}

Using SQS to adapt to changing business requirements

One of the benefits of introducing a message queue is that you can accommodate new business requirements without dramatically affecting your application.

If, for example, the business decided that all orders placed over $5000 are to be handled as a priority, you could introduce a new “priority order” queue. The way the orders are processed does not change. The only significant change to the processing application is to ensure that messages from the “priority order” queue are processed before the “standard order” queue.

The following diagram shows how this logic could be isolated in an “order dispatcher,” whose only purpose is to route order messages to the appropriate queue, based on whether the order exceeds $5,000. Nothing on the web application or the processing nodes changes, other than the target queue to which the order is sent. The relative rates at which orders are processed can be controlled by adjusting the poll rates and scaling settings that I have already discussed.
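Here is a minimal sketch of what that dispatcher could look like. The Order stub, its Total property, and the two queue URL parameters are assumptions for this example:

using Amazon.SQS;
using Amazon.SQS.Model;
using Newtonsoft.Json;

// Minimal stand-in for the order type used elsewhere in this post.
public class Order
{
    public decimal Total { get; set; }
}

public class OrderDispatcher
{
    private readonly IAmazonSQS _sqs;
    private readonly string _priorityQueueUrl;  // "priority order" queue
    private readonly string _standardQueueUrl;  // "standard order" queue

    public OrderDispatcher(IAmazonSQS sqs, string priorityQueueUrl, string standardQueueUrl)
    {
        _sqs = sqs;
        _priorityQueueUrl = priorityQueueUrl;
        _standardQueueUrl = standardQueueUrl;
    }

    // Route the order to the appropriate queue based on its value.
    public void Dispatch(Order order)
    {
        var queueUrl = order.Total > 5000m ? _priorityQueueUrl : _standardQueueUrl;

        var request = new SendMessageRequest
        {
            QueueUrl = queueUrl,
            MessageBody = JsonConvert.SerializeObject(order)
        };

        _sqs.SendMessageAsync(request).Wait();
    }
}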

Extending the design pattern with Amazon SNS

Amazon SNS supports reliable publish-subscribe (pub-sub) scenarios and push notifications to known endpoints across a wide variety of protocols. It eliminates the need to periodically check or poll for new information and updates. SNS supports:

  • Reliable storage of messages for immediate or delayed processing
  • Publish/subscribe – direct, broadcast, targeted “push” messaging
  • Multiple subscriber protocols: Amazon SQS, HTTP, HTTPS, email, SMS, mobile push, AWS Lambda

With these capabilities, you can provide parallel asynchronous processing of orders in the system and extend it to support any number of different business use cases without affecting the production environment. This is commonly referred to as a “fanout” scenario.

Rather than your web application pushing orders to a queue for processing, send a notification via SNS. The SNS messages are sent to a topic and then replicated and pushed to multiple SQS queues and Lambda functions for processing.
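In C#, publishing the order to an SNS topic might look like the following sketch; the topic ARN parameter is an assumption for this example:

using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;
using Newtonsoft.Json;

public static class OrderPublisher
{
    // Publish the order to an SNS topic; SNS then fans the message out
    // to every subscribed SQS queue and Lambda function.
    public static void PublishOrder(object order, string topicArn)
    {
        using (var sns = new AmazonSimpleNotificationServiceClient())
        {
            var request = new PublishRequest
            {
                TopicArn = topicArn,  // ARN of the orders topic (assumed)
                Subject = "NewOrder",
                Message = JsonConvert.SerializeObject(order)
            };

            sns.PublishAsync(request).Wait();
        }
    }
}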

As the diagram above shows, you have the development team consuming “live” data as they work on the next version of the processing application, or potentially using the messages to troubleshoot issues in production.

Marketing is consuming all order information, via a Lambda function that has subscribed to the SNS topic, inserting the records into an Amazon Redshift warehouse for analysis.

All of this, of course, is happening without affecting your order processing application.

Summary

While I haven’t dived deep into the specifics of each service, I have discussed how these services can be applied at an architectural level to build loosely coupled systems that facilitate multiple business use cases. I’ve also shown you how to use infrastructure and application-level scaling techniques, so you can get the most out of your EC2 instances.

One of the many benefits of using these managed services is how quickly and easily you can implement powerful messaging capabilities in your systems, and lower the capital and operational costs of managing your own messaging middleware.

Using Amazon SQS and Amazon SNS together can provide you with a powerful mechanism for decoupling application components. This should be part of your design considerations as you architect for the cloud.

For more information, see the Amazon SQS Developer Guide and Amazon SNS Developer Guide. You’ll find tutorials on all the concepts covered in this post, and more. To get started, use the AWS Management Console or the SDK of your choice.

Happy messaging!

AIMS Desktop 2017.1 released

Post Syndicated from corbet original https://lwn.net/Articles/725712/rss

The AIMS desktop is a Debian-derived distribution aimed at mathematical and scientific use. This project’s first public release, based on Debian 9, is now available. It is a GNOME-based distribution with a bunch of add-on software. It is maintained by AIMS (The African Institute for Mathematical Sciences), a pan-African network of centres of excellence enabling Africa’s talented students to become innovators driving the continent’s scientific, educational and economic self-sufficiency.

Teaching tech

Post Syndicated from Eevee original https://eev.ee/blog/2017/06/10/teaching-tech/

A sponsored post from Manishearth:

I would kinda like to hear about any thoughts you have on technical teaching or technical writing. Pedagogy is something I care about. But I don’t know how much you do, so feel free to ignore this suggestion 🙂

Good news: I care enough that I’m trying to write a sorta-kinda-teaching book!

Ironically, one of the biggest problems I’ve had with writing the introduction to that book is that I keep accidentally rambling on for pages about problems and difficulties with teaching technical subjects. So maybe this is a good chance to get it out of my system.

Phaser

I recently tried out a new thing. It was Phaser, but this isn’t a dig on them in particular, just a convenient example fresh in my mind. If anything, they’re better than most.

As you can see from Phaser’s website, it appears to have tons of documentation. Two of the six headings are “LEARN” and “EXAMPLES”, which seems very promising. And indeed, Phaser offers:

  • Several getting-started walkthroughs
  • Possibly hundreds of examples
  • A news feed that regularly links to third-party tutorials
  • Thorough API docs

Perfect. Beautiful. Surely, a dream.

Well, almost.

The examples are all microscopic, usually focused around a single tiny feature — many of them could be explained just as well with one line of code. There are a few example games, but they’re short aimless demos. None of them are complete games, and there’s no showcase either. Games sometimes pop up in the news feed, but most of them don’t include source code, so they’re not useful for learning from.

Likewise, the API docs are just API docs, leading to the sorts of problems you might imagine. For example, in a few places there’s a mention of a preUpdate stage that (naturally) happens before update. You might rightfully wonder what kinds of things happen in preUpdate — and more importantly, what should you put there, and why?

Let’s check the API docs for Phaser.Group.preUpdate:

The core preUpdate – as called by World.

Okay, that didn’t help too much, but let’s check what Phaser.World has to say:

The core preUpdate – as called by World.

Ah. Hm. It turns out World is a subclass of Group and inherits this method — and thus its unaltered docstring — from Group.

I did eventually find some brief docs attached to Phaser.Stage (but only by grepping the source code). It mentions what the framework uses preUpdate for, but not why, and not when I might want to use it too.


The trouble here is that there’s no narrative documentation — nothing explaining how the library is put together and how I’m supposed to use it. I get handed some brief primers and a massive reference, but nothing in between. It’s like buying an O’Reilly book and finding out it only has one chapter followed by a 500-page glossary.

API docs are great if you know specifically what you’re looking for, but they don’t explain the best way to approach higher-level problems, and they don’t offer much guidance on how to mesh nicely with the design of a framework or big library. Phaser does a decent chunk of stuff for you, off in the background somewhere, so it gives the strong impression that it expects you to build around it in a particular way… but it never tells you what that way is.

Tutorials

Ah, but this is what tutorials are for, right?

I confess I recoil whenever I hear the word “tutorial”. It conjures an image of a uniquely useless sort of post, which goes something like this:

  1. Look at this cool thing I made! I’ll teach you how to do it too.

  2. Press all of these buttons in this order. Here’s a screenshot, which looks nothing like what you have, because I’ve customized the hell out of everything.

  3. You did it!

The author is often less than forthcoming about why they made any of the decisions they did, where you might want to try something else, or what might go wrong (and how to fix it).

And this is to be expected! Writing out any of that stuff requires far more extensive knowledge than you need just to do the thing in the first place, and you need to do a good bit of introspection to sort out something coherent to say.

In other words, teaching is hard. It’s a skill, and it takes practice, and most people blogging are not experts at it. Including me!


With Phaser, I noticed that several of the third-party tutorials I tried to look at were 404s — sometimes less than a year after they were linked on the site. Pretty major downside to relying on the community for teaching resources.

But I also notice that… um…

Okay, look. I really am not trying to rag on this author. I’m not. They tried to share their knowledge with the world, and that’s a good thing, something worthy of praise. I’m glad they did it! I hope it helps someone.

But for the sake of example, here is the most recent entry in Phaser’s list of community tutorials. I have to link it, because it’s such a perfect example. Consider:

  • The post itself is a bulleted list of explanation followed by a single contiguous 250 lines of source code. (Not that there’s anything wrong with bulleted lists, mind you.) That code contains zero comments and zero blank lines.

  • This is only part two in what I think is a series aimed at beginners, yet the title and much of the prose focus on object pooling, a performance hack that’s easy to add later and that’s almost certainly unnecessary for a game this simple. There is no explanation of why this is done; the prose only says you’ll understand why it’s critical once you add a lot more game objects.

  • It turns out I only have two things to say here so I don’t know why I made this a bulleted list.

In short, it’s not really a guided explanation; it’s “look what I did”.

And that’s fine, and it can still be interesting. I’m not sure English is even this person’s first language, so I’m hardly going to criticize them for not writing a novel about platforming.

The trouble is that I doubt a beginner would walk away from this feeling very enlightened. They might be closer to having the game they wanted, so there’s still value in it, but it feels closer to having someone else do it for them. And an awful lot of tutorials I’ve seen — particularly of the “post on some blog” form (which I’m aware is the genre of thing I’m writing right now) — look similar.

This isn’t some huge social problem; it’s just people writing on their blog and contributing to the corpus of written knowledge. It does become a bit stickier when a large project relies on these community tutorials as its main set of teaching aids.


Again, I’m not ragging on Phaser here. I had a slightly frustrating experience with it, coming in knowing what I wanted but unable to find a description of the semantics anywhere, but I do sympathize. Teaching is hard, writing documentation is hard, and programmers would usually rather program than do either of those things. For free projects that run on volunteer work, and in an industry where anything other than programming is a little undervalued, getting good docs written can be tricky.

(Then again, Phaser sells books and plugins, so maybe they could hire a documentation writer. Or maybe the whole point is for you to buy the books?)

Some pretty good docs

Python has pretty good documentation. It introduces the language with a tutorial, then documents everything else in both a library and language reference.

This sounds an awful lot like Phaser’s setup, but there’s some considerable depth in the Python docs. The tutorial is highly narrative and walks through quite a few corners of the language, stopping to mention common pitfalls and possible use cases. I clicked an arbitrary heading and found a pleasant, informative read that somehow avoids being bewilderingly dense.

The API docs also take on a narrative tone — even something as humble as the collections module offers numerous examples, use cases, patterns, recipes, and hints of interesting ways you might extend the existing types.

I’m being a little vague and hand-wavey here, but it’s hard to give specific examples without just quoting two pages of Python documentation. Hopefully you can see right away what I mean if you just take a look at them. They’re good docs, Bront.

I’ve likewise always enjoyed the SQLAlchemy documentation, which follows much the same structure as the main Python documentation. SQLAlchemy is a database abstraction layer plus ORM, so it can do a lot of subtly intertwined stuff, and the complexity of the docs reflects this. Figuring out how to do very advanced things correctly, in particular, can be challenging. But for the most part it does a very thorough job of introducing you to a large library with a particular philosophy and how to best work alongside it.

I softly contrast this with, say, the Perl documentation.

It’s gotten better since I first learned Perl, but Perl’s docs are still a bit of a strange beast. They exist as a flat collection of manpage-like documents with terse names like perlootut. The documentation is certainly thorough, but much of it has a strange… allocation of detail.

For example, perllol — the explanation of how to make a list of lists, which somehow merits its own separate documentation — offers no fewer than nine similar variations of the same code for reading a file into a nested list of the words on each line. Where Python offers examples for a variety of different problems, Perl shows you a lot of subtly different ways to do the same basic thing.

A similar problem is that Perl’s docs sometimes offer far too much context; consider the references tutorial, which starts by explaining that references are a powerful “new” feature in Perl 5 (first released in 1994). It then explains why you might want to nest data structures… from a Perl 4 perspective, thus explaining why Perl 5 is so much better.

Some stuff I’ve tried

I don’t claim to be a great teacher. I like to talk about stuff I find interesting, and I try to do it in ways that are accessible to people who aren’t lugging around the mountain of context I already have. This being just some blog, it’s hard to tell how well that works, but I do my best.

I also know that I learn best when I can understand what’s going on, rather than just seeing surface-level cause and effect. Of course, with complex subjects, it’s hard to develop an understanding before you’ve seen the cause and effect a few times, so there’s a balancing act between showing examples and trying to provide an explanation. Too many concrete examples feel like rote memorization; too much abstract theory feels disconnected from anything tangible.

The attempt I’m most pleased with is probably my post on Perlin noise. It covers a fairly specific subject, which made it much easier. It builds up one step at a time from scratch, with visualizations at every point. It offers some interpretations of what’s going on. It clearly explains some possible extensions to the idea, but distinguishes those from the core concept.

It is a little math-heavy, I grant you, but that was hard to avoid with a fundamentally mathematical topic. I had to be economical with the background information, so I let the math be a little dense in places.

But the best part about it by far is that I learned a lot about Perlin noise in the process of writing it. In several places I realized I couldn’t explain what was going on in a satisfying way, so I had to dig deeper into it before I could write about it. Perhaps there’s a good guideline hidden in there: don’t try to teach as much as you know?

I’m also fairly happy with my series on making Doom maps, though they meander into tangents a little more often. It’s hard to talk about something like Doom without meandering, since it’s a convoluted ecosystem that’s grown organically over the course of 24 years and has at least three ways of doing anything.


And finally there’s the book I’m trying to write, which is sort of about game development.

One of my biggest grievances with game development teaching in particular is how often it leaves out important touches. Very few guides will tell you how to make a title screen or menu, how to handle death, how to get a Mario-style variable jump height. They’ll show you how to build a clearly unfinished demo game, then leave you to your own devices.

I realized that the only reliable way to show how to build a game is to build a real game, then write about it. So the book is laid out as a narrative of how I wrote my first few games, complete with stumbling blocks and dead ends and tiny bits of polish.

I have no idea how well this will work, or whether recapping my own mistakes will be interesting or distracting for a beginner, but it ought to be an interesting experiment.

No, Netflix Hasn’t Won The War on Piracy

Post Syndicated from Ernesto original https://torrentfreak.com/no-netflix-hasnt-won-the-war-on-piracy-170604/

Recently, a hacker group (or lone hacker) going by the name TheDarkOverlord (TDO) published the premiere episode of the fifth season of Netflix’s Orange is The New Black, followed by nine more episodes a few hours later.

TDO obtained the videos from Larson Studios, which didn’t pay the 50 bitcoin ransom TDO had requested. The hackers then briefly turned their attention to Netflix, before releasing the shows online.

In the aftermath, a flurry of articles claimed that Netflix’s refusal to pay means that it is winning the war on piracy. Torrents are irrelevant or no longer a real threat and piracy is pointless, they concluded.

One of the main reasons cited is a decline in torrent traffic over the years, as reported by the network equipment company Sandvine.

“Last year, BitTorrent traffic reached 1.73 percent of peak period downstream traffic in North America. That’s down from the 60 percent share peer-to-peer file sharing had in 2003. Netflix was responsible for 35.15 percent of downstream traffic,” one reporter wrote.

Piracy pointless?

Even Wired, a reputable technology news site, jumped on the bandwagon.

“It’s not that torrenting is so onerous. But compared to legitimate streaming, the process of downloading a torrenting client, finding a legit file, waiting for it to download, and watching it on a laptop (or mirroring it to a television) hardly seems worth it,” the article states.

These and many similar articles suggest that Netflix’s ease of use is superior to piracy. Netflix is winning the war on piracy, which is pretty much reduced to a fringe activity carried out by old school data hoarders, they claimed.

But is that really the case?

I wholeheartedly agree that Netflix is a great alternative to piracy, and admit that torrents are not as dominant as they were before. But everybody who thinks that piracy is limited to torrents needs to educate themselves properly.

Piracy has evolved quite a bit over the past several years and streaming is now the main source to satisfy people’s ‘illegal’ viewing demands.

Whether it’s through pirate streaming sites, mobile apps, or dedicated media players hooked up to TVs, it’s not hard to argue that piracy is easier and more convenient than it has ever been. And arguably, more popular too.

The statistics are dazzling. According to piracy monitoring outfit MUSO there are half a billion visits to video pirate sites every day. Roughly 60% of these are to streaming sites.

While there has been a small decline in streaming visits over the past year, MUSO’s data doesn’t cover the explosion of media player piracy, which means that there is likely a significant increase in piracy overall.

TorrentFreak contacted the aforementioned network equipment company Sandvine, which said that we’re “on to something.”

Unfortunately, they currently have no data to quantify the amount of pirate streaming activity. This is, in part, because many of these streams are hosted by legitimate companies such as Google.

Torrents may not be dominant anymore, but with hundreds of millions of visits to streaming pirate sites per day, and many more via media players and other apps, piracy is still very much alive. Just ask the Motion Picture Association.

I would even argue that piracy is more of a threat to Netflix than it has ever been before.

To illustrate, here is a screenshot from one of the most visited streaming piracy sites online. The site in question receives millions of views per day and featured two Netflix shows, “13 Reasons Why” and the leaked “Orange is The New Black,” in its daily “most viewed” section recently.

Netflix shows among the “most viewed” pirate streams

If you look at a random streaming site, you’ll see that they offer an overview of thousands of popular movies and TV-shows, far more than Netflix. Pirate streaming sites have more content than Netflix, often in high quality, and it doesn’t cost a penny.

Throw in the explosive growth of piracy-capable media players that can bring this content directly to the TV-screen, and you’ll start to realize the magnitude of this threat.

In a way, the boost in streaming piracy is a bigger threat to Netflix than the traditional Hollywood studios. Hollywood still has its exclusive release windows and a superior viewing experience at the box office. All Netflix content is instantly pirated, or already available long before they add it to their catalog.

Sure, pirate sites might not appeal to the average middle-class news columnist who’s been subscribed to Netflix for years, but for tens of millions of less fortunate people, who can do without another monthly charge on their household bill, it’s an easy choice.

Not the right choice, legally speaking, but that doesn’t seem to bother them much.

That’s illustrated by tens of thousands of people from all over the world commenting with their public Facebook accounts on movies and TV-shows that were obviously pirated.

Pirate comments on a streaming site

Of course, if piracy disappeared overnight then only a fraction of these pirates would pay for a Netflix subscription, but saying that piracy is irrelevant for the streaming giant may be a bit much.

Netflix itself is all too aware of this it seems. The company has launched its own “Global Copyright Protection Group,” an anti-piracy division that’s on par with those of many major Hollywood studios.

Netflix isn’t winning the war on piracy; it just got started….


New “Out of Control” Denuvo Piracy Protection Cracked

Post Syndicated from Andy original https://torrentfreak.com/new-control-denuvo-piracy-protection-cracked-170602/

Like many games in recent times, indie title RiME uses Denuvo anti-piracy technology to keep the swashbucklers away. It won’t stay that way for long.

Earlier this week, RiME developer Tequila Works grabbed a few headlines after stating it would remove the Denuvo protection from its game, should it fall to crackers.

“I have seen some conversations about our use of Denuvo anti-tamper, and I wanted to take a moment to address it,” RiME community manager Dariuas wrote on Steam forums.

“RiME is a very personal experience told through both sight and sound. When a game is cracked, it runs the risk of creating issues with both of those items, and we want to do everything we can to preserve this quality in RiME.”

Dariuas concluded that a Denuvo-free version of RiME would be released if the game was cracked. Within days of the announcement and right on cue, pirates struck.

In a fanfare of celebrations, rising cracking star Baldman announced that he had defeated the latest v4+ iteration of Denuvo and dumped a cracked copy of RiME online. While encouraging people to buy what he describes as a “super nice” game, Baldman was less complimentary about Denuvo.

Labeling the anti-tamper technology a “huge abomination,” the cracker said that Denuvo’s creators had really upped their efforts this time. People like Baldman who work on cracking Denuvo speak of the protection calling on code ‘triggers.’ For RiME, things were reportedly amped up to 11.

“In Rime that ugly creature went out of control – how do you like three fucking hundreds of THOUSANDS calls to ‘triggers’ during initial game launch and savegame loading? Did you wonder why game loading times are so long – here is the answer,” Baldman explained.

“In previous games like Sniper: Ghost Warrior 3, NieR Automata, Prey there were only about 1000 ‘triggers’ called, so we have x300 here.”

But according to the cracker, the 300,000 calls to triggers was a mere “warmup” for Denuvo. After just 30 minutes of gameplay, the count rose to two million, a figure he delivered with shocked expletives.

One of the main points of criticism for protections like Denuvo is that they take a toll on both game performance and gaming hardware. Baldman, who speaks English as a second language, reports that in RiME things got massively out of hand, which negatively affects the game.

“Protection now calls about 10-30 triggers every second during actual gameplay, slowing game down. In previous games like Sniper: Ghost Warrior 3, NieR Automata, Prey there were only about 1-2 ‘triggers’ called every several minutes during gameplay, so do the math.”

Only making matters worse, the cracker says, is the fact the triggers are heavily obfuscated under a virtual machine, which further affects performance. However, thanks to RiME’s developers making good on their word, any protection-related problems will soon be a thing of the past.

“Today, we got word that there was a crack which would bypass Denuvo,” Dariuas wrote last night.

“Upon receiving this news, we worked to test this and verify that it was, in fact, the case. We have now confirmed that it is. As such, we at [publisher] Team Grey Box are following through on our promise from earlier this week that we will be replacing the current build of RiME with one that does not contain Denuvo.”

So while gamers wait for Denuvo to get stripped from RiME and pirates celebrate, the company behind the anti-piracy technology will be considering its options. If what Baldman claims is true, it sounds like more than just a little desperation is in the air.

Worryingly for Denuvo, not even throwing the kitchen sink at the problem has had much effect.


Introspection

Post Syndicated from Eevee original https://eev.ee/blog/2017/05/28/introspection/

This month, IndustrialRobot has generously donated in order to ask:

How do you go about learning about yourself? Has your view of yourself changed recently? How did you handle it?

Whoof. That’s incredibly abstract and open-ended — there’s a lot I could say, but most of it is hard to turn into words.


The first example to come to mind — and the most conspicuous, at least from where I’m sitting — has been the transition from technical to creative since quitting my tech job. I think I touched on this a year ago, but it’s become all the more pronounced since then.

I quit in part because I wanted more time to work on my own projects. Two years ago, those projects included such things as: giving the Python ecosystem a better imaging library, designing an alternative to regular expressions, building a Very Correct IRC bot framework, and a few more things along similar lines. The goals were all to solve problems — not hugely important ones, but mildly inconvenient ones that I thought I could bring something novel to. Problem-solving for its own sake.

Now that I had all the time in the world to work on these things, I… didn’t. It turned out they were almost as much of a slog as my job had been!

The problem, I think, was that there was no point.

This was really weird to realize and come to terms with. I do like solving problems for its own sake; it’s interesting and educational. And most of the programming folks I know and surround myself with have that same drive and use it to create interesting tools like Twisted. So besides taking for granted that this was the kind of stuff I wanted to do, it seemed like the kind of stuff I should want to do.

But even if I create a really interesting tool, what do I have? I don’t have a thing; I have a tool that can be used to build things. If I want a thing, I have to either now build it myself — starting from nearly zero despite all the work on the tool, because it can only do so much in isolation — or convince a bunch of other people to use my tool to build things. Then they’d be depending on my tool, which means I have to maintain and support it, which is even more time and effort poured into this non-thing.

Despite frequently being drawn to think about solving abstract tooling problems, it seems I truly want to make things. This is probably why I have a lot of abandoned projects boldly described as “let’s solve X problem forever!” — I go to scratch the itch, I do just enough work that it doesn’t itch any more, and then I lose interest.

I spent a few months quietly flailing over this minor existential crisis. I’d spent years daydreaming about making tools; what did I have if not that drive? I was having to force myself to work on what I thought were my passion projects.

Meanwhile, I’d vaguely intended to do some game development, but for some reason dragged my feet forever and then took my sweet time dipping my toes in the water. I did work on a text adventure, Runed Awakening, on and off… but it was a fractal of creative decisions and I had a hard time making all of them. It might’ve been too ambitious, despite feeling small, and that might’ve discouraged me from pursuing other kinds of games earlier.

A big part of it might have been the same reason I took so long to even give art a serious try. I thought of myself as a technical person, and art is a thing for creative people, so I’m simply disqualified, right? Maybe the same thing applies to games.

Lord knows I had enough trouble when I tried. I’d orbited the Doom community for years but never released a single finished level. I did finally give it a shot again, now that I had the time. Six months into my funemployment, I wrote a three-part guide on making Doom levels. Three months after that, I finally released one of my own.

I suppose that opened the floodgates; a couple weeks later, glip and I decided to try making something for the PICO-8, and then we did that (almost exactly a year ago!). Then kept doing it.

It’s been incredibly rewarding — far moreso than any “pure” tooling problem I’ve ever approached. Moreso than even something like veekun, which is a useful thing. People have thoughts and opinions on games. Games give people feelings, which they then tell you about. Most of the commentary on a reference website is that something is missing or incorrect.

I like doing creative work. There was never a singular moment when this dawned on me; it was a slow process over the course of a year or more. I probably should’ve had an inkling when I started drawing, half a year before I quit; even my early (and very rough) daily comics made people laugh, and I liked that a lot. Even the most well-crafted software doesn’t tend to bring joy to people, but amateur art can.

I still like doing technical work, but I prefer when it’s a means to a creative end. And, just as important, I prefer when it has a clear and constrained scope. “Make a library/tool for X” is a nebulous problem that could go in a great many directions; “make a bot that tweets Perlin noise” has a pretty definitive finish line. It was interesting to write a little physics engine, but I would’ve hated doing it if it weren’t for a game I was making, with the clear scope of “do what I need for this game”.


It feels like creative work is something I’ve been wanting to do for a long time. If this were a made-for-TV movie, I would’ve discovered this impulse one day and immediately revealed myself as a natural-born artistic genius of immense unrealized talent.

That didn’t happen. Instead I’ve found that even something as mundane as having ideas is a skill, and while it’s one I enjoy, I’ve barely ever exercised it at all. I have plenty of ideas with technical work, but I run into brick walls all the time with creative stuff.

How do I theme this area? Well, I don’t know. How do I think of something? I don’t know that either. It’s a strange paradox to have an urge to create things but not quite know what those things are.

It’s such a new and completely different kind of problem. There’s no right answer, or even an answer I can check for “correctness”. I can do anything. With no landmarks to start from, it’s easy to feel completely lost and just draw blanks.

I’ve essentially recalibrated the texture of stuff I work on, and I have to find some completely new ways to approach problems. I haven’t found them yet. I don’t think they’re anything that can be told or taught. But I’m starting to get there, and part of it is just accepting that I can’t treat these like problems with clear best solutions and clear algorithms to find those solutions.

A particularly glaring irony is that I’ve had a really tough problem designing abstract spaces, even though that’s exactly the kind of architecture I praise in Doom. It’s much trickier than it looks — a good abstract design is reminiscent of something without quite being that something.

I suppose it’s similar to a struggle I’ve had with art. I’m drawn to a cartoony style, and cartooning is also a mild form of abstraction, of whittling away details to leave only what’s most important. I’m reminded in particular of the forest background in fox flux — I was completely lost on how to make something reminiscent of a tree line. I knew enough to know that drawing trees would’ve made the background far too busy, but trees are naturally busy, so how do you represent that?

The answer glip gave me was to make big chunky leaf shapes around the edges and where light levels change. Merely overlapping those shapes implies depth well enough to convey the overall shape of the tree. The result works very well and looks very simple — yet it took a lot of effort just to get to the idea.

It reminds me of mathematical research, in a way? You know the general outcome you want, and you know the tools at your disposal, and it’s up to you to make some creative leaps. I don’t think there’s a way to directly learn how to approach that kind of problem; all you can do is look at what others have done and let it fuel your imagination.


I think I’m getting a little distracted here, but this is stuff that’s been rattling around lately.

If there’s a more personal meaning to the tree story, it’s that this is a thing I can do. I can learn it, and it makes sense to me, despite being a huge nerd.

Two and a half years ago, I never would’ve thought I’d ever make an entire game from scratch and do all the art for it. It was completely unfathomable. Maybe we can do a lot of things we don’t expect we’re capable of, if only we give them a serious shot.

And ask for help, of course. I have a hell of a time doing that. I did a painting recently that factored in mountains of glip’s advice, and on some level I feel like I didn’t quite do it myself, even though every stroke was made by my hand. Hell, I don’t even look at references nearly as much as I should. It feels like cheating, somehow? I know that’s ridiculous, but my natural impulse is to put my head down and figure it out myself. Maybe I’ve been doing that for too long with programming. Trust me, it doesn’t work quite so well in a brand new field.


I’m getting distracted again!

To answer your actual questions: how do I go about learning about myself? I don’t! It happens completely by accident. I’ll consciously examine my surface-level thoughts or behaviors or whatever, sure, but the serious fundamental revelations have all caught me completely by surprise — sometimes slowly, sometimes suddenly.

Most of them also came from listening to the people who observe me from the outside: I only started drawing in the first place because of some ridiculous deal I made with glip. At the time I thought they just wanted everyone to draw because art is their thing, but now I’m starting to suspect they’d caught on after eight years of watching me lament that I couldn’t draw.

I don’t know how I handle such discoveries, either. What is handling? I imagine someone discovering something and trying to come to grips with it, but I don’t know that I have quite that experience — my grappling usually comes earlier, when I’m still trying to figure the thing out despite not knowing that there’s a thing to find out. Once I know it, it’s on the table; I can’t un-know it or reject it meaningfully. All I can do is figure out what to do with it, and I approach that the same way I approach every other problem: by flailing at it and hoping for the best.

This isn’t quite 2000 words. Sorry. I’ve run out of things to say about me. This paragraph is very conspicuous filler. Banana. Atmosphere. Vocation.

Julia language for Raspberry Pi

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/julia-language-raspberry-pi/

Julia is a free and open-source general purpose programming language made specifically for scientific computing. It combines the ease of writing in high-level languages like Python and Ruby with the technical power of MATLAB and Mathematica and the speed of C. Julia is ideal for university-level scientific programming and it’s used in research.

Julia language logo

Some time ago Viral Shah, one of the language’s co-creators, got in touch with us at the Raspberry Pi Foundation to say his team was working on a port of Julia to the ARM platform, specifically for the Raspberry Pi. Since then, they’ve done sterling work to add support for ARM. We’re happy to announce that we’ve now added Julia to the Raspbian repository, and that all Raspberry Pi models are supported!

Not only did the Julia team port the language itself to the Pi, but they also added support for GPIO, the Sense HAT and Minecraft. What I find really interesting is that when they came to visit and show us a demo, they took a completely different approach to the Sense HAT than I’d seen before: Simon, one of the Julia developers, started by loading the Julia logo into a matrix within the Jupyter notebook and then displayed it on the Sense HAT LED matrix. He then did some matrix transformations and the Sense HAT showed the effect of these manipulations.

Viral says:

The combination of Julia’s performance and Pi’s hardware unlocks new possibilities. Julia on the Pi will attract new communities and drive applications in universities, research labs and compute modules. Instead of shipping the data elsewhere for advanced analytics, it can simply be processed on the Pi itself in Julia.

Our port to ARM took a while, since we started at a time when LLVM on ARM was not fully mature. We had a bunch of people contributing to it – chipping away for a long time. Yichao did a bunch of the hard work, since he was using it for his experiments. The folks at the Berkeley Race car project also put Julia and JuMP on their self-driving cars, giving a pretty compelling application. We think we will see many more applications.

I organised an Intro to Julia session for the Cambridge Python user group earlier this week, and rather than everyone having to install Julia, Jupyter and all the additional modules on their own laptops, we just set up a room full of Raspberry Pis and prepared an SD card image. This was much easier and also meant we could use the Sense HAT to display output.

Intro to Julia language session at Raspberry Pi Foundation
Getting started with Julia language on Raspbian
Julia language logo on the Sense HAT LED array

Simon kindly led the session, and before long we were using Julia to generate the Mandelbrot fractal and display it on the Sense HAT:

Ben Nuttall on Twitter

@richwareham’s Sense HAT Mandelbrot fractal with @JuliaLanguage at @campython https://t.co/8FK7Vrpwwf

Naturally, one of the attendees, Rich Wareham, progressed to the Julia set – find his code here: gist.github.com/bennuttall/…

Last year at JuliaCon, there were two talks about Julia on the Pi; you can watch them on YouTube.

Install Julia on your Raspberry Pi with:

sudo apt update
sudo apt install julia

You can install the Jupyter notebook for Julia with:

sudo apt install julia libzmq3-dev python3-zmq
sudo pip3 install jupyter
julia -e 'Pkg.add("IJulia");'

And you can easily install extra packages from the Julia console:

Pkg.add("SenseHat")

The Julia team have also created a resources website for getting started with Julia on the Pi: juliaberry.github.io

Julia team visiting Pi Towers

There never was a story of more joy / Than this of Julia and her Raspberry Pi

Many thanks to Viral Shah, Yichao Yu, Tim Besard, Valentin Churavy, Jameson Nash, Tony Kelman, Avik Sengupta and Simon Byrne for their work on the port. We’re all really excited to see what people do with Julia on Raspberry Pi, and we look forward to welcoming Julia programmers to the Raspberry Pi community.

The post Julia language for Raspberry Pi appeared first on Raspberry Pi.

A day with AIY Voice Projects Kit – The MagPi 57 aftermath

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/aiy-voice-projects-kit-magpi-57-aftermath/

Hi folks, Rob here. It’s been a crazy day or so here over at The MagPi and Raspberry Pi as we try to answer all your questions and look at all the cool stuff you’re doing with the new AIY Voice Projects Kit that we bundled with issue 57. While it has been busy, it’s also been a lot of fun.

Got a question?

We know lots of you have got your hands on issue 57, but a lot more of you will have questions to ask. Here’s a quick FAQ before we go over the fun stuff you’ve been doing:

Which stores stock The MagPi in [insert country]?

The original edition of The MagPi is only currently stocked in bricks-and-mortar stores in the UK, Ireland, and the US:

  • In the UK, you can find copies at WHSmith, Asda, Tesco, and Sainsbury’s
  • In the US, you can find them at Barnes and Noble and at Micro Center
  • In Ireland, we’re in Tesco and Easons

Unfortunately, this means you will find very little (if any) stock of issue 57 in stores in other countries. Even Canada (we’ve been asked this a lot!)…

The map below shows the locations to which stock has been shipped (please note, though, that this doesn’t indicate live stock):

My Barnes and Noble still only has issue 55!

Issue 57 should have been in Barnes & Noble stores yesterday, but stock sometimes takes a few days to spread and get onto shelves. Keep trying over the next few days. We’re skipping issue 56 in the US so you can get 57 at the same time (you’ll be getting the issues at the same time from now on).

If I start a new subscription, will I get issue 57?

Yes. We have limited copies for new subscribers. It’s available on all new print subscriptions. You need to specify that you want issue 57 when you subscribe.

Will you be restocking online?

We’re looking into it. If we manage to, keep an eye on our social media channels and the blog for more details.

Is there any way to get the AIY Voice Projects Kit on its own?

Not yet, but you can sign up to Google’s mailing list to be notified when they become available.

Rob asked us to do no evil with our Raspberry Pi: how legally binding is that?

Highest galactic law. Here is a picture of me pointing at you to remind you of this.

Image of Rob with the free AIY kit

Please do not do evil with your Raspberry Pi

OK, with that out of the way, here’s the cool stuff!

AIY Voice Projects Kit builds

A lot of you built the kit very quickly, including Raspberry Pi Certified Educator Lorraine Underwood, who managed it before lunch.

Lorraine Underwood on Twitter

Ha, cool. I made it! Top notch instructions and pics @TheMagP1 Not going to finish the whole thing before youngest is out of nursery. Gah!!

We love Andy Grimley’s shot as the HAT seems to be floating. We had no idea it could levitate!

Andy Grimley on Twitter

This is awesome @TheMagP1 #AIYProjects

A few people reached out to tell us they were building it with children for their weekend project. These messages really are one of the best parts of our job.

Screenshot of Facebook comment on AIY kit

Screenshot of tweet about AIY kit

Screenshot of tweet about AIY kit

What have people been making with it? Domhnall O’Hanlon made the basic assistant setup, and photographed it in the stunning surroundings of the National Botanic Gardens of Ireland:

Domhnall O Hanlon on Twitter

Took my @Raspberry_Pi #AIYProjects on a field trip to the National Botanic Gardens. Thanks @TheMagP1! #edchatie #edtech https://t.co/f5dR9JBDEx

Friend of The MagPi David Pride has a cool idea:

David Pride on Twitter

@Raspberry_Pi @TheMagP1 Can feel a weekend mashup happening with the new #AIYProjects kit & my latest car boot find (the bird, not the cat!)

Check out Bastiaan Slee’s hack of an old IoT device:

Bastiaan Slee on Twitter

@TheMagP1 I’ve given my Nabaztag a second life with #AIYProjects https://t.co/udtWaAMz2x

Bastiaan Slee on Twitter

Hacking time with the Nabaztag and #AIYProjects ! https://t.co/udtWaAMz2x

Finally, Sandy Macdonald is doing a giveaway of the issue. Go and enter: a simple retweet could win you a great prize!

Sandy Macdonald on Twitter

I’m giving away this copy of @TheMagP1 with the @Raspberry_Pi #AIYProjects free, inc. p&p worldwide. RT to enter. Closes 9am BST tomorrow.

If you have got your hands on the AIY Voice Projects Kit, do show us what you’ve made with it! Remember to use the #AIYProjects hashtag on Twitter to show off your project as well.

There’s also a dedicated forum for discussing the AIY Voice Projects Kit which you can find on the main Raspberry Pi forum. Check it out if you have something to share or if you’re having any problems.

Yesterday I promised a double-dose of Picard gifs. So, what’s twice as good as a Picard gif? A Sisko gif, of course! See you next time…


The post A day with AIY Voice Projects Kit – The MagPi 57 aftermath appeared first on Raspberry Pi.

#CharityTuesday: What do kids say about Code Club?

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/code-club-kids/

We’ve recently released a series of new Code Club videos on our YouTube channel. These range from advice on setting up your own Code Club to testimonials from kids and volunteers. To offer a little more information on the themes of each video, we’ll be releasing #CharityTuesday blog posts for each, starting with the reason for it all: the kids.

What do kids say about Code Club?

The team visited Liverpool Central Library to find out what the children at their Code Club think about the club and its activities, and what they’re taking away from attending.

“It makes me all excited inside…”

Code Clubs are weekly after-school coding clubs for 9- to 11-year-olds. Children learn to create games, animations and websites using our specially created resources, with the support of awesome volunteers. We visited Liverpool Central Library to find out what the children at their Code Club think about their coding club.

We love to hear the wonderful stories of exploration and growth from the kids that attend Code Clubs. The changes our volunteers see in many of their club members are both heart-warming and extraordinary.

Code Club kids two girls at Code Club

“It makes me all excited inside.”

“There’s a lad who comes to my club who was really not confident about coding. He said he was rubbish at maths and such when he first started,” explains Dan Powell, Code Club Regional Coordinator for the South East. “After he’d done a couple of terms he told us, ‘Tuesday is my favourite day now because I get to come to Code Club: it makes my brain feel sparkly.’ He’s now writing his own adventure game in Scratch!”

Code Club kids

“I think the best part of it is being able to interact with other people and to share ideas on projects.”

“I love the sentiments at the end of this advert a couple of my newbies made in Scratch,” continues Lorna Gibson, Regional Coordinator for Scotland. “The bit where they say that nobody is left behind and everyone has fun made me teary.”

Here is their wonderful Scratch advert:

Get involved in Code Club!

Code Club is a nationwide network of volunteer-led after-school coding clubs for children. It offers a great place for children of all abilities to learn and build upon their skills amongst like-minded peers.

There are currently over 10,000 active Code Clubs across the world and official Code Club communities in ten countries. If you want to find out more, visit the Code Club UK website, or Code Club International if you are outside of the UK.

The post #CharityTuesday: What do kids say about Code Club? appeared first on Raspberry Pi.

Tor exit node operator arrested in Russia (TorServers.net blog)

Post Syndicated from ris original https://lwn.net/Articles/720231/rss

On April 12 Dmitry Bogatov, a mathematician and Debian maintainer, was arrested in Russia for “incitation to terrorism” because of some messages that went through his Tor exit node. “Though, the very nature of Bogatov case is a controversial one, as it mixes technical and legal arguments, and makes necessary both strong legal and technical expertise involved. Indeed, as a Tor exit node operator, Dmitry does not have control and responsibility on the content and traffic that passes through his node: it would be the same as accusing someone who has a knife stolen from her house for the murder committed with this knife by a stranger.” The Debian Project made a brief statement.

Sense HAT Emulator Upgrade

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/sense-hat-emulator-upgrade/

Last year, we partnered with Trinket to develop a web-based emulator for the Sense HAT, the multipurpose add-on board for the Raspberry Pi. Today, we are proud to announce an exciting new upgrade to the emulator. We hope this will make it even easier for you to design amazing experiments with the Sense HAT!

What’s new?

The original release of the emulator didn’t fully support all of the Sense HAT features. Specifically, the movement sensors were not emulated. Thanks to funding from the UK Space Agency, we are delighted to announce that a new round of development has just been completed. From today, the movement sensors are fully supported. The emulator also comes with a shiny new 3D interface, Astro Pi skin mode, and Pygame event handling. Click the ▶︎ button below to see what’s new!

Upgraded sensors

On a physical Sense HAT, real sensors react to changes in environmental conditions like fluctuations in temperature or humidity. The emulator has sliders which are designed to simulate this. However, emulating the movement sensor is a bit more complicated. The upgrade introduces a 3D slider, which is essentially a model of the Sense HAT that you can move with your mouse. Moving the model affects the readings provided by the accelerometer, gyroscope, and magnetometer sensors.

Code written in this emulator is directly portable to a physical Raspberry Pi and Sense HAT without modification. This means you can now develop and test programs using the movement sensors from any internet-connected computer, anywhere in the world.
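To illustrate, here is a minimal sketch using the standard sense_hat Python API, which the emulator mirrors; the same file should run in the emulator and on a real Pi:

from sense_hat import SenseHat  # same import in the emulator and on real hardware

sense = SenseHat()

# Read the newly emulated movement sensors.
orientation = sense.get_orientation()   # pitch, roll, yaw in degrees
accel = sense.get_accelerometer_raw()   # raw x, y, z values in Gs

sense.show_message("Pitch: %.0f" % orientation["pitch"])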

Astro Pi mode

Astro Pi is our series of competitions offering students the chance to have their code run in space! The code is run on two space-hardened Raspberry Pi units, with attached Sense HATs, on the International Space Station.

Image of Astro Pi unit Sense HAT emulator upgrade

Astro Pi skin mode

There are a number of practical things that can catch you out when you are porting your Sense HAT code to an Astro Pi unit, though, such as the orientation of the screen and joystick. Just as having a 3D-printed Astro Pi case enables you to discover and overcome these, so does the Astro Pi skin mode in this emulator. In the bottom right-hand panel, there is an Astro Pi button which enables the mode: click it again to go back to the Sense HAT.

The joystick and push buttons are operated by pressing your keyboard keys: use the cursor keys and Enter for the joystick, and U, D, L, R, A, and B for the buttons.
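As a rough sketch, assuming the standard sense_hat joystick API, a program that echoes joystick presses (or their keyboard stand-ins) to the LED matrix looks something like this:

import time
from sense_hat import SenseHat

sense = SenseHat()

# Show the first letter of each joystick direction as it is pressed.
while True:
    for event in sense.stick.get_events():
        if event.action == "pressed":
            # event.direction is "up", "down", "left", "right" or "middle"
            sense.show_letter(event.direction[0].upper())
    time.sleep(0.1)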

Sense HAT resources for Code Clubs

Image of gallery of Code Club Sense HAT projects Sense HAT emulator upgrade

Click the image to visit the Code Club projects page

We also have a new range of Code Club resources which are based on the emulator. Of these, three use the environmental sensors and two use the movement sensors. The resources are an ideal way for any Code Club to get into physical computing.

The technology

The 3D models in the emulator are represented entirely with HTML and CSS. “This project pushed the Trinket team, and the 3D web, to its limit,” says Elliott Hauser, CEO of Trinket. “Our first step was to test whether pure 3D HTML/CSS was feasible, using Julian Garnier’s Tridiv.”

Sense HAT 3D image mockup Sense HAT emulator upgrade

The Trinket team’s preliminary 3D model of the Sense HAT

“We added JavaScript rotation logic and the proof of concept worked!” Elliott continues. “Countless iterations, SVG textures, and pixel-pushing tweaks later, the finished emulator is far more than the sum of its parts.”

Sense HAT emulator 3d image final version Sense HAT emulator upgrade

The finished Sense HAT model: doesn’t it look amazing?

Check out this blog post from Trinket for more on the technology and mathematics behind the models.

One of the compromises we’ve had to make is browser support. Unfortunately, browsers like Firefox and Microsoft Edge don’t fully support this technology yet. Instead, we recommend that you use Chrome, Safari, or Opera to access the emulator.

Where do I start?

If you’re new to the Sense HAT, you can simply copy and paste many of the code examples from our educational resources, like this one. Alternatively, you can check out our Sense HAT Essentials e-book. For a complete list of all the functions you can use, have a look at the Sense HAT API reference here.

The post Sense HAT Emulator Upgrade appeared first on Raspberry Pi.

Commenting Policy for This Blog

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/commenting_poli.html

Over the past few months, I have been watching my blog comments decline in civility. I blame it in part on the contentious US election and its aftermath. It’s also a consequence of not requiring visitors to register in order to post comments, and of our tolerance for impassioned conversation. Whatever the causes, I’m tired of it. Partisan nastiness is driving away visitors who might otherwise have valuable insights to offer.

I have been engaging in more active comment moderation. What that means is that I have been quicker to delete posts that are rude, insulting, or off-topic. This is my blog. I consider the comments section as analogous to a gathering at my home. It’s not a town square. Everyone is expected to be polite and respectful, and if you’re an unpleasant guest, I’m going to ask you to leave. Your freedom of speech does not compel me to publish your words.

I like people who disagree with me. I like debate. I even like arguments. But I expect everyone to behave as if they’ve been invited into my home.

I realize that I sometimes express opinions on political matters; I find they are relevant to security at all levels. On those posts, I welcome on-topic comments regarding those opinions. I don’t welcome people pissing and moaning about the fact that I’ve expressed my opinion on something other than security technology. As I said, it’s my blog.

So, please… Assume good faith. Be polite. Minimize profanity. Argue facts, not personalities. Stay on topic. If you want a model to emulate, look at Clive Robinson’s posts.

Schneier on Security is not a professional operation. There’s no advertising, so no revenue to hire staff. My part-time moderator — paid out of my own pocket — and I do what we can when we can. If you see a comment that’s spam, or off-topic, or an ad hominem attack, flag it and be patient. Don’t reply or engage; we’ll get to it. And we won’t always post an explanation when we delete something.

My own stance on privacy and anonymity means that I’m not going to require commenters to register a name or e-mail address, so that isn’t an option. And I really don’t want to disable comments.

I dislike having to deal with this problem. I’ve been proud and happy to see how interesting and useful the comments section has been all these years. I’ve watched many blogs and discussion groups descend into toxicity as a result of trolls and drive-by ideologues derailing the conversations of regular posters. I’m not going to let that happen here.

Pie vs. π vs. Pi Day 2017

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pi-day-2017/

It’s the fourteenth day of the third month! And if you’re a Brit, that means absolutely nothing. But take that date, flip reverse it like you’re American, and BOOM! It’s 3.14 or, as the cool kids call it, Pi Day.

In honour of this wonderful day, here are some awesome Pi/pie/π doohickies that we hope you’ll all enjoy!

Pi versus pie

screenshot of text message conversational misunderstanding: Pi versus pie versus P.I.

Dramatised re-enactment of actual events

Have you found yourself embroiled in a textual or verbal conversation similar to the one above? Are you tired of having to explain that you’re not playing Minecraft on a piece of pastry, or that your lights aren’t being controlled by confectionery? Worry no more, for we have provided the following graphic as a visual aid to help you to introduce the uninitiated to the Raspberry Pi.*

Pi vs Pie Pi Day infographic

Print off the PDF version for your classroom, Code Club, office, bedroom, locker door, Grandma, neighbour, local bus stop, newsletter, Christmas cards, and more!

Do it for the Pi (but please don’t eat it!)

This gem of a music video found its way to us via Twitter, and, in the moment we watched the preview snippet, the earworm lodged itself firmly into our brains and hearts.

Creative Mind Frame / 1-UP

So this whole Pi(e) Day business I’ve been posting about… With 3.14 coming up Ohm-I and I decided it was appropriate to do our Patreon song about the Raspberry Pi computer. I mean it’s only right… right? But why stop there?

This is just one of many awesome videos from Creative Mind Frame and we highly recommend checking out more of his brilliant work.

But what about pie?

How could we publish a Pi Day blog post without mentioning pie? Here in the UK, I like to think we’re more of a savoury pie nation than our American friends. As far as desserts go, whether you head into a cafe, pull up a chair at your grandmother’s table, or simply browse the aisles of your local supermarket, you’re likely to find a wider array of tarts than pies on offer. Because of this, let me direct you towards our second Queen, Her Sublime Majesty Mary Berry, and this recipe for Mary’s Bakewell Tart. Raspberry jam and almonds? Yes, please!

Image of a Bakewell Tart

Nommmmmmmmmmmmmmmmmmmm…
Photo from BBC Food.

If the Bakewell doesn’t do it for you, you heathens, check out Rosanna Pansino‘s Mini Raspberry Pi Pies.

MINI RASPBERRY PI PIES – NERDY NUMMIES

Today I made Mini Raspberry Pi Pies in celebration of Pi Day (March 14th)! I really enjoy making nerdy themed goodies and decorating them. I’m not a pro, but I love baking as a hobby. Please let me know what kind of treat you would like me to make next.

Last, but by no means least…

What exactly IS Pi?

An excellent question. Here’s one of YouTube’s finest, maths-enthusiast The Odd 1s Out, to argue the case for his favourite number. Also, I stole his Pi/Pie Day artwork for today’s thumbnail so…thanks!

Why Pi is Awesome (Vi Hart Rebuttal)

Happy Pi day everyone! Go checkout Vi Hart’s channel ➤ https://www.youtube.com/user/Vihart She’s made a 2016 version!! :O ➤ https://www.youtube.com/watch?v=vydPOjRVcSg&nohtml5=False

Whatever you do this Pi Day, make sure you have a great one. And if you build something great, learn something wonderful, or simply make a pie, be sure to share it using #PiDay and tag us so we can see it!

*Big up to Sam for the awesome graphic and the “YES!” he exclaimed when I asked him to draw a Pi versus a Pie…Street Fighter-style.

 

The post Pie vs. π vs. Pi Day 2017 appeared first on Raspberry Pi.

New – Instance Size Flexibility for EC2 Reserved Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-instance-size-flexibility-for-ec2-reserved-instances/

Reserved Instances allow AWS customers to receive a significant discount on their EC2 usage (up to 75% when compared to On-Demand pricing), along with capacity reservation when the RIs are purchased for use in a specific Availability Zone (AZ).

Late last year we made Reserved Instances more flexible with the launch of Regional RIs that apply the discount to any AZ in the Region, along with Convertible RIs that allow you to change the instance family and other parameters associated with a Reserved Instance. Both types of RIs reduce your management overhead and provide you with additional options. When you use Regional RIs you can launch an instance without having to worry about launching in the AZ that is eligible for the RI discount. When you use Convertible RIs you can ensure that your RIs remain well-fitted to your usage, even as your choice of instance types and sizes varies over time.

Instance Size Flexibility
Effective March 1, your existing Regional RIs are even more flexible! All Regional Linux/UNIX RIs with shared tenancy now apply to all sizes of instances within an instance family and AWS region, even if you are using them across multiple accounts via Consolidated Billing. This will further reduce the time that you spend managing your RIs and will let you be even more creative and innovative with your use of compute resources.

All new and existing RIs are sized according to a normalization factor that is based on the instance size:

Instance Size     Normalization Factor
nano              0.25
micro             0.5
small             1
medium            2
large             4
xlarge            8
2xlarge           16
4xlarge           32
8xlarge           64
10xlarge          80
32xlarge          256

Let’s say you already own an RI for a c4.8xlarge. This RI now applies to any usage of a Linux/UNIX C4 instance with shared tenancy in the region. This could be:

  • One c4.8xlarge instance.
  • Two c4.4xlarge instances.
  • Four c4.2xlarge instances.
  • Sixteen c4.large instances.

It also includes other combinations such as one c4.4xlarge and eight c4.large instances.

If you own an RI that is smaller than the instance that you are running, you will be charged the pro-rated, On-Demand price for the excess. This means that you could buy an RI for a c4.4xlarge, use that instance most of the time, but scale up to a c4.8xlarge instance on occasion. We’ll do the math and you’ll pay only half of the On-Demand, per-hour price for the larger instance (as always, our goal is to give you access to compute power at the lowest possible cost). If you own an RI for a large instance and run a smaller instance, the RI price will apply to the smaller instance. However, the unused portion of the reservation will not accumulate.
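As a rough sketch of the arithmetic (not the actual billing engine), the coverage calculation works out like so:

# Normalization factors from the table above.
FACTORS = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2, "large": 4,
    "xlarge": 8, "2xlarge": 16, "4xlarge": 32, "8xlarge": 64,
    "10xlarge": 80, "32xlarge": 256,
}

def ri_covered_fraction(ri_size, running_size):
    """Fraction of a running instance's hourly usage covered by one RI."""
    return min(1.0, FACTORS[ri_size] / FACTORS[running_size])

# Own a c4.4xlarge RI but occasionally run a c4.8xlarge: half of the larger
# instance's usage is covered; the rest is billed at the On-Demand rate.
print(ri_covered_fraction("4xlarge", "8xlarge"))  # 0.5

# Own a c4.8xlarge RI and run a c4.4xlarge: the smaller instance is fully
# covered, though the unused half of the reservation does not accumulate.
print(ri_covered_fraction("8xlarge", "4xlarge"))  # 1.0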

Now Available
This new-found flexibility is available now and will be applied automatically to your Regional Linux/UNIX RIs with shared tenancy, without any effort on your part.

Jeff;

Utopia

Post Syndicated from Eevee original https://eev.ee/blog/2017/03/08/utopia/

It’s been a while, but someone’s back on the Patreon blog topic tier! IndustrialRobot asks:

What does your personal utopia look like? Do you think we (as mankind) can achieve it? Why/why not?

Hm.

I spent the month up to my eyeballs in a jam game, but this question was in the back of my mind a lot. I could use it as a springboard to opine about anything, especially in the current climate: politics, religion, nationalism, war, economics, etc., etc. But all of that has been done to death by people who actually know what they’re talking about.

The question does say “personal”. So in a less abstract sense… what do I want the world to look like?

Mostly, I want everyone to have the freedom to make things.

I’ve been having a surprisingly hard time writing the rest of this without veering directly into the ravines of “basic income is good” and “maybe capitalism is suboptimal”. Those are true, but not really the tone I want here, and anyway they’ve been done to death by better writers than I. I’ve talked this out with Mel a few times, and it sounds much better aloud, so I’m going to try to drop my Blog Voice and just… talk.

*ahem*

Art versus business

So, art. Art is good.

I’m construing “art” very broadly here. More broadly than “media”, too. I’m including shitty robots, weird Twitter almost-bots, weird Twitter non-bots, even a great deal of open source software. Anything that even remotely resembles creative work — driven perhaps by curiosity, perhaps by practicality, but always by a soul bursting with ideas and a palpable need to get them out.

Western culture thrives on art. Most culture thrives on art. I’m not remotely qualified to defend this, but I suspect you could define culture in terms of art. It’s pretty important.

You’d think this would be reflected in how we discuss art, but often… it’s not. Tell me how often you’ve heard some of these gems.

  • “I could do that.”
  • “My eight-year-old kid could do that.”
  • Jokes about the worthlessness of liberal arts degrees.
  • Jokes about people trying to write novels in their spare time, the subtext being that only dreamy losers try to write novels, or something.
  • The caricature of a hippie working on a screenplay at Starbucks.

Oh, and then there was the guy who made a bot to scrape tons of art from artists who were using Patreon as a paywall — and a primary source of income. The justification was that artists shouldn’t expect to make a living off of, er, doing art, and should instead get “real jobs”.

I do wonder. How many of the people repeating these sentiments listen to music, or go to movies, or bought an iPhone because it’s prettier? Are those things not art that took real work to create? Is creating those things not a “real job”?

Perhaps a “real job” has to be one that’s not enjoyable, not a passion? And yet I can’t recall ever hearing anyone say that Taylor Swift should get a “real job”. Or that, say, pro football players should get “real jobs”. What do pro football players even do? They play a game a few times a year, and somehow this drives the flow of unimaginable amounts of money. We dress it up in the more serious-sounding “sport”, but it’s a game in the same general genre as hopscotch. There’s nothing wrong with that, but somehow it gets virtually none of the scorn that art does.

Another possible explanation is America’s partly-Christian, partly-capitalist attitude that you deserve exactly whatever you happen to have at the moment. (Whereas I deserve much more and will be getting it any day now.) Rich people are rich because they earned it, and we don’t question that further. Poor people are poor because they failed to earn it, and we don’t question that further, either. To do so would suggest that the system is somehow unfair, and hard work does not perfectly correlate with any particular measure of success.

I’m sure that factors in, but it’s not quite satisfying: I’ve also seen a good deal of spite aimed at people who are making a fairly decent chunk through Patreon or similar. Something is missing.

I thought, at first, that the key might be the American worship of work. Work is an inherent virtue. Politicians run entire campaigns based on how many jobs they’re going to create. Notably, no one seems too bothered about whether the work is useful, as long as someone decided to pay you for it.

Finally I stumbled upon the key. America doesn’t actually worship work. America worships business. Business means a company is deciding to pay you. Business means legitimacy. Business is what separates a hobby from a career.

And this presents a problem for art.

If you want to provide a service or sell a product, that’ll be hard, but America will at least try to look like it supports you. People are impressed that you’re an entrepreneur, a small business owner. Politicians will brag about policies made in your favor, whether or not they’re stabbing you in the back.

Small businesses have a particular structure they can develop into. You can divide work up. You can have someone in sales, someone in accounting. You can provide specifications and pay a factory to make your product. You can defer all of the non-creative work to someone else, whether that means experts in a particular field or unskilled labor.

But if your work is inherently creative, you can’t do that. The very thing you’re making is your idea in your style, driven by your experience. This is not work that’s readily parallelizable. Even if you sell physical merchandise and register as an LLC and have a dedicated workspace and do various other formal business-y things, the basic structure will still look the same: a single person doing the thing they enjoy. A hobbyist.

Consider the bulleted list from above. Those are all individual painters or artists or authors or screenwriters. The kinds of artists who earn respect without question are generally those managed by a business, those with branding: musical artists signed to labels, actors working for a studio. Even football players are part of a tangle of business.

(This doesn’t mean that business automatically confers respect, of course; tech in particular is full of anecdotes about nerds’ disdain for people whose jobs are design or UI or documentation or whathaveyou. But a businessy look seems to be a significant advantage.)

It seems that although art is a large part of what informs culture, we have a culture that defines “serious” endeavors in such a way that independent art cannot possibly be “serious”.

Art versus money

Which wouldn’t really matter at all, except that we also have a culture that expects you to pay for food and whatnot.

The reasoning isn’t too outlandish. Food is produced from a combination of work and resources. In exchange for getting the food, you should give back some of your own work and resources.

Obviously this is riddled with subtle flaws, but let’s roll with it for now and look at a case study. Like, uh, me!

Mel and I built and released two games together in the six weeks between mid-January and the end of February. Together, those games have made $1,000 in sales. The sales trail off fairly quickly within a few days of release, so we’ll call that the total gross for our effort.

I, dumb, having never actually sold anything before, thought this was phenomenal. Then I had the misfortune of doing some math.

Itch takes at least 10%, so we’re down to $900 net. Divided over six weeks, that’s $150 per week, before taxes — or $3.75 per hour if we’d been working full time.

Ah, but wait! There are two of us. And we hadn’t been working full time — we’d been working nearly every waking hour, which is at least twice “full time” hours. So we really made less than a dollar an hour. Even less than that, if you assume overtime pay.

From the perspective of capitalism, what is our incentive to do this? Between us, we easily have over thirty years of experience doing the things we do, and we spent weeks in crunch mode working on something, all to earn a small fraction of minimum wage. Did we not contribute back our own work and resources? Was our work worth so much less than waiting tables?

Waiting tables is a perfectly respectable way to earn a living, mind you. Ah, but wait! I’ve accidentally done something clever here. It is generally expected that you tip your waiter, because waiters are underpaid by the business, because the business assumes they’ll be tipped. Not tipping is actually, almost impressively, one of the rudest things you can do. And yet it’s not expected that you tip an artist whose work you enjoy, even though many such artists aren’t being paid at all.

Now, to be perfectly fair, both games were released for free. Even a dollar an hour is infinitely more than the zero dollars I was expecting — and I’m amazed and thankful we got as much as we did! Thank you so much. I bring it up not as a complaint, but as an armchair analysis of our systems of incentives.

People can take art for granted and whatever, yes, but there are several other factors at play here that hamper the ability for art to make money.

For one, I don’t want to sell my work. I suspect a great deal of independent artists and writers and open source developers (!) feel the same way. I create things because I want to, because I have to, because I feel so compelled to create that having a non-creative full-time job was making me miserable. I create things for the sake of expressing an idea. Attaching a price tag to something reduces the number of people who’ll experience it. In other words, selling my work would make it less valuable in my eyes, in much the same way that adding banner ads to my writing would make it less valuable.

And yet, I’m forced to sell something in some way, or else I’ll have to find someone who wants me to do bland mechanical work on their ideas in exchange for money… at the cost of producing sharply less work of my own. Thank goodness for Patreon, at least.

There’s also the reverse problem, in that people often don’t want to buy creative work. Everyone does sometimes, but only sometimes. It’s kind of a weird situation, and the internet has exacerbated it considerably.

Consider that if I write a book and print it on paper, that costs something. I have to pay for the paper and the ink and the use of someone else’s printer. If I want one more book, I have to pay a little more. I can cut those costs pretty considerably by printing a lot of books at once, but each copy still has a price, a marginal cost. If I then gave those books away, I would be actively losing money. So I can pretty well justify charging for a book.

Along comes the internet. Suddenly, copying costs nothing. Not only does it cost nothing, but it’s the fundamental operation. When you download a file or receive an email or visit a web site, you’re really getting a copy! Even the process which ultimately shows it on your screen involves a number of copies. This is so natural that we don’t even call it copying, don’t even think of it as copying.

True, bandwidth does cost something, but the rate is virtually nothing until you start looking at very big numbers indeed. I pay $60/mo for hosting this blog and a half dozen other sites — even that’s way more than I need, honestly, but downgrading would be a hassle — and I get 6TB of bandwidth. Even the longest of my posts haven’t exceeded 100KB. A post could be read by 64 million people before I’d start having a problem. If that were the population of a country, it’d be the 23rd largest in the world, between Italy and the UK.

How, then, do I justify charging for my writing? (Yes, I realize the irony in using my blog as an example in a post I’m being paid $88 to write.)

Well, I do pour effort and expertise and a fraction of my finite lifetime into it. But it doesn’t cost me anything tangible — I already had this hosting for something else! — and it’s easier all around to just put it online.

The same idea applies to a vast bulk of what’s online, and now suddenly we have a bit of a problem. Not only are we used to getting everything for free online, but we never bothered to build any sensible payment infrastructure. You still have to pay for everything by typing in a cryptic sequence of numbers from a little physical plastic card, which will then give you a small loan and charge the seller 30¢ plus 2.9% for the “convenience”.

If a website could say “pay 5¢ to read this” and you clicked a button in your browser and that was that, we might be onto something. But with our current setup, it costs far more than 5¢ to transfer 5¢, even though it’s just a number in a computer somewhere. The only people with the power and resources to fix this don’t want to fix it — they’d rather be the ones charging you the 30¢ plus 2.9%.

That leads to another factor of platforms and publishers, which are more than happy to eat a chunk of your earnings even when you do sell stuff. Google Play, the App Store, Steam, and anecdotally many other big-name comparable platforms all take 30% of your sales. A third! And that’s good! It seems common among book publishers to take 85% to 90%. For ebook sales — i.e., ones that don’t actually cost anything — they may generously lower that to a mere 75% to 85%.

Bless Patreon for only taking 5%. Itch.io is even better: it defaults to 10%, but gives you a slider, which you can set to anything from 0% to 100%.

I’ve mentioned all this before, so here’s a more novel thought: finite disposable income. Your audience only has so much money to spend on media right now. You can try to be more compelling to encourage them to spend more of it, rather than saving it, but ultimately everyone has a limit before they just plain run out of money.

Now, popularity is heavily influenced by social and network effects, so it tends to create a power law distribution: a few things are ridiculously hyperpopular, and then there’s a steep drop to a long tail of more modestly popular things.

If a new hyperpopular thing comes out, everyone is likely to want to buy it… but then that eats away a significant chunk of that finite pool of money that could’ve gone to less popular things.

This isn’t bad, and buying a popular thing doesn’t make you a bad person; it’s just what happens. I don’t think there’s any satisfying alternative that doesn’t involve radically changing the way we think about our economy.

Taylor Swift, who I’m only picking on because her infosec account follows me on Twitter, has sold tens of millions of albums and is worth something like a quarter of a billion dollars. Does she need more? If not, should she make all her albums free from now on?

Maybe she does, and maybe she shouldn’t. The alternative is for someone to somehow prevent her from making more money, which doesn’t sit well. Yet it feels almost heretical to even ask if someone “needs” more money, because we take for granted that she’s earned it — in part by being invested in by a record label and heavily advertised. The virtue is work, right? Don’t a lot of people work just as hard? (“But you have to be talented too!” Then please explain how wildly incompetent CEOs still make millions, and leave burning businesses only to be immediately hired by new ones? Anyway, are we really willing to bet there is no one equally talented but not as popular by sheer happenstance?)

It’s kind of a moot question anyway, since she’s probably under contract with billionaires and it’s not up to her.

Where the hell was I going with this.


Right, so. Money. Everyone needs some. But making it off art can be tricky, unless you’re one of the lucky handful who strike gold.

And I’m still pretty goddamn lucky to be able to even try this! I doubt I would’ve even gotten into game development by now if I were still working for an SF tech company — it just drained so much of my creative energy, and it’s enough of an uphill battle for me to get stuff done in the first place.

How many people do I know who are bursting with ideas, but have to work a tedious job to keep the lights on, and are too tired at the end of the day to get those ideas out? Make no mistake, making stuff takes work — a lot of it. And that’s if you’re already pretty good at the artform. If you want to learn to draw or paint or write or code, you have to do just as much work first, with much more frustration, and not as much to show for it.

Utopia

So there’s my utopia. I want to see a world where people have the breathing room to create the things they dream about and share them with the rest of us.

Can it happen? Maybe. I think the cultural issues are a fairly big blocker; we’d be much better off if we treated independent art with the same reverence as, say, people who play with a ball for twelve hours a year. Or if we treated liberal arts degrees as just as good as computer science degrees. (“But STEM can change the world!” Okay. How many people with computer science degrees would you estimate are changing the world, and how many are making a website 1% faster or keeping a lumbering COBOL beast running or trying to trick 1% more people into clicking on ads?)

I don’t really mean stuff like piracy, either. Piracy is a thing, but it’s… complicated. In my experience it’s not even artists who care the most about piracy; it’s massive publishers, the sort who see artists as a sponge to squeeze money out of. You know, the same people who make everything difficult to actually buy, infest it with DRM so it doesn’t work on half the stuff you own, and don’t even sell it in half the world.

I mean treating art as a free-floating commodity, detached from anyone who created it. I mean neo-Nazis adopting a comic book character as their mascot, against the creator’s wishes. I mean politicians and even media conglomerates using someone else’s music in well-funded videos and ads without even asking. I mean assuming Google Image Search, wonder that it is, is some kind of magical free art machine. I mean the snotty Reddit post I found while looking up Patreon’s fee structure, where some doofus was insisting that Patreon couldn’t possibly pay for a full-time YouTuber’s time, because not having a job meant they had lots of time to spare.

Maybe I should go one step further: everyone should create at least once or twice. Everyone should know what it’s like to have crafted something out of nothing, to be a fucking god within the microcosm of a computer screen or a sewing machine or a pottery table. Everyone should know that spark of inspiration that we don’t seem to know how to teach in math or science classes, even though it’s the entire basis of those as well. Everyone should know that there’s a good goddamn reason I listed open source software as a kind of art at the beginning of this post.

Basic income and more arts funding for public schools. If Uber can get billions of dollars for putting little car icons on top of Google Maps and not actually doing any of their own goddamn service themselves, I think we can afford to pump more cash into webcomics and indie games and, yes, even underwater basket weaving.

International Women’s Day: Girls at Code Club

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/international-womens-day-2017/

On International Women’s Day and every day, Raspberry Pi and Code Club are determined to support girls and women to fulfil their potential in the field of computing.

Code Club provides computing opportunities for kids aged nine to eleven within their local communities, and 40 percent of the children attending our 5000-plus UK clubs are girls. Code Club aims to inspire them to get excited about computer science and digital making, and to help them develop the skills and knowledge to succeed.

Big Birthday Bash Code Club Raspberry Pi Bag

Code Club’s broad appeal

From the very beginning, Code Club was designed to appeal equally to girls and boys. Co-founder Clare Sutcliffe describes how she took care to avoid anything that evoked gendered stereotypes:

When I was first designing Code Club – its brand, tone of voice and content – it was all with a gender-neutral feel firmly in mind. Anything that felt too gendered was ditched.

The resources that children use are selected to have broad appeal, engaging a wide range of interests. Code Club’s hosts and volunteers provide an environment that is welcoming and supportive.

Two girls coding at Code Club

A crucial challenge for the future is to sustain an interest in computing in girls as they enter their teenage years. As in other areas of science, technology, engineering and maths, early success for girls doesn’t yet feed through into pursuing higher qualifications or entering related careers in large numbers. What can we all do to make sure that interested and talented young women know that this exciting field is for them?

The post International Women’s Day: Girls at Code Club appeared first on Raspberry Pi.

TEN BUCKS! TEN FREAKIN’ BUCKS! Zero W aftermath

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/ten-freakin-bucks-zero-w-aftermath/

Tuesday saw the launch of our brand-new $10 Raspberry Pi Zero W, the next step in the evolution of our tiniest computer, now equipped with wireless LAN and Bluetooth.

Steve Anderson 🇪🇺 on Twitter

*looks around house* “I’ve got too many SBCs. Really must get rid of some…” *new @Raspberry_Pi Zero W released* “SHUT UP AND TAKE MY MONEY!”

As we hoped, the Zero W was very well received, with units flying off the virtual shelves of our official distributors.

The Pi Hut on Twitter

Over 4000 #PiZeroW in first batch of parcels for the postie.

By close of business on launch day, Zero Ws were winging their way to tens of thousands of excited makers, all eager to retrofit their existing Zero projects, or find new ways to build with the updated tech.

Facebook Raspberry Pi Zero W

We wanted to highlight some of the best responses we’ve received over the last few days: a mix of tweets, status updates and videos that made us smile.

Andy definitely wins the prize for most excitable launch day video. His enthusiasm is infectious!

Andy’s Pick: Pi Zero W

Today, Raspberry Pi launched the Pi Zero W, an upgrade to their $10 Pi Zero, adding Wi-Fi and Bluetooth to the tiny computer. For the full episode, visit twit.tv/mbw/548

Pi Borg wasted no time in fitting the Zero W into one of their Pololu kits. We’re looking forward to seeing it in action at the Big Birthday Weekend on Saturday.

Raspberry Pi Zero W robot!

We’ve built a robot using the new Raspberry Pi Zero W, a Pololu kit hacked to fit some bigger motors and our secret new motor controller being revealed on Friday… stay tuned! http://www.piborg.org

Raspberry Pi Foundation CEO Philip Colligan took the Zero W along with him yesterday when he joined the Secretary of State for Culture, Media and Sport to help launch the UK Government’s Digital Strategy.

STEAM Co. on Twitter

CreativityIsGREAT DEFINED. @philipcolligan on @Raspberry_Pi launched with #UKDigitalStrategy @dcms @beisgovuk @MattHancockMP @BBCRoryCJ https://t.co/6s2Loetqwj

And there’s always an eruption of excitement from the Comms team when Wil jumps on board!

Wil Wheaton on Twitter

Oh boy!! @Raspberry_Pi zero with WiFi on-board is available, and @pimoroni has some really neat kits!! https://t.co/dqQzE5KHyD

We also saw some brilliant launch videos from members of our community.

NEW Raspberry Pi Zero Wireless – $10 with WiFi + Bluetooth!

On the 5th anniversary of the launch of the original Raspberry Pi in 2012, the Foundation have decided to treat the community with a brand new product. A fork of the Pi Zero, but with added WiFi and Bluetooth, say hello to the Raspberry Pi Zero Wireless!

Pi Zero W with wifi, bluetooth and a brand new official case

Raspberry Pi Zero W newly launched today sports WiFi and Bluetooth and costs $10 + shipping and taxes. More information here http://raspi.tv/?p=9964 Also a brand new case.

We even became a Twitter Moment which, for many of us avid Tweeters, was kinda a big deal. Plus, well… pizza.

This tiny device has wireless LAN and HDMI and costs less than a pizza

The Raspberry Pi has sold more than 12 million devices around the world in various forms. The latest – the Pi Zero W – solves a key problem with the original by adding built-in wireless LAN and bluetooth functionality.

All in all, a great fifth birthday launch day was had by all.

James @raspjamberlin on Twitter

I would love to take a moment to wish @Raspberry_Pi a very happy 5th birthday! Congratulations to everyone that works so hard to give us Pi

If you ordered a Pi Zero W, make sure you share your projects with us across all social media or in the comments below. We can’t wait to see what you get up to with our newborn bundle of joy!

 

The post TEN BUCKS! TEN FREAKIN’ BUCKS! Zero W aftermath appeared first on Raspberry Pi.

The Economics of Hybrid Cloud Storage

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hybrid-cloud-storage-economics/

“Hybrid Cloud” has jumped into the IT vernacular over the last few years. Hybrid Cloud solutions intelligently divide processing and data storage between on-premise and off-premise resources to maximize efficiency. Businesses seamlessly extend their data processing and data storage capabilities to the cloud, enabling them to manage unusual or fluctuating demands for services. More recently, businesses are utilizing cloud computing and storage resources for their day-to-day operations instead of building out their own infrastructure.

Companies in the media and entertainment industry are good candidates for the hybrid cloud, as on any given day these organizations ingest and process large amounts of data in the form of video and audio files. Effectively processing and storing this data is paramount in managing the cost of a project and keeping it on schedule. Below, we’ll examine the data storage aspects of the hybrid cloud when considering such a solution for a media and entertainment organization.

The Classic Storage Environment

In the media and entertainment industry, much of the video and audio collected is either never used or reviewed once and then archived. A rough estimate is that 10% of all the audio and video collected is used in the various drafts produced. That means that 90% of the data is archived: stored on the local storage systems or perhaps saved off to tape. This archived data cannot be deleted until the project owner agrees, an event that can take months and sometimes years.

Using local storage to keep this archived data means you have to “overbuy” your on-premise storage to accommodate the maximum amount of data you might ever need to hold. While this allows the data to be easily accessed and restored, you have to purchase or lease substantially more storage than you really need.

As a consequence, many organizations decided to use tape storage for their archived data to reduce the need for on-premise data storage. They soon discovered the hidden costs of tape systems: ongoing tape maintenance, supply costs, and continuing personnel expenses. In addition, recovering an archived video or audio file from tape was often slow, cumbersome, and fraught with error.

Hybrid Cloud Storage

Cantemo’s Media Asset Management Portal can identify and automatically route video and audio data to a storage destination – on-premise, cloud, tape, etc. – as needed. Let’s consider a model where 20% of the data ingested is needed for the duration of a given project. The remaining 80% is evaluated and determined to be archivable, although we might need to access a video or audio clip from it at a later time. What is the best destination for the Cantemo Portal to route video and audio that optimizes both cost and access? Let’s review each of our choices: on-premise disk, tape, and cloud storage.

Data Destinations

To compare the three solutions, we’ve considered the cost of each system over a five-year period: initial purchase cost, ongoing costs and supplies, maintenance costs, personnel cost for operations, and subscription costs.

  • On-Premise Disk Storage – On-premise storage can range from a 1 petabyte NAS (Network Attached Storage) system to a multi-petabyte SAN (Storage Area Network). The cost ranges from $12/terabyte/month to $20/terabyte/month (or more). These figures assume new equipment at “street” prices where available. These systems are used for instant access to the data over a high-speed network connection. The data, or a proxy, can be altered multiple times and versioning is required.
  • Tape Storage – Typically these are LTO (Linear Tape-Open) systems with a minimum of two local tape systems, operational costs, etc. The data is stored, typically in batch mode, and accessed infrequently. The tapes can be stored on-site or off-site. Off-site storage costs more. The cost for LTO tape ranges from $7/terabyte/month to $10/terabyte/month, with much of that being the ongoing operational costs. The design includes one incremental tape per day, 2-week retention, first week on-site, second week off-site, with weekly pickup/drop-off. Also included are weekly, monthly, and yearly full backups, rotated on/off site as needed for tape testing, data recovery, etc.
  • Cloud Storage – The cost of cloud storage has come down over the last few years and currently ranges from $5/terabyte/month to $25/terabyte/month for storage depending on the vendor. Video and audio stored in cloud storage is typically easy to locate and readily available for recovery if needed. In most cases, there are minimal operational costs as, for example, the Cantemo Portal software is designed to locate and recover files that are required, but not present on the on-premise storage system.

Of course, a given organization will have their own costs, but in general they should fall within the ranges noted above.

Comparing Storage Costs

In comparing costs of the different methods noted above, we’ll present three scenarios. For each scenario we’ll use data storage amounts of 100 terabytes, 1 petabyte, and 2 petabytes. Each table has the same format; all we’ve done is change how the data is distributed across on-premise, tape, and cloud storage. The math can be adapted for any set of numbers you wish to use.

SCENARIO 1 – 100% of data is in on-premise storage

Data stored on-premise (100%)            100 TB      1,000 TB    2,000 TB
On-premise cost, low ($12/TB/month)      $1,200      $12,000     $24,000
On-premise cost, high ($20/TB/month)     $2,000      $20,000     $40,000

SCENARIO 2 – 20% of data is in on-premise storage and 80% of data is on LTO Tape

Data stored on-premise (20%)             20 TB       200 TB      400 TB
Data stored on tape (80%)                80 TB       800 TB      1,600 TB
On-premise cost, low ($12/TB/month)      $240        $2,400      $4,800
On-premise cost, high ($20/TB/month)     $400        $4,000      $8,000
LTO tape cost, low ($7/TB/month)         $560        $5,600      $11,200
LTO tape cost, high ($10/TB/month)       $800        $8,000      $16,000
Total monthly cost, low                  $800        $8,000      $16,000
Total monthly cost, high                 $1,200      $12,000     $24,000

Using tape to store 80% of the data can reduce the cost by 33% compared with using on-premise data storage alone.

SCENARIO 3 – 20% of data is in on-premise storage and 80% of data is in cloud storage

Data stored on-premise (20%)             20 TB       200 TB      400 TB
Data stored in cloud (80%)               80 TB       800 TB      1,600 TB
On-premise cost, low ($12/TB/month)      $240        $2,400      $4,800
On-premise cost, high ($20/TB/month)     $400        $4,000      $8,000
Cloud storage cost, low ($5/TB/month)    $400        $4,000      $8,000
Cloud storage cost, high ($25/TB/month)  $2,000      $20,000     $40,000
Total monthly cost, low                  $640        $6,400      $12,800
Total monthly cost, high                 $2,400      $24,000     $48,000

Storing 80% of the data in the cloud can lead to a 46% savings on the low end, but could actually be more expensive depending on the vendor selected.
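If you want to adapt the math to your own numbers, here is a quick sketch that reproduces the scenario totals from the per-terabyte rates assumed above:

# Rough cost model for the three scenarios above; rates are $/TB/month.
RATES = {"on_premise": (12, 20), "tape": (7, 10), "cloud": (5, 25)}

def monthly_cost(split, total_tb):
    """Return (low, high) monthly cost in dollars for a given data split."""
    low = sum(RATES[tier][0] * frac * total_tb for tier, frac in split.items())
    high = sum(RATES[tier][1] * frac * total_tb for tier, frac in split.items())
    return low, high

for tb in (100, 1000, 2000):
    print(tb, "TB:",
          monthly_cost({"on_premise": 1.0}, tb),               # Scenario 1
          monthly_cost({"on_premise": 0.2, "tape": 0.8}, tb),  # Scenario 2
          monthly_cost({"on_premise": 0.2, "cloud": 0.8}, tb)) # Scenario 3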

Separate the Costs

Often, cloud storage costs are combined with cloud computing costs in the Hybrid Cloud model, thus hiding the true cost of the cloud storage, perhaps, until it’s too late. The savings gained by using cloud computing services a few times a day may be completely offset by the high cost of cloud storage, which you would be using the entire time. Here are some recommendations.

  1. Ask to have your Hybrid Cloud costs broken out into computing and storage costs; it should be clear what you are paying for each service.
  2. Consider moving the cloud data storage cost to a low cost provider such as Backblaze B2 Cloud Storage, which charges only $5/terabyte/month for cloud storage. This is particularly useful for archived data that still needs to be accessible as Backblaze cloud storage is readily available.
  3. If compute, data distribution, and data archiving services are required, the Cantemo Portal allows you to designate different cloud storage vendors depending on the usage. For example, data requiring computing services can be stored with Amazon S3 and data designated for archiving can be stored in Backblaze. This allows you to optimize access while minimizing costs.

Considering Hybrid Data Storage

Today, most companies in the Media and Entertainment industry have large amounts of data. The hybrid cloud has the potential to change how the industry does business by moving to cloud-based platforms that allow for global collaboration around the clock. In these scenarios, the amount of data created and stored will be staggering, even by today’s standards. As a consequence, it will be paramount for you to know the most cost efficient way to store and access your data.

The latest version of Cantemo Portal includes native integration to Backblaze B2 Cloud Storage, making it easy to create custom rules for archiving to the cloud and access archived files when needed.

(Author’s note: I used on-premise throughout this document as it is the common vernacular used in the tech industry. Apologies to those grammatically offended.)

The post The Economics of Hybrid Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Some moon math

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/02/some-moon-math.html

So “Brianna Wu” (famous for gamergate) is trending, and because I love punishment, I clicked on it to see why. Apparently she tweeted that Elon Musk’s plan to go to the moon is bad, because once there he can drop rocks on the Earth with the power of 100s of nuclear bombs. People are mocking her for the stupidity of this.

But the math checks out.

First of all, she probably got the idea from Heinlein’s book The Moon is a Harsh Mistress where the rebel moon colonists do just that. I doubt she did her own math, and relied upon Heinlein to do it for her. But let’s do the math ourselves.

Let’s say that we want to stand at the height of the moon and drop a rock. How big a rock do we need to equal the energy of an atomic bomb? To make things simple, let’s assume the size of bombs we want is that of the one dropped on Hiroshima.

As we know from high school physics, the energy of a dropped object (ignoring air) is:

energy = 0.5 * mass * velocity * velocity

Solving for mass (the size of the rock), the equation is:

mass = 2 * energy/(velocity * velocity)

We choose “energy” as that of an atomic bomb, but what is “velocity” in this equation, the speed of something dropped from the height of the moon?

The answer is something close to the escape velocity, which is defined as the speed of something dropped infinitely far away from the Earth. The moon isn’t infinitely far away (only 250,000 miles away), but it’s close.

How close? Well, let’s use the formula for escape velocity from Wikipedia [*]:

escape velocity = sqrt(2 * G * M / r)

where G is the “gravitational constant”, M is the “mass of Earth”, and r is the radius you are escaping from. Plugging in the radius of the Earth, we get an escape velocity from the surface of the Earth of 11.18 km/s, which matches what Google tells us. Plugging in the radius of the moon’s orbit, we get 1.44 km/s [*]. Thus, we get the following as the speed of an object dropped from the height of the moon to the surface of the earth, barring air resistance [*]:

9.74 km/s
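As a sanity check, a few lines of Python reproduce the two escape-velocity figures, using standard values for G, the mass of the Earth, and the two radii:

import math

G = 6.674e-11             # gravitational constant, m^3 / (kg * s^2)
M = 5.972e24              # mass of Earth, kg
r_earth = 6.371e6         # radius of Earth, m
r_moon_orbit = 3.844e8    # radius of the moon's orbit, m

for r in (r_earth, r_moon_orbit):
    print(math.sqrt(2 * G * M / r) / 1000, "km/s")  # ~11.18, then ~1.44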

Plugging these numbers in, the answer for the mass of the rock, dropped from the moon, to equal a Hiroshima blast, is 1.3 billion grams, or 1.3 million kilograms, or 1.3 thousand metric tons.
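Here is the same calculation in a few lines of Python; the post doesn’t state the bomb’s yield, so I’ve assumed the commonly quoted ~15 kilotons of TNT for Hiroshima:

KILOTON_TNT = 4.184e12      # joules per kiloton of TNT
energy = 15 * KILOTON_TNT   # assumed Hiroshima-sized yield, ~6.3e13 J
velocity = 9.74e3           # m/s, the impact speed derived above

mass = 2 * energy / velocity ** 2
print("%.2e kg" % mass)     # ~1.3e6 kg, about 1.3 thousand metric tons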

Well, that’s a fine number and all, but what does that equal? Is that the size of Rhode Island? or just a big truck?

The answer is: about two-thirds the mass of the Space Shuttle at launch (2.03 million kilograms [*]). Or, a rock about 24 feet on a side.

That’s a big rock, but not so big that it’s impractical, especially since things weigh 1/6th as much on the moon as on Earth. In Heinlein’s book, instead of launching rocks with rockets, the colonists shot them into space using a railgun: a long track of magnetic rings. Since the moon doesn’t have an atmosphere, you don’t need to shoot things straight up. Instead, you can accelerate them horizontally across the moon’s surface to escape velocity, about 5,000 mph (escape velocity from the moon’s surface). As the moon’s surface curves away, they’ll head out into space (or toward Earth).

Thus, Elon Musk would need to:

  • go to the moon
  • set up a colony, underground
  • mine iron ore
  • build a magnetic launch gun
  • build fields full of solar panels for energy
  • mine some rock
  • cover it in iron (for the magnetic gun to hold onto)
  • bomb earth

At that point, he could drop hundreds of “nukes” on top of us. I, for one, would welcome our Lunar overlords. Free Luna!


Update: I’ve taken a number of shortcuts, but I don’t think they’ll affect the math much.

We don’t need escape velocity for the moon as a whole, just enough to reach the point where Earth’s gravity takes over. On the other hand, we need to kill the speed of the Moon’s orbit (2,000 miles per hour) in order to get down to Earth, or we’ll just end up orbiting the Earth. I just assume the two roughly cancel each other out and ignore them.

I also ignore the atmosphere. Meteors of this size hitting the earth from outer space tend to disintegrate or blow up before reaching the surface. The Chelyabinsk meteor, the one in all those dashcam videos from 2013, was roughly 5 times the size of our moon rocks, and blew up in the atmosphere, high above the surface, with about 5 times the energy of a Hiroshima bomb. Presumably, we want our moon rocks to reach the surface, so they’ll need some protection. Probably make them longer and thinner, put an ablative heat shield up front, and wrap them in something strong like iron.

I don’t know how much this will slow down the rock. Presumably, if coming straight down, it won’t slow down by much, but if coming in at a steep angle (as meteors do), then it could slow down quite a lot.

Update: The first version of this post used “height of moon”, which Wolfram Alpha interpreted as “diameter of moon”. This error was found by a reader. The current version of this post changes this to the correct value, “radius of moon’s orbit”.

Update: I made a stupid error about Earth’s gravitational strength at the height of the Moon’s orbit. I’ve changed the equations to fix this.