Tag Archives: water

Cloud Babble: The Jargon of Cloud Storage

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/what-is-cloud-computing/

Cloud Babble

One of the things we in the technology business are good at is coming up with names, phrases, euphemisms, and acronyms for the stuff that we create. The Cloud Storage market is no different, and we’d like to help by illuminating some of the cloud storage related terms that you might come across. We know this is just a start, so please feel free to add in your favorites in the comments section below and we’ll update this post accordingly.

Clouds

The cloud is really just a collection of purpose-built servers. In a public cloud the servers are shared between multiple unrelated tenants. In a private cloud, the servers are dedicated to a single tenant or sometimes a group of related tenants. A public cloud is off-site, while a private cloud can be on-site or off-site – or on-prem or off-prem, if you prefer.

Both Sides Now: Hybrid Clouds

Speaking of on-prem and off-prem, there are Hybrid Clouds or Hybrid Data Clouds depending on what you need. Both are based on the idea that you extend your local resources (typically on-prem) to the cloud (typically off-prem) as needed. This extension is controlled by software that decides, based on rules you define, what needs to be done where.

A Hybrid Data Cloud is specific to data. For example, you can set up a rule that says all accounting files that have not been touched in the last year are automatically moved off-prem to cloud storage. The files are still available; they are just no longer stored on your local systems. The rules can be defined to fit an organization’s workflow and data retention policies.
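To make that rule concrete, here's a minimal sketch in Python. The archive_to_cloud uploader is a hypothetical stand-in for whatever hybrid storage software you use; real products wrap this logic in a policy engine rather than a script.

```python
import os
import time

ARCHIVE_AGE_SECONDS = 365 * 24 * 60 * 60  # "not touched in the last year"

def archive_stale_files(local_dir, archive_to_cloud):
    """Tier files untouched for a year off-prem (archive_to_cloud is hypothetical)."""
    cutoff = time.time() - ARCHIVE_AGE_SECONDS
    for name in os.listdir(local_dir):
        path = os.path.join(local_dir, name)
        # st_atime is the last access time; use st_mtime if "touched" means modified
        if os.path.isfile(path) and os.stat(path).st_atime < cutoff:
            archive_to_cloud(path)  # copy to cloud storage, still available on demand
            os.remove(path)         # reclaim the local space
```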

A Hybrid Cloud is similar to a Hybrid Data Cloud except it also extends compute. For example, at the end of the quarter, you can spin up order processing application instances off-prem as needed to add to your on-prem capacity. Of course, determining where the transactional data used and created by these applications resides can be an interesting systems design challenge.

Clouds in my Coffee: Fog

Typically, public and private clouds live in large buildings called data centers. Full of servers, networking equipment, and clean air, data centers need lots of power, lots of networking bandwidth, and lots of space. This often limits where data centers are located. The further away you are from a data center, the longer it generally takes to get your data to and from there. This is known as latency. That’s where “Fog” comes in.

Fog is often referred to as clouds close to the ground. Fog, in our cloud world, is basically having a “little” data center near you. This can make data storage and even cloud-based processing faster for everyone nearby. Data, and less so processing, can be transferred between the Fog and the Cloud when time is less of a factor. Data could also be aggregated in the Fog and sent to the Cloud. For example, your electric meter could report its minute-by-minute status to the Fog for diagnostic purposes. Then once a day the aggregated data could be sent to the power company’s Cloud for billing purposes.
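As a sketch of that meter example: the fog node absorbs the chatty minute-by-minute writes, and only a daily aggregate crosses the middle-mile. The send_to_cloud uploader here is hypothetical.

```python
from datetime import date

readings = []  # minute-by-minute wattage reports, held in the nearby fog node

def report(watts):
    readings.append(watts)  # fast, low-latency write to the fog

def daily_rollup(send_to_cloud):
    # Once a day, only the aggregate crosses the middle-mile to the cloud
    kwh = sum(readings) / 60 / 1000  # watt-minutes -> kilowatt-hours
    send_to_cloud({"day": date.today().isoformat(), "kwh": kwh})
    readings.clear()
```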

Another term used in place of Fog is Edge, as in computing at the Edge. In either case, a given cloud (data center) usually has multiple Edges (little data centers) connected to it. The connection between the Edge and the Cloud is sometimes known as the middle-mile. The network in the middle-mile can be less robust than that required to support a stand-alone data center. For example, the middle-mile can use 1 Gbps lines, versus a data center, which would require multiple 10 Gbps lines.

Heavy Clouds No Rain: Data

We’re all aware that we are creating, processing, and storing data faster than ever before. All of this data is stored in either a structured or more likely an unstructured way. Databases and data warehouses are structured ways to store data, but a vast amount of data is unstructured – meaning the schema and data access requirements are not known until the data is queried. A large pool of unstructured data in a flat architecture can be referred to as a Data Lake.

A Data Lake is often created so we can perform some type of “big data” analysis. In an oversimplified example, let’s extend the lake metaphor a bit and ask the question: “How many fish are in our lake?” To get an answer, we take a sufficient sample of our lake’s water (data), count the number of fish we find, and extrapolate based on the size of the lake to get an answer within a given confidence interval.
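For the numerically inclined, the back-of-the-envelope version of that estimate looks like this. The lake and sample sizes are made up, and the Poisson-style interval is just one simple way to put a confidence bound on the count:

```python
import math

lake_volume = 1_000_000   # litres (made-up numbers)
sample_volume = 1_000     # litres sampled
fish_in_sample = 48

scale = lake_volume / sample_volume
estimate = fish_in_sample * scale  # 48,000 fish

# Treating the sample count as Poisson-distributed gives a rough 95% interval
stderr = math.sqrt(fish_in_sample) * scale
low, high = estimate - 1.96 * stderr, estimate + 1.96 * stderr
print(f"~{estimate:,.0f} fish (95% CI roughly {low:,.0f}-{high:,.0f})")
```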

A Data Lake is usually found in the cloud, an excellent place to store large amounts of non-transactional data. Watch out as this can lead to our data having too much Data Gravity or being locked in the Hotel California. This could also create a Data Silo, thereby making a potential data Lift-and-Shift impossible. Let me explain:

  • Data Gravity — Generally, the more data you collect in one spot, the harder it is to move. When you store data in a public cloud, you have to pay egress and/or network charges to download the data to another public cloud or even to your own on-premises systems. Some public cloud vendors charge a lot more than others, meaning that depending on your public cloud provider, your data could financially have a lot more gravity than you expected.
  • Hotel California — This is like Data Gravity but on a lesser scale. Your data is in the Hotel California if, to paraphrase, “your data can check out any time you want, but it can never leave.” If the cost of downloading your data is limiting the things you want to do with that data, then your data is in the Hotel California. Data is generally most valuable when used, and with cloud storage that can include archived data. This assumes of course that the archived data is readily available, and affordable, to download. When considering a cloud storage project always figure in the cost of using your own data.
  • Data Silo — Over the years, businesses have suffered from organizational silos as information is not shared between different groups, but instead needs to travel up to the top of the silo before it can be transferred to another silo. If your data is “trapped” in a given cloud by the cost it takes to share such data, then you may have a Data Silo, and that’s exactly opposite of what the cloud should do.
  • Lift-and-Shift — This term is used to define the movement of data or applications from one data center to another or from on-prem to off-prem systems. The move generally occurs all at once and once everything is moved, systems are operational and data is available at the new location with few, if any, changes. If your data has too much gravity or is locked in a hotel, a data lift-and-shift may break the bank.

I Can See Clearly Now

Hopefully, the cloudy terms we’ve covered are, well, less cloudy. As we mentioned in the beginning, our compilation is just a start, so please feel free to add in your favorite cloud term in the comments section below and we’ll update this post with your contributions. Keep your entries “clean,” and please no words or phrases that are really adverts for your company. Thanks.

The post Cloud Babble: The Jargon of Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

facepunch: the facial recognition punch clock

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/facepunch-facial-recognition/

Get on board with facial recognition and clock your screen time with facepunch, the facial recognition punch clock from dekuNukem.


image c/o dekuNukem

How it works

dekuNukem uses a Raspberry Pi 3, the Raspberry Pi camera module, and an OLED screen for the build. You don’t strictly need to include the OLED board, but it definitely adds to the overall effect, letting you view your daily and weekly screen time at a glance without having to access your Raspberry Pi for data.

As dekuNukem explains in the GitHub repo for the build, they used a perf board to mount the screen and attached it to the Raspberry Pi. This is a nice, simple means of pulling the whole project together without loose wires or the need for a modified case.


image c/o dekuNukem

The face_recognition library lets the Pi + camera register your face. You’ll also need a well-lit 400×400 photograph of yourself to act as a reference for the library. From there, a few commands should get you started.
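For a flavor of how little code this takes, here's a minimal sketch using the face_recognition library's load_image_file, face_encodings, and compare_faces calls. The reference photo filename is an assumption, and the full facepunch project in the GitHub repo does considerably more (timing, the OLED display, and so on):

```python
import face_recognition
import numpy as np
import picamera

# The well-lit 400x400 reference photo mentioned above (hypothetical filename)
known = face_recognition.face_encodings(
    face_recognition.load_image_file("me.jpg"))[0]

camera = picamera.PiCamera()
camera.resolution = (320, 240)
frame = np.empty((240, 320, 3), dtype=np.uint8)

camera.capture(frame, format="rgb")
for encoding in face_recognition.face_encodings(frame):
    if face_recognition.compare_faces([known], encoding)[0]:
        print("At desk")  # e.g. tick the screen-time clock
```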

Uses for facial recognition

You could simply use facepunch for its intended purpose, but here at Pi Towers we’ve been discussing further uses for the build. We’re all guilty of sitting for too long at our desks, so why not incorporate a “get up and walk around” notification? How about a flashing LED that tells you to “drink some water”? You could even go a little deeper (though possibly a little Big Brother) and set up an “I’m back at my desk” notification on Slack, to let your colleagues know you’re available.

You could also take this foray into facial recognition and incorporate it into home automation projects: a user-identifying Magic Mirror, perhaps, or a doorbell that recognises friends and family.

What would you do with facial recognition on a Raspberry Pi?

The post facepunch: the facial recognition punch clock appeared first on Raspberry Pi.

Scale Your Web Application — One Step at a Time

Post Syndicated from Saurabh Shrivastava original https://aws.amazon.com/blogs/architecture/scale-your-web-application-one-step-at-a-time/

I often encounter people experiencing frustration as they attempt to scale their e-commerce or WordPress site—particularly around the cost and complexity involved. When I talk to customers about their scaling plans, they often mention phrases such as horizontal scaling and microservices, but usually people aren’t sure how to dive in and effectively scale their sites.

Now let’s talk about different scaling options. For instance, if your current workload is in a traditional data center, you can leverage the cloud for your on-premises solution. This way you can scale to achieve greater efficiency at lower cost. It’s not necessary to set up a whole powerhouse to light a few bulbs. If your workload is already in the cloud, you can use one of the available out-of-the-box options.

Designing your API as microservices and adding horizontal scaling might seem like the best choice, unless your web application is already running in an on-premises environment and you need to scale it quickly because of unexpected, large spikes in web traffic.

So how do you handle this situation? Take things one step at a time when scaling, and you may find that horizontal scaling isn’t the right choice after all.

For example, assume you have a tech news website where you published an early-look review of an upcoming—and highly anticipated—smartphone launch, and the review went viral. The review, a blog post on your website, includes both video and pictures. Comments are enabled for the post, and readers can also rate it. If your website is hosted on a traditional Linux server with a LAMP stack, you may find yourself with immediate scaling problems.

Let’s dig into the details of the current scenario:

  • Where are images and videos stored?
  • How many read/write requests are received per second? Per minute?
  • What is the level of security required?
  • Are these synchronous or asynchronous requests?

We’ll also want to consider the following if your website has a transactional load, like e-commerce or banking:

  • How is the website handling sessions?
  • Do you have any compliance requirements—like the Payment Card Industry Data Security Standard (PCI DSS)—if your website is using its own payment gateway?
  • How are you recording customer behavior data and fulfilling your analytics needs?
  • What are your load balancing considerations (scaling, caching, session maintenance, etc.)?

So, if we take this one step at a time:

Step 1: Ease server load. We need to quickly handle spikes in traffic, generated by activity on the blog post, so let’s reduce server load by moving images and videos to a third-party content delivery network (CDN). AWS provides Amazon CloudFront as a CDN solution, which is highly scalable with built-in security to verify origin access identity and handle any DDoS attacks. CloudFront can direct traffic to your on-premises or cloud-hosted server with its 113 Points of Presence (102 Edge Locations and 11 Regional Edge Caches) in 56 cities across 24 countries, which provides efficient caching.
Step 2: Reduce read load by adding more read replicas. MySQL provides a nice mirror replication for databases, and Oracle has its own plug-in for replication. Amazon RDS provides up to five read replicas, which can span across regions, and Amazon Aurora can have 15 read replicas with Amazon Aurora Auto Scaling support. If a workload is highly variable, you should consider Amazon Aurora Serverless to achieve high efficiency and reduced cost. While most mirror technologies do asynchronous replication, Amazon RDS can provide synchronous multi-AZ replication, which is good for disaster recovery but not for scalability. Asynchronous replication to a mirror instance means replicated data can sometimes be stale if network bandwidth is low, so you need to plan and design your application accordingly.

I recommend that you always use a read replica for any reporting needs, and try to move non-critical GET services to a read replica to reduce the load on the master database. In this case, the comments associated with a blog post can be fetched from a read replica—as they can tolerate some delay—in case there is any issue with asynchronous replication.

Step 3: Reduce write requests. This can be achieved by introducing a queue to process messages asynchronously. Amazon Simple Queue Service (Amazon SQS) is a highly scalable queue, which can handle any kind of work-message load. You can process data, like ratings and reviews, or calculate a Deal Quality Score (DQS) using batch processing via an SQS queue. If your workload is in AWS, I recommend using a job-observer pattern by setting up Auto Scaling to automatically increase or decrease the number of batch servers, using the number of SQS messages, with Amazon CloudWatch, as the trigger. For on-premises workloads, you can use the SQS SDK to create an Amazon SQS queue that holds messages until they’re processed by your stack. Or you can use Amazon SNS to fan out your message processing in parallel for different purposes, like adding a watermark to an image, generating a thumbnail, etc.
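As a rough sketch of that write path with boto3, the queue URL and field names below are placeholders, not from any particular application:

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ratings"  # placeholder

def submit_rating(post_id, user_id, stars):
    """Queue the rating instead of writing it synchronously;
    a batch worker behind Auto Scaling drains the queue later."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"post": post_id, "user": user_id, "stars": stars}),
    )
```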

Step 4: Introduce a more robust caching engine. You can use Amazon ElastiCache for Memcached or Redis to reduce write requests. Memcached and Redis have different use cases, so if you can afford to lose and recover your cache from your database, use Memcached. If you are looking for more robust data persistence and complex data structures, use Redis. In AWS, these are managed services, which means AWS takes care of the workload for you; you can also deploy them on your on-premises instances or use a hybrid approach.
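A cache-aside sketch with the redis-py client shows the idea; the endpoint and TTL are placeholders:

```python
import json
import redis

cache = redis.Redis(host="my-cache.example.com", port=6379)  # placeholder endpoint

def get_post(post_id, load_from_db):
    """Cache-aside: serve hot reads from Redis, fall back to the database."""
    key = f"post:{post_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    post = load_from_db(post_id)             # e.g. a query against a read replica
    cache.setex(key, 300, json.dumps(post))  # keep it warm for five minutes
    return post
```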

Step 5: Scale your server. If there are still issues, it’s time to scale your server. For the greatest cost-effectiveness and unlimited scalability, I suggest always using horizontal scaling. However, in some use cases vertical scaling of the database may be a better choice until you are ready to shard, or you can use Amazon Aurora Serverless for variable workloads. It is wise to use Auto Scaling to manage your workload effectively for horizontal scaling. To achieve that, you also need to persist sessions outside the individual instances; Amazon DynamoDB can handle session persistence across instances.
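Session persistence in DynamoDB can be as simple as the following sketch; the table and attribute names are placeholders:

```python
import boto3

sessions = boto3.resource("dynamodb").Table("sessions")  # placeholder table name

def save_session(session_id, data):
    # Any instance behind the load balancer can read this back later
    sessions.put_item(Item={"session_id": session_id, **data})

def load_session(session_id):
    return sessions.get_item(Key={"session_id": session_id}).get("Item")
```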

If your server is on premises, consider creating a multisite architecture, which will help you achieve quick scalability as required and provide a good disaster recovery solution.  You can pick and choose individual services like Amazon Route 53, AWS CloudFormation, Amazon SQS, Amazon SNS, Amazon RDS, etc. depending on your needs.

Your multisite architecture will look like the following diagram:

In this architecture, you can run your regular workload on premises, and use your AWS workload as required for scalability and disaster recovery. Using Route 53, you can direct a precise percentage of users to an AWS workload.

If you decide to move all of your workloads to AWS, the recommended multi-AZ architecture would look like the following:

In this architecture, you are using a multi-AZ distributed workload for high availability. You can have a multi-region setup and use Route 53 to distribute your workload between AWS Regions. CloudFront helps you scale and distribute static content via an S3 bucket, and DynamoDB maintains your application state so that Auto Scaling can apply horizontal scaling without loss of session data. At the database layer, RDS with multi-AZ standby provides high availability, and read replicas help achieve scalability.

This is a high-level strategy to help you think through the scalability of your workload by using AWS, even if your workload is on premises and not in the cloud…yet.

I highly recommend creating a hybrid, multisite model by placing a replica of your on-premises environment in a public cloud like AWS, and using the Amazon Route 53 DNS service and Elastic Load Balancing to route traffic between the on-premises and cloud environments. AWS now supports load balancing between AWS and on-premises environments to help you scale your cloud environment quickly whenever required, and scale it back down by applying Auto Scaling and placing a threshold on your on-premises traffic using Route 53.

Top 10 Most Pirated Movies of The Week on BitTorrent – 01/15/18

Post Syndicated from Ernesto original https://torrentfreak.com/top-10-pirated-movies-week-bittorrent-011518/

This week we have three newcomers in our chart.

The Shape of Water is the most downloaded movie again.

The data for our weekly download chart is estimated by TorrentFreak, and is for informational and educational reference only. All the movies in the list are Web-DL/Webrip/HDRip/BDrip/DVDrip unless stated otherwise.

RSS feed for the weekly movie download chart.

This week’s most downloaded movies are:
Rank | Rank last week | Movie name | IMDb rating / trailer
1 | (…) | The Shape of Water (DVDScr) | 8.0 / trailer
2 | (4) | Jumanji: Welcome to the Jungle (HDTS) | 7.3 / trailer
3 | (1) | Blade Runner 2049 | 8.9 / trailer
4 | (2) | Coco (DVDScr) | 8.9 / trailer
5 | (…) | Jigsaw | 6.0 / trailer
6 | (3) | Justice League | 7.1 / trailer
7 | (5) | Dunkirk | 8.3 / trailer
8 | (…) | A Bad Moms Christmas | 5.7 / trailer
9 | (9) | Renegades | 5.5 / trailer
10 | (5) | Bright | 6.7 / trailer

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Pirate Streaming on Facebook is a Seriously Risky Business

Post Syndicated from Andy original https://torrentfreak.com/pirate-streaming-on-facebook-is-a-seriously-risky-business-180114/

For more than a year the British public has been warned about the supposed dangers of Kodi piracy.

Dozens of headlines have claimed consequences ranging from system-destroying malware to prison sentences. Fortunately, most of them can be filed under “tabloid nonsense.”

That being said, there is an extremely important issue that deserves much closer attention, particularly given a shift in the UK legal climate during 2017. We’re talking about live streaming copyrighted content on Facebook, which is both incredibly easy and frighteningly risky.

This week it was revealed that 34-year-old Craig Foster from the UK had been given an ultimatum by Sky: pay a £5,000 settlement fee. The media giant discovered that he’d live-streamed the Anthony Joshua v Wladimir Klitschko fight on Facebook and wanted compensation to make a potential court case disappear.

While it may seem initially odd to use the word, Foster was lucky.

Under last year’s Digital Economy Act, he could’ve been jailed for up to ten years for distributing copyright-infringing content to the public, if he had “reason to believe that communicating the work to the public [would] cause loss to the owner of the copyright, or [would] expose the owner of the copyright to a risk of loss.”

Clearly, as a purchaser of the £19.95 pay-per-view himself, he would’ve appreciated that the event costs money. With that in mind, a court would likely find that he had been aware Sky would be exposed to a “risk of loss”. Sky claim that 4,250 people watched the stream, but the way the law is written, no specific level of loss is required for a breach.

But it’s not just the threat of a jail sentence that’s the problem. People streaming live sports on Facebook are sitting ducks.

In Foster’s case, the fight he streamed was watermarked, which means that Sky put a tracking code into it which identified him personally as the buyer of the event. When he (or his friend, as Foster claims) streamed it on Facebook, it was trivial for Sky to capture the watermark and track it back to his Sky account.

Equally, it would be simplicity itself to see that the name on the Sky account had exactly the same name and details as Foster’s Facebook account. So, to most observers, it would appear that not only had Foster purchased the event, but he was also streaming it to Facebook illegally.

It’s important to keep something else in mind. No cooperation between Sky and Facebook would’ve been necessary to obtain Foster’s details. Take the amount of information most people share on Facebook, combine that with the information Sky already had, and the company’s anti-piracy team would have had a very easy job.

Now compare this situation with an upload of the same stream to a torrent site.

While the video capture would still contain Foster’s watermark, which would indicate the source, to prove he also distributed the video, Sky would’ve needed to get inside a torrent swarm. From there, they would need to capture the IP address of the initial seeder and take the case to court, to force an ISP to hand over that person’s details.

Presuming they were the same person, Sky would have a case, with a broadly similar level of evidence to that presented in the current matter. However, it would’ve taken them months to get their man and cost large sums of money to get there. It’s very unlikely that £5,000 would cover the costs, meaning a much, much bigger bill for the culprit.

Or, confident that Foster was behind the leak based on the watermark alone, Sky could’ve gone straight to the police. That never ends well.

The bottom line is that while live-streaming on Facebook is simplicity itself, people who do it casually from their own account (especially with watermarked content) are asking for trouble.

Nailing Foster was the piracy equivalent of shooting fish in a barrel but the worrying part is that he probably never gave his (or his friend’s…) alleged infringement a second thought. With a click or two, the fight was live and he was staring down the barrel of a potential jail sentence, had Sky not gone the civil route.

It’s scary stuff and not enough is being done to warn people of the consequences. Forget the scare stories attempting to deter people from watching fights or movies on Kodi, thoughtlessly streaming them to the public on social media is the real danger.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Sky Hits Man With £5k ‘Fine’ For Pirating Boxing on Facebook

Post Syndicated from Andy original https://torrentfreak.com/sky-hits-man-with-5k-fine-for-pirating-boxing-on-facebook-180108/

When people download content online using BitTorrent, they also distribute that content to others. This unlawful distribution attracts negative attention from rightsholders, who have sued hundreds of thousands of individuals worldwide.

Streaming is considered a much safer method to obtain content, since it’s difficult for content owners to track downloaders. However, the same can’t be said about those who stream content to the web for the benefit of others, as an interesting case in the UK has just revealed.

It involves 34-year-old Craig Foster, who received several scary letters from lawyers representing broadcaster Sky. The company alleged that during last April’s bout between Anthony Joshua and Wladimir Klitschko, Foster live-streamed the multiple world title fight on Facebook Live.

Financially, this was a major problem for Sky, law firm Foot Anstey LLP told Foster. According to their calculations, at least 4,250 people watched the stream without paying Sky Box Office the going rate of £19.95 each. By that math (4,250 × £19.95 comes to just under £85,000), the broadcaster concluded that Foster owed the company £85,000.

But according to The Mirror, father-of-one Foster wasn’t actually to blame.

“I’d paid for the boxing, it wasn’t like I was making any money. My iPad was signed in to my Facebook account and my friend just started streaming the fight. I didn’t think anything of it, then a few days later they cut my subscription,” Foster said.

“They’re demanding the names and addresses of all my mates who were round that night but I’m not going to give them up. I said I’d take the rap.”

While Foster says he won’t turn in the culprit, there’s no doubt that the fight stream originated from his Sky account. The TV giant embeds watermarks in its broadcasts which enables it to see who paid for an event, should a copy of one turn up on the Internet.

As we reported last year following the Mayweather v McGregor super-fight, the codes are clearly visible with the naked eye.

Sky watermarks, as seen in the Mayweather v McGregor fight

While taking the rap for someone else’s infringing behavior isn’t something anyone should do lightly, it appears that Scarborough-based Foster did just that.

According to Neil Parkes, who specializes in media litigation, content protection and contentious IP at Foot Anstey, Foster accepted responsibility and agreed to pay a settlement.

“Mr Foster broke the law,” Parkes said. “He has acknowledged his wrongdoing, apologised and signed a legally binding agreement to pay a sum of £5,000 to Sky.”

The Mirror, however, has Foster backtracking. He says he wasn’t given enough time to consider his position and now wants to fight Sky in court.

“It’s heavy-handed. I’ve apologized and told them we were drunk,” Foster said.

“I know streaming the fight was wrong. I didn’t stop my friend but I was watching the boxing. I’m just a bloke who had a few drinks with his friends.”

Unless he can find a law firm willing to fight his corner at a hugely cut-down rate, Foster will find this kind of legal fisticuffs to be a massively expensive proposition, one in which he will start out as the clear underdog.

Not only was Foster’s Sky account the originating source, both his iPad and his Facebook account were used to stream the fight. On top of what appears to be a signed confession, he also promised not to do anything else like this in future. Furthermore, he even agreed to issue an apology that Sky can use in future anti-piracy messages.

Of course, Foster might indeed be a noble gentleman but he should be aware that as a civil matter, this fight would be decided on the balance of probabilities, not beyond reasonable doubt. If the judge decides 51% in Sky’s favor, he suffers a knockout along with a huge financial headache.

No one wants a £5,000 bill but that’s a drop in the ocean compared to the cost implications of losing this case.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Physics cheats

Post Syndicated from Eevee original https://eev.ee/blog/2018/01/06/physics-cheats/

Anonymous asks:

something about how we tweak physics to “work” better in games?

Ho ho! Work. Get it? Like in physics…?

Hitboxes

“Hitbox” is perhaps not the most accurate term, since the shape used for colliding with the environment and the shape used for detecting damage might be totally different. They’re usually the same in simple platformers, though, and that’s what most of my games have been.

The hitbox is the biggest physics fudge by far, and it exists because of a single massive approximation that (most) games make: you’re controlling a single entity in the abstract, not a physical body in great detail.

That is: when you walk with your real-world meat shell, you perform a complex dance of putting one foot in front of the other, a motion you spent years perfecting. When you walk in a video game, you press a single “walk” button. Your avatar may play an animation that moves its legs back and forth, but since you’re not actually controlling the legs independently (and since simulating them is way harder), the game just treats you like a simple shape. Fairly often, this is a box, or something very box-like.

An Eevee sprite standing on faux ground; the size of the underlying image and the hitbox are outlined

Since the player has no direct control over the exact placement of their limbs, it would be slightly frustrating to have them collide with the world. This is especially true in cases like the above, where the tail and left ear protrude significantly out from the main body. If that Eevee wanted to stand against a real-world wall, she would simply tilt her ear or tail out of the way, so there’s no reason for the ear to block her from standing against a game wall. To compensate for this, the ear and tail are left out of the collision box entirely and will simply jut into a wall if necessary — a goofy affordance that’s so common it doesn’t even register as unusual. As a bonus (assuming this same box is used for combat), she won’t take damage from projectiles that merely graze past an ear.

(One extra consideration for sprite games in particular: the hitbox ought to be horizontally symmetric around the sprite’s pivot — i.e. the point where the entity is truly considered to be standing — so that the hitbox doesn’t abruptly move when the entity turns around!)

Corners

Treating the player (and indeed most objects) as a box has one annoying side effect: boxes have corners. Corners can catch on other corners, even by a single pixel. Real-world bodies tend to be a bit rounder and squishier, and so can tolerate grazing a corner; even real-world boxes will simply rotate a bit.

Ah, but in our faux physics world, we generally don’t want conscious actors (such as the player) to rotate, even with a realistic physics simulator! Real-world bodies are made of parts that will generally try to keep you upright, after all; you don’t tilt back and forth much.

One way to handle corners is to simply remove them from conscious actors. A hitbox doesn’t have to be a literal box, after all. A popular alternative — especially in Unity where it’s a standard asset — is the pill-shaped capsule, which has semicircles/hemispheres on the top and bottom and a cylindrical body in 3D. No corners, no problem.

Of course, that introduces a new problem: now the player can’t balance precariously on edges without their rounded bottom sliding them off. Alas.

If you’re stuck with corners, then, you may want to use a corner bump, a term I just made up. If the player would collide with a corner, but the collision is only by a few pixels, just nudge them to the side a bit and carry on.
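In pseudo-Python, the idea looks something like this. This is a sketch of the concept, not code from any of my games; the names and the four-pixel tolerance are made up.

```python
BUMP_TOLERANCE = 4  # pixels of corner overlap we silently forgive

def try_move_horizontal(player, dx, collides):
    """Move by dx; if only a corner clips by a few pixels, nudge upwards."""
    player.x += dx
    if not collides(player):
        return
    for _ in range(BUMP_TOLERANCE):
        player.y -= 1  # try stepping up one pixel at a time
        if not collides(player):
            return
    # Overlap was deeper than the tolerance: it's a real wall, undo everything
    player.y += BUMP_TOLERANCE
    player.x -= dx
```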

An Eevee sprite trying to move sideways into a shallow ledge; the game bumps her upwards slightly, so she steps onto it instead

When the corner is horizontal, this creates stairs! This is, more or less kinda, how steps work in Doom: when the player tries to cross from one sector into another, if the height difference is 24 units or less, the game simply bumps them upwards to the height of the new floor and lets them continue on.

Implementing this in a game without Doom’s notion of sectors is a little trickier. In fact, I still haven’t done it. Collision detection based on rejection gets it for free, kinda, but it’s not very deterministic and it breaks other things. But that’s a whole other post.

Gravity

Gravity is pretty easy. Everything accelerates downwards all the time. What’s interesting are the exceptions.

Jumping

Jumping is a giant hack.

Think about how actual jumping works: you tense your legs, which generally involves bending your knees first, and then spring upwards. In a platformer, you can just leap whenever you feel like it, which is nonsense. Also you go like twenty feet into the air?

Worse, most platformers allow variable-height jumping, where your jump is lower if you let go of the jump button while you’re in the air. Normally, one would expect to have to decide how much force to put into the jump beforehand.

But of course this is about convenience of controls: when jumping is your primary action, you want to be able to do it immediately, without any windup for how high you want to jump.
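A typical implementation is only a few lines: start the jump at full speed, then clamp the upward velocity when the button is released early. Here's a sketch with made-up constants (negative y is up):

```python
JUMP_SPEED = 8.0   # initial upward speed, pixels per tic (made up)
JUMP_CUTOFF = 3.0  # upward speed left after an early release (made up)

def update_jump(player, jump_held):
    # The giant hack: leap instantly, no windup
    if jump_held and player.on_ground:
        player.vy = -JUMP_SPEED
    # Variable height: releasing early kills most of the remaining upward speed
    if not jump_held and player.vy < -JUMP_CUTOFF:
        player.vy = -JUMP_CUTOFF
```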

(And then there’s double jumping? Come on.)

Air control is a similar phenomenon: usually you’d jump in a particular direction by controlling how you push off the ground with your feet, but in a video game, you don’t have feet! You only have the box. The compromise is to let you control your horizontal movement to a limited degree in midair, even though that doesn’t make any sense. (It’s way more fun, though, and overall gives you more movement options, which are good to have in an interactive medium.)

Air control also exposes an obvious place that game physics collide with the realistic model of serious physics engines. I’ve mentioned this before, but: if you use Real Physics™ and air control yourself into a wall, you might find that you’ll simply stick to the wall until you let go of the movement buttons. Why? Remember, player movement acts as though an external force were pushing you around (and from the perspective of a Real™ physics engine, this is exactly how you’d implement it) — so air-controlling into a wall is equivalent to pushing a book against a wall with your hand, and the friction with the wall holds you in place. Oops.

Ground sticking

Another place game physics conflict with physics engines is with running to the top of a slope. On a real hill, of course, you land on top of the slope and are probably glad of it; slopes are hard to climb!

An Eevee moves to the top of a slope, and rather than step onto the flat top, she goes flying off into the air

In a video game, you go flying. Because you’re a box. With momentum. So you hit the peak and keep going in the same direction. Which is diagonally upwards.

Projectiles

To make them more predictable, projectiles generally aren’t subject to gravity, at least as far as I’ve seen. The real world does not have such an exemption. The real world imposes gravity even on sniper rifles, which in a video game are often implemented as an instant trace unaffected by anything in the world because the bullet never actually exists in the world.

Resistance

Ah. Welcome to hell.

Water

Water is an interesting case, and offhand I don’t know the gritty details of how games implement it. In the real world, water applies a resistant drag force to movement — and that force is proportional to the square of velocity, which I’d completely forgotten until right now. I am almost positive that no game handles that correctly. But then, in real-world water, you can push against the water itself for movement, and games don’t simulate that either. What’s the rough equivalent?

The Sonic Physics Guide suggests that Sonic handles it by basically halving everything: acceleration, max speed, friction, etc. When Sonic enters water, his speed is cut; when Sonic exits water, his speed is increased.

That last bit feels validating — I could swear Metroid Prime did the same thing, and built my own solution around it, but couldn’t remember for sure. It makes no sense, of course, for a jump to become faster just because you happened to break the surface of the water, but it feels fantastic.

The thing I did was similar, except that I didn’t want to add a multiplier in a dozen places when you happen to be underwater (and remember which ones need it to be squared, etc.). So instead, I calculate everything completely as normal, so velocity is exactly the same as it would be on dry land — but the distance you would move gets halved. The effect seems to be pretty similar to most platformers with water, at least as far as I can tell. It hasn’t shown up in a published game and I only added this fairly recently, so I might be overlooking some reason this is a bad idea.
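Concretely, the trick amounts to this (a sketch, not my actual engine code; the constants are arbitrary):

```python
GRAVITY = 30.0  # arbitrary units

def integrate(entity, dt):
    # Velocity is computed exactly as on dry land...
    entity.vx += entity.ax * dt
    entity.vy += (entity.ay + GRAVITY) * dt
    # ...but the distance actually travelled is halved underwater
    scale = 0.5 if entity.underwater else 1.0
    entity.x += entity.vx * dt * scale
    entity.y += entity.vy * dt * scale
```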

(One reason that comes to mind is that velocity is now a little white lie while underwater, so anything relying on velocity for interesting effects might be thrown off. Or maybe that’s correct, because velocity thresholds should be halved underwater too? Hm!)

Notably, air is also a fluid, so it should behave the same way (just with different constants). I definitely don’t think any games apply air drag that’s proportional to the square of velocity.

Friction

Friction is, in my experience, a little handwaved. Probably because real-world friction is so darn complicated.

Consider that in the real world, we want very high friction on the surfaces we walk on — shoes and tires are explicitly designed to increase it, even. We move by bracing a back foot against the ground and using that to push ourselves forward, so we want the ground to resist our push as much as possible.

In a game world, we are a box. We move by being pushed by some invisible outside force, so if the friction between ourselves and the ground is too high, we won’t be able to move at all! That’s complete nonsense physically, but it turns out to be handy in some cases — for example, highish friction can simulate walking through deep mud, which should be difficult due to fluid drag and low friction.

But the best-known example of the fakeness of game friction is video game ice. Walking on real-world ice is difficult because the low friction means low grip; your feet are likely to slip out from under you, and you’ll simply fall down and have trouble moving at all. In a video game, you can’t fall down, so you have the opposite experience: you spend most of your time sliding around uncontrollably. Yet ice is so common in video games (and perhaps so uncommon in places I’ve lived) that I, at least, had never really thought about this disparity until an hour or so ago.

Game friction vs real-world friction

Real-world friction is a force. It’s the normal force (the supporting force the surface exerts back against the object pressing on it) times some constant that depends on how the two materials interact.

Force is mass times acceleration, and platformers often ignore mass, so friction ought to be an acceleration — applied against the object’s movement, but never enough to push it backwards.

I haven’t made any games where variable friction plays a significant role, but my gut instinct is that low friction should mean the player accelerates more slowly but has a higher max speed, and high friction should mean the opposite. I see from my own source code that I didn’t even do what I just said, so let’s defer to some better-made and well-documented games: Sonic and Doom.

In Sonic, friction is a fixed value subtracted from the player’s velocity (regardless of direction) each tic. Sonic has a fixed framerate, so the units are really pixels per tic squared (i.e. acceleration), multiplied by an implicit 1 tic per tic. So far, so good.

But Sonic’s friction only applies if the player isn’t pressing ← or →. Hang on, that isn’t friction at all; that’s just deceleration! That’s equivalent to jogging to a stop. If friction were lower, Sonic would take longer to stop, but otherwise this is only tangentially related to friction.

(In fairness, this approach would decently emulate friction for non-conscious sliding objects, which are never going to be pressing movement buttons. Also, we don’t have the Sonic source code, and the name “friction” is a fan invention; the Sonic Physics Guide already uses “deceleration” to describe the player’s acceleration when turning around.)

Okay, let’s try Doom. In Doom, the default friction is 90.625%.

Hang on, what?

Yes, in Doom, friction is a multiplier applied every tic. Doom runs at 35 tics per second, so this is a multiplier of 0.032 per second. Yikes!

This isn’t anything remotely like real friction, but it’s much easier to implement. With friction as acceleration, the game has to know both the direction of movement (so it can apply friction in the opposite direction) and the magnitude (so it doesn’t overshoot and launch the object in the other direction). That means taking a semi-costly square root and also writing extra code to cap the amount of friction. With a multiplier, neither is necessary; just multiply the whole velocity vector and you’re done.
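Side by side, the two approaches look like this (a sketch with made-up constants):

```python
import math

FRICTION_ACCEL = 0.5           # friction as an acceleration per tic (made up)
FRICTION_MULTIPLIER = 0.90625  # friction as a multiplier, Doom-style

def friction_as_acceleration(vx, vy):
    # Needs the direction and magnitude: a square root, plus a cap at zero
    speed = math.hypot(vx, vy)
    if speed <= FRICTION_ACCEL:
        return 0.0, 0.0
    scale = (speed - FRICTION_ACCEL) / speed
    return vx * scale, vy * scale

def friction_as_multiplier(vx, vy):
    # One multiply per axis; simpler, but never quite reaches zero
    return vx * FRICTION_MULTIPLIER, vy * FRICTION_MULTIPLIER
```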

There are some downsides. One is that objects will never actually stop, since multiplying by 90.625% repeatedly will never produce a result of exactly zero — though eventually the speed will become small enough to either slip below a “minimum speed” threshold or simply no longer fit in a float representation. Another is that the units are fairly meaningless: with Doom’s default friction of 90.625%, about how long does it take for the player to stop? I have no idea, partly because “stop” is ambiguous here! If friction were an acceleration, I could divide it into the player’s max speed to get a time.

All that aside, what are the actual effects of changing Doom’s friction? What an excellent question that’s surprisingly tricky to answer. (Note that friction can’t be changed in original Doom, only in the Boom port and its derivatives.) Here’s what I’ve pieced together.

Doom’s “friction” is really two values. “Friction” itself is a multiplier applied to moving objects on every tic, but there’s also a move factor which defaults to \(\frac{1}{32} = 0.03125\) and is derived from friction for custom values.

Every tic, the player’s velocity is multiplied by friction, and then increased by their speed times the move factor.

$$
v(n) = v(n - 1) \times friction + speed \times move factor
$$

Eventually, the reduction from friction will balance out the speed boost. That happens when \(v(n) = v(n – 1)\), so we can rearrange it to find the player’s effective max speed:

$$
v = v \times friction + speed \times move factor \\
v - v \times friction = speed \times move factor \\
v = speed \times \frac{move factor}{1 - friction}
$$

For vanilla Doom’s move factor of 0.03125 and friction of 0.90625, that becomes:

$$
v = speed \times \frac{\frac{1}{32}}{1 - \frac{29}{32}} = speed \times \frac{\frac{1}{32}}{\frac{3}{32}} = \frac{1}{3} \times speed
$$

Curiously, “speed” is three times the maximum speed an actor can actually move. Doomguy’s run speed is 50, so in practice he moves a third of that, or 16⅔ units per tic. (Of course, this isn’t counting SR50, a bug that lets Doomguy run ~40% faster than intended diagonally.)
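You can sanity-check the stable velocity formula in a couple of lines:

```python
friction, move_factor, speed = 0.90625, 1 / 32, 50  # vanilla Doom values

stable = speed * move_factor / (1 - friction)
print(stable)  # 16.666..., a third of "speed", as derived above
```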

So now, what if you change friction? Even more curiously, the move factor is calculated completely differently depending on whether friction is higher or lower than the default Doom amount:

$$
move factor = \begin{cases}
\frac{133 - 128 \times friction}{544} &\approx 0.244 - 0.235 \times friction & \text{ if } friction \ge \frac{29}{32} \\
\frac{81920 \times friction - 70145}{1048576} &\approx 0.078 \times friction - 0.067 & \text{ otherwise }
\end{cases}
$$

That’s pretty weird? Complicating things further is that low friction (which means muddy terrain, remember) has an extra multiplier on its move factor, depending on how fast you’re already going — the idea is apparently that you have a hard time getting going, but it gets easier as you find your footing. The extra multiplier maxes out at 8, which makes the two halves of that function meet at the vanilla Doom value.

A graph of the relationship between friction and move factor

That very top point corresponds to the move factor from the original game. So no matter what you do to friction, the move factor becomes lower. At 0.85 and change, you can no longer move at all; below that, you move backwards.

From the formula above, it’s easy to see what changes to friction and move factor will do to Doomguy’s stable velocity. Move factor is in the numerator, so increasing it will increase stable velocity — but it can’t increase, so stable velocity can only ever decrease. Friction is in the denominator, but it’s subtracted from 1, so increasing friction will make the denominator a smaller value less than 1, i.e. increase stable velocity. Combined, we get this relationship between friction and stable velocity.

A graph showing stable velocity shooting up dramatically as friction increases

As friction approaches 1, stable velocity grows without bound. This makes sense, given the definition of \(v(n)\) — if friction is 1, the velocity from the previous tic isn’t reduced at all, so we just keep accelerating freely.

All of this is why I’m wary of using multipliers.

Anyway, this leaves me with one last question about the effects of Doom’s friction: how long does it take to reach stable velocity? Barring precision errors, we’ll never truly reach stable velocity, but let’s say within 5%. First we need a closed formula for the velocity after some number of tics. This is a simple recurrence relation, and you can write a few terms out yourself if you want to be sure this is right.

$$
v(n) = v_0 \times friction^n + speed \times move factor \times \frac{friction^n - 1}{friction - 1}
$$

Our initial velocity is zero, so the first term disappears. Set this equal to the stable formula and solve for n:

$$
speed \times move factor \times \frac{friction^n - 1}{friction - 1} = (1 - 5\%) \times speed \times \frac{move factor}{1 - friction} \\
friction^n - 1 = -(1 - 5\%) \\
n = \frac{\ln 5\%}{\ln friction}
$$
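The result is easy to check numerically:

```python
import math

def tics_to_stability(friction, within=0.05):
    return math.log(within) / math.log(friction)

print(tics_to_stability(0.90625))  # ~30.4 tics, just under a second at 35 tics/sec
```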

“Speed” and move factor disappear entirely, which makes sense, and this is purely a function of friction (and how close we want to get). For vanilla Doom, that comes out to 30.4 tics, which is a little less than a second. For other values of friction:

A graph of time to stability which leaps upwards dramatically towards the right

As friction increases (which in Doom terms means the surface is more slippery), it takes longer and longer to reach stable speed, which is in turn greater and greater. For lesser friction (i.e. mud), stable speed is lower, but reached fairly quickly. (Of course, the extra “getting going” multiplier while in mud adds some extra time here, but including that in the graph is a bit more complicated.)

I think this matches with my instincts above. How fascinating!

What’s that? This is way too much math and you hate it? Then don’t use multipliers in game physics.

Uh

That was a hell of a diversion!

I guess the goofiest stuff in basic game physics is really just about mapping player controls to in-game actions like jumping and deceleration; the rest consists of hacks to compensate for representing everything as a box.

Journeying with green sea turtles and the Arribada Initiative

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/sea-turtles/

Today, a guest post: Alasdair Davies, co-founder of Naturebytes, ZSL London’s Conservation Technology Specialist and Shuttleworth Foundation Fellow, shares the work of the Arribada Initiative. The project uses the Raspberry Pi Zero and camera module to follow the journey of green sea turtles. The footage captured from the backs of these magnificent creatures is just incredible – prepare to be blown away!

Pit Stop Camera on Green Sea Turtle 01

Footage from the new Arribada PS-C (pit-stop camera) video tag recently trialled on the island of Principe in unison with the Principe Trust. Engineered by Institute IRNAS (http://irnas.eu/) for the Arribada Initiative (http://blog.arribada.org/).

Access to affordable, open and customisable conservation technologies in the animal tracking world is often limited. I’ve been a conservation technologist for the past ten years, co-founding Naturebytes and working at ZSL London Zoo, and this was a problem that continued to frustrate me. It was inherently expensive to collect valuable data that was necessary to inform policy, to designate marine protected areas, or to identify threats to species.

In March this year, I got a supercharged opportunity to break through these barriers by becoming a Shuttleworth Foundation Fellow, meaning I had the time and resources to concentrate on cracking the problem. The Arribada Initiative was founded, and ten months later, the open source Arribada PS-C green sea turtle tag was born. The video above was captured two weeks ago in the waters of Principe Island, West Africa.

Alasdair Davies on Twitter

On route to Principe island with 10 second gen green sea #turtle tags for testing. This version has a video & accelerometer payload for behavioural studies, plus a nice wireless charging carry case made by @institute_irnas @ShuttleworthFdn

The tag comprises a Raspberry Pi Zero W sporting the Raspberry Pi camera module, a PiRA power management board, two lithium-ion cells, and a rather nice enclosure. It was built in unison with Institute IRNAS, and there’s a nice user-friendly wireless charging case to make it easy for the marine guards to replace the tags after their voyages at sea. When a tag is returned to one of the docking stations in the case, we use resin.io to manage it, download videos, and configure the tag remotely.


The tags can also be configured to take video clips at timed intervals, meaning we can now observe the presence of marine litter and plastic debris, as well as before/after changes to the ocean environment due to nearby construction, pollution, and other threats.
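For a sense of how simple timed clips are on this hardware, here's a minimal picamera sketch. To be clear, this illustrates what the Pi camera stack supports, not the actual Arribada tag firmware; the interval and clip length are made up:

```python
import time
import picamera

INTERVAL = 15 * 60  # seconds between clips (made up)
CLIP_LENGTH = 30    # seconds of video per clip (made up)

with picamera.PiCamera() as camera:
    clip = 0
    while True:
        camera.start_recording(f"clip{clip:04d}.h264")
        camera.wait_recording(CLIP_LENGTH)  # record while checking for errors
        camera.stop_recording()
        clip += 1
        time.sleep(INTERVAL)
```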

Discarded fishing nets are lethal to sea turtles, so using this new tag at scale – now finally possible, as the Raspberry Pi Zero helps to drive down costs dramatically whilst retaining excellent video quality – offers real value to scientists in the field. Next year we will be releasing an optimised, affordable GPS version.


To make this all possible we had to devise a quicker method of attaching the tag to the sea turtles too, so we came up with the “pit-stop” technique (which is what the PS in the name “Arribada PS-C” stands for). Just as a Formula 1 car would visit the pits to get its tyres changed, we literally switch out the tags on the beach when nesting females return, replacing them with freshly charged tags by using a quick-release base plate.

Alasdair Davies on Twitter

About 6 days left now until the first tagged nesting green sea #turtles return using our latest “pit-stop” removeable / replaceable tag method. Counting down the days @arribada_i @institute_irnas

To implement the system we first epoxy the base plate to the turtle, which minimises any possible stress to the turtles as the method is quick. Once the epoxy has dried we attach the tag. When the turtle has completed its nesting cycle (they visit the beach to lay eggs three to four times in a single season, every 10–14 days on average), we simply remove the base plate to complete the field work.


If you’d like to watch more wonderful videos of the green sea turtles’ adventures, there’s an entire YouTube playlist available here. And to keep up to date with the initiative, be sure to follow Arribada and Alasdair on Twitter.

The post Journeying with green sea turtles and the Arribada Initiative appeared first on Raspberry Pi.

Serverless @ re:Invent 2017

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/serverless-reinvent-2017/

At re:Invent 2014, we announced AWS Lambda, which is now the center of the serverless platform at AWS and helped ignite the trend of companies building serverless applications.

This year, at re:Invent 2017, the topic of serverless was everywhere. We were incredibly excited to see the energy from everyone attending 7 workshops, 15 chalk talks, 20 skills sessions and 27 breakout sessions. Many of these sessions were repeated due to high demand, so we are happy to summarize and provide links to the recordings and slides of these sessions.

Over the week leading up to re:Invent and the week of the event itself, we also had over 15 new features and capabilities across a number of serverless services, including AWS Lambda, Amazon API Gateway, AWS Lambda@Edge, AWS SAM, and the newly announced AWS Serverless Application Repository!

AWS Lambda

Amazon API Gateway

  • Amazon API Gateway Supports Endpoint Integrations with Private VPCs – You can now provide access to HTTP(S) resources within your VPC without exposing them directly to the public internet. This includes resources available over a VPN or Direct Connect connection!
  • Amazon API Gateway Supports Canary Release Deployments – You can now use canary release deployments to gradually roll out new APIs. This helps you more safely roll out API changes and limit the blast radius of new deployments.
  • Amazon API Gateway Supports Access Logging – The access logging feature lets you generate access logs in different formats such as CLF (Common Log Format), JSON, XML, and CSV. The access logs can be fed into your existing analytics or log processing tools so you can perform more in-depth analysis or take action in response to the log data.
  • Amazon API Gateway Customize Integration Timeouts – You can now set a custom timeout for your API calls as low as 50ms and as high as 29 seconds (the default is 30 seconds).
  • Amazon API Gateway Supports Generating SDK in Ruby – This is in addition to support for SDKs in Java, JavaScript, Android and iOS (Swift and Objective-C). The SDKs that Amazon API Gateway generates save you development time and come with a number of prebuilt capabilities, such as working with API keys, exponential backoff, and exception handling.

AWS Serverless Application Repository

Serverless Application Repository is a new service (currently in preview) that aids in the publication, discovery, and deployment of serverless applications. With it you’ll be able to find shared serverless applications that you can launch in your account, while also sharing ones that you’ve created for others to do the same.

AWS Lambda@Edge

Lambda@Edge now supports content-based dynamic origin selection, network calls from viewer events, and advanced response generation. This combination of capabilities greatly increases the use cases for Lambda@Edge, such as allowing you to send requests to different origins based on request information, showing selective content based on authentication, and dynamically watermarking images for each viewer.

AWS SAM

Twitch Launchpad live announcements

Other service announcements

Here are some of the other highlights that you might have missed. We think these could help you make great applications:

AWS re:Invent 2017 sessions

Coming up with the right mix of talks for an event like this can be quite a challenge. The Product, Marketing, and Developer Advocacy teams for Serverless at AWS spent weeks reading through dozens of talk ideas to boil it down to the final list.

From feedback at other AWS events and webinars, we knew that customers were looking for talks that focused on concrete examples of solving problems with serverless, on how to perform common tasks such as deployment, CI/CD, monitoring, and troubleshooting, and on customer and partner examples of solving real-world problems. To that end, we tried to settle on a good mix based on attendee experience and provide a track full of rich content.

Below are the recordings and slides of breakout sessions from re:Invent 2017. We’ve organized them for those getting started, those who are already beginning to build serverless applications, and the experts out there already running them at scale. Some of the videos and slides haven’t been posted yet, and so we will update this list as they become available.

Find the entire Serverless Track playlist on YouTube.

Talks for people new to Serverless

Advanced topics

Expert mode

Talks for specific use cases

Talks from AWS customers & partners

Looking to get hands-on with Serverless?

At re:Invent, we delivered instructor-led skills sessions to help attendees new to serverless applications get started quickly. The content from these sessions is already online and you can do the hands-on labs yourself!
Build a Serverless web application

Still looking for more?

We also recently completely overhauled the main Serverless landing page for AWS. This includes a new Resources page containing case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials. Check it out!

What is HAMR and How Does It Enable the High-Capacity Needs of the Future?

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hamr-hard-drives/

HAMR drive illustration

During Q4, Backblaze deployed 100 petabytes worth of Seagate hard drives to our data centers. The newly deployed Seagate 10 and 12 TB drives are doing well and will help us meet our near term storage needs, but we know we’re going to need more drives — with higher capacities. That’s why the success of new hard drive technologies like Heat-Assisted Magnetic Recording (HAMR) from Seagate is very relevant to us here at Backblaze and to the storage industry in general. In today’s guest post we are pleased to have Mark Re, CTO at Seagate, give us an insider’s look behind the hard drive curtain to tell us how Seagate engineers are developing the HAMR technology and making it market ready starting in late 2018.

What is HAMR and How Does It Enable the High-Capacity Needs of the Future?

Guest Blog Post by Mark Re, Seagate Senior Vice President and Chief Technology Officer

Earlier this year Seagate announced plans to make the first hard drives using Heat-Assisted Magnetic Recording, or HAMR, available by the end of 2018 in pilot volumes. Even as today’s market has embraced 10TB+ drives, the need for 20TB+ drives remains imperative in the relatively near term. HAMR is the Seagate research team’s next major advance in hard drive technology.

HAMR is a technology that over time will enable a big increase in the amount of data that can be stored on a disk. A small laser is attached to a recording head, designed to heat a tiny spot on the disk where the data will be written. This allows a smaller bit cell to be written as either a 0 or a 1. The smaller bit cell size enables more bits to be crammed into a given surface area — increasing the areal density of data, and increasing drive capacity.

It sounds almost simple, but the science and engineering expertise required (the research, experimentation, lab development, and product development needed to perfect this technology) have been enormous. Below is an overview of the HAMR technology, and you can dig into the details in our technical brief, which provides a point-by-point rundown describing several key advances enabling the HAMR design.

As much time and resources as have been committed to developing HAMR, the need for its increased data density is indisputable. Demand for data storage keeps increasing. Businesses’ ability to manage and leverage more capacity is a competitive necessity, and IT spending on capacity continues to increase.

History of Increasing Storage Capacity

For the last 50 years areal density in hard disk drives has been growing faster than Moore’s law, which is a very good thing. After all, customers from data centers and cloud service providers to creative professionals and game enthusiasts rarely go shopping looking for a hard drive just like the one they bought two years ago. As data grows, the demand for storage capacity inevitably increases, so the technology constantly evolves.

According to the Advanced Storage Technology Consortium, HAMR will be the next significant storage technology innovation to increase the amount of data that can be stored in a given area, also called the disk’s “areal density.” We believe this boost in areal density will help fuel hard drive product development and growth through the next decade.

Why do we Need to Develop Higher-Capacity Hard Drives? Can’t Current Technologies do the Job?

Why is HAMR’s increased data density so important?

Data has become critical to all aspects of human life, changing how we’re educated and entertained. It affects and informs the ways we experience each other and interact with businesses and the wider world. IDC research shows the datasphere — all the data generated by the world’s businesses and billions of consumer endpoints — will continue to double in size every two years. IDC forecasts that by 2025 the global datasphere will grow to 163 zettabytes (one zettabyte is a trillion gigabytes). That’s ten times the 16.1 ZB of data generated in 2016. IDC cites five key trends intensifying the role of data in changing our world: embedded systems and the Internet of Things (IoT), instantly available mobile and real-time data, cognitive artificial intelligence (AI) systems, increased security data requirements, and, critically, the evolution of data from playing a background role in business to playing a life-critical role.

Consumers use the cloud to manage everything from family photos and videos to data about their health and exercise routines. Real-time data created by connected devices — everything from Fitbit, Alexa and smart phones to home security systems, solar systems and autonomous cars — are fueling the emerging Data Age. On top of the obvious business and consumer data growth, our critical infrastructure like power grids, water systems, hospitals, road infrastructure and public transportation all demand and add to the growth of real-time data. Data is now a vital element in the smooth operation of all aspects of daily life.

All of this entails a significant behind-the-scenes infrastructure cost to feed the insatiable global appetite for data storage. While a variety of storage technologies will continue to advance in data density (Seagate announced the first 60TB 3.5-inch SSD unit, for example), high-capacity hard drives remain the foundational core of our interconnected, cloud- and IoT-based dependence on data.

HAMR Hard Drive Technology

Seagate has been working on heat assisted magnetic recording (HAMR) in one form or another since the late 1990s. During this time we’ve made many breakthroughs in producing reliable near-field transducers, special high-capacity HAMR media, and a way to put a laser, no larger than a grain of salt, on each and every head.

The development of HAMR has required Seagate to consider and overcome a myriad of scientific and technical challenges including new kinds of magnetic media, nano-plasmonic device design and fabrication, laser integration, high-temperature head-disk interactions, and thermal regulation.

A typical hard drive inside any computer or server contains one or more rigid disks coated with a magnetically sensitive film consisting of tiny magnetic grains. Data is recorded when a magnetic write-head flies just above the spinning disk; the write head rapidly flips the magnetization of one magnetic region of grains so that its magnetic pole points up or down, to encode a 1 or a 0 in binary code.

Increasing the amount of data you can store on a disk requires cramming magnetic regions closer together, which means the grains need to be smaller so they won’t interfere with each other.

Heat Assisted Magnetic Recording (HAMR) is the next step to enable us to increase the density of grains — or bit density. Current projections are that HAMR can achieve 5 Tbpsi (Terabits per square inch) on conventional HAMR media, and in the future will be able to achieve 10 Tbpsi or higher with bit patterned media (in which discrete dots are predefined on the media in regular, efficient, very dense patterns). These technologies will enable hard drives with capacities higher than 100 TB before 2030.
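
To put those numbers in perspective, here is a back-of-envelope sketch. The recording-band geometry and platter count are illustrative assumptions, not Seagate specifications:

import math

TBPSI = 5.0                        # projected areal density, Tb per sq. inch
INNER_R, OUTER_R = 0.9, 1.8        # assumed usable recording band radii, inches
PLATTERS = 8                       # assumed platter count for a 3.5-inch drive

band_area = math.pi * (OUTER_R**2 - INNER_R**2)  # sq. inches per surface
terabits = TBPSI * band_area * 2 * PLATTERS      # two surfaces per platter
print(f"~{terabits / 8:.0f} TB per drive")       # bits to bytes: roughly 76 TB

# Doubling the density to 10 Tbpsi with the same geometry lands above
# 150 TB, in line with the 100 TB+ projection above.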

The major problem with packing bits so closely together is that if you do that on conventional magnetic media, the bits (and the data they represent) become thermally unstable, and may flip. So, to make the grains maintain their stability — their ability to store bits over a long period of time — we need to develop a recording media that has higher coercivity. That means it’s magnetically more stable during storage, but it is more difficult to change the magnetic characteristics of the media when writing (harder to flip a grain from a 0 to a 1 or vice versa).

That’s why HAMR’s first key hardware advance required developing a new recording media that keeps bits stable — using high anisotropy (or “hard”) magnetic materials such as iron-platinum alloy (FePt), which resist magnetic change at normal temperatures. Over years of HAMR development, Seagate researchers have tested and proven out a variety of FePt granular media films, with varying alloy composition and chemical ordering.

In fact the new media is so “hard” that conventional recording heads won’t be able to flip the bits, or write new data, under normal temperatures. If you add heat to the tiny spot on which you want to write data, you can make the media’s coercive field lower than the magnetic field provided by the recording head — in other words, enable the write head to flip that bit.

So, a challenge with HAMR has been to replace conventional perpendicular magnetic recording (PMR), in which the write head operates at room temperature, with a write technology that heats the thin film recording medium on the disk platter to temperatures above 400 °C. The basic principle is to heat a tiny region of several magnetic grains for a very short time (~1 nanosecond) to a temperature high enough to make the media’s coercive field lower than the write head’s magnetic field. Immediately after the heat pulse, the region quickly cools down and the bit’s magnetic orientation is frozen in place.

Applying this dynamic nano-heating is where HAMR’s famous “laser” comes in. A plasmonic near-field transducer (NFT) has been integrated into the recording head, to heat the media and enable magnetic change at a specific point. Plasmonic NFTs are used to focus and confine light energy to regions smaller than the wavelength of light. This enables us to heat an extremely small region, measured in nanometers, on the disk media to reduce its magnetic coercivity.

Moving HAMR Forward

HAMR write head

As always in advanced engineering, the devil — or many devils — is in the details. As noted earlier, our technical brief provides a point-by-point short illustrated summary of HAMR’s key changes.

Although hard work remains, we believe this technology is nearly ready for commercialization. Seagate has the best engineers in the world working towards a goal of a 20 Terabyte drive by 2019. We hope we’ve given you a glimpse into the amount of engineering that goes into a hard drive. Keeping up with the world’s insatiable appetite to create, capture, store, secure, manage, analyze, rapidly access and share data is a challenge we work on every day.

With thousands of HAMR drives already being made in our manufacturing facilities, our internal and external supply chain is solidly in place, and volume manufacturing tools are online. This year we began shipping initial units for customer tests, and production units will ship to key customers by the end of 2018. Prepare for breakthrough capacities.

The post What is HAMR and How Does It Enable the High-Capacity Needs of the Future? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Matt’s steampunk radio jukebox

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/matts-steampunk-radio-jukebox/

Matt Van Gastel breathed new life into his great-grandparents’ 1930s Westinghouse with a Raspberry Pi, an amplifier HAT, Google Music, and some serious effort. The result is a really beautiful, striking piece.

Matt Van Gastel Steampunk Radio Raspberry Pi

The radio

With a background in radio electronics, Matt Van Gastel had always planned to restore his great-grandparents’ mid-30s Westinghouse radio. “I even found the original schematics glued to the bottom of the base of the main electronics assembly,” he explains in his Instructables walkthrough. However, considering the age of the piece and the cost of sourcing parts for a repair, he decided to take the project in a slightly different direction.



“I pulled the main electronics assembly out quite easily, it was held in by four flat head screws […] I decided to make a Steampunk themed Jukebox based off this main assembly and power it with a Raspberry Pi,” he writes.

The build

Matt added JustBoom’s Amp HAT to a Raspberry Pi 3 to boost the sound quality and functionality of the board.

He spent a weekend prototyping and testing the electronics before deciding on his final layout. After a little time playing around with different software, Matt chose Mopidy, a flexible music server written in Python. Mopidy lets him connect to his music-streaming service of choice, Google Music, and also allows AirPlay connectivity for other wireless devices.

Stripping out the old electronics from inside the Westinghouse radio easily made enough space for Matt’s new, much smaller, setup. Reserving various pieces for the final build, and scrubbing the entire unit to within an inch of its life with soap and water, he moved on to the aesthetics of the piece.

The steampunk

LED Nixie tubes, a 1950s DC voltmeter, and spray paint all contributed to the final look of the radio. It has a splendid steampunk look that works wonderfully with the vintage of the original radio.



Retrofit and steampunk Raspberry Pi builds

From old pub jukeboxes to Bakelite kitchen radios, we’ve seen lots of retrofit audio visual Pi projects over the years, with all kinds of functionality and in all sorts of styles.

Americana – does exactly what it says on the tin jukebox

For more steampunk inspiration, check out phrazelle’s laptop and Derek Woodroffe’s tentacle hat. And for more audiophile builds, Tijuana Rick’s 60s Wurlitzer and Steve Devlin’s 50s wallbox are stand-out examples.

The post Matt’s steampunk radio jukebox appeared first on Raspberry Pi.

The Operations Team Just Got Rich-er!

Post Syndicated from Yev original https://www.backblaze.com/blog/operations-team-just-got-rich-er/

We’re growing at a pretty rapid clip, and as we add more customers, we need people to help keep all of our hard drives spinning. Along with support, the other department that grows linearly with the number of customers that join us is the operations team, and they’ve just added a new member to their team, Rich! He joins us as a Network Systems Administrator! Let’s take a moment to learn more about Rich, shall we?

What is your Backblaze Title?
Network Systems Administrator

Where are you originally from?
The Upper Peninsula of Michigan. Da UP, eh!

What attracted you to Backblaze?
The fact that it is a small tech company packed with highly intelligent people and a place where I can also be friends with my peers. I am also huge on cloud storage and backing up your past!

What do you expect to learn while being at Backblaze?
I look forward to expanding my Networking skills and System Administration skills while helping build the best Cloud Storage and Backup Company there is!

Where else have you worked?
I first started working in Data Centers at Viawest. I was previously an Infrastructure Engineer at Twitter and a Production Engineer at Groupon.

Where did you go to school?
I started at Finlandia University in Northern Michigan, carried on to Northwest Florida State, and graduated with my A.S. from North Lake College in Dallas, TX. I then completed my B.S. Degree online at WGU.

What’s your dream job?
Sr. Network Engineer

Favorite place you’ve traveled?
I have traveled around a bit in my life. I really liked Dublin, Ireland, but I have to say my favorite has to be Puerto Vallarta, Mexico! Which is actually where I am getting married in 2019!

Favorite hobby?
Water is my life. I like to wakeboard and wakesurf. I also enjoy biking, hunting, fishing, camping, and anything that has to do with the great outdoors!

Of what achievement are you most proud?
I’m proud of moving up in my career as quickly as I have been. I am also very proud of being able to wakesurf behind a boat without a rope! Lol!

Star Trek or Star Wars?
Star Trek! I grew up on it!

Coke or Pepsi?
H2O 😀

Favorite food?
Mexican Food and Pizza!

Why do you like certain things?
Hmm…. because certain things make other certain things particularly certain!

Anything else you’d like to tell us?
Nope 😀

Who can say no to high quality H2O? Welcome to the team Rich!

The post The Operations Team Just Got Rich-er! appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Collect Data Statistics Up to 5x Faster by Analyzing Only Predicate Columns with Amazon Redshift

Post Syndicated from George Caragea original https://aws.amazon.com/blogs/big-data/collect-data-statistics-up-to-5x-faster-by-analyzing-only-predicate-columns-with-amazon-redshift/

Amazon Redshift is a fast, fully managed, petabyte-scale data warehousing service that makes it simple and cost-effective to analyze all of your data. Many of our customers—including Boingo Wireless, Scholastic, Finra, Pinterest, and Foursquare—migrated to Amazon Redshift and achieved agility and faster time to insight, while dramatically reducing costs.

Query optimization and the need for accurate estimates

When a SQL query is submitted to Amazon Redshift, the query optimizer is in charge of generating all the possible ways to execute that query, and picking the fastest one. This can mean evaluating the cost of thousands, if not millions, of different execution plans.

The plan cost is calculated based on estimates of the data characteristics. For example, the characteristics could include the number of rows in each base table, the average width of a variable-length column, the number of distinct values in a column, and the most common values in a column. These estimates (or “statistics”) are computed in advance by running an ANALYZE command, and stored in the system catalog.

How do the query optimizer and ANALYZE work together?

An ideal scenario is to run ANALYZE after every ETL/ingestion job. This way, when running your workload, the query optimizer can use up-to-date data statistics, and choose the optimal execution plan, given the updates.

However, running the ANALYZE command can add significant overhead to the data ingestion scripts. This can lead to customers not running ANALYZE on their data, and using default or stale estimates. The end result is usually the optimizer choosing a suboptimal execution plan that runs for longer than needed.

Analyzing predicate columns only

When you run a SQL query, the query optimizer requests statistics only on columns used in predicates in the SQL query (join predicates, filters in the WHERE clause and GROUP BY clauses). Consider the following query:

SELECT Avg(salary), 
       Min(hiredate), 
       deptname 
FROM   emp 
WHERE  state = 'CA' 
GROUP  BY deptname; 

In the query above, the optimizer requests statistics only on columns ‘state’ and ‘deptname’, but not on ‘salary’ and ‘hiredate’. If present, statistics on columns ‘salary’ and ‘hiredate’ are ignored, as they do not impact the cost of the execution plans considered.

Based on the optimizer functionality described earlier, the Amazon Redshift ANALYZE command has been updated to optionally collect information only about columns used in previous queries as part of a filter, join condition, or GROUP BY clause, plus columns that are part of distribution or sort keys (together, the “predicate columns”). There’s a recently introduced option for the ANALYZE command that only analyzes predicate columns:

ANALYZE <table name> PREDICATE COLUMNS;

By having Amazon Redshift collect information about predicate columns automatically, and analyzing those columns only, you’re able to reduce the time to run ANALYZE. For example, during the execution of the 99 queries in the TPC-DS workload, only 203 out of the 424 total columns are predicate columns (approximately 48%). By analyzing only the predicate columns for such a workload, the execution time for running ANALYZE can be significantly reduced.

From my experience in the data warehousing space, I have observed that about 20% of columns in a typical use case are marked predicate. In such a case, running ANALYZE PREDICATE COLUMNS can lead to a speedup of up to 5x relative to a full ANALYZE run.

If no information on predicate columns exists in the system (for example, a new table that has not been queried yet), ANALYZE PREDICATE COLUMNS collects statistics on all the columns. When queries on the table are run, Amazon Redshift collects information about predicate column usage, and subsequent runs of ANALYZE PREDICATE COLUMNS only operate on the predicate columns.

If the workload is relatively stable, and the set of predicate columns does not expand continuously over time, I recommend replacing all occurrences of the ANALYZE command with ANALYZE PREDICATE COLUMNS commands in your application and data ingestion code.
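
As a sketch of what that replacement might look like inside a Python ingestion script, here is a minimal example using psycopg2. The cluster endpoint, credentials, S3 path, and IAM role are all placeholders:

import psycopg2

# Placeholder connection details for an Amazon Redshift cluster.
conn = psycopg2.connect(
    host="my-cluster.example.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="admin",
    password="********",
)
conn.autocommit = True

with conn.cursor() as cur:
    # Load the new batch of data (placeholder bucket and role).
    cur.execute(
        "COPY emp FROM 's3://my-bucket/emp/' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole' CSV"
    )
    # Refresh statistics on predicate columns only, instead of running
    # a full ANALYZE, to keep ingestion overhead low.
    cur.execute("ANALYZE emp PREDICATE COLUMNS")

conn.close()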

Using the Analyze/Vacuum utility

Several AWS customers are using the Analyze/Vacuum utility from the Redshift-Utils package to manage and automate their maintenance operations. By passing the --predicate-cols option to the Analyze/Vacuum utility, you can enable it to use the ANALYZE PREDICATE COLUMNS feature, giving you the significant reduction in overhead in a completely seamless manner.

Enhancements to logging for ANALYZE operations

When running ANALYZE with the PREDICATE COLUMNS option, the type of analyze run (Full vs Predicate Column), as well as information about the predicate columns encountered, is logged in the stl_analyze view:

SELECT status, 
       starttime, 
       prevtime, 
       num_predicate_cols, 
       num_new_predicate_cols 
FROM   stl_analyze;

     status   |    starttime        |   prevtime          | pred_cols | new_pred_cols
--------------+---------------------+---------------------+-----------+---------------
 Full         | 2017-11-09 01:15:47 |                     |         0 |             0
 PredicateCol | 2017-11-09 01:16:20 | 2017-11-09 01:15:47 |         2 |             2

AWS also enhanced the pg_statistic catalog table with two new pieces of information: the time stamp at which a column was marked as “predicate”, and the time stamp at which the column was last analyzed.

The Amazon Redshift documentation provides a view that allows a user to easily see which columns are marked as predicate, when they were marked as predicate, and when a column was last analyzed. For example, for the emp table used above, the output of the view could be as follows:

 SELECT col_name, 
       is_predicate, 
       first_predicate_use, 
       last_analyze 
FROM   predicate_columns 
WHERE  table_name = 'emp';

 col_name | is_predicate | first_predicate_use  |        last_analyze
----------+--------------+----------------------+----------------------------
 id       | f            |                      | 2017-11-09 01:15:47
 name     | f            |                      | 2017-11-09 01:15:47
 deptname | t            | 2017-11-09 01:16:03  | 2017-11-09 01:16:20
 age      | f            |                      | 2017-11-09 01:15:47
 salary   | f            |                      | 2017-11-09 01:15:47
 hiredate | f            |                      | 2017-11-09 01:15:47
 state    | t            | 2017-11-09 01:16:03  | 2017-11-09 01:16:20

Conclusion

After loading new data into an Amazon Redshift cluster, statistics need to be re-computed to guarantee performant query plans. By learning which column statistics are actually being used by the customer’s workload and collecting statistics only on those columns, Amazon Redshift is able to significantly reduce the amount of time needed for table maintenance during data loading workflows.


Additional Reading

Be sure to check out the Top 10 Tuning Techniques for Amazon Redshift, and the Advanced Table Design Playbook: Distribution Styles and Distribution Keys.


About the Author

George Caragea is a Senior Software Engineer with Amazon Redshift. He has been working on MPP Databases for over 6 years and is mainly interested in designing systems at scale. In his spare time, he enjoys being outdoors and on the water in the beautiful Bay Area and finishing the day exploring the rich local restaurant scene.


The Official Projects Book volume 3 — out now

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/projects-book-3/

Hey folks, Rob from The MagPi here with some very exciting news! The third volume of the Official Raspberry Pi Projects Book is out right this second, and we’ve packed its 200 pages with the very best Raspberry Pi projects and guides!

Cover of The Official Projects Book volume 3

A peek inside the projects book

We start you off with a neat beginners’ guide to programming in Python, walking you from the very basics all the way through to building the classic videogame Pong from scratch!

Table of contents of The Official Projects Book volume 3

Check out what’s inside!

Then we showcase some of the most inspiring projects from around the community, such as a camera for taking photos of the moon, a smart art installation, amazing arcade machines, and much more.

An article about the Apollo Pi project in The Official Projects Book volume 3

Emulate the Apollo mission computers on the Raspberry Pi

Next, we ease you into a series of tutorials that will help you get the most out of your Raspberry Pi. Among other things, you’ll be learning how to sync your Pi to Dropbox, use it to create a waterproof camera, and even emulate an Amiga.

We’ve also assembled a load of reviews to let you know what you should be buying if you want to extend your Pi experience.

A review of the Pimoroni Enviro pHAT in The Official Projects Book volume 3

Learn more about Pimoroni’s Enviro pHAT

I am extremely proud of what the entire MagPi team has put together here, and I know you’ll enjoy reading it as much as we enjoyed creating it.

How to get yours

In the UK, you can get your copy of the new Official Raspberry Pi Projects Book at WH Smith and all good newsagents today. In the US, print copies will be available in stores such as Barnes & Noble very soon.

Or order a copy from the Raspberry Pi Press store — before the end of Sunday 26 November, you can use the code BLACKFRIDAY to get 10% off your purchase!

There’s also the digital version, which you can get via The MagPi Android and iOS apps. And, as always, there’s a free PDF, which is available here.

We think this new projects book is the perfect stocking filler, although we may be just a tad biased. Anyway, I hope you’ll love it!

Gif of Picard smiling at three children

The post The Official Projects Book volume 3 — out now appeared first on Raspberry Pi.

AWS IoT Update – Better Value with New Pricing Model

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-iot-update-better-value-with-new-pricing-model/

Our customers are using AWS IoT to make their connected devices more intelligent. These devices collect & measure data in the field (below the ground, in the air, in the water, on factory floors and in hospital rooms) and use AWS IoT as their gateway to the AWS Cloud. Once connected to the cloud, customers can write device data to Amazon Simple Storage Service (S3) and Amazon DynamoDB, process data using Amazon Kinesis and AWS Lambda functions, initiate Amazon Simple Notification Service (SNS) push notifications, and much more.

New Pricing Model (20-40% Reduction)
Today we are making a change to the AWS IoT pricing model that will make it an even better value for you. Most customers will see a price reduction of 20-40%, with some receiving a significantly larger discount depending on their workload.

The original model was based on a charge for the number of messages that were sent to or from the service. This all-inclusive model was a good starting point, but also meant that some customers were effectively paying for parts of AWS IoT that they did not actually use. For example, some customers have devices that ping AWS IoT very frequently, with sparse rule sets that fire infrequently. Our new model is more fine-grained, with independent charges for each component (all prices are for devices that connect to the US East (Northern Virginia) Region):

Connectivity – Metered in 1 minute increments and based on the total time your devices are connected to AWS IoT. Priced at $0.08 per million minutes of connection (equivalent to $0.042 per device per year for 24/7 connectivity). Your devices can send keep-alive pings at 30 second to 20 minute intervals at no additional cost.

Messaging – Metered by the number of messages transmitted between your devices and AWS IoT. Pricing starts at $1 per million messages, with volume pricing falling as low as $0.70 per million. You may send and receive messages up to 128 kilobytes in size. Messages are metered in 5 kilobyte increments (up from 512 bytes previously). For example, an 8 kilobyte message is metered as two messages (the sketch after this list works through the metering arithmetic).

Rules Engine – Metered for each time a rule is triggered, and for the number of actions executed within a rule, with a minimum of one action per rule. Priced at $0.15 per million rules-triggered and $0.15 per million actions-executed. Rules that process a message in excess of 5 kilobytes are metered at the next multiple of the 5 kilobyte size. For example, a rule that processes an 8 kilobyte message is metered as two rules.

Device Shadow & Registry Updates – Metered on the number of operations to access or modify Device Shadow or Registry data, priced at $1.25 per million operations. Device Shadow and Registry operations are metered in 1 kilobyte increments of the Device Shadow or Registry record size. For example, an update to a 1.5 kilobyte Shadow record is metered as two operations.
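
Here is a rough sketch of how these metering rules combine into a monthly bill at the US East prices above. It ignores messaging volume discounts and the free tier, so treat it as an estimate only:

import math

def iot_monthly_cost(devices, msgs_per_device_per_day, msg_size_kb,
                     rules_per_msg=1, actions_per_rule=1):
    # Connectivity: $0.08 per million connection-minutes, 24/7.
    minutes = devices * 60 * 24 * 30
    connectivity = minutes / 1e6 * 0.08

    # Messaging: metered in 5 KB increments, entry price $1 per million.
    increments = (devices * msgs_per_device_per_day * 30
                  * math.ceil(msg_size_kb / 5))
    messaging = increments / 1e6 * 1.00

    # Rules Engine: $0.15 per million rules triggered and per million
    # actions executed; oversized messages meter as multiple rules.
    rules = increments * rules_per_msg / 1e6 * 0.15
    actions = increments * rules_per_msg * actions_per_rule / 1e6 * 0.15

    return connectivity + messaging + rules + actions

# Example: 1,000 devices, each sending one 8 KB message per minute.
# An 8 KB message counts as two 5 KB increments.
print(f"${iot_monthly_cost(1000, 60 * 24, 8):,.2f} per month")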

The AWS Free Tier now offers a generous allocation of connection minutes, messages, triggered rules, rules actions, Shadow, and Registry usage, enough to operate a fleet of up to 50 devices. The new prices will take effect on January 1, 2018 with no effort on your part. At that time, the updated prices will be published on the AWS IoT Pricing page.

AWS IoT at re:Invent
We have an entire IoT track at this year’s AWS re:Invent. Here is a sampling:

We also have customer-led sessions from Philips, Panasonic, Enel, and Salesforce.

Jeff;

What We’re Thankful For

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/what-were-thankful-for/

All of us at Backblaze hope you have a wonderful Thanksgiving, and that you can enjoy it with family and friends. We asked everyone at Backblaze to express what they are thankful for. Here are their responses.

Fall leaves

What We’re Thankful For

Aside from friends, family, hobbies, health, etc. I’m thankful for my home. It’s not much, but it’s mine, and allows me to indulge in everything listed above. Or not, if I so choose. And coffee.

— Tony

I’m thankful for my wife Jen, and my other friends. I’m thankful that I like my coworkers and can call them friends too. I’m thankful for my health. I’m thankful that I was born into a middle class family in the US and that I have been very, very lucky because of that.

— Adam

Besides the most important things which are being thankful for my family, my health and my friends, I am very thankful for Backblaze. This is the first job I’ve ever had where I truly feel like I have a great work/life balance. With having 3 kids ages 8, 6 and 4, a husband that works crazy hours and my tennis career on the rise (kidding but I am on 4 teams) it’s really nice to feel like I have balance in my life. So cheers to Backblaze – where a girl can have it all!

— Shelby

I am thankful to work at a high-tech company that recognizes the contributions of engineers in their 40s and 50s.

— Jeannine

I am thankful for the music, the songs I’m singing. Thankful for all the joy they’re bringing. Who can live without it, I ask in all honesty? What would life be? Without a song or a dance what are we? So I say thank you for the music. For giving it to me!

— Yev

I’m thankful that I don’t look anything like the portrait my son draws of me…seriously.

— Natalie

I am thankful to work for a company that puts its people and product ahead of profits.

— James

I am thankful that even in the middle of disasters, turmoil, and violence, there are always people who commit amazing acts of generosity, courage, and kindness that restore my faith in mankind.

— Roderick

The future.

— Ahin

The Future

I am thankful for the current state of modern inexpensive broadband networking that allows me to stay in touch with friends and family that are far away, allows Backblaze to exist and pay my salary so I can live comfortably, and allows me to watch cat videos for free. The internet makes this an amazing time to be alive.

— Brian

Other than being thankful for family & good health, I’m quite thankful through the years I’ve avoided losing any of my 12+TB photo archive. 20 years of photoshoots, family photos and cell phone photos kept safe through changing storage media (floppy drives, flopticals, ZIP, JAZ, DVD-RAM, CD, DVD and hard drives), not to mention various technology/software solutions. It’s a data minefield out there, especially in the long run with changing media formats.

— Jim

I am thankful for non-profit organizations and their volunteers, such as IMAlive. Possibly the greatest gift you can give someone is empowerment, and an opportunity for them to recognize their own resilience and strength.

— Emily

I am thankful for my loving family, friends who make me laugh, a cool company to work for, talented co-workers who make me a better engineer, and beautiful Fall days in Wisconsin!

— Marjorie

Marjorie Wisconsin

I’m thankful for preschool drawings about thankfulness.

— Adam

I am thankful for new friends and working for a company that allows us to be ourselves.

— Annalisa

I’m thankful for my dog as I always find a reason to smile at him everyday. Yes, he still smells from his skunkin’ last week and he tracks mud in my house, but he came from the San Quentin puppy-prisoner program and I’m thankful I found him and that he found me! My vet is thankful as well.

— Terry

I’m thankful that my colleagues are also my friends outside of the office and that the rain season has started in California.

— Aaron

I’m thankful for family, friends, and beer. Mostly for family and friends, but beer is really nice too!

— Ken

There are so many amazing blessings that make up my daily life that I thank God for, so here I go – my basic needs of food, water and shelter, my husband and 2 daughters and the rest of the family (here and abroad) — their love, support, health, and safety, waking up to a new day every day, friends, music, my job, funny things, hugs and more hugs (who does not like hugs?).

— Cecilia

I am thankful to be blessed with a close-knit extended family, and for everything they do for my new, growing family. With a toddler and a second child on the way, it helps having so many extra sets of hands around to help with the kids!

— Zack

I’m thankful for family and friends, the opportunities my parents gave me by moving to the U.S., and that all of us together at Backblaze have built a place to be proud of.

— Gleb

Aside from being thankful for family and friends, I am also thankful I live in a place with such natural beauty. Being so close to mountains and the ocean, and everything in between, is something that I don’t take for granted!

— Sona

I’m thankful for my wonderful wife, family, friends, and co-workers. I’m thankful for having a happy and healthy son, and the chance to watch him grow on a daily basis.

— Ariel

I am thankful for a dog-friendly workplace.

— LeAnn

I’m thankful for my amazing new wife and that she’s as much of a nerd as I am.

— Troy

I am thankful for every reunion with my siblings and families.

— Cecilia

I am thankful for my funny, strong-willed, happy daughter, my awesome husband, my family, and amazing friends. I am also thankful for the USA and all the opportunities that come with living here. Finally, I am thankful for Backblaze, a truly great place to work and for all of my co-workers/friends here.

— Natasha

I am thankful that I do not need to hunt and gather every day to put food on the table, but at the same time I feel that I don’t appreciate the food that sits before me as much as I should. So I use Thanksgiving to think about the people and the animals that put food on my family’s table.

— KC

I am thankful for my cat, Catnip. She’s been with me for 18 years and seen me through so many ups and downs. She’s been by my side through two long-term relationships, several moves, and one marriage. I know we don’t have much time together and feel blessed every day she’s here.

— JC

I am thankful for imperfection and misshapen candies. The imperceptible romance of sunsets through bus windows. The dream that family, friends, co-workers, and strangers are connected by love. I am thankful to my ancestors for enduring so much hardship so that I could be here enjoying Bay Area burritos.

— Damon

Autumn leaves

The post What We’re Thankful For appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

“The Commercial Usenet Stinks on All Sides,” Anti-Piracy Boss Says

Post Syndicated from Ernesto original https://torrentfreak.com/the-commercial-usenet-stinks-on-all-sides-anti-piracy-boss-says-171118/

Dutch anti-piracy group BREIN has targeted pirates of all shapes and sizes over the past several years.

It’s also one of the few groups keeping a close eye on Usenet piracy. Although Usenet and associated piracy are a few decades old already and relatively old-fashioned, the area still has millions of frequent users. This hasn’t escaped the attention of law enforcement.

Last week police in Germany launched one of the largest anti-piracy operations in recent history. Houses of dozens of suspects connected to Usenet forums were searched, with at least 1,000 gigabytes of data and numerous computers seized for evidence.

In their efforts, German authorities received help from international colleagues in the Netherlands, Spain, San Marino, Switzerland and Canada. Rightfully so, according to BREIN boss Tim Kuik, who describes Usenet as a refuge for pirates.

“Usenet was originally for text only. People were able to ask questions and exchange information via newsgroups. After it became possible to store video and music as Usenet text messages, it became a refuge for illegal copies of everything. That’s where the revenue model is based on today,” Kuik says.

BREIN states that uploaders, Usenet forums, and Usenet resellers all work in tandem. Resellers provide free accounts to popular uploaders, for example, which generates more traffic and demand for subscriptions. That’s how resellers and providers earn their money.

The same resellers also advertise on popular Usenet forums where links to pirated files are shared, suggesting that they specifically target these users. For example, one of the resellers targeted by BREIN in the past was sponsoring one of the sites that were raided last week, BREIN notes.

Last year BREIN signed settlements with several Usenet uploaders. This was in part facilitated by a court order, directing Usenet provider Eweka to identify a former subscriber who supposedly shared infringing material.

Following this verdict, several Dutch Usenet servers were taken over by a San Marino company. But, according to BREIN this company can also be ordered to share customer information if needed.

“It is not unthinkable that this construction has been called into existence by Usenet companies who find themselves in hot water,” Kuik says.

According to BREIN it’s clear. Large parts of Usenet have turned into a playground for pirates and people who profit from copyright infringement. This all happens while the legitimate rightsholders don’t see a penny.

“For a long time, there’s been a certain smell to the commercial Usenet,” Kuik says. “It’s stinking on all sides.”

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Weekly roundup: Into the deep end

Post Syndicated from Eevee original https://eev.ee/dev/2017/11/13/weekly-roundup-into-the-deep-end/

  • cc: UI thing I was doing is actually, usably done! Hallelujah. Now for more of it.

  • idchoppers: I got in a surprise Rust mood and picked this up again. I didn’t get very far, mostly due to trying to coerce Rust into passing around interconnected pointers when it really didn’t want to, but I did add a tiny stub of a CLI (which should make future additions a bit less messy) and started stubbing out a map type.

  • fox flux: Some more player sprites, naturally. But also, a bunch of stuff! I drew and animated a heart pickup thing, with the intention that hearts are a little better spaced out and getting one is a bit more of an accomplishment. I also figured out how to do palette translation in a shader, ultimately culminating in a cool palette-preserving underwater effect. Oh, and I guess I implemented water. And a bunch of new movement stuff. And, yeah.

    I’m so glad I’m finally doing game mechanics stuff — getting the art right is nice and all, but this is finally something new that I can play. And show off, even!

  • doom: I made a set of Eevee mugshots. I don’t know why. Took about a day, and was pretty cool? I might write a bit about it.

    I also streamed Absolutely Killed, a Doom 1 episode of “gimmick” maps that I enjoyed quite a lot! The stream is on Twitch, at least until they nuke it in a week or two, and I used my custom mugshot the whole time.

I did not work on the final October blog post that I started on almost two weeks ago now; my bad. I’ve had my head pretty solidly stuck on fox flux for like a week, now that I’m finally working on the game parts and now just fiddling with the same set of animations forever. I’ve got a lot of writing I want to do as well, so I’ll try to get to that Real Soon™.

Hollywood Studios Force ISPs to Block Popcorn Time & Subtitle Sites

Post Syndicated from Andy original https://torrentfreak.com/court-orders-isps-to-block-popcorn-time-subtitle-websites-171113/

Early 2014, a new craze was sweeping the piracy world. Instead of relatively cumbersome text-heavy torrent sites, people were turning to a brand new application called Popcorn Time.

Dubbed the Netflix for Pirates due to its beautiful interface, Popcorn Time was soon a smash hit all over the planet. But with that fame came trouble, with anti-piracy outfits all over the world seeking to shut it down or at least pour cold water on its popularity.

In the meantime, however, the popularity of Kodi skyrocketed, something which pushed Popcorn Time out of the spotlight for a while. Nevertheless, the application in several different forms never went away and it still enjoys an impressive following today. This means that despite earlier action in several jurisdictions, Hollywood still has it on the radar.

The latest development comes out of Norway, where Disney Entertainment, Paramount Pictures Corporation, Columbia Pictures, Twentieth Century Fox Film Corporation, Universal City Studios and Warner Bros. have just taken 14 local Internet service providers to court.

The studios claimed that the ISPs (including Telenor, Nextgentel, Get, Altibox, Telia, Homenet, Ice Norge, Eidsiva Bredbånd and Lynet Internet) should undertake broad blocking action to ensure that three of the most popular Popcorn Time forks (located at popcorn-time.to, popcorntime.sh and popcorn-time.is) can no longer function in the region.

Since site-blocking necessarily covers the blocking of websites, there appears to have been much discussion over whether a software application can be considered a website. However, the court ultimately found that wasn’t really an issue, since each application requires websites to operate.

“Each of the three [Popcorn Time variants] must be considered a ‘site’, even though users access Popcorn Time in a way that is technically different from the way other pirate sites provide users with access to content, and although different components of the Popcorn Time service are retrieved from different domains,” the Oslo District Court’s ruling reads.

In respect of all three releases of Popcorn Time, the Court weighed the pros and cons of blocking, including whether blocking was needed at all. However, it ultimately decided that alternative methods for dealing with the sites do not exist since the rightsholders tried and ultimately failed to get cooperation from the sites’ operators.

“All sites have as their main purpose the purpose of facilitating infringement of protected works by giving the public unauthorized access to movies and TV shows. This happens without regard to the rights of others and imposes major losses on the licensees and the cultural industry in general,” the Court writes.

The Court also supported compelling ISPs to introduce the blocks, noting that they are “an appropriate and proportionate measure” that does not interfere with the Internet service providers’ freedom to operate nor anyone else’s right to freedom of expression.

But while the websites in question are located in three places (popcorn-time.to, popcorntime.sh and popcorn-time.is) the Court’s blocking order goes much further. Not only does it cover these key domains but also other third-party sites that Popcorn Time utilizes, such as platforms offering subtitles.

Popcorn-time.to related domains to be blocked: popcorn-time.to, popcorn-time.xyz, popcorn-time.se, iosinstaller.com, video4time.info, thepopcorntime.net, timepopcorn.info, time-popcorn.com, the-pop-corn-time.net, timepopcorn.net, time4videostream.com, ukfrnlge.xyz, opensubtitles.org, onlinesubtitles.com, popcorntime-update.xyz, plus subdomains.

Popcorntime.sh related domains to be blocked: Popcorntime.sh, api-fetch.website, yts.ag, opensubtitles.org, plus subdomains.

Popcorn-time.is related domains to be blocked: popcorn-time.is, yts.ag, yify.is, yts.ph, api-fetch.website, eztvapi.ml and opensubtitles.org, plus subdomains.

Separately, the Court ordered the ISPs to block the torrent site YTS.ag, as well as onlinesubtitles.com and opensubtitles.org, plus their subdomains.

Since no one appeared to represent the sites, and the ISPs can’t be held responsible if they cooperate, the Court found that the studios had succeeded in their action and are entitled to compensation.

“The Court’s conclusions mean that the plaintiffs have won the case and, in principle, are entitled to compensation for their legal costs from the operators of the sites,” the Court notes. “This means that the operators of sites are ordered to pay the plaintiffs’ costs.”

Those costs amount to 570,000 kr (around US$70,000), an amount which the Court chose to split equally between the three Popcorn Time forks ($23,359 each). It seems unlikely the amounts will ever be recovered although there is still an opportunity for the parties to appeal.

In the meantime the ISPs have just days left to block the sites listed above. Once implemented, the blocks will remain in place for five years.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons