Tag Archives: light

Scale Your Web Application — One Step at a Time

Post Syndicated from Saurabh Shrivastava original https://aws.amazon.com/blogs/architecture/scale-your-web-application-one-step-at-a-time/

I often encounter people experiencing frustration as they attempt to scale their e-commerce or WordPress site—particularly around the cost and complexity related to scaling. When I talk to customers about their scaling plans, they often mention phrases such as horizontal scaling and microservices, but usually people aren’t sure about how to dive in and effectively scale their sites.

Now let’s talk about different scaling options. For instance, if your current workload is in a traditional data center, you can leverage the cloud to complement your on-premises solution and scale with greater efficiency at lower cost. It’s not necessary to set up a whole powerhouse to light a few bulbs. If your workload is already in the cloud, you can use one of the available out-of-the-box options.

Designing your API as microservices and adding horizontal scaling might seem like the best choice, unless your web application is already running in an on-premises environment and you need to scale it quickly because of unexpected large spikes in web traffic.

So how do you handle this situation? Take things one step at a time when scaling, and you may find that horizontal scaling isn’t the right choice after all.

For example, assume you have a tech news website where you published an early-look review of an upcoming—and highly anticipated—smartphone launch, and the review went viral. The review, a blog post on your website, includes both video and pictures. Comments are enabled for the post and readers can also rate it. If your website is hosted on a traditional Linux server with a LAMP stack, you may find yourself with immediate scaling problems.

Let’s dig into the details of the current scenario:

  • Where are images and videos stored?
  • How many read/write requests are received per second? Per minute?
  • What is the level of security required?
  • Are these synchronous or asynchronous requests?

We’ll also want to consider the following if your website has a transactional load, like e-commerce or banking:

  • How is the website handling sessions?
  • Do you have any compliance requirements—like the Payment Card Industry Data Security Standard (PCI DSS)—if your website is using its own payment gateway?
  • How are you recording customer behavior data and fulfilling your analytics needs?
  • What are your load balancing considerations (scaling, caching, session maintenance, etc.)?

So, if we take this one step at a time:

Step 1: Ease server load. We need to quickly handle spikes in traffic generated by activity on the blog post, so let’s reduce server load by moving images and videos to a third-party content delivery network (CDN). AWS provides Amazon CloudFront as a CDN solution, which is highly scalable with built-in security to verify origin access identity and handle DDoS attacks. CloudFront can direct traffic to your on-premises or cloud-hosted server with its 113 Points of Presence (102 Edge Locations and 11 Regional Edge Caches) in 56 cities across 24 countries, which provides efficient caching.
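To make the idea concrete, here is a minimal sketch (my own illustration, not from the original post) of how an application could rewrite asset URLs so images and videos are served through a CDN domain instead of the origin server; the CloudFront domain name below is hypothetical:

# Minimal sketch: serve static assets from a CDN domain instead of your web server.
# CDN_DOMAIN is a hypothetical CloudFront distribution domain; swap in your own.
CDN_DOMAIN = "d1234abcd.cloudfront.net"

def cdn_url(asset_path):
    """Rewrite a local asset path into its CDN-backed URL."""
    return "https://{}/{}".format(CDN_DOMAIN, asset_path.lstrip("/"))

# The review page would then reference the CDN instead of the origin server:
print(cdn_url("/videos/phone-review-teaser.mp4"))
# -> https://d1234abcd.cloudfront.net/videos/phone-review-teaser.mp4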
Step 2: Reduce read load by adding more read replicas. MySQL provides a nice mirror replication for databases, and Oracle has its own replication plug-in. Amazon RDS provides up to five read replicas, which can span across Regions, and Amazon Aurora can have 15 read replicas with Amazon Aurora Auto Scaling support. If a workload is highly variable, you should consider Amazon Aurora Serverless to achieve high efficiency and reduced cost. While most mirror technologies do asynchronous replication, Amazon RDS can provide synchronous multi-AZ replication, which is good for disaster recovery but not for scalability. Asynchronous replication to a mirror instance means replicated data can sometimes be stale if network bandwidth is low, so you need to plan and design your application accordingly.

I recommend that you always use a read replica for any reporting needs and try to move non-critical GET services to a read replica to reduce the load on the master database. In this case, the comments associated with a blog post can be fetched from a read replica—as that query can handle some delay—in case there is any lag in asynchronous replication.
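One hedged sketch of what this looks like in application code, assuming a MySQL-compatible database and the PyMySQL client; the primary and replica endpoints, credentials, and schema are hypothetical:

import pymysql  # assumes a MySQL-compatible engine such as RDS MySQL or Aurora

# Hypothetical endpoints: writes go to the primary, tolerant reads go to a replica.
PRIMARY_HOST = "blog-db-primary.example.com"
REPLICA_HOST = "blog-db-replica.example.com"

def get_connection(read_only=False):
    """Connect to the read replica for non-critical reads, otherwise to the primary."""
    host = REPLICA_HOST if read_only else PRIMARY_HOST
    return pymysql.connect(host=host, user="app", password="secret", database="blog")

# Comment listings can tolerate a little replication lag, so read them from the replica.
conn = get_connection(read_only=True)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT body FROM comments WHERE post_id = %s", (42,))
        comments = cur.fetchall()
finally:
    conn.close()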

Step 3: Reduce write requests. This can be achieved by introducing a queue to process messages asynchronously. Amazon Simple Queue Service (Amazon SQS) is a highly scalable queue that can handle any kind of work-message load. You can process data, like ratings and reviews, or calculate a Deal Quality Score (DQS) using batch processing via an SQS queue. If your workload is in AWS, I recommend using a job-observer pattern: set up Auto Scaling to automatically increase or decrease the number of batch servers, using the number of SQS messages (monitored with Amazon CloudWatch) as the trigger. For on-premises workloads, you can use the SQS SDK to create an Amazon SQS queue that holds messages until they’re processed by your stack. Or you can use Amazon SNS to fan out your message processing in parallel for different purposes, like adding a watermark to an image, generating a thumbnail, etc.
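As a rough sketch of the queue half of that pattern (the queue name and message shape are my own hypothetical choices), the web tier can enqueue ratings with boto3 and a batch worker can long-poll and delete processed messages:

import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
# Hypothetical queue for asynchronous work such as ratings and review processing.
queue_url = sqs.get_queue_url(QueueName="blog-ratings-queue")["QueueUrl"]

# Producer (web tier): enqueue the write instead of hitting the database directly.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"post_id": 42, "user_id": 1001, "rating": 5}),
)

# Worker (batch tier): long-poll, process, then delete each message.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    rating = json.loads(msg["Body"])
    # ... apply the rating / recalculate DQS here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])

The Auto Scaling side of the job-observer pattern would then be driven by a CloudWatch alarm on the queue’s ApproximateNumberOfMessagesVisible metric.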

Step 4: Introduce a more robust caching engine. You can use Amazon ElastiCache for Memcached or Redis to reduce the load on your database. Memcached and Redis have different use cases: if you can afford to lose and rebuild your cache from your database, use Memcached; if you are looking for more robust data persistence and complex data structures, use Redis. In AWS, these are managed services, which means AWS takes care of the operational work for you, and you can also deploy them on your on-premises instances or use a hybrid approach.
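A minimal cache-aside sketch with the standard redis client is shown below; the cache endpoint, key scheme, and database helper are hypothetical illustrations, not part of the original post:

import json
import redis  # the same client works against an ElastiCache endpoint or a self-hosted Redis

# Hypothetical cache endpoint; for ElastiCache this would be the cluster endpoint.
cache = redis.Redis(host="blog-cache.example.com", port=6379)

def load_post_from_database(post_id):
    """Hypothetical stand-in for the real database query."""
    return {"id": post_id, "title": "Early look at the new phone"}

def get_post(post_id, ttl_seconds=300):
    """Cache-aside read: try Redis, fall back to the database, then populate the cache."""
    key = "post:{}".format(post_id)
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    post = load_post_from_database(post_id)
    cache.setex(key, ttl_seconds, json.dumps(post))
    return post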

Step 5: Scale your server. If there are still issues, it’s time to scale your server. For the greatest cost-effectiveness and unlimited scalability, I suggest always using horizontal scaling. However, for some use cases, such as databases, vertical scaling may be a better choice until you are ready to shard; or use Amazon Aurora Serverless for variable workloads. It’s wise to use Auto Scaling to manage your workload effectively for horizontal scaling. To achieve that, you also need to persist the session; Amazon DynamoDB can handle session persistence across instances.
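Here is one hedged sketch of session persistence in DynamoDB with boto3; the table name, key names, and TTL attribute are hypothetical and would need to match your own table definition:

import time
import boto3

# Hypothetical table "web-sessions" with partition key "session_id" and a
# DynamoDB TTL attribute named "expires_at".
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
sessions = dynamodb.Table("web-sessions")

def save_session(session_id, data, ttl_seconds=3600):
    """Persist session state outside any single web server so every instance can serve the user."""
    sessions.put_item(Item={
        "session_id": session_id,
        "data": data,
        "expires_at": int(time.time()) + ttl_seconds,
    })

def load_session(session_id):
    return sessions.get_item(Key={"session_id": session_id}).get("Item")

save_session("abc123", {"user_id": "1001", "cart_items": ["sku-42"]})
print(load_session("abc123"))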

If your server is on premises, consider creating a multisite architecture, which will help you achieve quick scalability as required and provide a good disaster recovery solution.  You can pick and choose individual services like Amazon Route 53, AWS CloudFormation, Amazon SQS, Amazon SNS, Amazon RDS, etc. depending on your needs.

Your multisite architecture will look like the following diagram:

In this architecture, you can run your regular workload on premises, and use your AWS workload as required for scalability and disaster recovery. Using Route 53, you can direct a precise percentage of users to an AWS workload.
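Weighted DNS records are one way to do that split. A minimal boto3 sketch follows, assuming a hypothetical hosted zone and endpoints; a 90/10 weighting keeps roughly 10% of users on the AWS side and the rest on premises:

import boto3

route53 = boto3.client("route53")

# Hypothetical hosted zone ID and record targets.
HOSTED_ZONE_ID = "Z1EXAMPLE"

def weighted_record(set_identifier, target, weight):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com.",
            "Type": "CNAME",
            "SetIdentifier": set_identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": target}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": [
        weighted_record("on-premises", "origin.datacenter.example.com", 90),
        weighted_record("aws", "my-elb-123456.us-east-1.elb.amazonaws.com", 10),
    ]},
)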

If you decide to move all of your workloads to AWS, the recommended multi-AZ architecture would look like the following:

In this architecture, you are using a multi-AZ distributed workload for high availability. You can also have a multi-region setup and use Route 53 to distribute your workload between AWS Regions. CloudFront helps you scale and distribute static content via an S3 bucket, and DynamoDB maintains your application state so that Auto Scaling can apply horizontal scaling without loss of session data. At the database layer, RDS with a multi-AZ standby provides high availability, and read replicas help achieve scalability.

This is a high-level strategy to help you think through the scalability of your workload by using AWS, even if your workload is on premises and not in the cloud…yet.

I highly recommend creating a hybrid, multisite model by placing a replica of your on-premises environment in a public cloud like AWS, and using the Amazon Route 53 DNS service and Elastic Load Balancing to route traffic between the on-premises and cloud environments. AWS now supports load balancing between AWS and on-premises environments, helping you scale your cloud environment up quickly whenever required and scale it back down by applying Auto Scaling, while placing a threshold on your on-premises traffic using Route 53.

The Man from Earth Sequel ‘Pirated’ on The Pirate Bay – By Its Creators

Post Syndicated from Andy original https://torrentfreak.com/the-man-from-earth-sequel-pirated-on-the-pirate-bay-by-its-creators-180116/

More than a decade ago, Hollywood was struggling to get to grips with the file-sharing phenomenon. Sharing via BitTorrent was painted as a disease that could kill the movie industry, if it was allowed to take hold. Tough action was the only way to defeat it, the suits concluded.

In 2007, however, a most unusual turn of events showed that piracy could have a magical effect on the success of a movie.

After being produced on a tiny budget, a then little-known independent sci-fi film called “The Man from Earth” turned up on pirate sites, to the surprise of its creators.

“Originally, somebody got hold of a promotional screener DVD of ‘Jerome Bixby’s The Man from Earth’, ripped the file and posted the movie online before we knew what was even happening,” Man from Earth director Richard Schenkman informs TorrentFreak.

“A week or two before the DVD’s ‘street date’, we jumped 11,000% on the IMDb ‘Moviemeter’ and we were shocked.”

With pirates fueling interest in the movie, a member of the team took an unusual step. Producer Eric Wilkinson wrote to RLSlog, a popular piracy links site – not to berate pirates – but to thank them for catapulting the movie to fame.

“Our independent movie had next to no advertising budget and very little going for it until somebody ripped one of the DVD screeners and put the movie online for all to download. Most of the feedback from everyone who has downloaded ‘The Man From Earth’ has been overwhelmingly positive. People like our movie and are talking about it, all thanks to piracy on the net!” he wrote.

Richard Schenkman told TF this morning that availability on file-sharing networks was important for the movie, since it wasn’t available through legitimate means in most countries. So, the team called out to fans for help, if they’d pirated the movie and had liked what they’d seen.

“Once we realized what was going on, we asked people to make donations to our PayPal page if they saw the movie for free and liked it, because we had all worked for nothing for two years to bring it to the screen, and the only chance we had of surviving financially was to ask people to support us and the project,” Schenkman explains.

“And, happily, many people around the world did donate, although of course only a tiny fraction of the millions and millions of people who downloaded pirated copies.”

Following this early boost The Man from Earth went on to win multiple awards. And, a decade on, it boasts a hugely commendable 8/10 score on IMDb from more than 147,000 voters, with Netflix users leaving over 650,000 ratings, which reportedly translates to well over a million views.

It’s a performance director Richard Schenkman would like to repeat with his sequel: The Man from Earth: Holocene. This time, however, he won’t be leaving the piracy aspect to chance.

Yesterday the team behind the movie took matters into their own hands, uploading the movie to The Pirate Bay and other sites so that fans can help themselves.

“It was going to get uploaded regardless of what we did or didn’t do, and we figured that as long as this was inevitable, we would do the uploading ourselves and explain why we were doing it,” Schenkman informs TF.

“And, we would once again reach out to the filesharing community and remind them that while movies may be free to watch, they are not free to make, and we need their support.”

The release, listed here on The Pirate Bay, comes with detailed notes and a few friendly pointers on how the release can be further shared. It also informs people how they can show their appreciation if they like it.

The Man from Earth: Holocene on The Pirate Bay

“It’s a revolutionary global experiment in the honor system. We’re asking people: ‘If you watch our movie, and you like it, will you pay something directly to the people who made it?’,” Schenkman says.

“That’s why we’re so grateful to all of you who visit ManFromEarth.com and make a donation – of any size – if you’ve watched the movie without paying for it up front.”

In addition to using The Pirate Bay – which is often and incorrectly berated as a purely ‘pirate’ platform with no legitimate uses – the team has also teamed up with OpenSubtitles, so translations for the movie are available right from the beginning.

Other partners include MovieSaints.com, where fans can pay to see the movie from January 19 but get a full refund if they don’t enjoy it. It’s also available on Vimeo (see below) but the version seen by pirates is slightly different, and for good reason, Schenkman says.

“This version of the movie includes a greeting from me at the beginning, pointing out that we did indeed upload the movie ourselves, and asking people to visit manfromearth.com and make a donation if they can afford to, and if they enjoyed the film.

“The version we posted is very high-resolution, although we are also sharing some smaller files for those folks who have a slow Internet connection where they live,” he explains.

“We’re asking people to share ONLY this version of the movie — NOT to edit off the appeal message. And of course we’re asking people not to post the movie at YouTube or any other platform where someone (other than us) could profit financially from it. That would not be fair, nor in keeping with the spirit of what we’re trying to do.”

It’s not often we’re able to do this so it’s a pleasure to say that The Man from Earth: Holocene can be downloaded from The Pirate Bay, in various qualities and entirely legally, here. For those who want to show their appreciation, the tip jar is here.

"The Man from Earth: Holocene" Teaser Trailer from Richard Schenkman on Vimeo.

Game night 2: Detention, Viatoree, Paletta

Post Syndicated from Eevee original https://eev.ee/blog/2018/01/16/game-night-2-detention-viatoree-paletta/

Game night continues with:

  • Detention
  • Viatoree
  • Paletta

These are impressions, not reviews. I try to avoid major/ending spoilers, but big plot points do tend to leave impressions.

Detention

longish · inventory horror · jan 2017 · lin/mac/win · $12 on steam · website

“Inventory horror” is a hell of a genre.

I think this one came from a Twitter thread where glip asked for indie horror recommendations. It’s apparently well-known enough to have a Wikipedia article, but I hadn’t heard of it before.

I love love love the aesthetic here. It’s obviously 2Dish from a side view (though there’s plenty of parallax in a lot of places), and it’s all done with… papercraft? I think of it as papercraft. Everything is built out of painted chunks that look like they were cut out of paper. It’s most obvious when watching the protagonist move around; her legs and skirt swivel as she walks.

Less obvious are the occasional places where tiny details repeat in the background because a paper cutout was reused. I don’t bring that up as a dig on the art; on the contrary, I really liked noticing that once or twice. It made the world feel like it was made with a tileset (albeit with very large chunky tiles), like it’s slightly artificial. I’m used to seeing sidescrollers made from tiles, of course, but the tiles are usually colorful and cartoony pixel art; big gritty full-color tiles are unusual and eerie.

And that’s a good thing in a horror game! Detention’s setting is already slightly unreal, and it’s made all the moreso by my Western perspective: it takes place in a Taiwanese school in the 60’s, a time when Taiwan was apparently under martial law. The Steam page tells you this, but I didn’t even know that much when we started playing, so I’d effectively been dropped somewhere on the globe and left to collect the details myself. Even figuring out we were in Taiwan (rather than mainland China) felt like an insight.

Thinking back, it was kind of a breath of fresh air. Games can be pretty heavy-handed about explaining the setting, but I never got that feeling from Detention. There’s more than enough context to get what’s going on, but there are no “stop and look at the camera while monologuing some exposition” moments. The developers are based in Taiwan, so it’s possible the setting is plenty familiar to them, and my perception of it is a complete accident. Either way, it certainly made an impact. Death of the author and whatnot, I suppose.

One thing in particular that stood out: none of the Chinese text in the environment is directly translated. The protagonist’s thoughts still give away what it says — “this is the nurse’s office” and the like — but that struck me as pretty different from simply repeating the text in English as though I were reading a sign in an RPG. The text is there, perfectly legible, but I can’t read it; I can only ask the protagonist to read it and offer her thoughts. It drives home that I’m experiencing the world through the eyes of the protagonist, who is their own person with their own impression of everything. Again, this is largely an emergent property of the game’s being designed in a culture that is not mine, but I’m left wondering how much thought went into this style of localization.

The game itself sees you wandering through a dark and twisted version of the protagonist’s school, collecting items and solving puzzles with them. There’s no direct combat, though some places feature a couple varieties of spirits called lingered which you have to carefully avoid. As the game progresses, the world starts to break down, alternating between increasingly abstract and increasingly concrete as we find out who the protagonist is and why she’s here.

The payoff is very personal and left a lasting impression… though as I look at the Wikipedia page now, it looks like the ending we got was the non-canon bad ending?! Well, hell. The bad ending is still great, then.

The whole game has a huge Silent Hill vibe, only without the combat and fog. Frankly, the genre might work better without combat; personal demons are more intimidating and meaningful when you can’t literally shoot them with a gun until they’re dead.

FINAL SCORE: 拾

Viatoree

short · platformer · sep 2013 · win · free on itch

I found this because @itchio tweeted about it, and the phrase “atmospheric platform exploration game” is the second most beautiful sequence of words in the English language.

The first paragraph on the itch.io page tells you the setup. That paragraph also contains more text than the entire game. In short: there are five things, and you need to find them. You can walk, jump, and extend your arms straight up to lift yourself to the ceiling. That’s it. No enemies, no shooting, no NPCs (more or less).

The result is, indeed, an atmospheric platform exploration game. The foreground is entirely 1-bit pixel art, save for the occasional white pixel to indicate someone’s eyes, and the background is only a few shades of the same purple hue. The game becomes less about playing and more about just looking at the environmental detail, appreciating how much texture the game manages to squeeze out of chunky colorless pixels. The world is still alive, too, much moreso than most platformers; tiny critters appear here and there, doing some wandering of their own, completely oblivious to you.

The game is really short, but it… just… makes me happy. I’m happy that this can exist, that not only is it okay for someone to make a very compact and short game, but that the result can still resonate with me. Not everything needs to be a sprawling epic or ask me to dedicate hours of time. It takes a few tiny ideas, runs with them, does what it came to do, and ends there. I love games like this.

That sounds silly to write out, but it’s been hard to get into my head! I do like experimenting, but I also feel compelled to reach for the grandiose, and grandiose experiment sounds more like mad science than creative exploration. For whatever reason, Viatoree convinced me that it’s okay to do a small thing, in a way that no other jam game has. It was probably the catalyst that led me to make Roguelike Simulator, and I thank it for that.

Unfortunately, we collected four of the five macguffins before hitting upon a puzzle we couldn’t make heads or tails of. After about ten minutes of fruitless searching, I decided to abandon this one unfinished, rather than bore my couch partner to tears. Maybe I’ll go take another stab at it after I post this.

FINAL SCORE: ●●●●○

Paletta

medium · puzzle story · nov 2017 · win · free on itch

Paletta, another RPG Maker work, won second place in the month-long Indie Game Maker Contest 2017. Nice! Apparently MOOP came in fourth in the same jam; also nice! I guess that’s why both of them ended up on the itch front page.

The game is set in a world drained of color, and you have to go restore it. Each land contains one lost color, and each color gives you a corresponding spell, which is generally used for some light puzzle-solving in further lands. It’s a very cute and light-hearted game, and it actually does an impressive job of obscuring its RPG Maker roots.

The world feels a little small to me, despite having fairly spacious maps. The progression is pretty linear: you enter one land, talk to a small handful of NPCs, solve the one puzzle, get the color, and move on. I think all the areas were continuously connected, too, which may have thrown me off a bit — these areas are described as though they were vast regions, but they’re all a hundred feet wide and nestled right next to each other.

I love playing with color as a concept, and I wish the game had run further with it somehow. Rescuing a color does add some color back to the world, but at times it seemed like the color that reappeared was somewhat arbitrary? It’s not like you rescue green and now all the green is back. Thinking back on it now, I wonder if each rescued color actually changed a fixed set of sprites from gray to colorized? But it’s been a month (oops) and now I’m not sure.

I’m not trying to pick on the authors for the brevity of their jam game, which is also the first game they’ve ever finished. I enjoyed playing it and found it plenty charming! It just happens that this time, what left the biggest impression on me was a nebulous feeling that something was missing. I think that’s still plenty important to ponder.

FINAL SCORE: ❤️💛💚💙💜

BitTorrent Client Transmission Suffers Remote Takeover Vulnerability

Post Syndicated from Ernesto original https://torrentfreak.com/bittorrent-client-transmission-suffers-remote-takeover-vulnerability-180116/

With millions of active users, Transmission is one of the most used BitTorrent clients around, particularly for Mac users.

The application has been around for more than a decade and has a great reputation. However, as with any other type of software, it is not immune to vulnerabilities.

One rather concerning flaw was made public by Google vulnerability researcher Tavis Ormandy a few days ago. The flaw allows outsiders to gain access to Transmission via DNS rebinding. This ultimately allows attackers to control the BitTorrent client and execute custom code.

Ormandy has published a patch, which was also shared with the private Transmission security list at the end of November. Transmission, however, has yet to address the issue in an update.

The relatively slow response was the reason why Ormandy decided to make it public before Project Zero’s usual 90-day window expired, Ars highlights. This allows other projects to address the vulnerability right away.

“I’m finding it frustrating that the transmission developers are not responding on their private security list,” Google’s vulnerability researcher writes. “I’ve never had an opensource project take this long to fix a vulnerability before, so I usually don’t even mention the 90 day limit if the vulnerability is in an open source project.”

A member of the Transmission developer team informed Ars that they will address this ASAP, noting that the issue only affects users who have remote control enabled with the default password. This means that people who disable it or change their password can easily ‘patch’ it until the official update comes out.

Interestingly, this isn’t the last BitTorrent related vulnerability Ormandy plans to expose. According to one of his tweets on the matter, this is just the “first of a few remote code execution flaws in various popular torrent clients.”

Judging from a message the researcher sent late November, uTorrent is on the list as well. Apparently, the company’s security email address wasn’t set up correctly at the time, so BitTorrent inventor Bram Cohen has been acting as a forwarding service.

uTorrent?

Eevee gained 2791 experience points

Post Syndicated from Eevee original https://eev.ee/blog/2018/01/15/eevee-gained-2791-experience-points/

Eevee grew to level 31!

A year strongly defined by mixed success! Also, a lot of video games.

I ran three game jams, resulting in a total of 157 games existing that may not have otherwise, which is totally mindblowing?!

For GAMES MADE QUICK???, glip and I made NEON PHASE, a short little exploratory platformer. Honestly, I should give myself more credit for this and the rest of the LÖVE games I’ve based on the same codebase — I wove a physics engine (and everything else!) from scratch and it has held up remarkably well for a variety of different uses.

I successfully finished an HD version of Isaac’s Descent using my LÖVE engine, though it doesn’t have anything new over the original and I’ve only released it as a tech demo on Patreon.

For Strawberry Jam (NSFW!) we made fox flux (slightly NSFW!), which felt like a huge milestone: the first game where I made all the art! I mean, not counting Isaac’s Descent, which was for a very limited platform. It’s a pretty arbitrary milestone, yes, but it feels significant. I’ve been working on expanding the game into a longer and slightly less buggy experience, but the art is taking the longest by far. I must’ve spent weeks on player sprites alone.

We then set about working on Bolthaven, a sequel of sorts to NEON PHASE, got decently far, and then abandoned it. Oops.

We then started a cute little PICO-8 game, and forgot about it. Oops.

I was recruited to help with Chaos Composer, a more ambitious game glip started with someone else in Unity. I had to get used to Unity, and we squabbled a bit, but the game is finally about at the point where it’s “playable” and “maps” can be designed? It’s slightly on hold at the moment while we all finish up some other stuff, though.

We made a birthday game for two of our friends whose birthdays were very close together! Only they got to see it.

For Ludum Dare 38, we made Lunar Depot 38, a little “wave shooter” or whatever you call those? The AI is pretty rough, seeing as this was the first time I’d really made enemies and I had 72 hours to figure out how to do it, but I still think it’s pretty fun to play and I love the circular world.

I made Roguelike Simulator as an experiment with making something small and quick with a simple tool, and I had a lot of fun! I definitely want to do more stuff like this in the future.

And now we’re working on a game about Star Anise, my cat’s self-insert, which is looking to have more polish and depth than anything we’ve done so far! We’ve definitely come a long way in a year.

Somewhere along the line, I put out a call for a “potluck” project, where everyone would give me sprites of a given size without knowing what anyone else had contributed, and I would then make a game using only those sprites. Unfortunately, that stalled a few times: I tried using the Phaser JS library, but we didn’t get along; I tried LÖVE, but didn’t know where to go with the game; and then I decided to use this as an experiment with procedural generation, and didn’t get around to it. I still feel bad that everyone did work for me and I didn’t follow through, but I don’t know whether this will ever become a game.

veekun, alas, consumed months of my life. I finally got Sun and Moon loaded, but it took weeks of work since I was basically reinventing all the tooling we’d ever had from scratch, without even having most of that tooling available as a reference. It was worth it in the end, at least: Ultra Sun and Ultra Moon only took a few days to get loaded. But veekun itself is still missing some obvious Sun/Moon features, and the whole site needs an overhaul, and I just don’t know if I want to dedicate that much time to it when I have so much other stuff going on that’s much more interesting to me right now.

I finally turned my blog into more of a website, giving it a neat front page that lists a bunch of stuff I’ve done. I made a release category at last, though I’m still not quite in the habit of using it.

I wrote some blog posts, of course! I think the most interesting were JavaScript got better while I wasn’t looking and Object models. I was also asked to write a couple pieces for money for a column that then promptly shut down.

On a whim, I made a set of Eevee mugshots for Doom, which I think is a decent indication of my (pixel) art progress over the year?

I started idchoppers, a Doom parsing and manipulation library written in Rust, though it didn’t get very far and I’ve spent most of the time fighting with Rust because it won’t let me implement all my extremely bad ideas. It can do a couple things, at least, like flip maps very quickly and render maps to SVG.

I did toy around with music a little, but not a lot.

I wrote two short twines for Flora. They’re okay. I’m working on another; I think it’ll be better.

I didn’t do a lot of art overall, at least compared to the two previous years; most of my art effort over the year has gone into fox flux, which requires me to learn a whole lot of things. I did dip my toes into 3D modelling, most notably producing my current Twitter banner as well as this cool Star Anise animation. I wouldn’t mind doing more of that; maybe I’ll even try to make a low-poly pixel-textured 3D game sometime.

I restarted my book with a much better concept, though so far I’ve only written about half a chapter. Argh. I see that the vast majority of the work was done within the span of a single week, which is bad since that means I only worked on it for a week, but good since that means I can actually do a pretty good amount of work in only a week. I also did a lot of squabbling with tooling, which is hopefully mostly out of the way now.

My computer broke? That was an exciting week.


A lot of stuff, but the year as a whole still feels hit or miss. All the time I spent on veekun feels like a black void in the middle of the year, which seems like a good sign that I maybe don’t want to pour even more weeks into it in the near future.

Mostly, I want to do: more games, more art, more writing, more music.

I want to try out some tiny game making tools and make some tiny games with them — partly to get exposure to different things, partly to get more little ideas out into the world regularly, and partly to get more practice at letting myself have ideas. I have a couple tools in mind and I guess I’ll aim at a microgame every two months or so? I’d also like to finish the expanded fox flux by the end of the year, of course, though at the moment I can’t even gauge how long it might take.

I seriously lapsed on drawing last year, largely because fox flux pixel art took me so much time. So I want to draw more, and I want to get much faster at pixel art. It would probably help if I had a more concrete goal for drawing, so I might try to draw some short comics and write a little visual novel or something, which would also force me to aim for consistency.

I want to work on my book more, of course, but I also want to try my hand at a bit more fiction. I’ve had a blast writing dialogue for our games! I just shy away from longer-form writing for some reason — which seems ridiculous when a large part of my audience found me through my blog. I do think I’ve had some sort of breakthrough in the last month or two; I suddenly feel a good bit more confident about writing in general and figuring out what I want to say? One recent post I know I wrote in a single afternoon, which virtually never happens because I keep rewriting and rearranging stuff. Again, a visual novel would be a good excuse to practice writing fiction without getting too bogged down in details.

And, ah, music. I shy heavily away from music, since I have no idea what I’m doing, and also I seem to spend a lot of time fighting with tools. (Surprise.) I tried out SunVox for the first time just a few days ago and have been enjoying it quite a bit for making sound effects, so I might try it for music as well. And once again, visual novel background music is a pretty low-pressure thing to compose for. Hell, visual novels are small games, too, so that checks all the boxes. I guess I’ll go make a visual novel.

Here’s to twenty gayteen!

Weekly roundup: Happy birthday

Post Syndicated from Eevee original https://eev.ee/dev/2018/01/15/weekly-roundup-happy-birthday/

It was my birthday! I need to write a birthday post argh.

  • anise!!: Surprise! Mostly Anise. I refactored dialogue to be a bit less of a hairball; started making item pickups actually work; decided to reverse a former decision and expand the world a little bit (which unfortunately means the world map doesn’t quite fit all on the screen at once, oh well); finally got around to making animated tiles work (!!!); experimented with making sound effects in SunVox, with mixed success; and did general work on level design, which takes far longer than I ever expected.

  • misc: I wrote a userscript to highlight the game being currently played at GDQ, though it’s not quite so useful now that GDQ is over.

    I realize I don’t really know where a tiny oneoff thing like this should live, and I’ve left a trail of a good few of them. Hmm. I guess I could’ve written a release post for it, but it also seems like it should be in an index of stuff somewhere…?

  • ???: ???

US Govt Brands Torrent, Streaming & Cyberlocker Sites As Notorious Markets

Post Syndicated from Andy original https://torrentfreak.com/us-govt-brands-torrent-streaming-cyberlocker-sites-as-notorious-markets-180115/

In its annual “Out-of-Cycle Review of Notorious Markets” the office of the United States Trade Representative (USTR) has listed a long list of websites said to be involved in online piracy.

The list is compiled with high-level input from various trade groups, including the MPAA and RIAA who both submitted their recommendations (1,2) during early October last year.

With the word “allegedly” used more than two dozen times in the report, the US government notes that its report does not constitute cast-iron proof of illegal activity. However, it urges the countries from where the so-called “notorious markets” operate to take action where they can, while putting owners and facilitators on notice that their activities are under the spotlight.

“A goal of the List is to motivate appropriate action by owners, operators, and service providers in the private sector of these and similar markets, as well as governments, to reduce piracy and counterfeiting,” the report reads.

“USTR highlights the following marketplaces because they exemplify global counterfeiting and piracy concerns and because the scale of infringing activity in these marketplaces can cause significant harm to U.S. intellectual property (IP) owners, consumers, legitimate online platforms, and the economy.”

The report begins with a page titled “Issue Focus: Illicit Streaming Devices”. Unsurprisingly, particularly given their place in dozens of headlines last year, the segment focuses on the set-top box phenomenon. The piece doesn’t list any specific apps or software tools but highlights the general position, claiming a cost to the US entertainment industry of $4-5 billion a year.

Torrent Sites

In common with previous years, the USTR goes on to list several of the world’s top torrent sites, but due to changes in circumstances, others have been delisted. ExtraTorrent, which shut down in May 2017, is one such example.

As the world’s most famous torrent site, The Pirate Bay gets a prominent mention, with the USTR noting that the site is of “symbolic importance as one of the longest-running and most vocal torrent sites.” The USTR underlines the site’s resilience by noting its hydra-like form while revealing an apparent secret concerning its hosting arrangements.

“The Pirate Bay has allegedly had more than a dozen domains hosted in various countries around the world, applies a reverse proxy service, and uses a hosting provider in Vietnam to evade further enforcement action,” the USTR notes.

Other torrent sites singled out for criticism include RARBG, which was nominated for the listing by the movie industry. According to the USTR, the site is hosted in Bosnia and Herzegovina and has changed hosting services to prevent shutdowns in recent years.

1337x.to and the meta-search engine Torrentz2 are also given a prime mention, with the USTR noting that they are “two of the most popular torrent sites that allegedly infringe U.S. content industry’s copyrights.” Russia’s RuTracker is also targeted for criticism, with the government noting that it’s now one of the most popular torrent sites in the world.

Streaming & Cyberlockers

While torrent sites are still important, the USTR reserves considerable space in its report for streaming portals and cyberlocker-type services.

4Shared.com, a file-hosting site that has been targeted by dozens of millions of copyright notices, is reportedly no longer able to use major US payment providers. Nevertheless, the British Virgin Islands company still collects significant sums from premium accounts, advertising, and offshore payment processors, USTR notes.

Cyberlocker Rapidgator gets another prominent mention in 2017, with the USTR noting that the Russian-hosted platform generates millions of dollars every year through premium memberships while employing rewards and affiliate schemes.

Due to its increasing popularity as a hosting and streaming operation, Openload.co (Romania) is now a big target for the USTR. “The site is used frequently in combination with add-ons in illicit streaming devices. In November 2017, users visited Openload.co a staggering 270 million times,” the USTR writes.

Owned by a Swiss company and hosted in the Netherlands, the popular site Uploaded is also criticized by the US alongside France’s 1Fichier.com, which allegedly hosts pirate games while being largely unresponsive to takedown notices. Dopefile.pk, a Pakistan-based storage outfit, is also highlighted.

On the video streaming front, it’s perhaps no surprise that the USTR focuses on sites like FMovies (Sweden), GoStream (Vietnam), Movie4K.tv (Russia) and PrimeWire. An organization collectively known as the MovShare group which encompasses Nowvideo.sx, WholeCloud.net, NowDownload.cd, MeWatchSeries.to and WatchSeries.ac, among others, is also listed.

Unauthorized music / research papers

While most of the above are either focused on video or feature it as part of their repertoire, other sites are listed for their attention to music. Convert2MP3.net is named as one of the most popular stream-ripping sites in the world and is highlighted due to the prevalence of YouTube-downloader sites and the 2017 demise of YouTube-MP3.

“Convert2MP3.net does not appear to have permission from YouTube or other sites and does not have permission from right holders for a wide variety of music represented by major U.S. labels,” the USTR notes.

Given the amount of attention the site has received in 2017 as ‘The Pirate Bay of Research’, Libgen.io and Sci-Hub.io (not to mention the endless proxy and mirror sites that facilitate access) are given a detailed mention in this year’s report.

“Together these sites make it possible to download — all without permission and without remunerating authors, publishers or researchers — millions of copyrighted books by commercial publishers and university presses; scientific, technical and medical journal articles; and publications of technological standards,” the USTR writes.

Service providers

But it’s not only sites that are being put under pressure. Following a growing list of nominations in previous years, Swiss service provider Private Layer is again singled out as a rogue player in the market for hosting 1337x.to and Torrentz2.eu, among others.

“While the exact configuration of websites changes from year to year, this is the fourth consecutive year that the List has stressed the significant international trade impact of Private Layer’s hosting services and the allegedly infringing sites it hosts,” the USTR notes.

“Other listed and nominated sites may also be hosted by Private Layer but are using reverse proxy services to obfuscate the true host from the public and from law enforcement.”

The USTR notes Switzerland’s efforts to close a legal loophole that restricts enforcement and looks forward to a positive outcome when the draft amendment is considered by parliament.

Perhaps a little surprisingly given its recent anti-piracy efforts and overtures to the US, Russia’s leading social network VK.com again gets a place on the new list. The USTR recognizes VK’s efforts but insists that more needs to be done.

Social networking and e-commerce

“In 2016, VK reached licensing agreements with major record companies, took steps to limit third-party applications dedicated to downloading infringing content from the site, and experimented with content recognition technologies,” the USTR writes.

“Despite these positive signals, VK reportedly continues to be a hub of infringing activity and the U.S. motion picture industry reports that they find thousands of infringing files on the site each month.”

Finally, in addition to traditional pirate sites, the US also lists online marketplaces that allegedly fail to meet appropriate standards. Re-added to the list in 2016 after a brief hiatus in 2015, China’s Alibaba is listed again in 2017. The development provoked an angry response from the company.

Describing his company as a “scapegoat”, Alibaba Group President Michael Evans said that his platform had achieved a 25% drop in takedown requests and has even been removing infringing listings before they make it online.

“In light of all this, it’s clear that no matter how much action we take and progress we make, the USTR is not actually interested in seeing tangible results,” Evans said in a statement.

The full list of sites in the Notorious Markets Report 2017 (pdf) can be found below.

– 1fichier.com – (cyberlocker)
– 4shared.com – (cyberlocker)
– convert2mp3.net – (stream-ripper)
– Dhgate.com (e-commerce)
– Dopefile.pk – (cyberlocker)
– Firestorm-servers.com (pirate gaming service)
– Fmovies.is, Fmovies.se, Fmovies.to – (streaming)
– Gostream.is, Gomovies.to, 123movieshd.to (streaming)
– Indiamart.com (e-commerce)
– Kinogo.club, kinogo.co (streaming host, platform)
– Libgen.io, sci-hub.io, libgen.pw, sci-hub.cc, sci-hub.bz, libgen.info, lib.rus.ec, bookfi.org, bookzz.org, booker.org, booksc.org, book4you.org, bookos-z1.org, booksee.org, b-ok.org (research downloads)
– Movshare Group – Nowvideo.sx, wholecloud.net, auroravid.to, bitvid.sx, nowdownload.ch, cloudtime.to, mewatchseries.to, watchseries.ac (streaming)
– Movie4k.tv (streaming)
– MP3VA.com (music)
– Openload.co (cyberlocker / streaming)
– 1337x.to (torrent site)
– Primewire.ag (streaming)
– Torrentz2, Torrentz2.me, Torrentz2.is (torrent site)
– Rarbg.to (torrent site)
– Rebel (domain company)
– Repelis.tv (movie and TV linking)
– RuTracker.org (torrent site)
– Rapidgator.net (cyberlocker)
– Taobao.com (e-commerce)
– The Pirate Bay (torrent site)
– TVPlus, TVBrowser, Kuaikan (streaming apps and addons, China)
– Uploaded.net (cyberlocker)
– VK.com (social networking)

A Look Back At 2017 – Tools & News Highlights

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/01/look-back-2017-tools-news-highlights/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

A Look Back At 2017 – Tools & News Highlights

So here we are in 2018, taking a look back at 2017 – quite a year it was. We somehow forgot to do this last year, so we just have the 2015 summary and the 2014 summary but no 2016 edition.

2017 News Stories

All kinds of things happened in 2017, starting with some pretty comical shit and the MongoDB Ransack – Over 33,000 Databases Hacked. I’ve personally had very poor experiences with MongoDB in general, and I did notice the sloppy defaults (listen on all interfaces, no password) when I used it. I believe the defaults have since been corrected – but I still don’t have a good impression of it.

Read the rest of A Look Back At 2017 – Tools & News Highlights now! Only available at Darknet.

Coalition Against Piracy Launches Landmark Case Against ‘Pirate’ Android Box Sellers

Post Syndicated from Andy original https://torrentfreak.com/coalition-against-piracy-launches-landmark-case-against-pirate-android-box-sellers-180112/

In 2017, anti-piracy enforcement went global when companies including Disney, HBO, Netflix, Amazon and NBCUniversal formed the Alliance for Creativity and Entertainment (ACE).

Soon after the Coalition Against Piracy (CAP) was announced. With a focus on Asia and backed by CASBAA, CAP counts many of the same companies among its members in addition to local TV providers such as StarHub.

From the outset, CAP has shown a keen interest in tackling unlicensed streaming, particularly that taking place via illicit set-top boxes stuffed with copyright-infringing apps and add-ons. One country under CAP’s spotlight is Singapore, where relevant law is said to be fuzzy at best, insufficient at worst. Now, however, a line in the sand might not be far away.

According to a court listing discovered by Singapore’s TodayOnline, today will see the Coalition Against Piracy’s general manager Neil Kevin Gane attempt to launch a pioneering private prosecution against set-top box distributor Synnex Trading and its client and wholesale goods retailer, An-Nahl.

Gane and CAP are said to be acting on behalf of four parties, one of which is TV giant StarHub, a company with a huge interest in bringing media piracy under control in the region. It’s reported that they have also named Synnex Trading director Jia Xiaofen and An-Nahl director Abdul Nagib as defendants in their private criminal case, after the parties failed to reach a settlement in an earlier process.

Contacted by TodayOnline, an employee of An-Nahl said the company no longer sells the boxes. However, Synnex is reportedly still selling them for S$219 each ($164) plus additional fees for maintenance and access to VOD. The company’s Facebook page is still active with the relevant offer presented prominently.

The importance of the case cannot be overstated. While StarHub and other broadcasters have successfully prosecuted cases where people unlawfully decrypted broadcast signals, the provision of unlicensed streams isn’t specifically tackled by Singapore’s legislation. It’s now a major source of piracy in the region, as it is elsewhere around the globe.

Only time will tell how the process will play out, but it’s clear that CAP and its members are prepared to invest significant sums into a prosecution in pursuit of a favorable outcome. CAP believes that the supply of the boxes falls under Section 136 (3A) of the Copyright Act, but that has yet to be tested in court.

Last December, CAP separately called on the Singapore government to not only block ‘pirate’ streaming software but also unlicensed streams from entering the country.

“Within the Asia-Pacific region, Singapore is the worst in terms of availability of illicit streaming devices,” said CAP General Manager Neil Gane. “They have access to hundreds of illicit broadcasts of channels and video-on-demand content.”

CAP’s 21 members want the authorities to block the software inside devices that enables piracy but it’s far from clear how that can be achieved.

Update: The four companies taking the action are confirmed as Singtel, StarHub, Fox Network, and the English Premier League.

Announcing our new beta for the AWS Certified Security – Specialty exam

Post Syndicated from Janna Pellegrino original https://aws.amazon.com/blogs/architecture/announcing-our-new-beta-for-the-aws-certified-security-specialty-exam/

Take the AWS Certified Security – Specialty beta exam for the chance to be among the first to hold this new AWS Certification. This beta exam allows experienced cloud security professionals to demonstrate and validate their expertise. Register today – this beta exam will only be available from January 15 to March 2!

About the exam

This beta exam validates that the successful candidate can effectively demonstrate knowledge of how to secure the AWS platform. The exam covers incident response, logging and monitoring, infrastructure security, identity and access management, and data protection.

The exam validates:

  • Familiarity with regional- and country-specific security and compliance regulations and meta issues that these regulations embody.
  • An understanding of specialized data classifications and AWS data protection mechanisms.
  • An understanding of data encryption methods and AWS mechanisms to implement them.
  • An understanding of secure Internet protocols and AWS mechanisms to implement them.
  • A working knowledge of AWS security services and features of services to provide a secure production environment.
  • Competency gained from two or more years of production deployment experience using AWS security services and features.
  • Ability to make tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements.
  • An understanding of security operations and risk.

Learn more and register >>

Who is eligible

The beta is open to anyone who currently holds an Associate or Cloud Practitioner certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

How to prepare

We have training and other resources to help you prepare for the beta exam:

AWS Security Fundamentals | Digital | 3 Hours
This course introduces you to fundamental cloud computing and AWS security concepts, including AWS access control and management, governance, logging, and encryption methods. It also covers security-related compliance protocols and risk management strategies, as well as procedures related to auditing your AWS security infrastructure.

Security Operations on AWS | Classroom | 3 Days
This course demonstrates how to efficiently use AWS security services to stay secure and compliant in the AWS Cloud. The course focuses on the AWS-recommended security best practices that you can implement to enhance the security of your data and systems in the cloud. The course highlights the security features of AWS key services including compute, storage, networking, and database services.

Online resources for Cloud Security and Compliance

Review documentation, whitepapers, and articles & tutorials related to cloud security and compliance.

Learn more and register >>

Please contact us if you have questions about exam registration.

Good luck!

Validate Your IT Security Expertise with the New AWS Certified Security – Specialty Beta Exam

Post Syndicated from Sara Snedeker original https://aws.amazon.com/blogs/security/validate-your-it-security-expertise-with-the-new-aws-certified-security-specialty-beta-exam/

If you are an experienced cloud security professional, you can demonstrate and validate your expertise with the new AWS Certified Security – Specialty beta exam. This exam allows you to demonstrate your knowledge of incident response, logging and monitoring, infrastructure security, identity and access management, and data protection. Register today – this beta exam will be available only from January 15 to March 2, 2018.

By taking this exam, you can validate your:

  • Familiarity with region-specific and country-specific security and compliance regulations and meta issues that these regulations include.
  • Understanding of data encryption methods and secure internet protocols, and the AWS mechanisms to implement them.
  • Working knowledge of AWS security services to provide a secure production environment.
  • Ability to make trade-off decisions with regard to cost, security, and deployment complexity when given a set of application requirements.

See the full list of security knowledge you can validate by taking this beta exam.

Who is eligible?

The beta exam is open to anyone who currently holds an AWS Associate or Cloud Practitioner certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

How to prepare

You can take the following courses and use AWS cloud security resources and compliance resources to prepare for this exam.

AWS Security Fundamentals (digital, 3 hours)
This digital course introduces you to fundamental cloud computing and AWS security concepts, including AWS access control and management, governance, logging, and encryption methods. It also covers security-related compliance protocols and risk management strategies, as well as procedures related to auditing your AWS security infrastructure.

Security Operations on AWS (classroom, 3 days)
This instructor-led course demonstrates how to efficiently use AWS security services to help stay secure and compliant in the AWS Cloud. The course focuses on the AWS-recommended security best practices that you can implement to enhance the security of your AWS resources. The course highlights the security features of AWS compute, storage, networking, and database services.

If you have questions about this new beta exam, contact us.

Good luck with the exam!

– Sara

Graphite 1.1: Teaching an Old Dog New Tricks

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/01/11/graphite-1.1-teaching-an-old-dog-new-tricks/

The Road to Graphite 1.1

I started working on Graphite just over a year ago, when @obfuscurity asked me to help out with some issues blocking the Graphite 1.0 release. Little did I know that a year later, that would have resulted in 262 commits (and counting), and that with the help of the other Graphite maintainers (especially @deniszh, @iksaif & @cbowman0) we would have added a huge amount of new functionality to Graphite.

There are a huge number of new additions and updates in this release. In this post I’ll give a tour of some of the highlights, including tag support, syntax and function updates, custom function plugins, and Python 3.x support.

Tagging!

The single biggest feature in this release is the addition of tag support, which brings the ability to describe metrics in a much richer way and to write more flexible and expressive queries.

Traditionally series in Graphite are identified using a hierarchical naming scheme based on dot-separated segments called nodes. This works very well and is simple to map into a hierarchical structure like the whisper filesystem tree, but it means that the user has to know what each segment represents, and makes it very difficult to modify or extend the naming scheme since everything is based on the positions of the segments within the hierarchy.

The tagging system gives users the ability to encode information about the series in a collection of tag=value pairs which are used together with the series name to uniquely identify each series, and the ability to query series by specifying tag-based matching expressions rather than constructing glob-style selectors based on the positions of specific segments within the hierarchy. This is broadly similar to the system used by Prometheus and makes it possible to use Graphite as a long-term storage backend for metrics gathered by Prometheus with full tag support.

When using tags, series names are specified using the new tagged carbon format: name;tag1=value1;tag2=value2. This format is backward compatible with most existing carbon tooling, and makes it easy to adapt existing tools to produce tagged metrics simply by changing the metric names. The OpenMetrics format is also supported for ingestion, and is normalized into the standard Graphite format internally.
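To get a concrete feel for the format, here is a minimal sketch (not taken from the Graphite docs) that sends one tagged metric to a carbon instance over the plaintext protocol; the host and port are assumptions for illustration, matching carbon’s default plaintext listener.

import socket
import time

CARBON_HOST, CARBON_PORT = "localhost", 2003

def send_tagged_metric(name, tags, value, timestamp=None):
    # Build the tagged series name: name;tag1=value1;tag2=value2
    series = name + "".join(";%s=%s" % (k, v) for k, v in sorted(tags.items()))
    line = "%s %s %d\n" % (series, value, timestamp or int(time.time()))
    with socket.create_connection((CARBON_HOST, CARBON_PORT)) as sock:
        sock.sendall(line.encode("utf-8"))

send_tagged_metric("disk.used", {"datacenter": "dc1", "server": "web01"}, 42.5)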

At its core, the tagging system is implemented as a tag database (TagDB) alongside the metrics that allows them to be efficiently queried by individual tag values rather than having to traverse the metrics tree looking for series that match the specified query. Internally the tag index is stored in one of a number of pluggable tag databases; the currently supported options are the internal graphite-web database, Redis, or an external system that implements the Graphite tagging HTTP API. Carbon automatically keeps the index up to date with any tagged series seen.

The new seriesByTag function is used to query the TagDB and will return a list of all the series that match the expressions passed to it. seriesByTag supports both exact and regular expression matches, and can be used anywhere you would previously have specified a metric name or glob expression.

There are new dedicated functions for grouping and aliasing series by tag (groupByTags and aliasByTags), and you can also use tags interchangeably with node numbers in the standard Graphite functions like aliasByNode, groupByNodes, asPercent, mapSeries, etc.
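As a quick illustration of how these pieces fit together, here is a hedged sketch of querying tagged series through the render API from Python; the Graphite URL is an assumption, and the tag names match the example format above.

import json
from urllib.parse import urlencode
from urllib.request import urlopen

GRAPHITE_URL = "http://localhost:8080/render"  # assumed graphite-web endpoint

# Average disk.used per datacenter over the last hour, selecting series by tag.
target = "groupByTags(seriesByTag('name=disk.used', 'datacenter=~dc[12]'), 'average', 'datacenter')"
params = urlencode({"target": target, "from": "-1h", "format": "json"})

with urlopen("%s?%s" % (GRAPHITE_URL, params)) as resp:
    for series in json.load(resp):
        print(series["target"], len(series["datapoints"]))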

Piping Syntax & Function Updates

One of the huge strengths of the Graphite render API is the ability to chain together multiple functions to process data, but until now (unless you were using a tool like Grafana) writing chained queries could be painful as each function had to be wrapped around the previous one. With this release it is now possible to “pipe” the output of one processing function into the next, and to combine piped and nested functions.

For example:

alias(movingAverage(scaleToSeconds(sumSeries(stats_global.production.counters.api.requests.*.count),60),30),'api.avg')

Can now be written as:

sumSeries(stats_global.production.counters.api.requests.*.count)|scaleToSeconds(60)|movingAverage(30)|alias('api.avg')

OR

stats_global.production.counters.api.requests.*.count|sumSeries()|scaleToSeconds(60)|movingAverage(30)|alias('api.avg')

Another source of frustration with the old function API was the inconsistent implementation of aggregations, with different functions being used in different parts of the API, and some functions simply not being available. In 1.1, all functions that perform aggregation (whether across series or across time intervals) now support a consistent set of aggregations: average, median, sum, min, max, diff, stddev, count, range, multiply and last. This is part of a new approach to implementing functions that emphasises using shared building blocks to ensure consistency across the API and solve the problem of a particular function not working with the aggregation needed for a given task.

To that end, a number of new functions have been added that each provide the same functionality as an entire family of “old” functions: aggregate, aggregateWithWildcards, movingWindow, filterSeries, highest, lowest and sortBy.

Each of these functions accepts an aggregation method parameter, for example aggregate(some.metric.*, 'sum') implements the same functionality as sumSeries(some.metric.*).

It can also be used with different aggregation methods to replace averageSeries, stddevSeries, multiplySeries, diffSeries, rangeOfSeries, minSeries, maxSeries and countSeries. All those functions are now implemented as aliases for aggregate, and it supports the previously-missing median and last aggregations.

The same is true for the other functions, and the summarize, smartSummarize, groupByNode, groupByNodes and the new groupByTags functions now all support the standard set of aggregations. Gone are the days of wishing that sortByMedian or highestRange were available!

For more information on the functions available check the function documentation.

Custom Functions

No matter how many functions are available there are always going to be specific use-cases where a custom function can perform analysis that wouldn’t otherwise be possible, or provide a convenient alias for a complicated function chain or specific set of parameters.

In Graphite 1.1 we added support for easily adding one-off custom functions, as well as for creating and sharing plugins that can provide one or more functions.

Each function plugin is packaged as a simple python module, and will be automatically loaded by Graphite when placed into the functions/custom folder.

An example of a simple function plugin that translates the name of every series passed to it into UPPERCASE:

from graphite.functions.params import Param, ParamTypes

def toUpperCase(requestContext, seriesList):
  """Custom function that changes series names to UPPERCASE"""
  for series in seriesList:
    series.name = series.name.upper()
  return seriesList

toUpperCase.group = 'Custom'
toUpperCase.params = [
  Param('seriesList', ParamTypes.seriesList, required=True),
]

SeriesFunctions = {
  'upper': toUpperCase,
}

Once installed, the function is not only available for use within Graphite, but is also exposed via the new Function API which allows the function definition and documentation to be automatically loaded by tools like Grafana. This means that users will be able to select and use the new function in exactly the same way as the internal functions.

More information on writing and using custom functions is available in the documentation.

Clustering Updates

One of the biggest changes from the 0.9 to 1.0 releases was the overhaul of the clustering code, and with 1.1.1 that process has been taken even further to optimize performance when using Graphite in a clustered deployment. In the past it was common for a request to require the frontend node to make multiple requests to the backend nodes to identify matching series and to fetch data, and the code for handling remote vs local series was overly complicated. In 1.1.1 we took a new approach where all render data requests pass through the same path internally, and multiple backend nodes are handled individually rather than grouped together into a single finder. This has greatly simplified the codebase, making it much easier to understand and reason about, while allowing much more flexibility in design of the finders. After these changes, render requests can now be answered with a single internal request to each backend node, and all requests for both remote and local data are executed in parallel.

To maintain the ability of Graphite to scale out horizontally, the tagging system works seamlessly within a clustered environment, with each node responsible for the series stored on that node. Calls to load tagged series via seriesByTag are fanned out to the backend nodes and results are merged on the query node just like they are for non-tagged series.

Python 3 & Django 1.11 Support

Graphite 1.1 finally brings support for Python 3.x; both graphite-web and carbon are now tested against Python 2.7, 3.4, 3.5, 3.6 and PyPy. Django releases 1.8 through 1.11 are also supported. Sorting out the compatibility issues between Python 2.x and 3.x took a substantial amount of work, but it is a huge step forward for the long term support of the project! With the new Django 2.x series supporting only Python 3.x we will need to evaluate our long-term support for Python 2.x, but the Django 1.11 series is supported through 2020 so there is time to consider the options there.

Watch This Space

Efforts are underway to add support for the new functionality across the ecosystem of tools that work with Graphite: adding collectd tagging support, Prometheus remote read & write with tags (and native Prometheus remote read/write support in Graphite), and last but not least Graphite tag support in Grafana.

We’re excited about the possibilities that the new capabilities in 1.1.x open up, and can’t wait to see how the community puts them to work.

Download the 1.1.1 release and check out the release notes here.

Judge Issues Devastating Order Against BitTorrent Copyright Troll

Post Syndicated from Ernesto original https://torrentfreak.com/judge-issues-devastating-order-bittorrent-copyright-troll-180110/

In recent years, file-sharers around the world have been pressured to pay significant settlement fees, or face legal repercussions.

These so-called “copyright trolling” efforts have been a common occurrence in the United States since the turn of the last decade.

Increasingly, however, courts are growing weary of these cases. Many districts have turned into no-go zones for copyright trolls, and the people behind Prenda Law were arrested and are being prosecuted in a criminal case.

In the Western District of Washington, the tide also appears to have turned. After Venice PI, a copyright holder of the film “Once Upon a Time in Venice”, sued a man who later passed away, concerns were raised over the validity of the evidence.

Venice PI responded to the concerns with a declaration explaining its data gathering technique and assuring the Court that false positives are out of the question.

That testimony didn’t help much though, as a recently filed minute order shows this week. The order applies to a dozen cases and prohibits the company from reaching out to any defendants until further notice, as there are several alarming issues that have to be resolved first.

One of the problems is that Venice PI declared that it’s owned by a company named Lost Dog Productions, which in turn is owned by Voltage Productions. Interestingly, these companies don’t appear in the usual records.

“A search of the California Secretary of State’s online database, however, reveals no registered entity with the name ‘Lost Dog’ or ‘Lost Dog Productions’,” the Court notes.

“Moreover, although ‘Voltage Pictures, LLC’ is registered with the California Secretary of State, and has the same address as Venice PI, LLC, the parent company named in plaintiff’s corporate disclosure form, ‘Voltage Productions, LLC,’ cannot be found in the California Secretary of State’s online database and does not appear to exist.”

In other words, the company that filed the lawsuit, as well as its parent company, are extremely questionable.

While the above is a reason for concern, it’s just the tip of the iceberg. The Court not only points out administrative errors, but it also has serious doubts about the evidence collection process. This was carried out by the German company MaverickEye, which used the tracking technology of another German company, GuardaLey.

GuardaLey CEO Benjamin Perino, who claims that he coded the tracking software, wrote a declaration explaining that the infringement detection system at issue “cannot yield a false positive.” However, the Court doubts this statement and Perino’s qualifications in general.

“Perino has been proffered as an expert, but his qualifications consist of a technical high school education and work experience unrelated to the peer-to-peer file-sharing technology known as BitTorrent,” the Court writes.

“Perino does not have the qualifications necessary to be considered an expert in the field in question, and his opinion that the surveillance program is incapable of error is both contrary to common sense and inconsistent with plaintiff’s counsel’s conduct in other matters in this district. Plaintiff has not submitted an adequate offer of proof”

It seems like the Court would prefer to see an assessment from a qualified independent expert instead of the person who wrote the software. For now, this means that the IP-address evidence, in these cases, is not good enough. That’s quite a blow for the copyright holder.

If that wasn’t enough the Court also highlights another issue that’s possibly even more problematic. When Venice PI requested the subpoenas to identify alleged pirates, they relied on declarations from Daniel Arheidt, a consultant for MaverickEye.

These declarations fail to mention, however, that MaverickEye has the proper paperwork to collect IP addresses.

“Nowhere in Arheidt’s declarations does he indicate that either he or MaverickEye is licensed in Washington to conduct private investigation work,” the order reads.

This is important, as doing private investigator work without a license is a gross misdemeanor in Washington. The copyright holder was aware of this requirement because it was brought up in related cases in the past.

“Plaintiff’s counsel has apparently been aware since October 2016, when he received a letter concerning LHF Productions, Inc. v. Collins, C16-1017 RSM, that Arheidt might be committing a crime by engaging in unlicensed surveillance of Washington citizens, but he did not disclose this fact to the Court.”

The order is very bad news for Venice PI. The company had hoped to score a few dozen easy settlements but the tables have now been turned. The Court instead asks the company to explain the deficiencies and provide additional details. In the meantime, the copyright holder is urged not to spend or transfer any of the settlement money that has been collected thus far.

The latter indicates that Venice PI might have to hand defendants their money back, which would be pretty unique.

The order suggests that the Judge is very suspicious of these trolling activities. In a footnote there’s a link to a Fight Copyright Trolls article which revealed that the same counsel dismissed several cases, allegedly to avoid having IP-address evidence scrutinized.

Even more bizarrely, in another footnote the Court also doubts if MaverickEye’s aforementioned consultant, Daniel Arheidt, actually exists.

“The Court has recently become aware that Arheidt is the latest in a series of German declarants (Darren M. Griffin, Daniel Macek, Daniel Susac, Tobias Fieser, Michael Patzer) who might be aliases or even fictitious.

“Plaintiff will not be permitted to rely on Arheidt’s declarations or underlying data without explaining to the Court’s satisfaction Arheidt’s relationship to the above-listed declarants and producing proof beyond a reasonable doubt of Arheidt’s existence,” the court adds.

These are serious allegations, to say the least.

If a copyright holder uses non-existent companies and questionable testimony from unqualified experts after obtaining evidence illegally to get a subpoena backed by a fictitious person….something’s not quite right.

A copy of the minute order, which affects a series of cases, is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Weekly roundup: AOOOWR

Post Syndicated from Eevee original https://eev.ee/dev/2018/01/09/weekly-roundup-aooowr/

  • anise!!: Work continues! glip is busy with a big Flora update, so I’m left to just do code things in the meantime. I did some refactoring I’d been wanting to do for months (splitting apart the “map” and the “world” and the scene that draws the world), drew some final-ish menu art (it looks so good), switched to a vastly more accurate way to integrate position, added a bunch of transitions that make the game feel way more polished, and drew some pretty slick dialogue boxes. Nice!

    I’ll be continuing to work on this game during GAMES MADE QUICK??? 2.0, my jam for making games while watching AGDQ all week! Maybe join if you’re watching AGDQ all week!

  • art: I tried drawing a picture and this time I liked it. I also drew the header art for the aforementioned game jam, though I didn’t have time to finish it, but I think I pulled off a deliberate-looking scratchy sketchy style that’s appropriate for a game jam? Sure we’ll go with that.

  • blog: I finished a post about picking random numbers and a post about how game physics cheat. Which, ah, catches me up for December! Heck! I think I’ve found a slightly more casual style that feels easier to get down, though?

  • writing: I finally wrangled a sensible outline for a Twine I’ve been dragging my feet on, so now I don’t have any excuses! Oh no!

AWS Online Tech Talks – January 2018

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-january-2018/

Happy New Year! Kick off 2018 right by expanding your AWS knowledge with a great batch of new Tech Talks. We’re covering some of the biggest launches from re:Invent including Amazon Neptune, Amazon Rekognition Video, AWS Fargate, AWS Cloud9, Amazon Kinesis Video Streams, AWS PrivateLink, AWS Single Sign-On and more!

January 2018 – Schedule

Noted below are the upcoming scheduled live, online technical sessions being held during the month of January. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.

Webinars featured this month are:

Monday, January 22

Analytics & Big Data
11:00 AM – 11:45 AM PT Analyze your Data Lake, Fast @ Any Scale  Lvl 300

Database
01:00 PM – 01:45 PM PT Deep Dive on Amazon Neptune Lvl 200

Tuesday, January 23

Artificial Intelligence
9:00 AM – 09:45 AM PT  How to get the most out of Amazon Rekognition Video, a deep learning based video analysis service Lvl 300

Containers
11:00 AM – 11:45 AM PT Introducing AWS Fargate Lvl 200

Serverless
01:00 PM – 02:00 PM PT Overview of Serverless Application Deployment Patterns Lvl 400

Wednesday, January 24

DevOps
09:00 AM – 09:45 AM PT Introducing AWS Cloud9  Lvl 200

Analytics & Big Data
11:00 AM – 11:45 AM PT Deep Dive: Amazon Kinesis Video Streams Lvl 300

Database
01:00 PM – 01:45 PM PT Introducing Amazon Aurora with PostgreSQL Compatibility Lvl 200

Thursday, January 25

Artificial Intelligence
09:00 AM – 09:45 AM PT Introducing Amazon SageMaker Lvl 200

Mobile
11:00 AM – 11:45 AM PT Ionic and React Hybrid Web/Native Mobile Applications with Mobile Hub Lvl 200

IoT
01:00 PM – 01:45 PM PT Connected Product Development: Secure Cloud & Local Connectivity for Microcontroller-based Devices Lvl 200

Monday, January 29

Enterprise
11:00 AM – 11:45 AM PT Enterprise Solutions Best Practices 100 Achieving Business Value with AWS Lvl 100

Compute
01:00 PM – 01:45 PM PT Introduction to Amazon Lightsail Lvl 200

Tuesday, January 30

Security, Identity & Compliance
09:00 AM – 09:45 AM PT Introducing Managed Rules for AWS WAF Lvl 200

Storage
11:00 AM – 11:45 AM PT  Improving Backup & DR – AWS Storage Gateway Lvl 300

Compute
01:00 PM – 01:45 PM PT  Introducing the New Simplified Access Model for EC2 Spot Instances Lvl 200

Wednesday, January 31

Networking
09:00 AM – 09:45 AM PT  Deep Dive on AWS PrivateLink Lvl 300

Enterprise
11:00 AM – 11:45 AM PT Preparing Your Team for a Cloud Transformation Lvl 200

Compute
01:00 PM – 01:45 PM PT  The Nitro Project: Next-Generation EC2 Infrastructure Lvl 300

Thursday, February 1

Security, Identity & Compliance
09:00 AM – 09:45 AM PT  Deep Dive on AWS Single Sign-On Lvl 300

Storage
11:00 AM – 11:45 AM PT How to Build a Data Lake in Amazon S3 & Amazon Glacier Lvl 300

Sky Hits Man With £5k ‘Fine’ For Pirating Boxing on Facebook

Post Syndicated from Andy original https://torrentfreak.com/sky-hits-man-with-5k-fine-for-pirating-boxing-on-facebook-180108/

When people download content online using BitTorrent, they also distribute that content to others. This unlawful distribution attracts negative attention from rightsholders, who have sued hundreds of thousands of individuals worldwide.

Streaming is considered a much safer method to obtain content, since it’s difficult for content owners to track downloaders. However, the same can’t be said about those who stream content to the web for the benefit of others, as an interesting case in the UK has just revealed.

It involves 34-year-old Craig Foster who received several scary letters from lawyers representing broadcaster Sky. The company alleged that during last April’s bout between Anthony Joshua and Wladimir Klitschko, Foster live-streamed the multiple world title fight on Facebook Live.

Financially, this was a major problem for Sky, law firm Foot Anstey LLP told Foster. According to their calculations, at least 4,250 people watched the stream without paying Sky Box Office the going rate of £19.95 each. By the broadcaster’s math, Foster owed the company £85,000.

But according to The Mirror, father-of-one Foster wasn’t actually to blame.

“I’d paid for the boxing, it wasn’t like I was making any money. My iPad was signed in to my Facebook account and my friend just started streaming the fight. I didn’t think anything of it, then a few days later they cut my subscription,” Foster said.

“They’re demanding the names and addresses of all my mates who were round that night but I’m not going to give them up. I said I’d take the rap.”

While Foster says he won’t turn in the culprit, there’s no doubt that the fight stream originated from his Sky account. The TV giant embeds watermarks in its broadcasts which enables it to see who paid for an event, should a copy of one turn up on the Internet.

As we reported last year following the Mayweather v McGregor super-fight, the codes are clearly visible with the naked eye.

Sky watermarks, as seen in the Mayweather v McGregor fight

While taking the rap for someone else’s infringing behavior isn’t something anyone should do lightly, it appears that Scarborough-based Foster did just that.

According to Neil Parkes, who specializes in media litigation, content protection and contentious IP at Foot Anstey, Foster accepted responsibility and agreed to pay a settlement.

“Mr Foster broke the law,” Parkes said. “He has acknowledged his wrongdoing, apologised and signed a legally binding agreement to pay a sum of £5,000 to Sky.”

The Mirror, however, has Foster backtracking. He says he wasn’t given enough time to consider his position and now wants to fight Sky in court.

“It’s heavy-handed. I’ve apologized and told them we were drunk,” Foster said.

“I know streaming the fight was wrong. I didn’t stop my friend but I was watching the boxing. I’m just a bloke who had a few drinks with his friends.”

Unless he can find a law firm willing to fight his corner at a hugely cut-down rate, Foster will find this kind of legal fisticuffs to be a massively expensive proposition, one in which he will start out as the clear underdog.

Not only was Foster’s Sky account the originating source, both his iPad and his Facebook account were used to stream the fight. On top of what appears to be a signed confession, he also promised not to do anything else like this in future. Furthermore, he even agreed to issue an apology that Sky can use in future anti-piracy messages.

Of course, Foster might indeed be a noble gentleman but he should be aware that as a civil matter, this fight would be decided on the balance of probabilities, not beyond reasonable doubt. If the judge decides 51% in Sky’s favor, he suffers a knockout along with a huge financial headache.

No one wants a £5,000 bill but that’s a drop in the ocean compared to the cost implications of losing this case.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Physics cheats

Post Syndicated from Eevee original https://eev.ee/blog/2018/01/06/physics-cheats/

Anonymous asks:

something about how we tweak physics to “work” better in games?

Ho ho! Work. Get it? Like in physics…?

Hitboxes

“Hitbox” is perhaps not the most accurate term, since the shape used for colliding with the environment and the shape used for detecting damage might be totally different. They’re usually the same in simple platformers, though, and that’s what most of my games have been.

The hitbox is the biggest physics fudge by far, and it exists because of a single massive approximation that (most) games make: you’re controlling a single entity in the abstract, not a physical body in great detail.

That is: when you walk with your real-world meat shell, you perform a complex dance of putting one foot in front of the other, a motion you spent years perfecting. When you walk in a video game, you press a single “walk” button. Your avatar may play an animation that moves its legs back and forth, but since you’re not actually controlling the legs independently (and since simulating them is way harder), the game just treats you like a simple shape. Fairly often, this is a box, or something very box-like.

An Eevee sprite standing on faux ground; the size of the underlying image and the hitbox are outlined

Since the player has no direct control over the exact placement of their limbs, it would be slightly frustrating to have them collide with the world. This is especially true in cases like the above, where the tail and left ear protrude significantly out from the main body. If that Eevee wanted to stand against a real-world wall, she would simply tilt her ear or tail out of the way, so there’s no reason for the ear to block her from standing against a game wall. To compensate for this, the ear and tail are left out of the collision box entirely and will simply jut into a wall if necessary — a goofy affordance that’s so common it doesn’t even register as unusual. As a bonus (assuming this same box is used for combat), she won’t take damage from projectiles that merely graze past an ear.

(One extra consideration for sprite games in particular: the hitbox ought to be horizontally symmetric around the sprite’s pivot — i.e. the point where the entity is truly considered to be standing — so that the hitbox doesn’t abruptly move when the entity turns around!)
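Here’s a tiny sketch of that idea with made-up numbers: the hitbox is defined relative to the pivot and kept horizontally symmetric, so flipping the sprite doesn’t move it, and the ears and tail stay outside it entirely.

class Entity:
    def __init__(self, x, y):
        self.x, self.y = x, y      # pivot: the point where the entity "stands"
        self.half_width = 8        # hitbox extends 8px to either side of the pivot
        self.height = 28           # shorter than the sprite; ears/tail poke out

    def hitbox(self):
        # (left, top, right, bottom) in world coordinates, y increasing downward
        return (self.x - self.half_width, self.y - self.height,
                self.x + self.half_width, self.y)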

Corners

Treating the player (and indeed most objects) as a box has one annoying side effect: boxes have corners. Corners can catch on other corners, even by a single pixel. Real-world bodies tend to be a bit rounder and squishier, and can tolerate grazing a corner; even real-world boxes will simply rotate a bit.

Ah, but in our faux physics world, we generally don’t want conscious actors (such as the player) to rotate, even with a realistic physics simulator! Real-world bodies are made of parts that will generally try to keep you upright, after all; you don’t tilt back and forth much.

One way to handle corners is to simply remove them from conscious actors. A hitbox doesn’t have to be a literal box, after all. A popular alternative — especially in Unity where it’s a standard asset — is the pill-shaped capsule, which has semicircles/hemispheres on the top and bottom and a cylindrical body in 3D. No corners, no problem.

Of course, that introduces a new problem: now the player can’t balance precariously on edges without their rounded bottom sliding them off. Alas.

If you’re stuck with corners, then, you may want to use a corner bump, a term I just made up. If the player would collide with a corner, but the collision is only by a few pixels, just nudge them to the side a bit and carry on.

An Eevee sprite trying to move sideways into a shallow ledge; the game bumps her upwards slightly, so she steps onto it instead

When the corner is horizontal, this creates stairs! This is, more or less kinda, how steps work in Doom: when the player tries to cross from one sector into another, if the height difference is 24 units or less, the game simply bumps them upwards to the height of the new floor and lets them continue on.

Implementing this in a game without Doom’s notion of sectors is a little trickier. In fact, I still haven’t done it. Collision detection based on rejection gets it for free, kinda, but it’s not very deterministic and it breaks other things. But that’s a whole other post.
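For what it’s worth, here’s a rough sketch of the nudge idea in a plain AABB world — purely illustrative, with made-up names and tolerance, and not code from any of these games.

MAX_BUMP = 4  # pixels of corner we're willing to step over

def offset(box, dx, dy):
    left, top, right, bottom = box
    return (left + dx, top + dy, right + dx, bottom + dy)

def try_move_x(actor, dx, collides):
    """collides(box) -> True if the box overlaps any solid geometry."""
    if not collides(offset(actor.hitbox(), dx, 0)):
        actor.x += dx
        return True
    # Blocked: see whether a small upward nudge clears the corner (y is down).
    for bump in range(1, MAX_BUMP + 1):
        if not collides(offset(actor.hitbox(), dx, -bump)):
            actor.x += dx
            actor.y -= bump
            return True
    return False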

Gravity

Gravity is pretty easy. Everything accelerates downwards all the time. What’s interesting are the exceptions.

Jumping

Jumping is a giant hack.

Think about how actual jumping works: you tense your legs, which generally involves bending your knees first, and then spring upwards. In a platformer, you can just leap whenever you feel like it, which is nonsense. Also you go like twenty feet into the air?

Worse, most platformers allow variable-height jumping, where your jump is lower if you let go of the jump button while you’re in the air. Normally, one would expect to have to decide how much force to put into the jump beforehand.

But of course this is about convenience of controls: when jumping is your primary action, you want to be able to do it immediately, without any windup for how high you want to jump.

(And then there’s double jumping? Come on.)
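A common way to fake it — a sketch with made-up constants, not any particular game’s tuning — is to launch at full speed and then cut the upward velocity when the button is released:

JUMP_SPEED = -300.0   # pixels/second; negative is up
JUMP_CUT = 0.4        # keep 40% of the remaining upward speed on release
GRAVITY = 900.0

def update_jump(player, dt, jump_pressed, jump_released):
    if jump_pressed and player.on_ground:
        player.vy = JUMP_SPEED
        player.on_ground = False
    if jump_released and player.vy < 0:
        player.vy *= JUMP_CUT    # variable-height jump: end the jump early
    player.vy += GRAVITY * dt    # gravity keeps pulling regardless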

Air control is a similar phenomenon: usually you’d jump in a particular direction by controlling how you push off the ground with your feet, but in a video game, you don’t have feet! You only have the box. The compromise is to let you control your horizontal movement to a limited degree in midair, even though that doesn’t make any sense. (It’s way more fun, though, and overall gives you more movement options, which are good to have in an interactive medium.)

Air control also exposes an obvious place that game physics collide with the realistic model of serious physics engines. I’ve mentioned this before, but: if you use Real Physics™ and air control yourself into a wall, you might find that you’ll simply stick to the wall until you let go of the movement buttons. Why? Remember, player movement acts as though an external force were pushing you around (and from the perspective of a Real™ physics engine, this is exactly how you’d implement it) — so air-controlling into a wall is equivalent to pushing a book against a wall with your hand, and the friction with the wall holds you in place. Oops.

Ground sticking

Another place game physics conflict with physics engines is with running to the top of a slope. On a real hill, of course, you land on top of the slope and are probably glad of it; slopes are hard to climb!

An Eevee moves to the top of a slope, and rather than step onto the flat top, she goes flying off into the air

In a video game, you go flying. Because you’re a box. With momentum. So you hit the peak and keep going in the same direction. Which is diagonally upwards.
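One common workaround (a hedged sketch of the general idea, not necessarily what any of my games do): if the actor was grounded last frame and isn’t moving upward, probe a short distance straight down after the horizontal move and snap to the ground if it’s within reach.

SNAP_DISTANCE = 6  # pixels; a made-up tolerance

def stick_to_ground(actor, ground_below):
    """ground_below(actor, max_dist) -> distance down to the ground, or None."""
    if actor.was_on_ground and actor.vy >= 0:
        dist = ground_below(actor, SNAP_DISTANCE)
        if dist is not None:
            actor.y += dist       # y increases downward
            actor.on_ground = True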

Projectiles

To make them more predictable, projectiles generally aren’t subject to gravity, at least as far as I’ve seen. The real world does not have such an exemption. The real world imposes gravity even on sniper rifles, which in a video game are often implemented as an instant trace unaffected by anything in the world because the bullet never actually exists in the world.

Resistance

Ah. Welcome to hell.

Water

Water is an interesting case, and offhand I don’t know the gritty details of how games implement it. In the real world, water applies a resistant drag force to movement — and that force is proportional to the square of velocity, which I’d completely forgotten until right now. I am almost positive that no game handles that correctly. But then, in real-world water, you can push against the water itself for movement, and games don’t simulate that either. What’s the rough equivalent?

The Sonic Physics Guide suggests that Sonic handles it by basically halving everything: acceleration, max speed, friction, etc. When Sonic enters water, his speed is cut; when Sonic exits water, his speed is increased.

That last bit feels validating — I could swear Metroid Prime did the same thing, and built my own solution around it, but couldn’t remember for sure. It makes no sense, of course, for a jump to become faster just because you happened to break the surface of the water, but it feels fantastic.

The thing I did was similar, except that I didn’t want to add a multiplier in a dozen places when you happen to be underwater (and remember which ones need it to be squared, etc.). So instead, I calculate everything completely as normal, so velocity is exactly the same as it would be on dry land — but the distance you would move gets halved. The effect seems to be pretty similar to most platformers with water, at least as far as I can tell. It hasn’t shown up in a published game and I only added this fairly recently, so I might be overlooking some reason this is a bad idea.

(One reason that comes to mind is that velocity is now a little white lie while underwater, so anything relying on velocity for interesting effects might be thrown off. Or maybe that’s correct, because velocity thresholds should be halved underwater too? Hm!)
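In code, the approach I’m describing is tiny — velocity is integrated exactly as on dry land, and only the distance actually travelled gets halved (a sketch, with the attribute names made up):

def integrate(actor, dt):
    actor.vx += actor.ax * dt
    actor.vy += actor.ay * dt
    scale = 0.5 if actor.underwater else 1.0   # velocity stays honest; movement is halved
    actor.x += actor.vx * dt * scale
    actor.y += actor.vy * dt * scale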

Notably, air is also a fluid, so it should behave the same way (just with different constants). I definitely don’t think any games apply air drag that’s proportional to the square of velocity.

Friction

Friction is, in my experience, a little handwaved. Probably because real-world friction is so darn complicated.

Consider that in the real world, we want very high friction on the surfaces we walk on — shoes and tires are explicitly designed to increase it, even. We move by bracing a back foot against the ground and using that to push ourselves forward, so we want the ground to resist our push as much as possible.

In a game world, we are a box. We move by being pushed by some invisible outside force, so if the friction between ourselves and the ground is too high, we won’t be able to move at all! That’s complete nonsense physically, but it turns out to be handy in some cases — for example, highish friction can simulate walking through deep mud, which should be difficult due to fluid drag and low friction.

But the best-known example of the fakeness of game friction is video game ice. Walking on real-world ice is difficult because the low friction means low grip; your feet are likely to slip out from under you, and you’ll simply fall down and have trouble moving at all. In a video game, you can’t fall down, so you have the opposite experience: you spend most of your time sliding around uncontrollably. Yet ice is so common in video games (and perhaps so uncommon in places I’ve lived) that I, at least, had never really thought about this disparity until an hour or so ago.

Game friction vs real-world friction

Real-world friction is a force. It’s the normal force (which is the force exerted by the object on the surface) times some constant that depends on how the two materials interact.

Force is mass times acceleration, and platformers often ignore mass, so friction ought to be an acceleration — applied against the object’s movement, but never enough to push it backwards.

I haven’t made any games where variable friction plays a significant role, but my gut instinct is that low friction should mean the player accelerates more slowly but has a higher max speed, and high friction should mean the opposite. I see from my own source code that I didn’t even do what I just said, so let’s defer to some better-made and well-documented games: Sonic and Doom.

In Sonic, friction is a fixed value subtracted from the player’s velocity (regardless of direction) each tic. Sonic has a fixed framerate, so the units are really pixels per tic squared (i.e. acceleration), multiplied by an implicit 1 tic per tic. So far, so good.

But Sonic’s friction only applies if the player isn’t pressing left or right. Hang on, that isn’t friction at all; that’s just deceleration! That’s equivalent to jogging to a stop. If friction were lower, Sonic would take longer to stop, but otherwise this is only tangentially related to friction.

(In fairness, this approach would decently emulate friction for non-conscious sliding objects, which are never going to be pressing movement buttons. Also, we don’t have the Sonic source code, and the name “friction” is a fan invention; the Sonic Physics Guide already uses “deceleration” to describe the player’s acceleration when turning around.)

Okay, let’s try Doom. In Doom, the default friction is 90.625%.

Hang on, what?

Yes, in Doom, friction is a multiplier applied every tic. Doom runs at 35 tics per second, so this is a multiplier of 0.032 per second. Yikes!

This isn’t anything remotely like real friction, but it’s much easier to implement. With friction as acceleration, the game has to know both the direction of movement (so it can apply friction in the opposite direction) and the magnitude (so it doesn’t overshoot and launch the object in the other direction). That means taking a semi-costly square root and also writing extra code to cap the amount of friction. With a multiplier, neither is necessary; just multiply the whole velocity vector and you’re done.
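Here’s the difference in one dimension (a sketch; the deceleration constant is made up, the 0.90625 is Doom’s default):

import math

def friction_as_acceleration(v, decel=0.5):
    # Subtract a fixed amount per tic, capped so we never overshoot past zero.
    return math.copysign(max(abs(v) - decel, 0.0), v)

def friction_as_multiplier(v, friction=0.90625):
    # Doom-style: just scale the velocity; it never quite reaches zero.
    return v * friction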

There are some downsides. One is that objects will never actually stop, since multiplying by 3% repeatedly will never produce a result of zero — though eventually the speed will become small enough to either slip below a “minimum speed” threshold or simply no longer fit in a float representation. Another is that the units are fairly meaningless: with Doom’s default friction of 90.625%, about how long does it take for the player to stop? I have no idea, partly because “stop” is ambiguous here! If friction were an acceleration, I could divide it into the player’s max speed to get a time.

All that aside, what are the actual effects of changing Doom’s friction? What an excellent question that’s surprisingly tricky to answer. (Note that friction can’t be changed in original Doom, only in the Boom port and its derivatives.) Here’s what I’ve pieced together.

Doom’s “friction” is really two values. “Friction” itself is a multiplier applied to moving objects on every tic, but there’s also a move factor which defaults to \(\frac{1}{32} = 0.03125\) and is derived from friction for custom values.

Every tic, the player’s velocity is multiplied by friction, and then increased by their speed times the move factor.

$$
v(n) = v(n-1) \times \mathit{friction} + \mathit{speed} \times \mathit{move\ factor}
$$

Eventually, the reduction from friction will balance out the speed boost. That happens when \(v(n) = v(n-1)\), so we can rearrange it to find the player’s effective max speed:

$$
v = v \times \mathit{friction} + \mathit{speed} \times \mathit{move\ factor} \\
v - v \times \mathit{friction} = \mathit{speed} \times \mathit{move\ factor} \\
v = \mathit{speed} \times \frac{\mathit{move\ factor}}{1 - \mathit{friction}}
$$

For vanilla Doom’s move factor of 0.03125 and friction of 0.90625, that becomes:

$$
v = \mathit{speed} \times \frac{\frac{1}{32}}{1 - \frac{29}{32}} = \mathit{speed} \times \frac{\frac{1}{32}}{\frac{3}{32}} = \frac{1}{3} \times \mathit{speed}
$$

Curiously, “speed” is three times the maximum speed an actor can actually move. Doomguy’s run speed is 50, so in practice he moves a third of that, or 16⅔ units per tic. (Of course, this isn’t counting SR40, a bug that lets Doomguy run ~40% faster than intended diagonally.)
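That’s easy to double-check numerically:

friction = 29 / 32      # 0.90625
move_factor = 1 / 32    # 0.03125
speed = 50              # Doomguy's run "speed"

stable = speed * move_factor / (1 - friction)
print(stable)           # 16.666..., i.e. a third of 50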

So now, what if you change friction? Even more curiously, the move factor is calculated completely differently depending on whether friction is higher or lower than the default Doom amount:

$$
\mathit{move\ factor} = \begin{cases}
\frac{133 - 128 \times \mathit{friction}}{544} &\approx 0.244 - 0.235 \times \mathit{friction} & \text{ if } \mathit{friction} \ge \frac{29}{32} \\
\frac{81920 \times \mathit{friction} - 70145}{1048576} &\approx 0.078 \times \mathit{friction} - 0.067 & \text{ otherwise }
\end{cases}
$$

That’s pretty weird? Complicating things further is that low friction (which means muddy terrain, remember) has an extra multiplier on its move factor, depending on how fast you’re already going — the idea is apparently that you have a hard time getting going, but it gets easier as you find your footing. The extra multiplier maxes out at 8, which makes the two halves of that function meet at the vanilla Doom value.

A graph of the relationship between friction and move factor

That very top point corresponds to the move factor from the original game. So no matter what you do to friction, the move factor becomes lower. At 0.85 and change, you can no longer move at all; below that, you move backwards.

From the formula above, it’s easy to see what changes to friction and move factor will do to Doomguy’s stable velocity. Move factor is in the numerator, so increasing it will increase stable velocity — but it can’t increase, so stable velocity can only ever decrease. Friction is in the denominator, but it’s subtracted from 1, so increasing friction will make the denominator a smaller value less than 1, i.e. increase stable velocity. Combined, we get this relationship between friction and stable velocity.

A graph showing stable velocity shooting up dramatically as friction increases

As friction approaches 1, stable velocity grows without bound. This makes sense, given the definition of \(v(n)\) — if friction is 1, the velocity from the previous tic isn’t reduced at all, so we just keep accelerating freely.

All of this is why I’m wary of using multipliers.

Anyway, this leaves me with one last question about the effects of Doom’s friction: how long does it take to reach stable velocity? Barring precision errors, we’ll never truly reach stable velocity, but let’s say within 5%. First we need a closed formula for the velocity after some number of tics. This is a simple recurrence relation, and you can write a few terms out yourself if you want to be sure this is right.

$$
v(n) = v_0 \times \mathit{friction}^n + \mathit{speed} \times \mathit{move\ factor} \times \frac{\mathit{friction}^n - 1}{\mathit{friction} - 1}
$$

Our initial velocity is zero, so the first term disappears. Set this equal to the stable formula and solve for n:

$$
\mathit{speed} \times \mathit{move\ factor} \times \frac{\mathit{friction}^n - 1}{\mathit{friction} - 1} = (1 - 5\%) \times \mathit{speed} \times \frac{\mathit{move\ factor}}{1 - \mathit{friction}} \\
\mathit{friction}^n - 1 = -(1 - 5\%) \\
n = \frac{\ln 5\%}{\ln \mathit{friction}}
$$

“Speed” and “move factor” disappear entirely, which makes sense, and this is purely a function of friction (and how close we want to get). For vanilla Doom, that comes out to 30.4, which is a little less than a second. For other values of friction:

A graph of time to stability which leaps upwards dramatically towards the right

As friction increases (which in Doom terms means the surface is more slippery), it takes longer and longer to reach stable speed, which is in turn greater and greater. For lesser friction (i.e. mud), stable speed is lower, but reached fairly quickly. (Of course, the extra “getting going” multiplier while in mud adds some extra time here, but including that in the graph is a bit more complicated.)

I think this matches with my instincts above. How fascinating!

What’s that? This is way too much math and you hate it? Then don’t use multipliers in game physics.

Uh

That was a hell of a diversion!

I guess the goofiest stuff in basic game physics is really just about mapping player controls to in-game actions like jumping and deceleration; the rest consists of hacks to compensate for representing everything as a box.

[$] Addressing Meltdown and Spectre in the kernel

Post Syndicated from corbet original https://lwn.net/Articles/743265/rss

When the Meltdown and Spectre vulnerabilities were disclosed on January 3, attention quickly turned to mitigations. There was already a clear defense against Meltdown in the form of kernel page-table isolation (KPTI), but the defenses against the two Spectre variants had not been developed in public and still do not exist in the mainline kernel. Initial versions of proposed defenses have now been disclosed. The resulting picture shows what has been done to fend off Spectre-based attacks in the near future, but the situation remains chaotic, to put it lightly.

Combine Transactional and Analytical Data Using Amazon Aurora and Amazon Redshift

Post Syndicated from Re Alvarez-Parmar original https://aws.amazon.com/blogs/big-data/combine-transactional-and-analytical-data-using-amazon-aurora-and-amazon-redshift/

A few months ago, we published a blog post about capturing data changes in an Amazon Aurora database and sending it to Amazon Athena and Amazon QuickSight for fast analysis and visualization. In this post, I want to demonstrate how easy it can be to take the data in Aurora and combine it with data in Amazon Redshift using Amazon Redshift Spectrum.

With Amazon Redshift, you can build petabyte-scale data warehouses that unify data from a variety of internal and external sources. Because Amazon Redshift is optimized for complex queries (often involving multiple joins) across large tables, it can handle large volumes of retail, inventory, and financial data without breaking a sweat.

In this post, we describe how to combine data in Aurora with data in Amazon Redshift. Here’s an overview of the solution:

  • Use AWS Lambda functions with Amazon Aurora to capture data changes in a table.
  • Save the data in an Amazon S3 bucket.
  • Query data using Amazon Redshift Spectrum.

We use the following services: Amazon Aurora, AWS Lambda, Amazon Kinesis Data Firehose, Amazon S3, and Amazon Redshift Spectrum.

Serverless architecture for capturing and analyzing Aurora data changes

Consider a scenario in which an e-commerce web application uses Amazon Aurora for a transactional database layer. The company has a sales table that captures every single sale, along with a few corresponding data items. This information is stored as immutable data in a table. Business users want to monitor the sales data and then analyze and visualize it.

In this example, you take the changes in data in an Aurora database table and save it in Amazon S3. After the data is captured in Amazon S3, you combine it with data in your existing Amazon Redshift cluster for analysis.

By the end of this post, you will understand how to capture data events in an Aurora table and push them out to other AWS services using AWS Lambda.

The following diagram shows the flow of data as it occurs in this tutorial:

The starting point in this architecture is a database insert operation in Amazon Aurora. When the insert statement is executed, a custom trigger calls a Lambda function and forwards the inserted data. Lambda writes the data that it received from Amazon Aurora to a Kinesis data delivery stream. Kinesis Data Firehose writes the data to an Amazon S3 bucket. Once the data is in an Amazon S3 bucket, it is queried in place using Amazon Redshift Spectrum.

Creating an Aurora database

First, create a database by following these steps in the Amazon RDS console:

  1. Sign in to the AWS Management Console, and open the Amazon RDS console.
  2. Choose Launch a DB instance, and choose Next.
  3. For Engine, choose Amazon Aurora.
  4. Choose a DB instance class. This example uses a small instance class, since this is not a production database.
  5. In Multi-AZ deployment, choose No.
  6. Configure DB instance identifier, Master username, and Master password.
  7. Launch the DB instance.

After you create the database, use MySQL Workbench to connect to the database using the CNAME from the console. For information about connecting to an Aurora database, see Connecting to an Amazon Aurora DB Cluster.

The following screenshot shows the MySQL Workbench configuration:

Next, create a table in the database by running the following SQL statement:

Create Table
CREATE TABLE Sales (
InvoiceID int NOT NULL AUTO_INCREMENT,
ItemID int NOT NULL,
Category varchar(255),
Price double(10,2), 
Quantity int not NULL,
OrderDate timestamp,
DestinationState varchar(2),
ShippingType varchar(255),
Referral varchar(255),
PRIMARY KEY (InvoiceID)
)

You can now populate the table with some sample data. To generate sample data in your table, copy and run the following script. Ensure that the placeholder values (AURORA_CNAME, DBUSER, DBPASSWORD, and DB) are replaced with values appropriate for your environment.

#!/usr/bin/python
import MySQLdb
import random
import datetime

db = MySQLdb.connect(host="AURORA_CNAME",
                     user="DBUSER",
                     passwd="DBPASSWORD",
                     db="DB")

states = ("AL","AK","AZ","AR","CA","CO","CT","DE","FL","GA","HI","ID","IL","IN",
"IA","KS","KY","LA","ME","MD","MA","MI","MN","MS","MO","MT","NE","NV","NH","NJ",
"NM","NY","NC","ND","OH","OK","OR","PA","RI","SC","SD","TN","TX","UT","VT","VA",
"WA","WV","WI","WY")

shipping_types = ("Free", "3-Day", "2-Day")

product_categories = ("Garden", "Kitchen", "Office", "Household")
referrals = ("Other", "Friend/Colleague", "Repeat Customer", "Online Ad")

for i in range(0,10):
    item_id = random.randint(1,100)
    state = states[random.randint(0,len(states)-1)]
    shipping_type = shipping_types[random.randint(0,len(shipping_types)-1)]
    product_category = product_categories[random.randint(0,len(product_categories)-1)]
    quantity = random.randint(1,4)
    referral = referrals[random.randint(0,len(referrals)-1)]
    price = random.randint(1,100)
    # Keep the day within 1-28 so every randomly generated date is valid (avoids e.g. February 30)
    order_date = datetime.date(2016,random.randint(1,12),random.randint(1,28)).isoformat()

    data_order = (item_id, product_category, price, quantity, order_date, state,
    shipping_type, referral)

    add_order = ("INSERT INTO Sales "
                   "(ItemID, Category, Price, Quantity, OrderDate, DestinationState, \
                   ShippingType, Referral) "
                   "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)")

    cursor = db.cursor()
    cursor.execute(add_order, data_order)

    db.commit()

cursor.close()
db.close() 

The following screenshot shows how the table appears with the sample data:

Sending data from Amazon Aurora to Amazon S3

There are two methods available to send data from Amazon Aurora to Amazon S3:

  • Using a Lambda function
  • Using SELECT INTO OUTFILE S3

To demonstrate the ease of setting up integration between multiple AWS services, we use a Lambda function to send data to Amazon S3 using Amazon Kinesis Data Firehose.

Alternatively, you can use a SELECT INTO OUTFILE S3 statement to query data from an Amazon Aurora DB cluster and save it directly in text files that are stored in an Amazon S3 bucket. However, with this method, there is a delay between the time that the database transaction occurs and the time that the data is exported to Amazon S3 because the default file size threshold is 6 GB.
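For reference, here is a minimal sketch of that second approach, reusing the MySQLdb connection style from the sample-data script above. The bucket and prefix are placeholders, the Aurora cluster needs an IAM role that allows writing to that bucket, and the exact S3 URI form (some Aurora documentation examples include the region, for example s3-us-east-1://...) should be confirmed for your setup.

import MySQLdb

# Placeholders throughout: host, credentials, database, bucket, and prefix.
db = MySQLdb.connect(host="AURORA_CNAME", user="DBUSER", passwd="DBPASSWORD", db="DB")
cursor = db.cursor()
cursor.execute("""
    SELECT * FROM Sales
    INTO OUTFILE S3 's3://YOUR_BUCKET/aurora-exports/sales'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'
""")
cursor.close()
db.close()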

Creating a Kinesis data delivery stream

The next step is to create a Kinesis data delivery stream, since it’s a dependency of the Lambda function.

To create a delivery stream:

  1. Open the Kinesis Data Firehose console
  2. Choose Create delivery stream.
  3. For Delivery stream name, type AuroraChangesToS3.
  4. For Source, choose Direct PUT.
  5. For Record transformation, choose Disabled.
  6. For Destination, choose Amazon S3.
  7. In the S3 bucket drop-down list, choose an existing bucket, or create a new one.
  8. Enter a prefix if needed, and choose Next.
  9. For Data compression, choose GZIP.
  10. In IAM role, choose either an existing role that has access to write to Amazon S3, or choose to generate one automatically. Choose Next.
  11. Review all the details on the screen, and choose Create delivery stream when you’re finished.

 

Creating a Lambda function

Now you can create a Lambda function that is called every time there is a change that needs to be tracked in the database table. This Lambda function passes the data to the Kinesis data delivery stream that you created earlier.

To create the Lambda function:

  1. Open the AWS Lambda console.
  2. Ensure that you are in the AWS Region where your Amazon Aurora database is located.
  3. If you have no Lambda functions yet, choose Get started now. Otherwise, choose Create function.
  4. Choose Author from scratch.
  5. Give your function a name, and select Python 3.6 for the Runtime.
  6. Choose an existing role or create a new one; the role needs permission to call firehose:PutRecord.
  7. Choose Next on the trigger selection screen.
  8. Paste the following code in the code window. Change the stream_name variable to the Kinesis data delivery stream that you created in the previous step.
  9. Choose File -> Save in the code editor and then choose Save.
import boto3
import json

firehose = boto3.client('firehose')
stream_name = 'AuroraChangesToS3'


def Kinesis_publish_message(event, context):
    
    firehose_data = (("%s,%s,%s,%s,%s,%s,%s,%s\n") %(event['ItemID'], 
    event['Category'], event['Price'], event['Quantity'],
    event['OrderDate'], event['DestinationState'], event['ShippingType'], 
    event['Referral']))
    
    firehose_data = {'Data': str(firehose_data)}
    print(firehose_data)
    
    firehose.put_record(DeliveryStreamName=stream_name,
    Record=firehose_data)

Note the Amazon Resource Name (ARN) of this Lambda function.
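Before creating the database trigger, you can optionally sanity-check the function with a one-off test invocation from Python (a sketch; the function name below matches the ARN used later in this post, so substitute whatever you named yours):

import boto3
import json

lambda_client = boto3.client('lambda')

# A fake sales record shaped like the events the stored procedure will send.
test_event = {
    'ItemID': 1, 'Category': 'Garden', 'Price': 9.99, 'Quantity': 2,
    'OrderDate': '2016-05-01 00:00:00', 'DestinationState': 'WA',
    'ShippingType': 'Free', 'Referral': 'Online Ad'
}

response = lambda_client.invoke(
    FunctionName='CDCFromAuroraToKinesis',   # substitute your function's name
    Payload=json.dumps(test_event),
)
print(response['StatusCode'])   # 200 means the invocation succeeded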

Giving Aurora permissions to invoke a Lambda function

To give Amazon Aurora permissions to invoke a Lambda function, you must attach an IAM role with appropriate permissions to the cluster. For more information, see Invoking a Lambda Function from an Amazon Aurora DB Cluster.

Once you are finished, the Amazon Aurora database has access to invoke a Lambda function.

Creating a stored procedure and a trigger in Amazon Aurora

Now, go back to MySQL Workbench, and run the following command to create a new stored procedure. When this stored procedure is called, it invokes the Lambda function you created. Change the ARN in the following code to your Lambda function’s ARN.

DROP PROCEDURE IF EXISTS CDC_TO_FIREHOSE;
DELIMITER ;;
CREATE PROCEDURE CDC_TO_FIREHOSE (IN ItemID VARCHAR(255), 
									IN Category varchar(255), 
									IN Price double(10,2),
                                    IN Quantity int(11),
                                    IN OrderDate timestamp,
                                    IN DestinationState varchar(2),
                                    IN ShippingType varchar(255),
                                    IN Referral  varchar(255)) LANGUAGE SQL 
BEGIN
  CALL mysql.lambda_async('arn:aws:lambda:us-east-1:XXXXXXXXXXXXX:function:CDCFromAuroraToKinesis', 
     CONCAT('{ "ItemID" : "', ItemID, 
            '", "Category" : "', Category,
            '", "Price" : "', Price,
            '", "Quantity" : "', Quantity, 
            '", "OrderDate" : "', OrderDate, 
            '", "DestinationState" : "', DestinationState, 
            '", "ShippingType" : "', ShippingType, 
            '", "Referral" : "', Referral, '"}')
     );
END
;;
DELIMITER ;

Create a trigger TR_Sales_CDC on the Sales table. When a new record is inserted, this trigger calls the CDC_TO_FIREHOSE stored procedure.

DROP TRIGGER IF EXISTS TR_Sales_CDC;
 
DELIMITER ;;
CREATE TRIGGER TR_Sales_CDC
  AFTER INSERT ON Sales
  FOR EACH ROW
BEGIN
  SELECT  NEW.ItemID , NEW.Category, New.Price, New.Quantity, New.OrderDate
  , New.DestinationState, New.ShippingType, New.Referral
  INTO @ItemID , @Category, @Price, @Quantity, @OrderDate
  , @DestinationState, @ShippingType, @Referral;
  CALL  CDC_TO_FIREHOSE(@ItemID , @Category, @Price, @Quantity, @OrderDate
  , @DestinationState, @ShippingType, @Referral);
END
;;
DELIMITER ;

If a new row is inserted in the Sales table, the Lambda function that is mentioned in the stored procedure is invoked.

Verify that data is being sent from the Lambda function to Kinesis Data Firehose to Amazon S3 successfully. You might have to insert a few records, depending on the size of your data, before new records appear in Amazon S3. This is due to Kinesis Data Firehose buffering. To learn more about Kinesis Data Firehose buffering, see the “Amazon S3” section in Amazon Kinesis Data Firehose Data Delivery.

Every time a new record is inserted in the sales table, a stored procedure is called, and it updates data in Amazon S3.
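If you prefer checking from code rather than the console, a short sketch like the following lists what Kinesis Data Firehose has delivered so far; the bucket name and prefix are placeholders for whatever you configured in the delivery stream.

import boto3

s3 = boto3.client('s3')
resp = s3.list_objects_v2(Bucket='BUCKET_NAME', Prefix='CDC/')
for obj in resp.get('Contents', []):
    print(obj['Key'], obj['Size'])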

Querying data in Amazon Redshift

In this section, you use the data you produced from Amazon Aurora and consume it as-is in Amazon Redshift. In order to allow you to process your data as-is, where it is, while taking advantage of the power and flexibility of Amazon Redshift, you use Amazon Redshift Spectrum. You can use Redshift Spectrum to run complex queries on data stored in Amazon S3, with no need for loading or other data prep.

Just create a data source and issue your queries to your Amazon Redshift cluster as usual. Behind the scenes, Redshift Spectrum scales to thousands of instances on a per-query basis, ensuring that you get fast, consistent performance even as your dataset grows to beyond an exabyte! Being able to query data that is stored in Amazon S3 means that you can scale your compute and your storage independently. You have the full power of the Amazon Redshift query model and all the reporting and business intelligence tools at your disposal. Your queries can reference any combination of data stored in Amazon Redshift tables and in Amazon S3.

Redshift Spectrum supports open, common data types, including CSV/TSV, Apache Parquet, SequenceFile, and RCFile. Files can be compressed using gzip or Snappy, with other data types and compression methods in the works.

First, create an Amazon Redshift cluster. Follow the steps in Launch a Sample Amazon Redshift Cluster.

Next, create an IAM role that has access to Amazon S3 and Athena. By default, Amazon Redshift Spectrum uses the Amazon Athena data catalog. Your cluster needs authorization to access your external data catalog in AWS Glue or Athena and your data files in Amazon S3.

In the demo setup, I attached AmazonS3FullAccess and AmazonAthenaFullAccess. In a production environment, the IAM roles should follow the standard security practice of granting least privilege. For more information, see IAM Policies for Amazon Redshift Spectrum.

Attach the newly created role to the Amazon Redshift cluster. For more information, see Associate the IAM Role with Your Cluster.

Next, connect to the Amazon Redshift cluster, and create an external schema and database:

create external schema if not exists spectrum_schema
from data catalog 
database 'spectrum_db' 
region 'us-east-1'
IAM_ROLE 'arn:aws:iam::XXXXXXXXXXXX:role/RedshiftSpectrumRole'
create external database if not exists;

Don’t forget to replace the IAM role in the statement.
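
As an optional sanity check, you can confirm that the external schema is registered by querying the SVV_EXTERNAL_SCHEMAS system view:

-- Lists the external schemas that the cluster can see, including spectrum_schema.
select schemaname, databasename
from svv_external_schemas;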

Then create an external table within the database:

 CREATE EXTERNAL TABLE IF NOT EXISTS spectrum_schema.ecommerce_sales(
  ItemID int,
  Category varchar,
  Price DOUBLE PRECISION,
  Quantity int,
  OrderDate TIMESTAMP,
  DestinationState varchar,
  ShippingType varchar,
  Referral varchar)
ROW FORMAT DELIMITED
      FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION 's3://{BUCKET_NAME}/CDC/';

Query the table, and it should contain data. This is a fact table.

select top 10 * from spectrum_schema.ecommerce_sales


Next, create a dimension table. For this example, we create a date/time dimension table. Create the table:

CREATE TABLE date_dimension (
  d_datekey           integer       not null sortkey,
  d_dayofmonth        integer       not null,
  d_monthnum          integer       not null,
  d_dayofweek                varchar(10)   not null,
  d_prettydate        date       not null,
  d_quarter           integer       not null,
  d_half              integer       not null,
  d_year              integer       not null,
  d_season            varchar(10)   not null,
  d_fiscalyear        integer       not null)
diststyle all;

Populate the table with data:

copy date_dimension from 's3://reparmar-lab/2016dates' 
iam_role 'arn:aws:iam::XXXXXXXXXXXX:role/redshiftspectrum'
DELIMITER ','
dateformat 'auto';

Verify that the date dimension table is populated as expected.
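
For example, a simple spot check is to select a few rows ordered by the date key:

-- Inspect a handful of rows to confirm the COPY succeeded.
select *
from date_dimension
order by d_datekey
limit 5;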

Querying data in local and external tables using Amazon Redshift

Now that you have the fact and dimension tables populated with data, you can combine the two and run analysis. For example, if you want to query the total sales amount by season, you can run the following:

select sum(quantity*price) as total_sales, date_dimension.d_season
from spectrum_schema.ecommerce_sales 
join date_dimension on spectrum_schema.ecommerce_sales.orderdate = date_dimension.d_prettydate 
group by date_dimension.d_season

The query returns the total sales amount for each season.

Similarly, you can replace d_season with d_dayofweek to get sales figures by weekday.
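
A sketch of that substitution, reusing the same join between the external fact table and the local dimension table:

select sum(quantity*price) as total_sales, date_dimension.d_dayofweek
from spectrum_schema.ecommerce_sales
join date_dimension on spectrum_schema.ecommerce_sales.orderdate = date_dimension.d_prettydate
group by date_dimension.d_dayofweek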

With Amazon Redshift Spectrum, you pay only for the data that your queries actually scan in Amazon S3. We encourage you to use file partitioning, columnar data formats, and data compression to significantly minimize the amount of data scanned in Amazon S3. This is important for data warehousing because it dramatically improves query performance and reduces cost.

Partitioning your data in Amazon S3 by date, time, or any other custom keys enables Amazon Redshift Spectrum to dynamically prune nonrelevant partitions to minimize the amount of data processed. If you store data in a columnar format, such as Parquet, Amazon Redshift Spectrum scans only the columns needed by your query, rather than processing entire rows. Similarly, if you compress your data using one of the supported compression algorithms in Amazon Redshift Spectrum, less data is scanned.
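
As a sketch of what that can look like, assuming you also land a Parquet copy of the change data under a month-partitioned prefix (the table name, partition column, and S3 prefix below are illustrative):

-- Partitioned, Parquet-backed variant of the external table.
CREATE EXTERNAL TABLE spectrum_schema.ecommerce_sales_parquet(
  ItemID int,
  Category varchar,
  Price DOUBLE PRECISION,
  Quantity int,
  OrderDate TIMESTAMP,
  DestinationState varchar,
  ShippingType varchar,
  Referral varchar)
PARTITIONED BY (order_month varchar)
STORED AS PARQUET
LOCATION 's3://{BUCKET_NAME}/CDC-parquet/';

-- Register each partition explicitly so Redshift Spectrum can prune the ones a query doesn't need.
ALTER TABLE spectrum_schema.ecommerce_sales_parquet
ADD PARTITION (order_month='2017-11')
LOCATION 's3://{BUCKET_NAME}/CDC-parquet/order_month=2017-11/';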

Analyzing and visualizing Amazon Redshift data in Amazon QuickSight

Modify the Amazon Redshift security group to allow an Amazon QuickSight connection. For more information, see Authorizing Connections from Amazon QuickSight to Amazon Redshift Clusters.

After modifying the Amazon Redshift security group, go to Amazon QuickSight. Create a new analysis, and choose Amazon Redshift as the data source.

Enter the database connection details, validate the connection, and create the data source.

Choose the schema to be analyzed. In this case, choose spectrum_schema, and then choose the ecommerce_sales table.

Next, we add a custom field for Total Sales = Price*Quantity. In the drop-down list for the ecommerce_sales table, choose Edit analysis data sets.

On the next screen, choose Edit.

In the data prep screen, choose New Field. Add a new calculated field Total Sales $, which is the product of the Price and Quantity fields. Then choose Create. Save and visualize it.

Next, to visualize total sales figures by month, create a graph with Total Sales on the x-axis and Order Date formatted as month on the y-axis.
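
If you want to cross-check the chart against the cluster, an equivalent aggregation in SQL, run directly against the external table, is a quick way to do it:

-- Total sales per month, matching the QuickSight visual.
select date_trunc('month', orderdate) as order_month,
       sum(price * quantity) as total_sales
from spectrum_schema.ecommerce_sales
group by 1
order by 1;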

After you’ve finished, you can use Amazon QuickSight to add different columns from your Amazon Redshift tables and perform different types of visualizations. You can build operational dashboards that continuously monitor your transactional and analytical data. You can publish these dashboards and share them with others.

Final notes

Amazon QuickSight can also read data in Amazon S3 directly. However, with the method demonstrated in this post, you have the option to manipulate, filter, and combine data from multiple sources or Amazon Redshift tables before visualizing it in Amazon QuickSight.

In this example, we dealt with data being inserted, but triggers can also be activated in response to UPDATE or DELETE statements.
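
For instance, an UPDATE variant that reuses the same stored procedure might look like the following sketch (the trigger name is illustrative):

DROP TRIGGER IF EXISTS TR_Sales_CDC_Update;

DELIMITER ;;
CREATE TRIGGER TR_Sales_CDC_Update
  AFTER UPDATE ON Sales
  FOR EACH ROW
BEGIN
  -- NEW.* holds the row values after the UPDATE is applied.
  SELECT NEW.ItemID, NEW.Category, NEW.Price, NEW.Quantity, NEW.OrderDate,
         NEW.DestinationState, NEW.ShippingType, NEW.Referral
  INTO @ItemID, @Category, @Price, @Quantity, @OrderDate,
       @DestinationState, @ShippingType, @Referral;
  CALL CDC_TO_FIREHOSE(@ItemID, @Category, @Price, @Quantity, @OrderDate,
                       @DestinationState, @ShippingType, @Referral);
END
;;
DELIMITER ;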

Keep the following in mind:

  • Be careful when invoking a Lambda function from triggers on tables that experience high write traffic, because doing so results in a large number of calls to your Lambda function. Although calls to the lambda_async procedure are asynchronous, triggers are synchronous.
  • A statement that results in a large number of trigger activations does not wait for the calls to the AWS Lambda function to complete, but it does wait for the triggers themselves to complete before returning control to the client.
  • Similarly, you must account for Amazon Kinesis Data Firehose limits. By default, Kinesis Data Firehose is limited to a maximum of 5,000 records/second. For more information, see Monitoring Amazon Kinesis Data Firehose.

In certain cases, it may be optimal to use AWS Database Migration Service (AWS DMS) to capture data changes in Aurora and use Amazon S3 as a target. For example, AWS DMS might be a good option if you don’t need to transform data from Amazon Aurora. The method used in this post gives you the flexibility to transform data from Aurora using Lambda before sending it to Amazon S3. Additionally, the architecture has the benefits of being serverless, whereas AWS DMS requires an Amazon EC2 instance for replication.

For design considerations while using Redshift Spectrum, see Using Amazon Redshift Spectrum to Query External Data.

If you have questions or suggestions, please comment below.


Additional Reading

If you found this post useful, be sure to check out Capturing Data Changes in Amazon Aurora Using AWS Lambda and 10 Best Practices for Amazon Redshift Spectrum.


About the Author

Re Alvarez-Parmar is a solutions architect for Amazon Web Services. He helps enterprises achieve success through technical guidance and thought leadership. In his spare time, he enjoys spending time with his two kids and exploring outdoors.