Tag Archives: raw

Roguelike Simulator

Post Syndicated from Eevee original https://eev.ee/release/2017/12/09/roguelike-simulator/

Screenshot of a monochromatic pixel-art game designed to look mostly like ASCII text

On a recent game night, glip and I stumbled upon bitsy — a tiny game maker for “games where you can walk around and talk to people and be somewhere.” It’s enough of a genre to have become a top tag on itch, so we flicked through a couple games.

What we found were tiny windows into numerous little worlds, ill-defined yet crisply rendered in chunky two-colored pixels. Indeed, all you can do is walk around and talk to people and be somewhere, but the somewheres are strangely captivating. My favorite was the last days of our castle, with a day on the town in a close second (though it cheated and extended the engine a bit), but there are several hundred of these tiny windows available. Just single, short, minimal, interactive glimpses of an idea.

I’ve been wanting to do more of that, so I gave it a shot today. The result is Roguelike Simulator, a game that condenses the NetHack experience into about ninety seconds.


Constraints breed creativity, and bitsy is practically made of constraints — the only place you can even make any decisions at all is within dialogue trees. There are only three ways to alter the world: the player can step on an ending tile to end the game, step on an exit tile to instantly teleport to a tile on another map (or not), or pick up an item. That’s it. You can’t even implement keys; the best you can do is make an annoying maze of identical rooms, then have an NPC tell you the solution.

In retrospect, a roguelike — a genre practically defined by its randomness — may have been a poor choice.

I had a lot of fun faking it, though, and it worked well enough to fool at least one person for a few minutes! Some choice hacks follow. Probably play the game a couple times before reading them?

  • Each floor reveals itself, of course, by teleporting you between maps with different chunks of the floor visible. I originally intended for this to be much more elaborate, but it turns out to be a huge pain to juggle multiple copies of the same floor layout.

  • Endings can’t be changed or randomized; even the text is static. I still managed to implement multiple variants on the “ascend” ending! See if you can guess how. (It’s not that hard.)

  • There are no Boolean operators, but there are arithmetic operators, so in one place I check whether you have both of two items by multiplying together how many of each you have.

  • Monsters you “defeat” are actually just items you pick up. They’re both drawn in the same color, and you can’t see your inventory, so you can’t tell the difference.

Probably the best part was writing the text, which is all completely ridiculous. I really enjoy writing a lot of quips — which I guess is why I like Twitter — and I’m happy to see they’ve made people laugh!


I think this has been a success! It’s definitely made me more confident about making smaller things — and about taking the first idea I have and just running with it. I’m going to keep an eye out for other micro game engines to play with, too.

Resilient TVAddons Plans to Ditch Proactive ‘Piracy’ Screening

Post Syndicated from Ernesto original https://torrentfreak.com/resilient-tvaddons-plans-to-ditch-proactive-piracy-screening-171207/

After years of smooth sailing, this year TVAddons became a poster child for the entertainment industry’s war on illicit streaming devices.

The leading repository for unofficial Kodi addons was sued for copyright infringement in the US by satellite and broadcast provider Dish Network. Around the same time, a similar case was filed by Bell, TVA, Videotron, and Rogers in Canada.

The latter case has done the most damage thus far, as it caused the addon repository to lose its domain names and social media accounts. As a result, the site went dark and, while many believed it would never return, it made a blazing comeback after a few weeks.

With the original TVAddons.ag domain seized, the site returned on TVAddons.co. And that was not the only difference: many of the old add-ons, those for which it was unclear whether they linked only to licensed content, were no longer listed in the repository either.

TVAddons previously relied on the DMCA to shield it from liability, but apparently that wasn’t enough. As a result, they took the drastic decision to check all submitted add-ons carefully.

“Since complying with the law is clearly not enough to prevent frivolous legal action from being taken against you, we have been forced to implement a more drastic code vetting process,” a TVAddons representative told us previously.

Despite the absence of several of the most-used add-ons, the repository has managed to regain many of its former users. Over the past month, TVAddons had over 12 million unique users, all of whom manually installed the new repository on their devices.

“We’re not like one of those pirate sites that are shut down and opens on a new domain the next day, getting users to actually manually install a new repo isn’t an easy feat,” a TVAddons representative informs TorrentFreak.

While that’s still far from the 40 million unique users the site had earlier this year, before the trouble began, it remains a force to be reckoned with.

Interestingly, the vast majority of all TVAddons traffic comes from the United States. The UK is second at a respectable distance, followed by Canada, Germany, and the Netherlands.

While many former users have returned, the submission policy changes didn’t go unnoticed. The relatively small selection of add-ons is a major drawback for some, but that’s about to change as well, we are informed.

TVAddons plans to return to the old submission model, where developers can upload their code more freely. Instead of proactive screening, TVAddons will follow a standard DMCA takedown policy, relying on copyright holders to flag potentially infringing content.

“We intend on returning to a standard DMCA compliant add-on submission policy shortly, there’s no reason why we should be held to a higher standard than Facebook, Twitter, YouTube or Reddit given the fact that we don’t even host any form of streaming content in the first place.

“Our interim policy isn’t pragmatic, it’s nearly impossible for us to verify the global licensing of all forms of protected content. When you visit a website, there’s no way of verifying licensing beyond trusting them based on reputation.”

The upcoming change doesn’t mean that TVAddons will ignore its legal requirements. If they receive a legitimate takedown notice, proper action will be taken, as always. As such, they would operate in the same fashion as other user-generated sites.

“Right now our interim addon submission policy is akin to North Korea. We always followed the law and will always continue to do so. Anytime we’ve received a legitimate complaint we’ve acted upon it in an expedited manner.

“Facebook, Twitter, Reddit and other online communities would have never existed if they were required to approve the contents of each user’s submissions prior to public posting.”

The change takes place while the two court cases are still pending. TVAddons is determined to keep up this fight. Meanwhile, they are also asking the public to support the project financially.

While some copyright holders, including those who are fighting the service in court, might not like the change, TVAddons believes that this is well within their rights. And with support from groups such as the Electronic Frontier Foundation, they don’t stand alone in this.


The Raspberry Pi Christmas shopping list 2017

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/christmas-shopping-list-2017/

Looking for the perfect Christmas gift for a beloved maker in your life? Maybe you’d like to give a relative or friend a taste of the world of coding and Raspberry Pi? Whatever you’re looking for, the Raspberry Pi Christmas shopping list will point you in the right direction.

An ice-skating Raspberry Pi - The Raspberry Pi Christmas Shopping List 2017

For those getting started

Thinking about introducing someone special to the wonders of Raspberry Pi during the holidays? Although you can set up your Pi with peripherals from around your home, such as a mobile phone charger, your PC’s keyboard, and the old mouse dwelling in an office drawer, a starter kit is a nice all-in-one package for the budding coder.



Check out the starter kits from Raspberry Pi Approved Resellers such as Pimoroni, The Pi Hut, ModMyPi, Adafruit, CanaKit…the list is pretty long. Our products page will direct you to your closest reseller, or you can head to element14 to pick up the official Raspberry Pi Starter Kit.



You can also buy the Raspberry Pi Press’s brand-new Raspberry Pi Beginners Book, which includes a Raspberry Pi Zero W, a case, a ready-made SD card, and adapter cables.

Once you’ve presented a lucky person with their first Raspberry Pi, it’s time for them to spread their maker wings and learn some new skills.

MagPi Essentials books - The Raspberry Pi Christmas Shopping List 2017

To help them along, you could pick your favourite from among the Official Projects Book volume 3, The MagPi Essentials guides, and the brand-new third edition of Carrie Anne Philbin’s Adventures in Raspberry Pi. (She is super excited about this new edition!)

And you can always add a link to our free resources on the gift tag.

For the maker in your life

If you’re looking for something for a confident digital maker, you can’t go wrong with adding to their arsenal of electric and electronic bits and bobs that are no doubt cluttering drawers and boxes throughout their house.



Components such as servomotors, displays, and sensors are staples of the maker world. And when it comes to jumper wires, buttons, and LEDs, one can never have enough.



You could also consider getting your person a soldering iron, some helping hands, or small tools such as a Dremel or screwdriver set.

And to make their life a little less messy, pop it all inside a Really Useful Box…because they’re really useful.



For kit makers

While some people like to dive into making head-first and to build whatever comes to mind, others enjoy working with kits.



The Naturebytes kit allows you to record the animal visitors of your garden with the help of a camera and a motion sensor. Footage of your local badgers, birds, deer, and more will be saved to an SD card, or tweeted or emailed to you if it’s in range of WiFi.

Coretec Tiny 4WD - The Raspberry Pi Christmas Shopping List 2017

Coretec’s Tiny 4WD is a kit for assembling a Pi Zero–powered remote-controlled robot at home. Not only is the robot adorable, building it is also a great introduction to motors and wireless control.



Bare Conductive’s Touch Board Pro Kit offers everything you need to create interactive electronics projects using conductive paint.

Pi Hut Arcade Kit - The Raspberry Pi Christmas Shopping List 2017

Finally, why not help your favourite maker create their own gaming arcade using the Arcade Building Kit from The Pi Hut?

For the reader

For those who like to curl up with a good read, or spend too much of their day on public transport, a book or magazine subscription is the perfect treat.

For makers, hackers, and those interested in new technologies, our brand-new HackSpace magazine and the ever-popular community magazine The MagPi are ideal. Both are available via a physical or digital subscription, and new subscribers to The MagPi also receive a free Raspberry Pi Zero W plus case.

Cover of CoderDojo Nano Make your own game

Marc Scott Beginner's Guide to Coding Book

You can also check out other publications from the Raspberry Pi family, including CoderDojo’s new CoderDojo Nano: Make Your Own Game, Eben Upton and Gareth Halfacree’s Raspberry Pi User Guide, and Marc Scott’s A Beginner’s Guide to Coding. And have I mentioned Carrie Anne’s Adventures in Raspberry Pi yet?

Stocking fillers for everyone

Looking for something small to keep your loved ones occupied on Christmas morning? Or do you have to buy a Secret Santa gift for the office tech? Here are some wonderful stocking fillers to fill your boots with this season.

Pi Hut 3D Christmas Tree - The Raspberry Pi Christmas Shopping List 2017

The Pi Hut 3D Xmas Tree: available as both a pre-soldered and a DIY version, this gadget will work with any 40-pin Raspberry Pi and allows you to create your own mini light show.



Google AIY Voice kit: build your own home assistant using a Raspberry Pi, the MagPi Essentials guide, and this brand-new kit. “Google, play Mariah Carey again…”



Pimoroni’s Raspberry Pi Zero W Project Kits offer everything you need, including the Pi, to make your own time-lapse cameras, music players, and more.



The official Raspberry Pi Sense HAT, Camera Module, and cases for the Pi 3 and Pi Zero will complete the collection of any Raspberry Pi owner, while also opening up exciting project opportunities.

STEAM gifts that everyone will love

Awesome Astronauts | Building LEGO’s Women of NASA!

LEGO Ideas brought out this amazing ‘Women of NASA’ set, and I thought it would be fun to build, play and learn from these inspiring women! First up, let’s discover a little more about Sally Ride and Mae Jemison, two AWESOME ASTRONAUTS!

Treat the kids, and big kids, in your life to the newest LEGO Ideas set, the Women of NASA — starring Nancy Grace Roman, Margaret Hamilton, Sally Ride, and Mae Jemison!



Explore the world of wearables with Pimoroni’s sewable, hackable, wearable, adorable Bearables kits.



Add lights and motors to paper creations with the Activating Origami Kit, available from The Pi Hut.




We all loved Hidden Figures, and the STEAM enthusiast you know will do too. The film’s available on DVD, and you can also buy the original book, along with other fascinating non-fiction such as Rebecca Skloot’s The Immortal Life of Henrietta Lacks, Rachel Ignotofsky’s Women in Science, and Sydney Padua’s (mostly true) The Thrilling Adventures of Lovelace and Babbage.

Have we missed anything?

With so many amazing kits, HATs, and books available from members of the Raspberry Pi community, it’s hard to only pick a few. Have you found something splendid for the maker in your life? Maybe you’ve created your own kit that uses the Raspberry Pi? Share your favourites with us in the comments below or via our social media accounts.

The post The Raspberry Pi Christmas shopping list 2017 appeared first on Raspberry Pi.

Game night 1: Lisa, Lisa, MOOP

Post Syndicated from Eevee original https://eev.ee/blog/2017/12/05/game-night-1-lisa-lisa-moop/

For the last few weeks, glip (my partner) and I have spent a couple hours most nights playing indie games together. We started out intending to play a short list of games that had been recommended to glip, but this turns out to be a nice way to wind down, so we’ve been keeping it up and clicking on whatever looks interesting in the itch app.

Most of the games are small and made by one or two people, so they tend to be pretty tightly scoped and focus on a few particular kinds of details. I’ve found myself having brain thoughts about all that, so I thought I’d write some of them down.

I also know that some people (cough) tend not to play games they’ve never heard of, even if they want something new to play. If that’s you, feel free to play some of these, now that you’ve heard of them!

Also, I’m still figuring the format out here, so let me know if this is interesting or if you hope I never do it again!

First up:

  • Lisa: The Painful
  • Lisa: The Joyful
  • MOOP

These are impressions, not reviews. I try to avoid major/ending spoilers, but big plot points do tend to leave impressions.

Lisa: The Painful

long · classic rpg · dec 2014 · lin/mac/win · $10 on itch or steam · website

(cw: basically everything??)

Lisa: The Painful is true to its name. I hesitate to describe it as fun, exactly, but I’m glad we played it.

Everything about the game is dark. It’s a (somewhat loose) sequel to another game called Lisa, whose titular character ultimately commits suicide; her body hanging from a noose is the title screen for this game.

Ah, but don’t worry, it gets worse. This game takes place in a post-apocalyptic wasteland, where every female human — women, children, babies — is dead. You play as Brad (Lisa’s brother), who has discovered the lone exception: a baby girl he names Buddy and raises like a daughter. Now, Buddy has been kidnapped, and you have to go rescue her, presumably from being raped.

Ah, but don’t worry, it gets worse.


I’ve had a hard time putting my thoughts in order here, because so much of what stuck with me is the way the game entangles the plot with the mechanics.

I love that kind of thing, but it’s so hard to do well. I can’t really explain why, but I feel like most attempts to do it fall flat — they have a glimmer of an idea, but they don’t integrate it well enough, or they don’t run nearly as far as they could have. I often get the same feeling as, say, a hyped-up big moral choice that turns out to be picking “yes” or “no” from a menu. The idea is there, but the execution is so flimsy that it leaves no impact on me at all.

An obvious recent success here is Undertale, where the entire story is about violence and whether you choose to engage or avoid it (and whether you can do that). If you choose to eschew violence, not only does the game become more difficult, it arguably becomes a different game entirely. Granted, the contrast is lost if you (like me) tried to play as a pacifist from the very beginning. I do feel that you could go further with the idea than Undertale, but Undertale itself doesn’t feel incomplete.

Christ, I’m not even talking about the right game any more.

Okay, so: this game is a “classic” RPG, by which I mean, it was made with RPG Maker. (It’s kinda funny that RPG Maker was designed to emulate a very popular battle style, and now the only games that use that style are… made with RPG Maker.) The main loop, on the surface, is standard RPG fare: you walk around various places, talk to people, solve puzzles, recruit party members, and get into turn-based fights.

Now, Brad is addicted to a drug called Joy. He will regularly go into withdrawal, which manifests in the game as a status effect that cuts his stats (even his max HP!) dramatically.

It is really, really, incredibly inconvenient. And therein lies the genius here. The game could have simply told me that Brad is an addict, and I don’t think I would’ve cared too much. An addiction to a fantasy drug in a wasteland doesn’t mean anything to me, especially about this tiny sprite man I just met, so I would’ve filed this away as a sterile fact and forgotten about it. By making his addiction affect me, I’m now invested in it. I wish Brad weren’t addicted, even if only because it’s annoying. I found a party member once who turned out to have the same addiction, and I felt dread just from seeing the icon for the status effect. I’ve been looped into the events of this story through the medium I use to interact with it: the game.

It’s a really good use of games as a medium. Even before I’m invested in the characters, I’m invested in what’s happening to them, because it impacts the game!

Incidentally, you can get Joy as an item, which will temporarily cure your withdrawal… but you mostly find it by looting the corpses of grotesque mutant flesh horrors you encounter. I don’t think the game would have the player abruptly mutate out of nowhere, but I wasn’t about to find out, either. We never took any.


Virtually every staple of the RPG genre has been played with in some way to tie it into the theme/setting. I love it, and I think it works so well precisely because it plays with expectations of how RPGs usually work.

Most obviously, the game is a sidescroller, not top-down. You can’t jump freely, but you can hop onto one-tile-high boxes and climb ropes. You can also drop off ledges… but your entire party will take fall damage, which gets rapidly more severe the further you fall.

This wouldn’t be too much of a problem, except that healing is hard to come by for most of the game. Several hub areas have campfires you can sleep next to to restore all your health and MP, but when you wake up, something will have happened to you. Maybe just a weird cutscene, or maybe one of your party members has decided to leave permanently.

Okay, so use healing items instead? Good luck; money is also hard to come by, and honestly so are shops, and many of the healing items are woefully underpowered.

Grind for money? Good luck there, too! While the game has plenty of battles, virtually every enemy is a unique overworld human who only appears once, and then is dead, because you killed him. Only a handful of places have unlimited random encounters, and grinding is not especially pleasant.

The “best” way to get a reliable heal is to savescum — save the game, sleep by the campfire, and reload if you don’t like what you wake up to.

In a similar vein, there’s a part of the game where you’re forced to play Russian Roulette. You choose a party member; he and an opponent will take turns shooting themselves in the head until someone finds a loaded chamber. If your party member loses, he is dead. And you have to keep playing until you win three times, so there’s no upper limit on how many people you might lose. I couldn’t find any way to influence who won, so I just had to savescum for a good half hour until I made it through with minimal losses.

It was maddening, but also a really good idea. Games don’t often incorporate the existence of saves into the gameplay, and when they do, they usually break the fourth wall and get all meta about it. Saves are never acknowledged in-universe here (aside from the existence of save points), but surely these parts of the game were designed knowing that the best way through them is by reloading. It’s rarely done, it can easily feel unfair, and it drove me up the wall — but it was certainly painful, as intended, and I kinda love that.

(Naturally, I’m told there’s a hard mode, where you can only use each save point once.)

The game also drives home the finality of death much better than most. It’s not hard to overlook the death of a redshirt, a character with a bit part who simply doesn’t appear any more. This game permanently kills your party members. Russian Roulette isn’t even the only way you can lose them! Multiple cutscenes force you to choose between losing a life or some other drastic consequence. (Even better, you can try to fight the person forcing this choice on you, and he will decimate you.) As the game progresses, you start to encounter enemies who can simply one-shot murder your party members.

It’s such a great angle. Just like with Brad’s withdrawal, you don’t want to avoid their deaths because it’d be emotional — there are dozens of party members you can recruit (though we only found a fraction of them), and most of them you only know a paragraph about — but because it would inconvenience you personally. Chances are, you have your strongest dudes in your party at any given time, so losing one of them sucks. And with few random encounters, you can’t just grind someone else up to an appropriate level; it feels like there’s a finite amount of XP in the game, and if someone high-level dies, you’ve lost all the XP that went into them.


The battles themselves are fairly straightforward. You can attack normally or use a special move that costs MP. SP? Some kind of points.

Two things in particular stand out. One I mentioned above: the vast majority of the encounters are one-time affairs against distinct named NPCs, who you then never see again, because they are dead, because you killed them.

The other is the somewhat unusual set of status effects. The staples like poison and sleep are here, but don’t show up all that often; more frequent are statuses like weird, drunk, stink, or cool. If you do take Joy (which also cures depression), you become joyed for a short time.

The game plays with these in a few neat ways, besides just Brad’s withdrawal. Some party members have a status like stink or cool permanently. Some battles are against people who don’t want to fight at all — and so they’ll spend most of the battle crying, purely for flavor impact. Seeing that for the first time hit me pretty hard; until then we’d only seen crying as a mechanical side effect of having sand kicked in one’s face.


The game does drag on a bit. I think we poured 10 in-game hours into it, which doesn’t count time spent reloading. It doesn’t help that you walk not super fast.

My biggest problem was with getting my bearings; I’m sure we spent a lot of that time wandering around accomplishing nothing. Most of the world is focused around one of a few hub areas, and once you’ve completed one hub, you can move onto the next one. That’s fine. Trouble is, you can go any of a dozen different directions from each hub, and most of those directions will lead you to very similar-looking hills built out of the same tiny handful of tiles. The connections between places are mostly cave entrances, which also largely look the same. Combine that with needing to backtrack for puzzle or progression reasons, and it’s incredibly difficult to keep track of where you’ve been, what you’ve done, and where you need to go next.

I don’t know that the game is wrong here; the aesthetic and world layout are fantastic at conveying a desolate wasteland. I wouldn’t even be surprised if the navigation were deliberately designed this way. (On the other hand, assuming every annoyance in a despair-ridden game is deliberate might be giving it too much credit.) But damn it’s still frustrating.

I felt a little lost in the battle system, too. Towards the end of the game, Brad in particular had over a dozen skills he could use, but I still couldn’t confidently tell you which were the strongest. New skills sometimes appear in the middle of the list or cost less than previous skills, and the game doesn’t outright tell you how much damage any of them do. I know this is the “classic RPG” style, and I don’t think it was hugely inconvenient, but it feels weird to barely know how my own skills work. I think this puts me off getting into new RPGs, just generally; there’s a whole new set of things I have to learn about, and games in this style often won’t just tell me anything, so there’s this whole separate meta-puzzle to figure out before I can play the actual game effectively.

Also, the sound could use a little bit of… mastering? Some music and sound effects are significantly louder and screechier than others. Painful, you could say.


The world is full of side characters with their own stuff going on, which is also something I love seeing in games; too often, the whole world feels like an obstacle course specifically designed for you.

Also, many of those characters are, well, not great people. Really, most of the game is kinda fucked up. Consider: the weird status effect is most commonly inflicted by the “Grope” skill. It makes you feel weird, you see. Oh, and the currency is porn magazines.

And then there are the gangs, the various spins on sex clubs, the forceful drug kingpins, and the overall violence that permeates everything (you stumble upon an alarming number of corpses). The game neither condones nor condemns any of this; it simply offers some ideas of how people might behave at the end of the world. It’s certainly the grittiest interpretation I’ve seen.

I don’t usually like post-apocalypses, because they try to have these very hopeful stories, but then at the end the world is still a blighted hellscape so what was the point of any of that? I like this game much better for being a blighted hellscape throughout. The story is worth following to see where it goes, not just because you expect everything wrapped up neatly at the end.

…I realize I’ve made this game sound monumentally depressing throughout, but it manages to pack in a lot of funny moments as well, from the subtle to the overt. In retrospect, it’s actually really good at balancing the mood so it doesn’t get too depressing. If nothing else, it’s hilarious to watch this gruff, solemn, battle-scarred, middle-aged man pedal around on a kid’s bike he found.


An obvious theme of the game is despair, but the more I think about it, the more I wonder if ambiguity is a theme as well. It certainly fits the confusing geography.

Even the premise is a little ambiguous. Is/was Olathe a city, a country, a whole planet? Did the apocalypse affect only Olathe, or the whole world? Does it matter in an RPG, where the only world that exists is the one mapped out within the game?

Towards the end of the game, you catch up with Buddy, but she rejects you, apparently resentful that you kept her hidden away for her entire life. Brad presses on anyway, insisting on protecting her.

At that point I wasn’t sure I was still on Brad’s side. But he’s not wrong, either. Is he? Maybe it depends on how old Buddy is — but the game never tells us. Her sprite is a bit smaller than the men’s, but it’s hard to gauge much from small exaggerated sprites, and she might just be shorter. In the beginning of the game, she was doing kid-like drawings, but we don’t know how much time passed after that. Everyone seems to take for granted that she’s capable of bearing children, and she talks like an adult. So is she old enough to be making this decision, or young enough for parent figure Brad to overrule her? What is the appropriate age of agency, anyway, when you’re the last girl/woman left more than a decade after the end of the world?

Can you repopulate a species with only one woman, anyway?


Well, that went on a bit longer than I intended. This game has a lot of small touches that stood out to me, and they all wove together very well.

Should you play it? I have absolutely no idea.

FINAL SCORE: 1 out of 6 chambers

Lisa: The Joyful

fairly short · classic rpg · aug 2015 · lin/mac/win · $5 on itch or steam

Surprise! There’s a third game to round out this trilogy.

Lisa: The Joyful is much shorter, maybe three hours long — enough to be played in a night rather than over the better part of a week.

This one picks up immediately after the end of Painful, with you now playing as Buddy. It takes a drastic turn early on: Buddy decides that, rather than hide from the world, she must conquer it. She sets out to murder all the big bosses and become queen.

The battle system has been inherited from the previous game, but battles are much more straightforward this time around. You can’t recruit any party members; for much of the game, it’s just you and a sword.

There is a catch! Of course.

The catch is that you do not have enough health to survive most boss battles without healing. With no party members, you cannot heal via skills. I don’t think you could buy healing items anywhere, either. You have a few when the game begins, but once you run out, that’s it.

Except… you also have… some Joy. Which restores you to full health and also makes you crit with every hit. And drops off of several enemies.

We didn’t even recognize Joy as a healing item at first, since we never used it in Painful; its description simply says that it makes you feel nothing, and we’d assumed the whole point of it was to stave off withdrawal, which Buddy doesn’t experience. Luckily, the game provided a hint in the form of an NPC who offers to switch on easy mode:

What’s that? Bad guys too tough? Not enough jerky? You don’t want to take Joy!? Say no more, you’ve come to the right place!

So the game is aware that it’s unfairly difficult, and it’s deliberately forcing you to take Joy, and it is in fact entirely constructed around this concept. I guess the title is a pretty good hint, too.

I don’t feel quite as strongly about Joyful as I do about Painful. (Admittedly, I was really tired and starting to doze off towards the end of Joyful.) Once you get that the gimmick is to force you to use Joy, the game basically reduces to a moderate-difficulty boss rush. Other than that, the only thing that stood out to me mechanically was that Buddy learns a skill where she lifts her shirt to inflict flustered as a status effect — kind of a lingering echo of how outrageous the previous game could be.

You do get a healthy serving of plot, which is nice and ties a few things together. I wouldn’t say it exactly wraps up the story, but it doesn’t feel like it’s missing anything either; it’s exactly as murky as you’d expect.

I think it’s worth playing Joyful if you’ve played Painful. It just didn’t have the same impact on me. It probably doesn’t help that I don’t like Buddy as a person. She seems cold, violent, and cruel. Appropriate for the world and a product of her environment, I suppose.

FINAL SCORE: 300 Mags

MOOP

fairly short · inventory game · nov 2017 · win · free on itch

Finally, as something of a palate cleanser, we have MOOP: a delightful and charming little inventory game.

I don’t think “inventory game” is a real genre, but I mean the kind of game where you go around collecting items and using them in the right place. Puzzle-driven, but with “puzzles” that can largely be solved by simply trying everything everywhere. I’d put a lot of point and click adventures in the same category, despite having a radically different interface. Is that fair? Yes, because it’s my blog.

MOOP was almost certainly also made in RPG Maker, but it breaks the mold in a very different way by not being an RPG. There are no battles whatsoever, only interactions on the overworld; you progress solely via dialogue and puzzle-solving. Examining something gives you a short menu of verbs — use, talk, get — reminiscent of interactive fiction, or perhaps the graphical “adventure” games that took inspiration from interactive fiction. (God, “adventure game” is the worst phrase. Every game is an adventure! It doesn’t mean anything!)

Everything about the game is extremely chill. I love the monochrome aesthetic combined with a large screen resolution; it feels like I’m peeking into an alternate universe where the Game Boy got bigger but never gained color. I played halfway through the game before realizing that the protagonist (Moop) doesn’t have a walk animation; they simply slide around. Somehow, it works.

The puzzles are a little clever, yet low-pressure; the world is small enough that you can examine everything again if you get stuck, and there’s no way to lose or be set back. The music is lovely, too. It just feels good to wander around in a world that manages to make sepia look very pretty.

The story manages to pack a lot into a very short time. It’s… gosh, I don’t know. It has a very distinct texture to it that I’m not sure I’ve seen before. The plot weaves through several major events that each have very different moods, and it moves very quickly — but it’s well-written and doesn’t feel rushed or disjoint. It’s lighthearted, but takes itself seriously enough for me to get invested. It’s fucking witchcraft.

I think there was even a non-binary character! Just kinda nonchalantly in there. Awesome.

What a happy, charming game. Play if you would like to be happy and charmed.

FINAL SCORE: 1 waxing moon

Implementing Dynamic ETL Pipelines Using AWS Step Functions

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/implementing-dynamic-etl-pipelines-using-aws-step-functions/

This post contributed by:
Wangechi Dole, AWS Solutions Architect
Milan Krasnansky, ING, Digital Solutions Developer, SGK
Rian Mookencherry, Director – Product Innovation, SGK

Data processing and transformation is a common use case you see in our customer case studies and success stories. Often, customers deal with complex data from a variety of sources that needs to be transformed and customized through a series of steps to make it useful to different systems and stakeholders. This can be difficult due to the ever-increasing volume, velocity, and variety of data. Today, data management challenges cannot be solved with traditional databases.

Workflow automation helps you build solutions that are repeatable, scalable, and reliable. You can use AWS Step Functions for this. A great example is how SGK used Step Functions to automate the ETL processes for their client. With Step Functions, SGK has been able to automate changes within the data management system, substantially reducing the time required for data processing.

In this post, SGK shares the details of how they used Step Functions to build a robust data processing system based on highly configurable business transformation rules for ETL processes.

SGK: Building dynamic ETL pipelines

SGK is a subsidiary of Matthews International Corporation, a diversified organization focusing on brand solutions and industrial technologies. SGK’s Global Content Creation Studio network creates compelling content and solutions that connect brands and products to consumers through multiple assets including photography, video, and copywriting.

We were recently contracted to build a sophisticated and scalable data management system for one of our clients. We chose to build the solution on AWS to leverage advanced, managed services that help to improve the speed and agility of development.

The data management system served two main functions:

  1. Ingesting a large amount of complex data to facilitate both reporting and product funding decisions for the client’s global marketing and supply chain organizations.
  2. Processing the data through normalization and applying complex algorithms and data transformations. The system’s goal was to provide information in the relevant context—such as strategic marketing, supply chain, product planning, etc.—to the end consumer through automated data feeds or updates to existing ETL systems.

We were faced with several challenges:

  • Output data that needed to be refreshed at least twice a day to provide fresh datasets to both local and global markets. That constant data refresh posed several challenges, especially around data management and replication across multiple databases.
  • The complexity of reporting business rules that needed to be updated on a constant basis.
  • Data that could not be processed as contiguous blocks of typical time-series data. The measurement of the data was done across seasons (that is, combinations of dates), which often resulted in up to three overlapping seasons at any given time.
  • Input data that came from 10+ different data sources. Each data source ranged from 1–20K rows with as many as 85 columns per input source.

These challenges meant that our small Dev team heavily invested time in frequent configuration changes to the system and data integrity verification to make sure that everything was operating properly. Maintaining this system proved to be a daunting task and that’s when we turned to Step Functions—along with other AWS services—to automate our ETL processes.

Solution overview

Our solution included the following AWS services:

  • AWS Step Functions: Before Step Functions was available, we were using multiple Lambda functions for this use case and running into memory limit issues. With Step Functions, we can execute steps in parallel, in a cost-efficient manner, without running into memory limitations.
  • AWS Lambda: The Step Functions state machine uses Lambda functions to implement the Task states. Our Lambda functions are implemented in Java 8.
  • Amazon DynamoDB provides us with an easy and flexible way to manage business rules. We specify our rules as Keys. These are key-value pairs stored in a DynamoDB table.
  • Amazon RDS: Our ETL pipelines consume source data from our RDS MySQL database.
  • Amazon Redshift: We use Amazon Redshift for reporting purposes because it integrates with our BI tools. Currently we are using Tableau for reporting which integrates well with Amazon Redshift.
  • Amazon S3: We store our raw input files and intermediate results in S3 buckets.
  • Amazon CloudWatch Events: Our users expect results at a specific time. We use CloudWatch Events to trigger Step Functions on an automated schedule.

Solution architecture

This solution uses a declarative approach to defining business transformation rules that are applied by the underlying Step Functions state machine as data moves from RDS to Amazon Redshift. An S3 bucket is used to store intermediate results. A CloudWatch Event rule triggers the Step Functions state machine on a schedule. The following diagram illustrates our architecture:

Here are more details for the above diagram:

  1. A rule in CloudWatch Events triggers the state machine execution on an automated schedule.
  2. The state machine invokes the first Lambda function.
  3. The Lambda function deletes all existing records in Amazon Redshift. Depending on the dataset, the Lambda function can create a new table in Amazon Redshift to hold the data.
  4. The same Lambda function then retrieves Keys from a DynamoDB table. Keys represent specific marketing campaigns or seasons and map to specific records in RDS.
  5. The state machine executes the second Lambda function using the Keys from DynamoDB.
  6. The second Lambda function retrieves the referenced dataset from RDS. The records retrieved represent the entire dataset needed for a specific marketing campaign.
  7. The second Lambda function executes in parallel for each Key retrieved from DynamoDB and stores the output in CSV format temporarily in S3.
  8. Finally, the Lambda function uploads the data into Amazon Redshift.

To understand the above data processing workflow, take a closer look at the Step Functions state machine for this example.
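Pieced together from the JSON fragments shown later in this post, the overall state machine looks roughly like the sketch below. Note that the CurrentTwoKeys branch, NextState, and DefaultState are not spelled out in the post, so their definitions here are placeholder assumptions rather than the actual implementation.

{
  "Comment": "Sketch of the overall state machine, reconstructed from the fragments below.",
  "StartAt": "FirstState",
  "States": {
    "FirstState": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Start",
      "Next": "ChoiceState"
    },
    "ChoiceState": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.keyT", "StringEquals": "3", "Next": "CurrentThreeKeys" },
        { "Variable": "$.keyT", "StringEquals": "2", "Next": "CurrentTwoKeys" }
      ],
      "Default": "DefaultState"
    },
    "CurrentThreeKeys": {
      "Type": "Parallel",
      "Next": "NextState",
      "Branches": [
        { "StartAt": "key31", "States": { "key31": { "Type": "Task", "InputPath": "$.key1", "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution", "End": true } } },
        { "StartAt": "key32", "States": { "key32": { "Type": "Task", "InputPath": "$.key2", "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution", "End": true } } },
        { "StartAt": "key33", "States": { "key33": { "Type": "Task", "InputPath": "$.key3", "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution", "End": true } } }
      ]
    },
    "CurrentTwoKeys": {
      "Type": "Parallel",
      "Next": "NextState",
      "Branches": [
        { "StartAt": "key21", "States": { "key21": { "Type": "Task", "InputPath": "$.key1", "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution", "End": true } } },
        { "StartAt": "key22", "States": { "key22": { "Type": "Task", "InputPath": "$.key2", "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution", "End": true } } }
      ]
    },
    "NextState": { "Type": "Pass", "End": true },
    "DefaultState": { "Type": "Fail", "Error": "DefaultStateError", "Cause": "No matching key count" }
  }
}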

We walk you through the state machine in more detail in the following sections.

Walkthrough

To get started, you need to:

  • Create a schedule in CloudWatch Events
  • Specify conditions for RDS data extracts
  • Create Amazon Redshift input files
  • Load data into Amazon Redshift

Step 1: Create a schedule in CloudWatch Events
Create rules in CloudWatch Events to trigger the Step Functions state machine on an automated schedule. The following is an example cron expression to automate your schedule:
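One CloudWatch Events cron expression that matches the schedule described below (a reconstruction for illustration, not necessarily the exact rule used in the original system) is:

cron(0 3,14 * * ? *)

The six fields are minutes, hours, day-of-month, month, day-of-week, and year; "0 3,14" fires at 03:00 and 14:00 UTC every day.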

In this example, the cron expression invokes the Step Functions state machine at 3:00am and 2:00pm (UTC) every day.

Step 2: Specify conditions for RDS data extracts
We use DynamoDB to store Keys that determine which rows of data to extract from our RDS MySQL database. An example Key is MCS2017, which stands for Marketing Campaign Spring 2017. Each campaign has a specific start and end date and the corresponding dataset is stored in RDS MySQL. A record in RDS contains about 600 columns, and each Key can represent up to 20K records.

A given day can have multiple campaigns with different start and end dates running simultaneously. In the following example DynamoDB item, three campaigns are specified for the given date.
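Purely as an illustration of the shape such an item might take (the attribute names and values here are hypothetical, not taken from the post), a three-campaign item could look something like this:

{
  "Date": "2017-12-04",
  "Campaigns": ["MCS2017", "MCF2017", "MCH2017"]
}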

The state machine example shown above uses Keys 31, 32, and 33 in the first ChoiceState and Keys 21 and 22 in the second ChoiceState. These keys represent marketing campaigns for a given day. For example, on Monday, there are only two campaigns requested. The ChoiceState with Keys 21 and 22 is executed. If three campaigns are requested on Tuesday, for example, then ChoiceState with Keys 31, 32, and 33 is executed. MCS2017 can be represented by Key 21 and Key 33 on Monday and Tuesday, respectively. This approach gives us the flexibility to add or remove campaigns dynamically.

Step 3: Create Amazon Redshift input files
When the state machine begins execution, the first Lambda function is invoked as the resource for FirstState, represented in the Step Functions state machine as follows:

"Comment": ” AWS Amazon States Language.", 
  "StartAt": "FirstState",
 
"States": { 
  "FirstState": {
   
"Type": "Task",
   
"Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Start",
    "Next": "ChoiceState" 
  } 

As described in the solution architecture, the purpose of this Lambda function is to delete existing data in Amazon Redshift and retrieve keys from DynamoDB. In our use case, we found that deleting existing records was more efficient and less time-consuming than finding the delta and updating existing records. On average, an Amazon Redshift table can contain about 36 million cells, which translates to roughly 65K records. The following is the code snippet for the first Lambda function in Java 8:

public class LambdaFunctionHandler implements RequestHandler<Map<String, Object>, Map<String, String>> {
    Map<String, String> keys = new HashMap<>();

    public Map<String, String> handleRequest(Map<String, Object> input, Context context) {
        Properties config = getConfig();
        // 1. Clean the existing records out of the Redshift table
        new RedshiftDataService(config).cleaningTable();
        // 2. Read the current campaign keys from DynamoDB
        List<String> keyList = new DynamoDBDataService(config).getCurrentKeys();
        for (int i = 0; i < keyList.size(); i++) {
            keys.put("key" + (i + 1), keyList.get(i));
        }
        // 3. Return the key values plus the key count, read later as $.keyT by ChoiceState
        keys.put("keyT", String.valueOf(keyList.size()));
        return keys;
    }
}

The following JSON represents ChoiceState.

"ChoiceState": {
   "Type" : "Choice",
   "Choices": [ 
   {

      "Variable": "$.keyT",
     "StringEquals": "3",
     "Next": "CurrentThreeKeys" 
   }, 
   {

     "Variable": "$.keyT",
    "StringEquals": "2",
    "Next": "CurrentTwooKeys" 
   } 
 ], 
 "Default": "DefaultState"
}

The variable $.keyT represents the number of keys retrieved from DynamoDB. This variable determines which of the parallel branches should be executed. At the time of publication, Step Functions does not support dynamic parallel state. Therefore, choices under ChoiceState are manually created and assigned hardcoded StringEquals values. These values represent the number of parallel executions for the second Lambda function.

For example, if $.keyT equals 3, the second Lambda function is executed three times in parallel with the keys $.key1, $.key2, and $.key3 retrieved from DynamoDB. Similarly, if $.keyT equals 2, the second Lambda function is executed twice in parallel. The following JSON represents this parallel execution:

"CurrentThreeKeys": { 
  "Type": "Parallel",
  "Next": "NextState",
  "Branches": [ 
  {

     "StartAt": “key31",
    "States": { 
       “key31": {

          "Type": "Task",
        "InputPath": "$.key1",
        "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
        "End": true 
       } 
    } 
  }, 
  {

     "StartAt": “key32",
    "States": { 
     “key32": {

        "Type": "Task",
       "InputPath": "$.key2",
         "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
       "End": true 
      } 
     } 
   }, 
   {

      "StartAt": “key33",
       "States": { 
          “key33": {

                "Type": "Task",
             "InputPath": "$.key3",
             "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
           "End": true 
       } 
     } 
    } 
  ] 
} 

Step 4: Load data into Amazon Redshift
The second Lambda function in the state machine extracts records from RDS associated with the keys retrieved from DynamoDB. It processes the data, then loads it into an Amazon Redshift table. The following is the code snippet for the second Lambda function in Java 8:

public class LambdaFunctionHandler implements RequestHandler<String, String> {
    public static String key = null;

    public String handleRequest(String input, Context context) {
        key = input;
        // 1. Get the basic configuration for the next classes and an S3 client
        Properties config = getConfig();
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // 2. Export the query results for this key from RDS into the S3 bucket
        new RdsDataService(config).exportDataToS3(s3, key);
        // 3. Import the query results from the S3 bucket into Redshift
        new RedshiftDataService(config).importDataFromS3(s3, key);
        System.out.println(input);
        return "SUCCESS";
    }
}

After the data is loaded into Amazon Redshift, end users can visualize it using their preferred business intelligence tools.

Lessons learned

  • At the time of publication, the 1.5 GB memory hard limit for Lambda functions was inadequate for processing our complex workload. Step Functions gave us the flexibility to chunk our large datasets and process them in parallel, saving on costs and time.
  • In our previous implementation, we assigned each key a dedicated Lambda function along with CloudWatch rules for schedule automation. This approach proved to be inefficient and quickly became an operational burden. Previously, we processed each key sequentially, with each key adding about five minutes to the overall processing time. For example, processing three keys meant that the total processing time was three times longer. With Step Functions, the entire state machine executes in about five minutes.
  • Using DynamoDB with Step Functions gave us the flexibility to manage keys efficiently. In our previous implementations, keys were hardcoded in Lambda functions, which became difficult to manage due to frequent updates. DynamoDB is a great way to store dynamic data that changes frequently, and it works perfectly with our serverless architectures.

Conclusion

With Step Functions, we were able to fully automate the frequent configuration updates to our dataset, resulting in significant cost savings, a reduced risk of data errors due to system downtime, and more time for us to focus on new product development rather than support-related issues. We hope that you have found the information useful and that it can serve as a jump-start to building your own ETL processes on AWS with managed AWS services.

For more information about how Step Functions makes it easy to coordinate the components of distributed applications and microservices in any workflow, see the use case examples and then build your first state machine in under five minutes in the Step Functions console.

If you have questions or suggestions, please comment below.

Google & Facebook Excluded From Aussie Safe Harbor Copyright Amendments

Post Syndicated from Andy original https://torrentfreak.com/google-facebook-excluded-from-aussie-safe-harbor-copyright-amendments-171205/

Due to a supposed drafting error in Australia’s implementation of the Australia – US Free Trade Agreement (AUSFTA), copyright safe harbor provisions currently only apply to commercial Internet service providers.

This means that while local ISPs such as Telstra receive protection from copyright infringement complaints, services such as Google, Facebook and YouTube face legal uncertainty.

Proposed amendments to the Copyright Act earlier this year would’ve seen enhanced safe harbor protections for such platforms but they were withdrawn at the eleventh hour so that the government could consider “further feedback” from interested parties.

Shortly after, the government embarked on a detailed consultation with entertainment industry groups. These groups accuse platforms like YouTube of exploiting safe harbor provisions in the US and Europe, which force copyright holders into an expensive battle to have infringing content taken down. They do not want that in Australia, and at least for now, they appear to have achieved their aims.

According to a report from AFR (paywall), the Australian government is set to introduce new legislation Wednesday which will expand safe harbors for some organizations but will exclude companies such as Google, Facebook, and similar platforms.

Communications Minister Mitch Fifield confirmed the exclusions while noting that additional safeguards will be available to institutions, libraries, and organizations in the disability, archive and culture sectors.

“The measures in the bill will ensure these sectors are protected from legal liability where they can demonstrate that they have taken reasonable steps to deal with copyright infringement by users of their online platforms,” Senator Fifield told AFR.

“Extending the safe harbor scheme in this way will provide greater certainty to institutions in these sectors and enhance their ability to provide more innovative and creative services for all Australians.”

According to the Senator, the government will continue its work with stakeholders to further reform safe harbor provisions, before applying them to other service providers.

The news that Google, Facebook, and similar platforms are to be denied access to the new safe harbor rules will be seen as a victory for rightsholders. They’re desperately trying to tighten up legislation in other regions where such safeguards are already in place, arguing that platforms utilizing user-generated content for profit should obtain appropriate licensing first.

This so-called ‘Value Gap’ (1,2,3) and associated proactive filtering proposals are among the hottest copyright topics right now, generating intense debate across Europe and the United States.


GoMovies/123Movies Launches Anime Streaming Site

Post Syndicated from Ernesto original https://torrentfreak.com/gomovies123movies-launches-anime-streaming-site-171204/

Pirate video streaming sites are booming. Their relative ease of use through on-demand viewing makes them a viable alternative to P2P file-sharing, which traditionally dominated the piracy arena.

The popular movie streaming site GoMovies, formerly known as 123movies, is one of the most-used streaming sites. Despite the rebranding and several domain changes, it has built a steady base of millions of users over the past year and a half.

And it’s not done yet, we learn today.

The site, currently operating from the Gostream.is domain name, recently launched a new spinoff targeting anime fans. Animehub.to is currently promoted on GoMovies and the site’s operators aim to turn it into the leading streaming site for anime content.

Animehub.to

Someone connected to GoMovies told us that they’ve received a lot of requests from users to add anime content. Anime has traditionally been a large niche on file-sharing sites and the same is true on streaming platforms.

Technically speaking, GoMovies could have easily filled up the original site with anime content, but the owners prefer a different outlet.

With a separate anime site, they hope to draw in more visitors, TorrentFreak was told by an insider. For one, this makes it possible to rank better in search engines. It also allows the operators to cater specifically to the anime audience, with anime specific categories and release schedules.

Anime copyright holders will not be pleased with the new initiative, that’s for sure, but GoMovies is not new to legal pressure.

Earlier this year the US Ambassador to Vietnam called on the local Government to criminally prosecute people behind 123movies, the previous iteration of the site. In addition, the MPAA reported the site to the US Government in its recent overview of notorious pirate sites.

Pressure or not, it appears that GoMovies has no intention of slowing down or changing its course, although we’ve heard that yet another rebranding is on the horizon.


New Piracy Scaremongering Video Depicts ‘Dangerous’ Raspberry Pi

Post Syndicated from Andy original https://torrentfreak.com/new-piracy-scaremongering-video-depicts-dangerous-raspberry-pi-171202/

Unless you’ve been living under a rock for the past few years, you’ll be aware that online streaming of video is a massive deal right now.

In addition to the successes of Netflix and Amazon Prime, for example, unauthorized sources are also getting a piece of the digital action.

Of course, entertainment industry groups hate this and are quite understandably trying to do something about it. Few people have a really good argument as to why they shouldn’t, but recent tactics by some video-affiliated groups are really starting to wear thin.

From the mouth of Hollywood itself, the trending worldwide anti-piracy message is that piracy is dangerous. Torrent sites carry viruses that will kill your computer, streaming sites carry malware that will steal your identity, and ISDs (that’s ‘Illegal Streaming Devices’, apparently) can burn down your home, kill you, and corrupt your children.

If anyone is still taking notice of these overblown doomsday messages, here’s another one. Brought to you by the Hollywood-funded Digital Citizens Alliance, the new video rams home the message – the exact same message in fact – that set-top boxes providing the latest content for free are a threat to, well, just about everything.

While the message is probably getting a little old now, it’s worth noting the big reveal at ten seconds into the video, where the evil pirate box is introduced to the viewer.

As reproduced in the left-hand image below, it is a blatantly obvious recreation of the totally content-neutral Raspberry Pi, the affordable small computer from the UK. Granted, people sometimes use it for Kodi (the image on the right shows a Kodi-themed Raspberry Pi case, created by official Kodi team partner FLIRC) but its overwhelming uses have nothing to do with the media center, or indeed piracy.

Disreputable and dangerous device? Of course not

So alongside all the scary messages, the video succeeds in demonizing a perfectly innocent and safe device of which more than 15 million have been sold, many of them directly to schools. Since the device is so globally recognizable, it’s a not inconsiderable error.

It’s a topic that the Kodi team itself vented over earlier this week, noting how the British tabloid media presented the recent wave of “Kodi Boxes Can Kill You” click-bait articles alongside pictures of the Raspberry Pi.

“Instead of showing one of the many thousands of generic black boxes sold without the legally required CE/UL marks, the media mainly chose to depict a legitimate Raspberry Pi clothed in a very familiar Kodi case. The Pis originate from Cambridge, UK, and have been rigorously certified,” the team complain.

“We’re also super-huge fans of the Raspberry Pi Foundation, and the proceeds of Pi board sales fund the awesome work they do to promote STEM (Science, Technology, Engineering and Mathematics) education in schools. The Kodi FLIRC case has also been a hit with our Raspberry Pi users and sales contribute towards the cost of events like Kodi DevCon.”

“It’s insulting, and potentially harmful, to see two successful (and safe) products being wrongly presented for the sake of a headline,” they conclude.

Indeed, it seems that both press and the entertainment industry groups that feed them have been playing fast and loose recently, with the Raspberry Pi getting a particularly raw deal.

Still, if it scares away some pirates, that’s the main thing….

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Zach Joins The Support Team

Post Syndicated from Yev original https://www.backblaze.com/blog/zach-joins-support-team/

As Backblaze continues to grow, one thing that grows right along with us is the number of folks we need in support. We believe strongly that people writing in to get a helping hand should be quickly and kindly taken care of. To help us with that, we’d like to welcome Zach, our latest Support Tech, to the Backblaze team! Let’s take a minute to learn a bit more about Zach, shall we?

What is your Backblaze Title?
Jr. Support Technician

Where are you originally from?
I was born in Pasadena, CA, but I’ve spent most of my life in the Bay Area.

What attracted you to Backblaze?
I have a few friends that have been with the company for some time who would do nothing but gush about the respect that Backblaze has for its employees. More than anything I was drawn to the loyalty and faith the company has for its staff.

Where else have you worked?
Previously I have worked support roles for other tech companies as well as general IT and computer hardware repair.

What’s your dream job?
Somewhere that I feel I can grow within the company and find success in a role that makes me feel satisfied. Or a touring musician. That would be cool, too.

Favorite place you’ve traveled?
Canada! Everyone was so nice!

Favorite hobby?
In my spare time I like to write sad songs.

Of what achievement are you most proud?
One of my favorite singers told me that I have a really nice voice. So I suppose my proudest achievement is being born with a nice voice.

Star Trek or Star Wars?
I cried during Episode VII.

Coke or Pepsi?
Coke, obviously.

Favorite food?
Is bread an acceptable answer?

Anything else you’d like to tell us?
I’m also a big Disney fan like so many other people who work here. ¯\_(ツ)_/¯

We certainly do have a lot of Disney fans on staff — there must be something in the air. Welcome aboard Zach!

The post Zach Joins The Support Team appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

NSA "Red Disk" Data Leak

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/nsa_red_disk_da.html

ZDNet is reporting about another data leak, this one from the US Army’s Intelligence and Security Command (INSCOM), which is also within the NSA.

The disk image, when unpacked and loaded, is a snapshot of a hard drive dating back to May 2013 from a Linux-based server that forms part of a cloud-based intelligence sharing system, known as Red Disk. The project, developed by INSCOM’s Futures Directorate, was slated to complement the Army’s so-called distributed common ground system (DCGS), a legacy platform for processing and sharing intelligence, surveillance, and reconnaissance information.

[…]

Red Disk was envisioned as a highly customizable cloud system that could meet the demands of large, complex military operations. The hope was that Red Disk could provide a consistent picture from the Pentagon to deployed soldiers in the Afghan battlefield, including satellite images and video feeds from drones trained on terrorists and enemy fighters, according to a Foreign Policy report.

[…]

Red Disk was a modular, customizable, and scalable system for sharing intelligence across the battlefield, like electronic intercepts, drone footage and satellite imagery, and classified reports, for troops to access with laptops and tablets on the battlefield. Markings on files found in several directories imply the disk is “top secret,” and restricted from being shared with foreign intelligence partners.

A couple of points. One, this isn’t particularly sensitive. It’s an intelligence distribution system under development. It’s not raw intelligence. Two, this doesn’t seem to be classified data. Even the article hedges, using the unofficial term “highly sensitive.” Three, it doesn’t seem that Chris Vickery, the researcher who discovered the data, has published it.

Chris Vickery, director of cyber risk research at security firm UpGuard, found the data and informed the government of the breach in October. The storage server was subsequently secured, though its owner remains unknown.

This doesn’t feel like a big deal to me.

Slashdot thread.

YouTube Begins Blocking Music in Finland Due to Licensing Failure

Post Syndicated from Andy original https://torrentfreak.com/youtube-begins-blocking-music-in-finland-due-to-licensing-failure-171130/

YouTube is used by millions of people worldwide to access a broad range of content but it is music that is increasingly one of the platform’s big draws.

With an almost unrivaled library, YouTube is the go-to service for music fans globally but over in Finland this morning, things aren’t playing out well.

As shown in the image below, users who try to access music are now getting the following graphic. When translated the text reads “Video content owned by Teosto. The video can not be used in your country.”

No license…..No access…

This is a pretty big deal. Teosto is a Finnish performance rights organization that collects royalties on behalf of local artists and composers. It represents around 30,000 local songwriters and publishers, small fry when compared to the three million foreign music entities it represents in Finland.

This means that YouTube must have pulled huge volumes of content from its platform locally, rendering the service far less attractive to users. However, according to a TorrentFreak source, things go much further than standard modern licensed music.

As shown in the image below, even music published in 1899 has found itself pulled from the platform.

Jean Sibelius’ masterpiece Finlandia? Gone..

The music licensing dispute, which appears to have led to millions of tracks being rendered inaccessible in Finland, was confirmed by YouTube this morning.

“We were unable to reach a new licensing agreement with TEOSTO. Because of this, some videos containing music will be blocked in Finland,” the team said.

While the removal of content will come as a disappointment to the quarter of Finnish citizens who use YouTube regularly, it doesn’t come as a complete surprise.

In September, Teosto issued an opinion on copyrights to Parliament’s Education Committee. The licensing group complained that rightsholders aren’t adequately compensated for content played on platforms like YouTube. Like other groups in the same position, Teosto is looking to obtain more revenue for its members. That seems to be the basis for the dispute with YouTube.

For YouTube to have pulled so much content, negotiations must have really broken down, but Teosto sounded a note of optimism this morning. The group noted that while Google had indeed pulled music content from YouTube in Finland, it may reinstate it during the next couple of days.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Presenting AWS IoT Analytics: Delivering IoT Analytics at Scale and Faster than Ever Before

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/launch-presenting-aws-iot-analytics/

One of the technology areas I thoroughly enjoy is the Internet of Things (IoT). Even as a child I used to infuriate my parents by taking apart the toys they would purchase for me to see how they worked and if I could somehow put them back together. It seems I was somehow destined to end up in the tough and ever-changing world of technology. Therefore, it’s no wonder that I am really enjoying learning and tinkering with IoT devices and technologies. It combines my love of development and software engineering with my curiosity around circuits, controllers, and other facets of the electrical engineering discipline, even though I can’t claim to be an electrical engineer.

Despite all of the information that is collected by the deployment of IoT devices and solutions, I honestly never really thought about the need to analyze, search, and process this data until I came up against a scenario where it became of the utmost importance to be able to search and query through loads of sensor data for an anomaly. Of course, I understood the importance of analytics for businesses to make accurate decisions and predictions to drive the organization’s direction. But it didn’t occur to me initially how important it was to make analytics an integral part of my IoT solutions. Well, I learned my lesson just in time, because at this year’s re:Invent a service is launching to make it easier for anyone to process and analyze IoT messages and device data.

 

Hello, AWS IoT Analytics!  AWS IoT Analytics is a fully managed service of AWS IoT that provides advanced data analysis of data collected from your IoT devices.  With the AWS IoT Analytics service, you can process messages, gather and store large amounts of device data, as well as query your data. The service also integrates with Amazon QuickSight for visualization of your data and brings the power of machine learning through integration with Jupyter Notebooks.

Benefits of AWS IoT Analytics

  • Helps with predictive analysis of data by providing access to pre-built analytical functions
  • Provides the ability to visualize the analytical output of the service
  • Provides tools to clean up data
  • Can help identify patterns in the gathered data

Be In the Know: IoT Analytics Concepts

  • Channel: archives the raw, unprocessed messages and collects data from MQTT topics.
  • Pipeline: consumes messages from channels and allows message processing.
    • Activities: perform transformations on your messages, including filtering attributes and invoking Lambda functions for advanced processing.
  • Data Store: Used as a queryable repository for processed messages. You can have multiple data stores for messages coming from different devices or locations, or filtered by message attributes.
  • Data Set: A data retrieval view over a data store, which can be generated on a recurring schedule. (A short SDK sketch of how these pieces fit together follows this list.)
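For anyone who would rather script these pieces than click through the console, here is a minimal boto3 sketch of the same Channel, Pipeline, and Data Store wiring. The resource names are placeholders of my own and most optional parameters (retention periods, tags, and so on) are omitted, so treat it as a starting point rather than a complete recipe.

import boto3

iota = boto3.client("iotanalytics")

# Channel: collects and archives the raw, unprocessed messages
iota.create_channel(channelName="demo_channel")

# Data Store: queryable repository for the processed messages
iota.create_datastore(datastoreName="demo_datastore")

# Pipeline: reads from the channel and writes to the data store; extra
# activities (attribute filters, Lambda calls, etc.) can be chained in between
iota.create_pipeline(
    pipelineName="demo_pipeline",
    pipelineActivities=[
        {"channel": {"name": "ingest", "channelName": "demo_channel", "next": "store"}},
        {"datastore": {"name": "store", "datastoreName": "demo_datastore"}},
    ],
)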

Getting Started with AWS IoT Analytics

First, I’ll create a channel to receive incoming messages.  This channel can be used to ingest data sent to the channel via MQTT or messages directed from the Rules Engine. To create a channel, I’ll select the Channels menu option and then click the Create a channel button.

I’ll name my channel, TaraIoTAnalyticsID and give the Channel a MQTT topic filter of Temperature. To complete the creation of my channel, I will click the Create Channel button.

Now that I have my Channel created, I need to create a Data Store to receive and store the messages received on the Channel from my IoT device. Remember, you can set up multiple Data Stores for more complex solution needs, but I’ll just create one Data Store for my example. I’ll select Data Stores from the menu panel and click Create a data store.

 

I’ll name my Data Store TaraDataStoreID, and once I click the Create the data store button I will have successfully set up a Data Store to house messages coming from my Channel.

Now that I have my Channel and my Data Store, I will need to connect the two using a Pipeline. I’ll create a simple pipeline that just connects my Channel and Data Store, but you can create a more robust pipeline to process and filter messages by adding Pipeline activities like a Lambda activity.

To create a pipeline, I’ll select the Pipelines menu option and then click the Create a pipeline button.

I will not add an Attribute for this pipeline, so I will click the Next button.

As we discussed, there are additional pipeline activities that I can add to my pipeline for the processing and transformation of messages, but I will keep my first pipeline simple and hit the Next button.

The final step in creating my pipeline is for me to select my previously created Data Store and click Create Pipeline.

All that is left for me to take advantage of the AWS IoT Analytics service is to create an IoT rule that sends data to an AWS IoT Analytics channel.  Wow, that was a super easy process to set up analytics for IoT devices.
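If you would rather define that rule in code as well, here is a rough boto3 sketch of an IoT topic rule that forwards matching messages into the channel from the walkthrough above. The rule name and role ARN are placeholders I made up, and the exact shape of the iotAnalytics action payload should be double-checked against the AWS IoT rules API before use.

import boto3

iot = boto3.client("iot")

# Forward everything published on the "Temperature" topic into the
# TaraIoTAnalyticsID channel. The role must allow the rule to write
# messages into AWS IoT Analytics.
iot.create_topic_rule(
    ruleName="SendTemperatureToAnalytics",
    topicRulePayload={
        "sql": "SELECT * FROM 'Temperature'",
        "actions": [{
            "iotAnalytics": {
                "channelName": "TaraIoTAnalyticsID",
                "roleArn": "arn:aws:iam::123456789012:role/demo-iot-analytics-role",
            }
        }],
    },
)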

If I wanted to create a Data Set as a result of queries run against my data for visualization with Amazon QuickSight, or integrate with Jupyter Notebooks to perform more advanced analytical functions, I can choose the Analyze menu option to bring up the screens to create data sets and access the Jupyter Notebook instances.

Summary

As you can see, it was a very simple process to set up the advanced data analysis for AWS IoT. With AWS IoT Analytics, you have the ability to collect, visualize, process, query and store large amounts of data generated from your AWS IoT connected device. Additionally, you can access the AWS IoT Analytics service in a myriad of different ways; the AWS Command Line Interface (AWS CLI), the AWS IoT API, language-specific AWS SDKs, and AWS IoT Device SDKs.

AWS IoT Analytics is available today for you to dig into the analysis of your IoT data. To learn more about AWS IoT and AWS IoT Analytics go to the AWS IoT Analytics product page and/or the AWS IoT documentation.

Tara

GDPR – A Practical Guide For Developers

Post Syndicated from Bozho original https://techblog.bozho.net/gdpr-practical-guide-developers/

You’ve probably heard about GDPR. The new European data protection regulation that applies practically to everyone. Especially if you are working in a big company, it’s most likely that there’s already a process for getting your systems in compliance with the regulation.

The regulation is basically a law that must be followed in all European countries (but also applies to non-EU companies that have users in the EU). In this particular case, it applies to companies that are not registered in Europe but have European customers. So that’s most companies. I will not go into yet another “12 facts about GDPR” or “7 myths about GDPR” post/whitepaper, as they are often aimed at managers or legal people. Instead, I’ll focus on what GDPR means for developers.

Why am I qualified to do that? A few reasons – I was an advisor to the deputy prime minister of an EU country, and because of that I’ve both been exposed to legislation and written some myself. I’m familiar with the “legalese” and how the regulatory framework operates in general. I’m also a privacy advocate and I’ve been writing about GDPR-related stuff in the past, i.e. “before it was cool” (protecting sensitive data, the right to be forgotten). And finally, I’m currently working on a project that (among other things) aims to help with covering some GDPR aspects.

I’ll try to be a bit more comprehensive this time and cover as many aspects of the regulation that concern developers as I can. And while developers will mostly be concerned about how the systems they are working on have to change, it’s not unlikely that a less informed manager storms in come late spring, realizing GDPR is about to come into force, asking “what should we do to get our system/website compliant?”

The rights of the user/client (referred to as “data subject” in the regulation) that I think are relevant for developers are: the right to erasure (the right to be forgotten/deleted from the system), the right to restriction of processing (you still keep the data, but mark it as “restricted” and don’t touch it without further consent by the user), the right to data portability (the user should be able to export their data in a machine-readable form), the right to rectification (the ability to get personal data fixed), the right to be informed (getting human-readable information, rather than long terms and conditions), and the right of access (the user should be able to see all the data you have about them).

Additionally, the relevant basic principles are: data minimization (one should not collect more data than necessary), integrity and confidentiality (all security measures to protect data that you can think of + measures to guarantee that the data has not been inappropriately modified).

Even further, the regulation requires certain processes to be in place within an organization (of more than 250 employees or if a significant amount of data is processed), and those include keeping a record of all types of processing activities carried out, including transfers to processors (3rd parties), which includes cloud service providers. None of the other requirements of the regulation have an exception depending on the organization size, so “I’m small, GDPR does not concern me” is a myth.

It is important to know what “personal data” is. Basically, it’s every piece of data that can be used to uniquely identify a person or data that is about an already identified person. It’s data that the user has explicitly provided, but also data that you have collected about them from either 3rd parties or based on their activities on the site (what they’ve been looking at, what they’ve purchased, etc.)

Having said that, I’ll list a number of features that will have to be implemented and some hints on how to do that, followed by some do’s and don’t’s.

  • “Forget me” – you should have a method that takes a userId and deletes all personal data about that user (in case they have been collected on the basis of consent, and not due to contract enforcement or legal obligation). It is actually useful for integration tests to have that feature (to cleanup after the test), but it may be hard to implement depending on the data model. In a regular data model, deleting a record may be easy, but some foreign keys may be violated. That means you have two options – either make sure you allow nullable foreign keys (for example an order usually has a reference to the user that made it, but when the user requests their data be deleted, you can set the userId to null), or make sure you delete all related data (e.g. via cascades). This may not be desirable, e.g. if the order is used to track available quantities or for accounting purposes. It’s a bit trickier for event-sourcing data models, or in extreme cases, ones that include some sort of blockchain/hash chain/tamper-evident data structure. With event sourcing you should be able to remove a past event and re-generate intermediate snapshots. For blockchain-like structures – be careful what you put in there and avoid putting personal data of users. There is an option to use a chameleon hash function, but that’s suboptimal. Overall, you must constantly think of how you can delete the personal data. And “our data model doesn’t allow it” isn’t an excuse (a minimal sketch of the relational approach follows this list).
  • Notify 3rd parties for erasure – deleting things from your system may be one thing, but you are also obligated to inform all third parties that you have pushed that data to. So if you have sent personal data to, say, Salesforce, Hubspot, twitter, or any cloud service provider, you should call an API of theirs that allows for the deletion of personal data. If you are such a provider, obviously, your “forget me” endpoint should be exposed. Calling the 3rd party APIs to remove data is not the full story, though. You also have to make sure the information does not appear in search results. Now, that’s tricky, as Google doesn’t have an API for removal, only a manual process. Fortunately, it’s only about public profile pages that are crawlable by Google (and other search engines, okay…), but you still have to take measures. Ideally, you should make the personal data page return a 404 HTTP status, so that it can be removed.
  • Restrict processing – in your admin panel where there’s a list of users, there should be a button “restrict processing”. The user settings page should also have that button. When clicked (after reading the appropriate information), it should mark the profile as restricted. That means it should no longer be visible to the backoffice staff, or publicly. You can implement that with a simple “restricted” flag in the users table and a few if-clauses here and there.
  • Export data – there should be another button – “export data”. When clicked, the user should receive all the data that you hold about them. What exactly is that data – depends on the particular usecase. Usually it’s at least the data that you delete with the “forget me” functionality, but may include additional data (e.g. the orders the user has made may not be deleted, but should be included in the dump). The structure of the dump is not strictly defined, but my recommendation would be to reuse schema.org definitions as much as possible, for either JSON or XML. If the data is simple enough, a CSV/XLS export would also be fine. Sometimes data export can take a long time, so the button can trigger a background process, which would then notify the user via email when their data is ready (twitter, for example, does that already – you can request all your tweets and you get them after a while).
  • Allow users to edit their profile – this seems an obvious rule, but it isn’t always followed. Users must be able to fix all data about them, including data that you have collected from other sources (e.g. using a “login with facebook” you may have fetched their name and address). Rule of thumb – all the fields in your “users” table should be editable via the UI. Technically, rectification can be done via a manual support process, but that’s normally more expensive for a business than just having the form to do it. There is one other scenario, however, when you’ve obtained the data from other sources (i.e. the user hasn’t provided their details to you directly). In that case there should still be a page where they can identify somehow (via email and/or sms confirmation) and get access to the data about them.
  • Consent checkboxes – this is in my opinion the biggest change that the regulation brings. “I accept the terms and conditions” would no longer be sufficient to claim that the user has given their consent for processing their data. So, for each particular processing activity there should be a separate checkbox on the registration (or user profile) screen. You should keep these consent checkboxes in separate columns in the database, and let the users withdraw their consent (by unchecking these checkboxes from their profile page – see the previous point). Ideally, these checkboxes should come directly from the register of processing activities (if you keep one). Note that the checkboxes should not be preselected, as this does not count as “consent”.
  • Re-request consent – if the consent users have given was not clear (e.g. if they simply agreed to terms & conditions), you’d have to re-obtain that consent. So prepare a functionality for mass-emailing your users to ask them to go to their profile page and check all the checkboxes for the personal data processing activities that you have.
  • “See all my data” – this is very similar to the “Export” button, except data should be displayed in the regular UI of the application rather than an XML/JSON format. For example, Google Maps shows you your location history – all the places that you’ve been to. It is a good implementation of the right to access. (Though Google is very far from perfect when privacy is concerned)
  • Age checks – you should ask for the user’s age, and if the user is a child (below 16), you should ask for parent permission. There’s no clear way how to do that, but my suggestion is to introduce a flow where the child specifies the email of a parent, who can then confirm. Obviously, children will just cheat with their birthdate, or provide a fake parent email, but you will most likely have done your job according to the regulation (this is one of the “wishful thinking” aspects of the regulation).
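To make the “forget me” bullet above a bit more concrete, here is a minimal sketch of the relational approach it describes, in Python with SQLite as a stand-in database. The table and column names are hypothetical, and a real implementation would also have to cover related tables, backups and third-party notifications.

import sqlite3

# In-memory demo schema; the tables and columns are hypothetical stand-ins.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT, email TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER NULL, total REAL);
""")

def forget_user(conn, user_id):
    """Erase personal data while keeping records needed for accounting."""
    # Keep the order rows (needed for quantities/accounting), but break the
    # link to the person by nulling the foreign key.
    conn.execute("UPDATE orders SET user_id = NULL WHERE user_id = ?", (user_id,))
    # Delete the personal data itself.
    conn.execute("DELETE FROM users WHERE id = ?", (user_id,))
    conn.commit()

# Usage
db.execute("INSERT INTO users VALUES (1, 'Alice', 'alice@example.com')")
db.execute("INSERT INTO orders VALUES (10, 1, 42.0)")
forget_user(db, 1)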

Now some “do’s”, which are mostly about the technical measures needed to protect personal data. They may be more “ops” than “dev”, but often the application also has to be extended to support them. I’ve listed most of what I could think of in a previous post.

  • Encrypt the data in transit. That means that communication between your application layer and your database (or your message queue, or whatever component you have) should be over TLS. The certificates could be self-signed (and possibly pinned), or you could have an internal CA. Different databases have different configurations, just google “X encrypted connections”. Some databases need gossiping among the nodes – that should also be configured to use encryption.
  • Encrypt the data at rest – this again depends on the database (some offer table-level encryption), but can also be done on machine-level. E.g. using LUKS. The private key can be stored in your infrastructure, or in some cloud service like AWS KMS.
  • Encrypt your backups – kind of obvious
  • Implement pseudonymisation – the most obvious use-case is when you want to use production data for the test/staging servers. You should change the personal data to some “pseudonym”, so that the people cannot be identified. When you push data for machine learning purposes (to third parties or not), you can also do that. Technically, that could mean that your User object can have a “pseudonymize” method which applies hash+salt/bcrypt/PBKDF2 to some of the data that can be used to identify a person (a small sketch of this follows the list).
  • Protect data integrity – this is a very broad thing, and could simply mean “have authentication mechanisms for modifying data”. But you can do something more, even as simple as a checksum, or a more complicated solution (like the one I’m working on). It depends on the stakes, on the way data is accessed, on the particular system, etc. The checksum can be in the form of a hash of all the data in a given database record, which should be updated each time the record is updated through the application. It isn’t a strong guarantee, but it is at least something.
  • Have your GDPR register of processing activities in something other than Excel – Article 30 says that you should keep a record of all the types of activities that you use personal data for. That sounds like bureaucracy, but it may be useful – you will be able to link certain aspects of your application with that register (e.g. the consent checkboxes, or your audit trail records). It wouldn’t take much time to implement a simple register, but the business requirements for that should come from whoever is responsible for the GDPR compliance. But you can advise them that having it in Excel won’t make it easy for you as a developer (imagine having to fetch the excel file internally, so that you can parse it and implement a feature). Such a register could be a microservice/small application deployed separately in your infrastructure.
  • Log access to personal data – every read operation on a personal data record should be logged, so that you know who accessed what and for what purpose
  • Register all API consumers – you shouldn’t allow anonymous API access to personal data. I’d say you should request the organization name and contact person for each API user upon registration, and add those to the data processing register. Note: some have treated article 30 as a requirement to keep an audit log. I don’t think it is saying that – instead it requires 250+ companies to keep a register of the types of processing activities (i.e. what you use the data for). There are other articles in the regulation that imply that keeping an audit log is a best practice (for protecting the integrity of the data as well as to make sure it hasn’t been processed without a valid reason)
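As an illustration of the pseudonymisation and integrity points above, here is a small sketch using only Python’s standard library. The choice of fields, the salt handling and the keyed checksum are my own assumptions about how one might apply the advice, not a prescription.

import hashlib
import hmac
import json
import os

def pseudonymize(value: str, salt: bytes) -> str:
    """Replace an identifying value with a stable, irreversible pseudonym."""
    return hashlib.pbkdf2_hmac("sha256", value.encode(), salt, 100_000).hex()

def record_checksum(record: dict, key: bytes) -> str:
    """Keyed hash over a whole record, to be recomputed on every update."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

# Usage: copy production data to staging without real identities
salt = os.urandom(16)  # per-environment secret, stored outside the data dump
user = {"id": 42, "email": "jane@example.com", "city": "Sofia"}
staging_user = dict(user, email=pseudonymize(user["email"], salt))
staging_user["checksum"] = record_checksum(staging_user, key=b"demo-integrity-key")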

Finally, some “don’t’s”.

  • Don’t use data for purposes that the user hasn’t agreed with – that’s supposed to be the spirit of the regulation. If you want to expose a new API to a new type of clients, or you want to use the data for some machine learning, or you decide to add ads to your site based on users’ behaviour, or sell your database to a 3rd party – think twice. I would imagine your register of processing activities could have a button to send notification emails to users to ask them for permission when a new processing activity is added (or if you use a 3rd party register, it should probably give you an API). So upon adding a new processing activity (and adding that to your register), mass email all users from whom you’d like consent.
  • Don’t log personal data – getting rid of the personal data from log files (especially if they are shipped to a 3rd party service) can be tedious or even impossible. So log just identifiers if needed. And make sure old log files are cleaned up, just in case.
  • Don’t put fields on the registration/profile form that you don’t need – it’s always tempting to just throw as many fields as the usability person/designer agrees on, but unless you absolutely need the data for delivering your service, you shouldn’t collect it. Names you should probably always collect, but unless you are delivering something, a home address or phone is unnecessary.
  • Don’t assume 3rd parties are compliant – you are responsible if there’s a data breach in one of the 3rd parties (e.g. “processors”) to which you send personal data. So before you send data via an API to another service, make sure they have at least a basic level of data protection. If they don’t, raise a flag with management.
  • Don’t assume having ISO XXX makes you compliant – information security standards and even personal data standards are a good start and they will probably cover 70% of what the regulation requires, but they are not sufficient – most of the things listed above are not covered in any of those standards.

Overall, the purpose of the regulation is to make you take conscious decisions when processing personal data. It imposes best practices in a legal way. If you follow the above advice and design your data model, storage, data flow and API calls with data protection in mind, then you shouldn’t worry about the huge fines that the regulation prescribes – they are for extreme cases, like Equifax for example. Regulators (data protection authorities) will most likely have some checklists into which you’d have to somehow fit, but if you follow best practices, that shouldn’t be an issue.

I think all of the above features can be implemented in a few weeks by a small team. Be suspicious when a big vendor offers you a generic plug-and-play “GDPR compliance” solution. GDPR is not just about the technical aspects listed above – it does have organizational/process implications. But also be suspicious if a consultant claims GDPR is complicated. It’s not – it relies on a few basic principles that are in fact best practices anyway. Just don’t ignore them.

The post GDPR – A Practical Guide For Developers appeared first on Bozho's tech blog.

AWS PrivateLink Update – VPC Endpoints for Your Own Applications & Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-privatelink-update-vpc-endpoints-for-your-own-applications-services/

Earlier this month, my colleague Colm MacCárthaigh told you about AWS PrivateLink and showed you how to use it to access AWS services such as Amazon Kinesis Streams, AWS Service Catalog, EC2 Systems Manager, the EC2 APIs, and the ELB APIs by way of VPC Endpoints. The endpoint (represented by one or more Elastic Network Interfaces or ENIs) resides within your VPC and has IP addresses drawn from the VPC’s subnets, without the need for an Internet or NAT Gateway. This model is clear and easy to understand, not to mention secure and scalable!

Endpoints for Private Connectivity
Today we are building upon the initial launch and extending the PrivateLink model, allowing you to set up and use VPC Endpoints to access your own services and those made available by others. Even before we launched PrivateLink for AWS services, we had a lot of requests for this feature, so I expect it to be pretty popular. For example, one customer told us that they plan to create hundreds of VPCs, each hosting and providing a single microservice (read Microservices on AWS to learn more).

Companies can now create services and offer them for sale to other AWS customers, for access via a private connection. They create a service that accepts TCP traffic, host it behind a Network Load Balancer, and then make the service available, either directly or in AWS Marketplace. They will be notified of new subscription requests and can choose to accept or reject each one. I expect that this feature will be used to create a strong, vibrant ecosystem of service providers in 2018.

The service provider and the service consumer run in separate VPCs and AWS accounts and communicate solely through the endpoint, with all traffic flowing across Amazon’s private network. Service consumers don’t have to worry about overlapping IP addresses, arrange for VPC peering, or use a VPC Gateway. You can also use AWS Direct Connect to connect your existing data center to one of your VPCs in order to allow your cloud-based applications to access services running on-premises, or vice versa.

Providing and Consuming Services
This new feature puts a lot of power at your fingertips. You can set it all up using the VPC APIs, the VPC CLI, or the AWS Management Console. I’ll use the console, and will show you how to provide and then consume a service. I am going to do both within a single AWS account, but that’s just for demo purposes.

Let’s talk about providing a service. It must run behind a Network Load Balancer and must be accessible over TCP. It can be hosted on EC2 instances, ECS containers, or on-premises (configured as an IP target), and should be able to scale in order to meet the expected level of demand. For low latency and fault tolerance, we recommend using an NLB with targets in every AZ of its region. Here’s mine:

I open up the VPC Console and navigate to Endpoint Services, then click on Create Endpoint Service:

I choose my NLB (just one in this case, but I can choose two or more and they will be mapped to consumers on a round-robin basis). By clicking on Acceptance required, I get to control access to my endpoint on a request-by-request basis:

I click on Create service and my service is ready immediately:

If I was going to make this service available in AWS Marketplace, I would go ahead and create a listing now. Since I am going to be the producer and the consumer in this blog post, I’ll skip that step. I will, however, copy the Service name for use in the next step.

I return to the VPC Dashboard and navigate to Endpoints, then click on Create endpoint. Then I select Find service by name, paste the service name, and click on Verify to move ahead. Then I select the desired AZs, and a subnet in each one, pick my security groups, and click on Create endpoint:

Because I checked Acceptance required when I created the endpoint service, the connection is pending acceptance:

Back on the endpoint service side (typically in a separate AWS account), I can see and accept the pending request:

The endpoint becomes available and ready to use within a minute or so. If I was creating a service and selling access on a paid basis, I would accept the request as part of a larger, and perhaps automated, onboarding workflow for a new customer.
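For reference, here is roughly what the same provider-and-consumer flow looks like through the SDK. This is a hedged boto3 sketch: the NLB ARN, VPC, subnet and security group IDs are placeholders, and in practice the provider and consumer calls would run in different accounts.

import boto3

ec2 = boto3.client("ec2")

# Provider side: expose a service backed by a Network Load Balancer.
svc = ec2.create_vpc_endpoint_service_configuration(
    AcceptanceRequired=True,
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/demo-nlb/abc123",
    ],
)
service_name = svc["ServiceConfiguration"]["ServiceName"]
service_id = svc["ServiceConfiguration"]["ServiceId"]

# Consumer side (typically a separate account): create an interface endpoint.
ep = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc0abc",
    ServiceName=service_name,
    SubnetIds=["subnet-0aaa0aaa", "subnet-0bbb0bbb"],
    SecurityGroupIds=["sg-0ccc0ccc"],
)

# Provider side again: accept the pending connection request.
ec2.accept_vpc_endpoint_connections(
    ServiceId=service_id,
    VpcEndpointIds=[ep["VpcEndpoint"]["VpcEndpointId"]],
)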

On the consumer side, my new endpoint is accessible via DNS name:

Services provided by AWS and services in AWS Marketplace are accessible through split-horizon DNS. Accessing the service through this name will resolve to the “best” endpoint, taking Region and Availability Zone into consideration.

In the Marketplace
As I noted earlier, this new PrivateLink feature creates an opportunity for new and existing sellers in AWS Marketplace. The following SaaS offerings are already available as endpoints and I expect many more to follow (read Sell on AWS Marketplace to get started):

CA Technologies – CA App Experience Analytics Essentials.

Aqua Security – Aqua Container Image Security Scanner.

Dynatrace – Cloud-Native Monitoring powered by AI.

Cisco Stealthwatch – Public Cloud Monitoring – Metered, Public Cloud Monitoring – Contracts.

SigOpt – ML Optimization & Tuning.

Available Today
This new PrivateLink feature is available now and you can start using it today!

Jeff;

 

Potential impact of the Intel ME vulnerability

Post Syndicated from Matthew Garrett original https://mjg59.dreamwidth.org/49611.html

(Note: this is my personal opinion based on public knowledge around this issue. I have no knowledge of any non-public details of these vulnerabilities, and this should not be interpreted as the position or opinion of my employer)

Intel’s Management Engine (ME) is a small coprocessor built into the majority of Intel CPUs[0]. Older versions were based on the ARC architecture[1] running an embedded realtime operating system, but from version 11 onwards they’ve been small x86 cores running Minix. The precise capabilities of the ME have not been publicly disclosed, but it is at minimum capable of interacting with the network[2], display[3], USB, input devices and system flash. In other words, software running on the ME is capable of doing a lot, without requiring any OS permission in the process.

Back in May, Intel announced a vulnerability in the Advanced Management Technology (AMT) that runs on the ME. AMT offers functionality like providing a remote console to the system (so IT support can connect to your system and interact with it as if they were physically present), remote disk support (so IT support can reinstall your machine over the network) and various other bits of system management. The vulnerability meant that it was possible to log into systems with enabled AMT with an empty authentication token, making it possible to log in without knowing the configured password.

This vulnerability was less serious than it could have been for a couple of reasons – the first is that “consumer”[4] systems don’t ship with AMT, and the second is that AMT is almost always disabled (Shodan found only a few thousand systems on the public internet with AMT enabled, out of many millions of laptops). I wrote more about it here at the time.

How does this compare to the newly announced vulnerabilities? Good question. Two of the announced vulnerabilities are in AMT. The previous AMT vulnerability allowed you to bypass authentication, but restricted you to doing what AMT was designed to let you do. While AMT gives an authenticated user a great deal of power, it’s also designed with some degree of privacy protection in mind – for instance, when the remote console is enabled, an animated warning border is drawn on the user’s screen to alert them.

This vulnerability is different in that it allows an authenticated attacker to execute arbitrary code within the AMT process. This means that the attacker shouldn’t have any capabilities that AMT doesn’t, but it’s unclear where various aspects of the privacy protection are implemented – for instance, if the warning border is implemented in AMT rather than in hardware, an attacker could duplicate that functionality without drawing the warning. If the USB storage emulation for remote booting is implemented as a generic USB passthrough, the attacker could pretend to be an arbitrary USB device and potentially exploit the operating system through bugs in USB device drivers. Unfortunately we don’t currently know.

Note that this exploit still requires two things – first, AMT has to be enabled, and second, the attacker has to be able to log into AMT. If the attacker has physical access to your system and you don’t have a BIOS password set, they will be able to enable it – however, if AMT isn’t enabled and the attacker isn’t physically present, you’re probably safe. But if AMT is enabled and you haven’t patched the previous vulnerability, the attacker will be able to access AMT over the network without a password and then proceed with the exploit. This is bad, so you should probably (1) ensure that you’ve updated your BIOS and (2) ensure that AMT is disabled unless you have a really good reason to use it.

The AMT vulnerability applies to a wide range of versions, everything from version 6 (which shipped around 2008) and later. The other vulnerability that Intel describe is restricted to version 11 of the ME, which only applies to much more recent systems. This vulnerability allows an attacker to execute arbitrary code on the ME, which means they can do literally anything the ME is able to do. This probably also means that they are able to interfere with any other code running on the ME. While AMT has been the most frequently discussed part of this, various other Intel technologies are tied to ME functionality.

Intel’s Platform Trust Technology (PTT) is a software implementation of a Trusted Platform Module (TPM) that runs on the ME. TPMs are intended to protect access to secrets and encryption keys and record the state of the system as it boots, making it possible to determine whether a system has had part of its boot process modified and denying access to the secrets as a result. The most common usage of TPMs is to protect disk encryption keys – Microsoft Bitlocker defaults to storing its encryption key in the TPM, automatically unlocking the drive if the boot process is unmodified. In addition, TPMs support something called Remote Attestation (I wrote about that here), which allows the TPM to provide a signed copy of information about what the system booted to a remote site. This can be used for various purposes, such as not allowing a compute node to join a cloud unless it’s booted the correct version of the OS and is running the latest firmware version. Remote Attestation depends on the TPM having a unique cryptographic identity that is tied to the TPM and inaccessible to the OS.

PTT allows manufacturers to simply license some additional code from Intel and run it on the ME rather than having to pay for an additional chip on the system motherboard. This seems great, but if an attacker is able to run code on the ME then they potentially have the ability to tamper with PTT, which means they can obtain access to disk encryption secrets and circumvent Bitlocker. It also means that they can tamper with Remote Attestation, “attesting” that the system booted a set of software that it didn’t or copying the keys to another system and allowing that to impersonate the first. This is, uh, bad.

Intel also recently announced Intel Online Connect, a mechanism for providing the functionality of security keys directly in the operating system. Components of this are run on the ME in order to avoid scenarios where a compromised OS could be used to steal the identity secrets – if the ME is compromised, this may make it possible for an attacker to obtain those secrets and duplicate the keys.

It’s also not entirely clear how much of Intel’s Software Guard Extensions (SGX) functionality depends on the ME. The ME does appear to be required for SGX Remote Attestation (which allows an application using SGX to prove to a remote site that it’s the SGX app rather than something pretending to be it), and again if those secrets can be extracted from a compromised ME it may be possible to compromise some of the security assumptions around SGX. Again, it’s not clear how serious this is because it’s not publicly documented.

Various other things also run on the ME, including stuff like video DRM (ensuring that high resolution video streams can’t be intercepted by the OS). It may be possible to obtain encryption keys from a compromised ME that allow things like Netflix streams to be decoded and dumped. From a user privacy or security perspective, these things seem less serious.

The big problem at the moment is that we have no idea what the actual process of compromise is. Intel state that it requires local access, but don’t describe what kind. Local access in this case could simply require the ability to send commands to the ME (possible on any system that has the ME drivers installed), could require direct hardware access to the exposed ME (which would require either kernel access or the ability to install a custom driver) or even the ability to modify system flash (possible only if the attacker has physical access and enough time and skill to take the system apart and modify the flash contents with an SPI programmer). The other thing we don’t know is whether it’s possible for an attacker to modify the system such that the ME is persistently compromised or whether it needs to be re-compromised every time the ME reboots. Note that even the latter is more serious than you might think – the ME may only be rebooted if the system loses power completely, so even a “temporary” compromise could affect a system for a long period of time.

It’s also almost impossible to determine if a system is compromised. If the ME is compromised then it’s probably possible for it to roll back any firmware updates but still report that it’s been updated, giving admins a false sense of security. The only way to determine for sure would be to dump the system flash and compare it to a known good image. This is impractical to do at scale.

So, overall, given what we know right now it’s hard to say how serious this is in terms of real world impact. It’s unlikely that this is the kind of vulnerability that would be used to attack individual end users – anyone able to compromise a system like this could just backdoor your browser instead with much less effort, and that already gives them your banking details. The people who have the most to worry about here are potential targets of skilled attackers, which means activists, dissidents and companies with interesting personal or business data. It’s hard to make strong recommendations about what to do here without more insight into what the vulnerability actually is, and we may not know that until this presentation next month.

Summary: Worst case here is terrible, but unlikely to be relevant to the vast majority of users.

[0] Earlier versions of the ME were built into the motherboard chipset, but as portions of that were incorporated onto the CPU package the ME followed
[1] A descendant of the SuperFX chip used in Super Nintendo cartridges such as Starfox, because why not
[2] Without any OS involvement for wired ethernet and for wireless networks in the system firmware, but requires OS support for wireless access once the OS drivers have loaded
[3] Assuming you’re using integrated Intel graphics
[4] “Consumer” is a bit of a misnomer here – “enterprise” laptops like Thinkpads ship with AMT, but are often bought by consumers.


Police Seize Hundreds of Computers Over Pirate Movie Download in 2013

Post Syndicated from Andy original https://torrentfreak.com/police-seize-hundreds-of-computers-over-pirate-movie-download-in-2013-171125/

Late October 2016, we reported on an alarming situation in Poland, where police had visited hundreds of homes across the country, seizing computers alleged to have been involved in the sharing of a comedy movie titled “Screwed“.

In some cases, police reportedly advised suspects to settle with copyright holders rather than face legal action, something critics felt was particularly inappropriate in an unproven copyright case. Now it appears that history is repeating itself in the region, with people being targeted over downloads of a local thriller titled “Drogówka”.

While this is of concern in itself, the alleged offenses took place via BitTorrent way back in 2013, four whole years ago. Local journalist Marcin Maj at Bezprawnik, who’s also an IT security instructor at Niebezpiecznik, has been documenting the activities of copyright trolls in Poland for some time. He picked up the story this week after he learned that police had seized an alleged file-sharer’s computer.

After speaking with local police, he subsequently discovered that 200 to 300 other people had been given the same treatment.

Maj says that after presenting a long list of questions to authorities, he learned that these seizures have been going on continuously for about a year, following a criminal complaint filed by a law firm. It’s at this point that the uncomfortable nature of this whole operation becomes apparent.

“In 2013-2014, lawyer Artur Glass-Brudziński reported numerous copyright infringements (movie sharing) to the prosecutor’s office, and the prosecutor’s office started to identify people behind the indicated IP addresses,” Maj informs TF.

“It’s important to understand that in the Polish legal system, it’s impossible to sue someone who is unknown to a plaintiff [John Doe]. But you can always start a criminal proceeding.”

Such a criminal proceeding was filed in 2014 but it appears that Glass-Brudziński used the process to gain a secondary advantage.

“As a barrister of the [copyright holder], Artur Glass-Brudziński had access to the prosecutor’s documentation. So he used this to obtain identified names and addresses, without waiting for the end of the criminal proceeding. Those people were just witnesses, but Glass-Brudziński sent thousands of letters to them, suggesting they are suspects, which was not true,” Maj says.

So, in effect, a criminal action was used to gain access to personal details that were subsequently used in civil actions. That’s completely legal and quite common in Poland but many view the process as problematic.

“Polish lawyers see this as something not quite ethical,” Maj reports. “Now Glass-Brudziński faces a disciplinary court because his letters were quite misleading. Regardless of that, however, criminal proceedings are still underway.”

A hearing took place before the Disciplinary Court on November 13, but a resolution will take some time to reach since there are around 80 people involved in the case. In the meantime the current criminal case continues, with several problems.

For example, it’s quite likely that many people will have changed their computers since 2013, but the police are required to seize the ones people currently have. Also, Maj reports that after speaking to people who received demands for cash payment, many report having had nothing to do with the alleged offenses. But there is a broader problem around such cases in general.

As we reported last year, prosecutors admit that they do not verify the technical processes that the copyright holders use to identify the alleged infringers, meaning that hundreds of members of the public are subjected to property seizures based on untested evidence.

“Polish prosecutors often decide to seize computers just because they got an IP address list from a lawyer. Sometimes even prosecutors don’t want to do that, but copyright owners complain to the courts, and the courts issue an order to seize machines. That’s deeply absurd,” Maj says.

“Many times I have asked prosecutors if they check the method used to track pirates. Many times I have asked prosecutors if they have found evidence on every seized computer. The answers? No. They don’t check the method of tracking pirates, and evidence is found only ‘sometimes’.”

There are clearly mounting problems in Poland with both evidence and discovery-based loopholes providing copyright holders with a significant advantage. While questionable, it’s currently all legal, so it seems likely that as long as ‘victims’ can gain access to private information via criminal cases, the cash threats will continue. It’s a topic covered in a report compiled by Maj and the Modern Poland Foundation (Polish, pdf)

“Computer seizures and our report were discussed in the lower house of the Polish parliament in 2016, at the meeting of the Commission of Digitalization, Innovation and New Technologies. Many politicians are aware of the problem and they declare we should do something to stop bullying and seizures. Unfortunately, it all ended with declarations,” Maj concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons