Tag Archives: embedded

Ringing in 2017 with 90 hacker-friendly single board computers (HackerBoards)

Post Syndicated from ris original http://lwn.net/Articles/710502/rss

HackerBoards.com takes a look at hacker-friendly single board computers. “Community backed, open spec single board computers running Linux and Android sit at the intersection between the commercial embedded market and the open source maker community. Hacker boards also play a key role in developing the Internet of Things devices that will increasingly dominate our technology economy in the coming years, from home automation devices to industrial equipment to drones.

This year, we identified 90 boards that fit our relatively loose requirements for community-backed, open spec SBCs running Linux and/or Android.”

Top Spotify Lawyer: Attracting Pirates is in Our DNA

Post Syndicated from Andy original https://torrentfreak.com/top-spotify-lawyer-attracting-pirates-is-in-our-dna-161226/

Almost eight years ago and just months after its release, TF published an article which pondered whether the fledgling Spotify service could become a true alternative to Internet piracy.

From the beginning, one of the key software engineers at Spotify has been Ludvig Strigeus, the creator of uTorrent, so clearly the company already knew a lot about file-sharers. In the early days the company was fairly open about its aim to provide an alternative to piracy, but perhaps one of the earliest indications of growing success came when early invites were shared among users of private torrent sites.

Today Spotify is indeed huge. The service has an estimated 100 million users, many of them taking advantage of its ad-supported free tier. This is the gateway for many subscribers, including millions of former and even current pirates who augment their sharing with the desirable service.

Over the years, Spotify has made no secret of its desire to recruit more pirates to its service. In 2014, Spotify Australia managing director Kate Vale said it was one of their key aims.

“People that are pirating music and not paying for it, they are the ones we want on our platform. It’s important for us to be reaching these individuals that have never paid for music before in their life, and get them onto a service that’s legal and gives money back to the rights holders,” Vale said.

Now, in a new interview with The Journal on Sports and Entertainment Law, General Counsel of Spotify Horacio Gutierrez reveals just how deeply this philosophy runs in the company. It’s absolutely fundamental to its being, he explains.

“One of the things that inspired the creation of Spotify and is part of the DNA of the company from the day it launched (and remember the service was launched for the first time around 8 years ago) was addressing one of the biggest questions that everyone in the music industry had at the time — how would one tackle and combat online piracy in music?” Gutierrez says.

“Spotify was determined from the very beginning to provide a fully licensed, legal alternative for online music consumption that people would prefer over piracy.”

The signs that just might be possible came very early on. Just months after Spotify’s initial launch the quality of its service was celebrated on what was to become the world’s best music torrent site, What.cd.

“Honestly it’s going to be huge,” a What.cd user predicted in 2008.

“I’ve been browsing and playing from its seemingly endless music catalogue all afternoon, it loads as if it’s playing from local files, so fast, so easy. If it’s this great in such early beta stages then I can’t imagine where it’s going. I feel like buying another laptop to have permanently rigged.”

Of course, hardcore pirates aren’t always easily encouraged to part with their cash, so Spotify needed an equivalent to the no-cost approach of many torrent sites. That is still being achieved today via its ad-supported entry level, Gutierrez says.

“I think one just has to look at data to recognize that the freemium model for online music consumption works. Our free tier is a key to attracting users away from online piracy, and Spotify’s success is proof that the model works.

“We have data around the world that shows that it works, that in fact we are making inroads against piracy because we offer an ability for those users to have a better experience with higher quality content, a richer catalogue, and a number of other user-minded features that make the experience much better for the user.”

Spotify’s general counsel says that the company is enjoying success, not only by bringing pirates onboard, but also by converting them to premium customers via a formula that benefits everyone in the industry.

“If you look at what has happened since the launch of the Spotify service, we have been incredibly successful on that score. Figures coming out of the music industry show that after 15 years of revenue losses, the music industry is once again growing thanks to music streaming,” he concludes.

With the shutdown of What.cd in recent weeks, it’s likely that former users will be considering the Spotify option again this Christmas, if they aren’t customers already.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Crazy Pirates Troll TorrentFreak With Bad Santa 2 Watermark

Post Syndicated from Andy original https://torrentfreak.com/crazy-pirates-troll-torrentfreak-with-bad-santa-2-watermark-161225/

Ho! Ho! Ho! Many happy returns and Merry Christmas to all our readers. It’s Christmas Day once again and it’s been a pretty eventful year in file-sharing and copyright.

While we wish things were different, there hasn’t been much positive news to report in 2016. There’s been the occasional ray of light here and there, but overall it’s been a cascade of negativity. Today, however, we promise not to spoil anyone’s Christmas lunch or well-deserved day off.

In fact, this morning we can confidently report that for at least the next 48 hours, no one will be fined, detained, arrested, extradited, or otherwise screwed around with by rightsholder groups and their affiliates. Instead, we have a rather crazy mystery on our hands, one that we really hope you can help us solve.

On November 23, the movie Bad Santa 2 was released in the United States to a somewhat lukewarm reception. Despite the average reviews, it’s a Christmas movie so pirates were still looking for something seasonal to watch.

Three weeks ago a copy surfaced in Russia with local dubbing but this week pirates obliged with an English language edition of the Billy Bob Thornton movie. However, something embedded in one of the sundry copies left us both surprised and scratching our heads here at TF.

Within seconds of the movie starting and for the next couple of minutes, a giant watermark appears on screen. Filling the entire width of the print from border to border, the watermark then slowly makes its way up the screen until it disappears off the top.

[Image: santa-tf2]

Of course, watermarks are usually put in place to indicate some kind of ownership. Studios use visible and invisible watermarks on screener copies of movies to literally stamp their name on pre-release versions of movies. However, we have absolutely no idea why someone would put our site name on a cam copy of a movie.

TorrentFreak spoke with releasers and even a couple of site operators to find out who might be behind this little surprise but we’ve had no success getting to the bottom of the mystery. It’s certainly possible that the “Streetcams” reference at the start of the watermark could hold the secret, but we’ve had no success in identifying who or what could be behind that particular brand either.

The watermark eventually scrolls away but at the end of the movie it reappears, beginning its journey from the bottom of the screen to the top in all its glory.

[Image: santa-tf3]

From there, who knows where it goes but we are aware that the “streetcams” watermark has appeared elsewhere, although not with additional TorrentFreak branding. It’s more difficult to see when compared to Bad Santa 2, but here it is on a cam copy of the movie Shut In.

[Image: shut-in]

So with logs on the fire and gifts on the tree, can you help us solve this cam mystery?

Merry Christmas and other celebrations to all our readers


Court Overturns ‘Pirate’ Site Blockade Based on EU Ruling

Post Syndicated from Andy original https://torrentfreak.com/court-overturns-pirate-site-blockade-based-on-eu-ruling-161202/

In early November, police in Italy targeted more than 150 sites involved in the unauthorized streaming of movies and sports.

The Special Units of the Guardia di Finanza obtained a mass injunction from a judge in Rome, heralding the largest ever blocking operation in the country.

At the time, Fulvio Sarzana, a lawyer specializing in Internet and copyright disputes, described the move as “sensational.” Since then he’s been defending one of the sites targeted by the sweeping action and now reports success thanks to an EU ruling.

Kisstube.tv is a site that acts as an index for movies hosted elsewhere. A cursory skim through its archives reveals plenty of content in both Italian and English, mainly stored on YouTube.

According to Sarzana, until the action last month Kisstube had never received any infringement complaints. Nevertheless, it found itself blocked along with dozens of other sites without any pre-action discussion.

After being hired by Kisstube, Sarzana took the site’s case to the Rome Court of Appeal, arguing that it should never have been ordered blocked. Perhaps surprisingly, the Court agreed and overturned the injunction against the site. So what was its reasoning?

Like many other similar sites, Kisstube hosts none of its own content. The site embeds videos that are stored elsewhere, on hosting platforms such as YouTube.

“The Kisstube portal links to YouTube, where there is the Content ID system and a notice and takedown system, just as there is a notice and takedown system on the Kisstube site,” Sarzana informs TorrentFreak.

“I do not know whether they are copyrighted films allowed on YouTube, but in this case it should have been YouTube that removed any pirated movies. The site that embeds the content cannot be held responsible.”

In handing down its decision the Court of Appeals considered two rulings from the European Court of Justice.

The first involved water filtering company BestWater International, which accused two men of copyright infringement after they embedded a BestWater promotional video in their website in a YouTube frame. Even though the video had been uploaded to YouTube without BestWater’s permission, the Court found that embedding the content in a third-party site did not amount to infringement.

The second more recent case involved Dutch blog GeenStijl.nl, which published an article linking to leaked Playboy photos which were stored on file-hosting site FileFactory.

“I do not like mass-blocking and am convinced that before a judge you can always explain your case,” Sarzana told TF following his win for KissTube.

However, the lawyer says that the future may be more complex. New rules under discussion have the potential to limit the freedoms of sites that rely on user-uploaded content.

“My fear is that the new EU rules on copyright under discussion are trying to put rules in place which would prohibit any type of linking and embedding to overcome the protection that the EU Court of Justice has guaranteed UGC portals like Youtube,” Sarzana concludes.


AWS Greengrass – Ubiquitous, Real-World Computing

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-greengrass-ubiquitous-real-world-computing/

Computing and data processing within the confines of a data center or office is easy. You can generally count on good connectivity and a steady supply of electricity, and you have access to as much on-premises or cloud-based storage and compute power as you need. The situation is much different out in the real world. Connectivity can be intermittent, unreliable, and limited in speed and scale. Power is at a premium, putting limits on how much storage and compute power can be brought in to play.

Lots of interesting and potentially valuable data is present out in the field, if only it could be collected, processed, and turned into actionable intelligence. This data could be located miles below the surface of the Earth in a mine or an oil well, in a sensitive & safety-critical location like a hospital or a factory, or even on another planet (hello, Curiosity).

Our customers are asking for a way to use the scale and power of the AWS Cloud to help them to do local processing under these trying conditions. First, they want to build systems that measure, sense, and act upon the data locally. Then they want to bring cloud-like, local intelligence to bear on the data, implementing local actions that are interdependent and coordinated. To make this even more challenging, they want to make use of any available local processing and storage resources, while also connecting to specialized sensors and peripherals.

Introducing AWS Greengrass
I’d like to tell you about AWS Greengrass. This new service is designed to allow you to address the challenges that I outlined above by extending the AWS programming model to small, simple, field-based devices.

Greengrass builds on AWS IoT and AWS Lambda, and can also access other AWS services. It is built for offline operation and greatly simplifies the implementation of local processing. Code running in the field can collect, filter, and aggregate freshly collected data and then push it up to the cloud for long-term storage and further aggregation. Further, code running in the field can also take action very quickly, even in cases where connectivity to the cloud is temporarily unavailable.

If you are already developing embedded systems for small devices, you will now be able to make use of modern, cloud-aware development tools and workflows. You can write and test your code in the cloud and then deploy it locally. You can write Python code that responds to device events and you can make use of MQTT-based pub/sub messaging for communication.
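
As a rough illustration of that workflow, here is a minimal Lambda-style handler that filters and aggregates sensor readings locally before anything is pushed upstream. The event shape and names (`readings`, `device_id`) are invented for this sketch and are not the actual Greengrass payload format:

```python
import json

def handler(event, context):
    # Hypothetical local Lambda handler: drop bad readings at the edge and
    # return only a compact summary, instead of shipping raw data upstream.
    readings = [r for r in event["readings"] if r is not None]
    summary = {
        "device_id": event["device_id"],
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }
    return json.dumps(summary)
```

A real function would publish the summary to an AWS IoT topic for long-term storage rather than just returning it.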

Greengrass has two constituent parts, the Greengrass Core (GGC) and the IoT Device SDK. Both of these components run on your own hardware, out in the field.

Greengrass Core is designed to run on devices that have at least 128 MB of memory and an x86 or ARM CPU running at 1 GHz or better, and can take advantage of additional resources if available. It runs Lambda functions locally, interacts with the AWS Cloud, manages security & authentication, and communicates with the other devices under its purview.

The IoT Device SDK is used to build the applications that run on the devices that connect to the device that hosts the core (generally via a LAN or other local connection). These applications will capture data from sensors, subscribe to MQTT topics, and use AWS IoT device shadows to store and retrieve state information.
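
Device shadows themselves are just JSON documents: the device reports its current state under `state.reported`, and the shadow service stores it so the cloud (or other devices) can read the last known state even while the device is offline. A tiny sketch, with a helper name of our own invention:

```python
import json

def shadow_update(reported_state):
    # Build a device shadow update document in the published shadow format:
    # the device's current state goes under "state.reported".
    return json.dumps({"state": {"reported": reported_state}})
```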

Using AWS Greengrass
You will be able to set up and manage many aspects of Greengrass through the AWS Management Console, the AWS APIs, and the AWS Command Line Interface (CLI).

You will be able to register new hub devices, configure the desired set of Lambda functions, and create a deployment package for delivery to the device. From there, you will associate the lightweight devices with the hub.

Now in Preview
We are launching a limited preview of AWS Greengrass today and you can sign up now if you would like to participate.

Each AWS customer will be able to use up to 3 devices for one year at no charge. Beyond that, the monthly cost for each active Greengrass Core is $0.16 ($1.49 per year) for up to 10,000 devices.

Jeff;

 

IPTV: Mass Piracy That’s Flying Largely Under The Radar

Post Syndicated from Andy original https://torrentfreak.com/iptv-mass-piracy-thats-flying-largely-under-the-radar-161127/

Anyone with a Kodi setup understands how it works. Download the software, install a bunch of third party addons, and enjoy TV shows, sports, movies and PPVs, all for free.

While millions have fun doing just that, even the leading experts in Kodi setups acknowledge that they are limited by the content being provided by third parties. Sometimes the quality is good and the service is reliable, but at the other extreme, people can spend more time getting stuff to work than actually watching and relaxing.

For those who simply can’t stand this kind of messing around, going legit is the sensible option. Subscribe to a big TV package with a local provider, pay for all the sports and PPVs, tack on the movie package, throw in Netflix for good measure, and then sit back and enjoy the ride. Trouble is, it costs a fortune.

However, somewhere in the middle lies a third way. It’s a lurking piracy monster that is getting very little press.

IPTV is short for Internet Protocol television, which is a fancy way of describing TV content that’s streamed over the Internet rather than via terrestrial, satellite, or cable formats. That said, thinking this is all IPTV has to offer would be a big mistake.

We aren’t going to name names, but the service shown to TF this week boasts more than 3,000 live and premium TV channels from all over the world, many of them in HD quality. Apparently a bigger package of more than 4,000 channels is also available if you really want to gorge on media.

Streamed over the web, the channels can be viewed on a range of devices, from VLC Media Player on PC, through to Android and Apple apps. However, the jewel in the crown is a tiny little device called a Mag Box.

[Image: magbox254]

A Mag Box is a cheap Linux-based box that plugs into any modern TV and utterly transforms the way streaming pirate content is consumed. Instead of reams of text in a VLC playlist, for example, Mag Boxes and an appropriate IPTV service are almost indistinguishable from the real deal.

Channels are presented in Electronic Program Guide (EPG) format and switching from one to another is achieved in fractions of a second. Poor photography skills (and dodgy Christmas wallpaper) aside, the menus are very well presented.

[Image: iptv-1]

As shown in the image above, when a channel is selected a moving preview appears in the window on the right. Under that sits an EPG covering the next few hours which allows for planning ahead. Also accessible are full channel EPG views that are easily as good as many commercial offerings.

But while at first glance these services seem dedicated to only live TV (albeit on any channel you could imagine, anywhere in the world), many have another trick up their sleeve. The service we saw also carries hundreds of the latest movies, up to 4K in resolution and even in 3D. They too can be accessed from professional looking menus and play, interruption free, every single time.

[Image: iptv-4]

Also on offer is a massive ‘catch-up’ service which provides all the latest episodes of the most popular shows on-demand. Shows appear to be available minutes after airing and can be paused and skipped within a convenient media player setup.

Instead of using dedicated hardware, many IPTV users also use Kodi to view these kinds of streams. There are plenty of tutorials available online which detail how to activate the PVR component of the popular media player to access these kinds of services. However, from what we’ve seen so far, the Mag Box experience is head and shoulders above everything else.

So what’s stopping everyone from dumping torrents and web-based streaming services and jumping over to premium IPTV products right now? Well, it’s the same old story – cost.

Like a good VPN service, a decent IPTV service costs money – a few dollars, pounds, or euros per month. Depending on what they’re offering, Netflix-style deals are possible but for those who simply must have everything, it’s closer to double that price per month, or less if one subscribes for a whole year.

That being said, as we’ve seen before in many areas of piracy, these pirate IPTV services offer massively more bang for your buck than official offerings. The only practical problem is that there aren’t enough hours in a lifetime to watch everything they have to offer. The selection is bewildering.

Needless to say, these services are definitely illegal, certainly when it comes down to offering them to the public. Whether it’s illegal for users to watch these streams is down to individual countries’ laws, but on the whole the legal system is untested in both Europe and the United States so prosecutions seem unlikely, at least for now.

The video embedded below shows a similar IPTV service to the one seen by TF. It is not the actual provider, and we certainly don’t endorse any of these products.


Embedding Lua in ZDoom

Post Syndicated from Eevee original https://eev.ee/blog/2016/11/26/embedding-lua-in-zdoom/

I’ve spent a little time trying to embed a Lua interpreter in ZDoom. I didn’t get too far yet; it’s just an experimental thing I poke at every once in a while. The existing pile of constraints makes it an interesting problem, though.

Background

ZDoom is a “source port” (read: fork) of the Doom engine, with all the changes from the commercial forks merged in (mostly Heretic, Hexen, Strife), and a lot of internal twiddles exposed. It has a variety of mechanisms for customizing game behavior; two are major standouts.

One is ACS, a vaguely C-ish language inherited from Hexen. It’s mostly used to automate level behavior — at the simplest, by having a single switch perform multiple actions. It supports the usual loops and conditionals, it can store data persistently, and ZDoom exposes a number of functions to it for inspecting and altering the state of the world, so it can do some neat tricks. Here’s an arbitrary script from my DUMP2 map.

script "open_church_door" (int tag)
{
    // Open the door more quickly on easier skill levels, so running from the
    // arch-vile is a more viable option
    int skill = GameSkill();
    int speed;
    if (skill < SKILL_NORMAL)
        speed = 64;  // blazing door speed
    else if (skill == SKILL_NORMAL)
        speed = 16;  // normal door speed
    else
        speed = 8;  // very dramatic door speed

    Door_Raise(tag, speed, 68);  // double usual delay
}

However, ZDoom doesn’t actually understand the language itself; ACS is compiled to bytecode. There’s even at least one alternative language that compiles to the same bytecode, which is interesting.

The other big feature is DECORATE, a mostly-declarative mostly-interpreted language for defining new kinds of objects. It’s a fairly direct reflection of how Doom actors are implemented, which is in terms of states. In Doom and the other commercial games, actor behavior was built into the engine, but this language has allowed almost all actors to be extracted as text files instead. For example, the imp is implemented partly as follows:

  States
  {
  Spawn:
    TROO AB 10 A_Look
    Loop
  See:
    TROO AABBCCDD 3 A_Chase
    Loop
  Melee:
  Missile:
    TROO EF 8 A_FaceTarget
    TROO G 6 A_TroopAttack
    Goto See
  ...
  }

TROO is the name of the imp’s sprite “family”. A, B, and so on are individual frames. The numbers are durations in tics (35 per second). All of the A_* things (which are optional) are action functions, behavioral functions (built into the engine) that run when the actor switches to that frame. An actor starts out at its Spawn state, so an imp behaves as follows:

  • Spawn. Render as TROO frame A. (By default, action functions don’t run on the very first frame they’re spawned.)
  • Wait 10 tics.
  • Change to TROO frame B. Run A_Look, which checks to see if a player is within line of sight, and if so jumps to the See state.
  • Wait 10 tics.
  • Repeat. (This time, frame A will also run A_Look, since the imp was no longer just spawned.)
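
To make the dispatch concrete, here is a toy model of that Spawn loop. This is not ZDoom code; the table layout and function are made up, but they follow the behavior described above:

```python
# Each entry: (sprite family, frame, duration in tics, action function).
# "Loop" is modeled by wrapping the index back to the start of the table.
SPAWN_STATES = [
    ("TROO", "A", 10, "A_Look"),
    ("TROO", "B", 10, "A_Look"),
]

def run_spawn(tics, skip_first_action=True):
    """Step through the Spawn loop for `tics` tics, recording actions run."""
    actions = []
    i, clock = 0, 0
    first = True
    while clock < tics:
        _sprite, _frame, duration, action = SPAWN_STATES[i]
        # By default, action functions don't run on the very first frame.
        if action and not (first and skip_first_action):
            actions.append(action)
        first = False
        clock += duration
        i = (i + 1) % len(SPAWN_STATES)
    return actions
```

Run it for 40 tics and the imp calls A_Look three times: once on frame B the first time around, then on both frames every subsequent pass.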

All monster and item behavior is one big state table. Even the player’s own weapons work this way, which becomes very confusing — at some points a weapon can be running two states simultaneously. Oh, and there’s A_CustomMissile for monster attacks but A_FireCustomMissile for weapon attacks, and the arguments are different, and if you mix them up you’ll get extremely confusing parse errors.

It’s a little bit of a mess. It’s fairly flexible for what it is, and has come a long way — for example, even original Doom couldn’t pass arguments to action functions (since they were just function pointers), so it had separate functions like A_TroopAttack for every monster; now that same function can be written generically. People have done some very clever things with zero-delay frames (to run multiple action functions in a row) and storing state with dummy inventory items, too. Still, it’s not quite a programming language, and it’s easy to run into walls and bizarre quirks.

When DECORATE lets you down, you have one interesting recourse: to call an ACS script!

Unfortunately, ACS also has some old limitations. The only type it truly understands is int, so you can’t manipulate an actor directly or even store one in a variable. Instead, you have to work with TIDs (“thing IDs”). Every actor has a TID (zero is special-cased to mean “no TID”), and most ACS actor-related functions are expressed in terms of TIDs. For level automation, this is fine, and probably even what you want — you can dump a group of monsters in a map, give them all a TID, and then control them as a group fairly easily.
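
A toy model of that addressing scheme (invented structures, not ACS internals) shows why group control is easy while individual references are impossible:

```python
from collections import defaultdict

# Actors are only addressable by integer TID; TID 0 means "no TID", and
# such actors are invisible to TID-based functions entirely.
actors_by_tid = defaultdict(list)

def spawn(name, tid=0):
    actor = {"name": name, "health": 100, "tid": tid}
    if tid != 0:
        actors_by_tid[tid].append(actor)
    return actor

def thing_damage(tid, amount):
    """ACS-style call: affects *every* actor carrying the given TID."""
    for actor in actors_by_tid[tid]:
        actor["health"] -= amount
```

Damaging TID 5 hits the whole group at once, while an actor spawned with TID 0 can’t be reached at all, which is exactly the problem for reusable DECORATE actors.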

But if you want to use ACS to enhance DECORATE, you have a bit of a problem. DECORATE defines individual actor behavior. Also, many DECORATE actors are designed independently of a map and intended to be reusable anywhere. DECORATE should thus not touch TIDs at all, because they’re really the map’s concern, and mucking with TIDs might break map behavior… but ACS can’t refer to actors any other way. A number of action functions can, but you can’t call action functions from ACS, only DECORATE. The workarounds for this are not pretty, especially for beginners, and they’re very easy to silently get wrong.

Also, ultimately, some parts of the engine are just not accessible to either ACS or DECORATE, and neither language is particularly amenable to having them exposed. Adding more native types to ACS is rather difficult without making significant changes to both the language and bytecode, and DECORATE is barely a language at all.

Some long-awaited work is finally being done on a “ZScript”, which purports to solve all of these problems by expanding DECORATE into an entire interpreted-C++-ish scripting language with access to tons of internals. I don’t know what I think of it, and it only seems to half-solve the problem, since it doesn’t replace ACS.

Trying out Lua

Lua is supposed to be easy to embed, right? That’s the one thing it’s famous for. Before ZScript actually started to materialize, I thought I’d take a little crack at embedding a Lua interpreter and exposing some API stuff to it.

It’s not very far along yet, but it can do one thing that’s always been completely impossible in both ACS and DECORATE: print out the player’s entire inventory. You can check how many of a given item the player has in either language, but neither has a way to iterate over a collection. In Lua, it’s pretty easy.

function lua_test_script(activator, ...)
    for item, amount in pairs(activator.inventory) do
        -- This is Lua's builtin print(), so it goes to stdout
        print(item.class.name, amount)
    end
end

I made a tiny test map with a switch that tries to run the ACS script named lua_test_script. I hacked the name lookup to first look for the name in Lua’s global scope; if the function exists, it’s called immediately, and ACS isn’t consulted at all. The code above is just a regular (global) function in a regular Lua file, embedded as a lump in the map. So that was a good start, and was pretty neat to see work.
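
The dispatch itself is simple enough to sketch generically; here plain dicts stand in for Lua’s global scope and the ACS script table (none of this is the real engine code):

```python
def run_named_script(name, lua_globals, acs_scripts, *args):
    # Try the Lua global scope first; if the name is bound to a callable,
    # run it and skip ACS entirely.  Otherwise fall back to the ACS table.
    fn = lua_globals.get(name)
    if callable(fn):
        return ("lua", fn(*args))
    if name in acs_scripts:
        return ("acs", acs_scripts[name](*args))
    raise KeyError(f"no script named {name!r}")
```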

Writing the bindings

I used the bare Lua API at first. While its API is definitely very simple, actually using it to define and expose a large API in practice is kind of repetitive and error-prone, and I was never confident I was doing it quite right. It’s plain C and it works entirely through stack manipulation and it relies on a lot of casting to/from void*, so virtually anything might go wrong at any time.

I was on the cusp of writing a bunch of gross macros to automate the boring parts, and then I found sol2, which is pretty great. It makes heavy use of basically every single C++11 feature, so it’s a nightmare when it breaks (and I’ve had to track down a few bugs), but it’s expressive as hell when it works:

lua.new_usertype<AActor>("zdoom.AActor",
    "__tostring", [](AActor& actor) { return "<actor>"; },
    // Pointer to an unbound method.  Sol automatically makes this an attribute
    // rather than a method because it takes no arguments, then wraps its
    // return value to pass it back to Lua, no manual wrapper code required.
    "class", &AActor::GetClass,
    "inventory", sol::property([](AActor& actor) -> ZLuaInventory { return ZLuaInventory(actor); }),
    // Pointers to unbound attributes.  Sol turns these into writable
    // attributes on the Lua side.
    "health", &AActor::health,
    "floorclip", &AActor::Floorclip,
    "weave_index_xy", &AActor::WeaveIndexXY,
    "weave_index_z", &AActor::WeaveIndexZ);

This is the type of the activator argument from the script above. It works via template shenanigans, so most of the work is done at compile time. AActor has a lot of properties of various types; wrapping them with the bare Lua API would’ve been awful, but wrapping them with Sol is fairly straightforward.

Lifetime

activator.inventory is a wrapper around a ZLuaInventory object, which I made up. It’s just a tiny proxy struct that tries to represent the inventory of a particular actor, because the engine itself doesn’t quite have such a concept — an actor’s “inventory” is a single item (itself an actor), and each item has a pointer to the next item in the inventory. Creating an intermediate type lets me hide that detail from Lua and pretend the inventory is a real container.

The inventory is thus not a real table; pairs() works on it because it provides the __pairs metamethod. It calls an iter method returning a closure, per Lua’s iteration API, which Sol makes just work:

struct ZLuaInventory {
    ...
    std::function<AInventory* ()> iter()
    {
        TObjPtr<AInventory> item = this->actor->Inventory;
        return [item]() mutable {
            AInventory* ret = item;
            if (ret)
                item = ret->NextInv();
            return ret;
        };
    }
};

C++’s closures are slightly goofy and it took me a few tries to land on this, but it works.

Well, sort of.

I don’t know how I got this idea in my head, but I was pretty sure that ZDoom’s TObjPtr did reference counting and would automatically handle the lifetime problems in the above code. Eventually Lua reaps the closure, then C++ reaps the closure, then the wrapped AInventory’s refcount drops, and all is well.

Turns out TObjPtr doesn’t do reference counting. Rather, all the game objects participate in tracing garbage collection. The basic idea is to start from some root object and recursively traverse all the objects reachable from that root; whatever isn’t reached is garbage and can be deleted.

Unfortunately, the Lua interpreter is not reachable from ZDoom’s own object tree. If an object ends up only being held by Lua, ZDoom will think it’s garbage and delete it prematurely, leaving a dangling reference. Those are bad.

I think I can fix this without too much trouble. Sol allows customizing how it injects particular types, so I can use that for the type tree that participates in this GC scheme and keep an unordered_set of all objects that are alive in Lua. The Lua interpreter itself is already wrapped in an object that participates in the GC, so when the GC descends to the wrapper, it’s easy to tell it that that set of objects is alive. I’ll probably need to figure out read/write barriers, too, but I haven’t looked too closely at how ZDoom uses those yet. I don’t know whether it’s possible for an object to be “dead” (as in no longer usable, not just 0 health) before being reaped, but if so, I’ll need to figure out something there too.
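As a sketch, the pinning side of that workaround might look like this; the hook names are made up for illustration and just mark where Sol would create and finalize userdata:

```cpp
#include <cassert>
#include <unordered_set>

struct GCNode {};  // stand-in for a ZDoom game object (hypothetical)

// Every object currently referenced from Lua gets pinned here. When the
// engine's GC visits the interpreter's wrapper object, the wrapper reports
// this whole set as reachable, so nothing held only by Lua gets reaped.
struct LuaLiveSet {
    std::unordered_set<GCNode*> alive;

    void on_push_to_lua(GCNode* obj) { alive.insert(obj); }   // userdata created
    void on_lua_finalize(GCNode* obj) { alive.erase(obj); }   // Lua's __gc ran

    // Called from the wrapper's marker during the engine's mark phase.
    template <typename MarkFn>
    void propagate(MarkFn mark_obj) const {
        for (GCNode* obj : alive)
            mark_obj(obj);
    }
};
```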

It’s a little ironic that I have to do this weird workaround when ZDoom’s tracing garbage collector is based on… Lua’s.

ZDoom does have types I want to expose that aren’t garbage collected, but those are all map structures like sectors, which are never created or destroyed at runtime. I will have to be careful with the Lua interpreter itself to make sure those can’t live beyond the current map, but I haven’t really dealt with map changes at all yet. The ACS approach is that everything is map-local, and there’s some limited storage for preserving values across maps; I could do something similar, perhaps only allowing primitive scalars.
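A minimal version of that cross-map storage could look like the following; WorldStore is a hypothetical name, and the ACS-world-variable-ish restriction to primitive scalars is exactly what keeps map objects from leaking across a level change:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <variant>

// Hypothetical cross-map store in the spirit of ACS world variables: the Lua
// interpreter dies with the map, but this survives, and only primitive
// scalars are allowed in, so no pointer to map state can escape.
using Scalar = std::variant<bool, double, std::string>;

struct WorldStore {
    std::map<std::string, Scalar> values;

    void set(const std::string& key, Scalar v) { values[key] = std::move(v); }

    // Missing keys read as false, roughly how ACS treats unset variables.
    Scalar get(const std::string& key) const {
        auto it = values.find(key);
        return it != values.end() ? it->second : Scalar(false);
    }
};
```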

Asynchronicity

Another critical property of ACS scripts is that they can pause themselves. They can either wait for a set number of tics with delay(), or wait for map geometry to stop being busy with something like tagwait(). So you can raise up some stairs, wait for the stairs to finish appearing, and then open the door they lead to. Or you can simulate game rules by running a script in an infinite loop that waits for a few tics between iterations. It’s pretty handy. It’s incredibly handy. It’s non-negotiable.

Luckily, Lua can emulate this using coroutines. I implemented the delay case yesterday:

function lua_test_script(activator, ...)
    zprint("hey it's me what's up", ...)
    coroutine.yield("delay", 70)
    zprint("i'm back again")
end

When I press the switch, I see the first message, then there’s a two-second pause (Doom is 35fps), then I see the second message.

A lot more details need to be hammered out before this is really equivalent to what ACS can do, but the basic functionality is there. And since these are full-stack coroutines, I can trivially wrap that yield gunk in a delay(70) function, so you never have to know the difference.
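The engine side of delay() reduces to a wake-up queue keyed by tic. Here’s a sketch where a plain callback stands in for resuming the real Lua coroutine:

```cpp
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// Sketch of engine-side scheduling for delay(): each yielded script is parked
// with a wake-up tic, and the game loop resumes whatever is due. The
// std::function stands in for resuming an actual Lua coroutine.
struct ScriptScheduler {
    struct Sleeper {
        int wake_tic;
        std::function<void()> resume;
    };
    std::vector<Sleeper> sleeping;
    int current_tic = 0;

    void delay(int tics, std::function<void()> resume) {
        sleeping.push_back({current_tic + tics, std::move(resume)});
    }

    void tick() {  // called once per game tic (35 per second)
        ++current_tic;
        for (std::size_t i = 0; i < sleeping.size();) {
            if (sleeping[i].wake_tic <= current_tic) {
                auto resume = std::move(sleeping[i].resume);
                sleeping.erase(sleeping.begin() + i);
                resume();  // may call delay() again and re-park itself
            } else {
                ++i;
            }
        }
    }
};
```

A yield of ("delay", 70) would translate to `delay(70, resume_this_coroutine)` on the C++ side, and the two-second pause falls out of the tic arithmetic.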

Determinism

ZDoom has demos and peer-to-peer multiplayer. Both features rely critically on the game state’s unfolding exactly the same way, given the same seed and sequence of inputs.

ACS goes to great lengths to preserve this. It executes deterministically. It has very, very few ways to make decisions based on anything but the current state of the game. Netplay and demos just work; modders and map authors never have to think about it.

I don’t know if I can guarantee the same about Lua. I’d think so, but I don’t know so. Will the order of keys in a table be exactly the same on every system, for example? That’s important! Even the ACS random-number generator is deterministic.

I hope this is the case. I know some games, like Starbound, implicitly assume for multiplayer purposes that scripts will execute the same way on every system. So it’s probably fine. I do wish Lua made some sort of guarantee here, though, especially since it’s such an obvious and popular candidate for game scripting.
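For illustration, here is the sync property that actually matters (same seed, same sequence, on every machine) using a stand-in LCG; Doom’s real RNG is famously a fixed 256-byte table, and this is not it:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-in RNG showing the determinism requirement for demos
// and netplay: given the same seed, every peer must draw the identical
// sequence. The constants are the well-known Numerical Recipes LCG.
struct DeterministicRng {
    std::uint32_t state;
    explicit DeterministicRng(std::uint32_t seed) : state(seed) {}

    std::uint32_t next() {
        state = state * 1664525u + 1013904223u;
        return state >> 16;  // discard the weak low bits
    }
};
```

Fixed-width integer arithmetic like this is bit-identical across platforms; the open question in the post is whether the rest of the Lua runtime (table iteration order in particular) offers the same guarantee.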

Savegames

ZDoom allows you to quicksave at any time.

Any time.

Not while a script is running, mind you. Script execution blocks the gameplay thread, so only one thing can actually be happening at a time. But what happens if you save while a script is in the middle of a tagwait?

The coroutine needs to be persisted, somehow. More importantly, when the game is loaded, the coroutine needs to be restored to the same state: paused in the same place, with locals set to the same values. Even if those locals were wrapped pointers to C++ objects, which now have different addresses.

Vanilla Lua has no way to do this. Vanilla Lua has a pretty poor serialization story overall — nothing is built in — which is honestly kind of shocking. People use Lua for games, right? Like, a lot? How is this not an extremely common problem?

A potential solution exists in the form of Eris, a modified Lua that does all kinds of invasive things to allow absolutely anything to be serialized. Including coroutines!

So Eris makes this at least possible. I haven’t made even the slightest attempt at using it yet, but a few gotchas already stand out to me.

For one, Eris serializes everything. Even regular ol’ functions are serialized as Lua bytecode. A naïve approach would thus end up storing a copy of the entire game script in the save file.

Eris has a thing called the “permanent object table”, which allows giving names to specific Lua values. Those values are then serialized by name instead, and the names are looked up in the same table to deserialize. So I could walk the Lua namespace myself after the initial script load and stick all reachable functions in this table to avoid having them persisted. (That won’t catch code loaded during play, but that sounds like a really bad idea anyway, and I’d like to prevent it if possible.) I have to do this to some extent anyway, since Eris can’t persist the wrapped C++ functions I’m exposing to Lua. Even if a script does some incredibly fancy dynamic stuff to replace global functions with closures at runtime, that’s okay; they’ll be different functions, so Eris will fall back to serializing them.

Then when the save is reloaded, Eris will replace any captured references to a global function with the copy that already exists in the map script. ZDoom doesn’t let you load saves across different mods, so the functions should be the same. I think. Hmm, maybe I should check on exactly what the load rules are. If you can load a save against a more recent copy of a map, you’ll want to get its updated scripts, but stored closures and coroutines might be old versions, and that is probably bad. I don’t know if there’s much I can do about that, though, unless Eris can somehow save the underlying code from closures/coros as named references too.

Eris also has a mechanism for storing wrapped native objects, so all I have to worry about is translating pointers, and that’s a problem Doom has already solved (somehow). Alas, that mechanism is also accessible to pure Lua code, and the docs warn that it’s possible to get into an infinite loop when loading. I’d rather not give modders the power to fuck up a save file, so I’ll have to disable that somehow.

Finally, since Eris loads bytecode, it’s possible to do nefarious things with a specially-crafted save file. But the save file is already full of a web of pointers, so I suspect it’s not too hard to segfault the game with a hand-crafted one regardless. I’ll need to look into this. Or maybe I won’t, since I don’t seriously expect this to be merged in.

Runaway scripts

Speaking of which, ACS currently has detection for “runaway scripts”, i.e. those that look like they might be stuck in an infinite loop (or are just doing a ludicrous amount of work). Since scripts are blocking, the game does not actually progress while a script is running, and a very long script would appear to freeze the game.

I think ACS does this by counting instructions. I see Lua has its own mechanism for doing that, so limiting script execution “time” shouldn’t be too hard.
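Lua’s mechanism is a count hook installed with lua_sethook, which calls back every N VM instructions. A stdlib-only model of what such a hook would do with a budget:

```cpp
#include <cassert>
#include <stdexcept>

// Model of runaway-script detection: the count hook charges the budget every
// time it fires, and a script that overspends gets unwound with an error
// instead of freezing the game. Names are illustrative, not Lua API.
struct InstructionBudget {
    long remaining;

    explicit InstructionBudget(long budget) : remaining(budget) {}

    // Called from the periodic hook; throwing here aborts the script but
    // leaves the interpreter itself intact.
    void charge(long instructions) {
        remaining -= instructions;
        if (remaining < 0)
            throw std::runtime_error("runaway script terminated");
    }
};
```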

Defining new actors

I want to be able to use Lua with (or instead of) DECORATE, too, but I’m a little hung up on syntax.

I do have something slightly working — I was able to create a variant imp class with a bunch more health from Lua, then spawn it and fight it. Also, I did it at runtime, which is probably bad — I don’t know that there’s any way to destroy an actor class, so having them be map-scoped makes no sense.

That could actually pose a bit of a problem. The Lua interpreter should be scoped to a single map, but actor classes are game-global. Do they live in separate interpreters? That seems inconvenient. I could load the game-global stuff, take an internal-only snapshot of the interpreter with Eris (bytecode and all), and then restore it at the beginning of each level? Hm, then what happens if you capture a reference to an actor method in a save file…? Christ.

I could consider making the interpreter global and doing black magic to replace all map objects with nil when changing maps, but I don’t think that can possibly work either. ZDoom has hubs — levels that can be left and later revisited, preserving their state just like with a save — and that seems at odds with having a single global interpreter whose state persists throughout the game.

Er, anyway. So, the problem with syntax is that DECORATE’s own syntax is extremely compact and designed for its very specific goal of state tables. Even ZScript appears to preserve the state table syntax, though it lets you write your own action functions or just provide a block of arbitrary code. Here’s a short chunk of the imp implementation again, for reference.

  States
  {
  Spawn:
    TROO AB 10 A_Look
    Loop
  See:
    TROO AABBCCDD 3 A_Chase
    Loop
  ...
  }

Some tricky parts that stand out to me:

  • Labels are important, since these are state tables, and jumping to a particular state is very common. It’s tempting to use Lua coroutines here somehow, but short of using a lot of goto in Lua code (yikes!), jumping around arbitrarily doesn’t work. Also, it needs to be possible to tell an actor to jump to a particular state from outside — that’s how A_Look works, and there’s even an ACS function to do it manually.

  • Aside from being shorthand, frames are fine. Though I do note that hacks like AABBCCDD 3 are relatively common. The actual animation that’s wanted here is ABCD 6, but because animation and behavior are intertwined, the frames need to be repeated to run the action function more often. I wonder if it’s desirable to be able to separate display and behavior?

  • The durations seem straightforward, but they can actually be a restricted kind of expression as well. So just defining them as data in a table doesn’t quite work.

  • This example doesn’t have any, but states can also have a number of flags, indicated by keywords after the duration. (Slightly ambiguous, since there’s nothing strictly distinguishing them from action functions.) Bright, for example, is a common flag on projectiles, weapons, and important pickups; it causes the sprite to be drawn fullbright during that frame.

  • Obviously, actor behavior is a big part of the game sim, so ideally it should require dipping into Lua-land as little as possible.

Ideas I’ve had include the following.

Emulate state tables with arguments? A very straightforward way to do the above would be to just, well, cram it into one big table.

define_actor{
    ...
    states = {
        'Spawn:',
        'TROO', 'AB', 10, A_Look,
        'loop',
        'See:',
        'TROO', 'AABBCCDD', 3, A_Chase,
        'loop',
        ...
    },
}

It would work, technically, I guess, except for non-literal durations, but I’d basically just be exposing the DECORATE parser from Lua and it would be pretty ridiculous.

Keep the syntax, but allow calling Lua from it? DECORATE is okay, for the most part. For simple cases, it’s great, even. Would it be good enough to be able to write new action functions in Lua? Maybe. Your behavior would be awkwardly split between Lua and DECORATE, though, which doesn’t seem ideal. But it would be the most straightforward approach, and it would completely avoid questions of how to emulate labels and state counts.

As an added benefit, this would keep DECORATE almost-purely declarative — which means editor tools could still reliably parse it and show you previews of custom objects.

Split animation from behavior? This could go several ways, but the most obvious to me is something like:

define_actor{
    ...
    states = {
        spawn = function(self)
            self:set_animation('AB', 10)
            while true do
                A_Look(self)
                delay(10)
            end
        end,
        see = function(self)
            self:set_animation('ABCD', 6)
            while true do
                A_Chase(self)
                delay(3)
            end
        end,
    },
}

This raises plenty of other API questions, like how to wait until an animation has finished or how to still do work on a specific frame, but I think those are fairly solvable. The big problems are that it’s very much not declarative, and it ends up being rather wordier. It’s not all boilerplate, though; it’s fairly straightforward. I see some value in having state delays and level script delays work the same way, too. And in some cases, you have only an animation with no code at all, so the heavier use of Lua should balance out. I don’t know.

A more practical problem is that, currently, it’s possible to jump to an arbitrary number of states past a given label, and that would obviously make no sense with this approach. It’s pretty rare and pretty unreadable, so maybe that’s okay. Also, labels aren’t blocks, so it’s entirely possible to have labels that don’t end with a keyword like loop and instead carry straight on into the next label — but those are usually used for logic more naturally expressed as for or while, so again, maybe losing that ability is okay.

Or… perhaps it makes sense to do both of these last two approaches? Built-in classes should stay as DECORATE anyway, so that existing code can still inherit from them and perform jumps with offsets, but new code could go entirely Lua for very complex actors.

Alas, this is probably one of those questions that won’t have an obvious answer unless I just build several approaches and port some non-trivial stuff to them to see how they feel.

And further

An enduring desire among ZDoom nerds has been the ability to write custom “thinkers”. Thinkers are really anything that gets to act each tic, but the word also specifically refers to the logic responsible for moving floors, opening doors, changing light levels, and so on. Exposing those more directly to Lua, and letting you write your own, would be pretty interesting.

Anyway

I don’t know if I’ll do all of this. I somewhat doubt it, in fact. I pick it up for half a day every few weeks to see what more I can make it do, just because it’s interesting. It has virtually no chance of being upstreamed anyway (the only active maintainer hates Lua, and thinks poorly of dynamic languages in general; plus, it’s redundant with ZScript) and I don’t really want to maintain my own yet another Doom fork, so I don’t expect it to ever be a serious project.

The source code for what I’ve done so far is available, but it’s brittle and undocumented, so I’m not going to tell you where to find it. If it gets far enough along to be useful as more than a toy, I’ll make a slightly bigger deal about it.

Embedding Lua in ZDoom

Post Syndicated from Eevee original https://eev.ee/blog/2016/11/26/embedding-lua-in-zdoom/

I’ve spent a little time trying to embed a Lua interpreter in ZDoom. I didn’t get too far yet; it’s just an experimental thing I poke at every once in a while. The existing pile of constraints makes it an interesting problem, though.

Background

ZDoom is a “source port” (read: fork) of the Doom engine, with all the changes from the commercial forks merged in (mostly Heretic, Hexen, Strife), and a lot of internal twiddles exposed. It has a variety of mechanisms for customizing game behavior; two are major standouts.

One is ACS, a vaguely C-ish language inherited from Hexen. It’s mostly used to automate level behavior — at the simplest, by having a single switch perform multiple actions. It supports the usual loops and conditionals, it can store data persistently, and ZDoom exposes a number of functions to it for inspecting and altering the state of the world, so it can do some neat tricks. Here’s an arbitrary script from my DUMP2 map.

script "open_church_door" (int tag)
{
    // Open the door more quickly on easier skill levels, so running from the
    // arch-vile is a more viable option
    int skill = GameSkill();
    int speed;
    if (skill < SKILL_NORMAL)
        speed = 64;  // blazing door speed
    else if (skill == SKILL_NORMAL)
        speed = 16;  // normal door speed
    else
        speed = 8;  // very dramatic door speed

    Door_Raise(tag, speed, 68);  // double usual delay
}

However, ZDoom doesn’t actually understand the language itself; ACS is compiled to bytecode. There’s even at least one alternative language that compiles to the same bytecode, which is interesting.

The other big feature is DECORATE, a mostly-declarative mostly-interpreted language for defining new kinds of objects. It’s a fairly direct reflection of how Doom actors are implemented, which is in terms of states. In Doom and the other commercial games, actor behavior was built into the engine, but this language has allowed almost all actors to be extracted as text files instead. For example, the imp is implemented partly as follows:

  States
  {
  Spawn:
    TROO AB 10 A_Look
    Loop
  See:
    TROO AABBCCDD 3 A_Chase
    Loop
  Melee:
  Missile:
    TROO EF 8 A_FaceTarget
    TROO G 6 A_TroopAttack
    Goto See
  ...
  }

TROO is the name of the imp’s sprite “family”. A, B, and so on are individual frames. The numbers are durations in tics (35 per second). All of the A_* things (which are optional) are action functions, behavioral functions (built into the engine) that run when the actor switches to that frame. An actor starts out at its Spawn state, so an imp behaves as follows:

  • Spawn. Render as TROO frame A. (By default, action functions don’t run on the very first frame they’re spawned.)
  • Wait 10 tics.
  • Change to TROO frame B. Run A_Look, which checks to see if a player is within line of sight, and if so jumps to the See state.
  • Wait 10 tics.
  • Repeat. (This time, frame A will also run A_Look, since the imp was no longer just spawned.)
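That walkthrough maps onto a pretty small state machine. Here’s a toy version with hypothetical types, not the engine’s real ones:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Toy model of the DECORATE state table: each state is a sprite frame, a
// duration in tics, and an optional action function. All names are made up
// for illustration.
struct State {
    char frame;                    // 'A', 'B', ...
    int tics;                      // duration (35 tics per second)
    std::function<void()> action;  // e.g. A_Look; may be empty
};

struct Actor {
    std::vector<State> states;
    std::size_t current = 0;
    int tics_left = 0;
    bool just_spawned = true;

    void enter(std::size_t index) {
        current = index % states.size();  // Loop wraps to the start
        tics_left = states[current].tics;
        // Action functions don't run on the very first spawned frame.
        if (!just_spawned && states[current].action)
            states[current].action();
        just_spawned = false;
    }

    void tick() {
        if (--tics_left <= 0)
            enter(current + 1);  // advance to the next state
    }
};
```

Running the imp’s Spawn sequence through this reproduces the behavior above: frame A for 10 tics with no action, then frame B runs its action, and the loop repeats with actions firing from then on.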

All monster and item behavior is one big state table. Even the player’s own weapons work this way, which becomes very confusing — at some points a weapon can be running two states simultaneously. Oh, and there’s A_CustomMissile for monster attacks but A_FireCustomMissile for weapon attacks, and the arguments are different, and if you mix them up you’ll get extremely confusing parse errors.

It’s a little bit of a mess. It’s fairly flexible for what it is, and has come a long way — for example, even original Doom couldn’t pass arguments to action functions (since they were just function pointers), so it had separate functions like A_TroopAttack for every monster; now that same function can be written generically. People have done some very clever things with zero-delay frames (to run multiple action functions in a row) and storing state with dummy inventory items, too. Still, it’s not quite a programming language, and it’s easy to run into walls and bizarre quirks.

When DECORATE lets you down, you have one interesting recourse: to call an ACS script!

Unfortunately, ACS also has some old limitations. The only type it truly understands is int, so you can’t manipulate an actor directly or even store one in a variable. Instead, you have to work with TIDs (“thing IDs”). Every actor has a TID (zero is special-cased to mean “no TID”), and most ACS actor-related functions are expressed in terms of TIDs. For level automation, this is fine, and probably even what you want — you can dump a group of monsters in a map, give them all a TID, and then control them as a group fairly easily.

But if you want to use ACS to enhance DECORATE, you have a bit of a problem. DECORATE defines individual actor behavior. Also, many DECORATE actors are designed independently of a map and intended to be reusable anywhere. DECORATE should thus not touch TIDs at all, because they’re really the map’s concern, and mucking with TIDs might break map behavior… but ACS can’t refer to actors any other way. A number of action functions can, but you can’t call action functions from ACS, only DECORATE. The workarounds for this are not pretty, especially for beginners, and they’re very easy to silently get wrong.

Also, ultimately, some parts of the engine are just not accessible to either ACS or DECORATE, and neither language is particularly amenable to having them exposed. Adding more native types to ACS is rather difficult without making significant changes to both the language and bytecode, and DECORATE is barely a language at all.

Some long-awaited work is finally being done on a “ZScript”, which purports to solve all of these problems by expanding DECORATE into an entire interpreted-C++-ish scripting language with access to tons of internals. I don’t know what I think of it, and it only seems to half-solve the problem, since it doesn’t replace ACS.

Trying out Lua

Lua is supposed to be easy to embed, right? That’s the one thing it’s famous for. Before ZScript actually started to materialize, I thought I’d take a little crack at embedding a Lua interpreter and exposing some API stuff to it.

It’s not very far along yet, but it can do one thing that’s always been completely impossible in both ACS and DECORATE: print out the player’s entire inventory. You can check how many of a given item the player has in either language, but neither has a way to iterate over a collection. In Lua, it’s pretty easy.

function lua_test_script(activator, ...)
    for item, amount in pairs(activator.inventory) do
        -- This is Lua's builtin print(), so it goes to stdout
        print(item.class.name, amount)
    end
end

I made a tiny test map with a switch that tries to run the ACS script named lua_test_script. I hacked the name lookup to first look for the name in Lua’s global scope; if the function exists, it’s called immediately, and ACS isn’t consulted at all. The code above is just a regular (global) function in a regular Lua file, embedded as a lump in the map. So that was a good start, and was pretty neat to see work.
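The dispatch hack reduces to “try one namespace, fall back to the other.” In the real embedding the first lookup would be lua_getglobal; here, plain maps stand in for the Lua global table and the ACS script directory:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Model of the script-dispatch hack: scripts started by name are looked up
// in the Lua global scope first, and only fall back to ACS if Lua doesn't
// define them. Maps stand in for both script tables.
struct ScriptDispatcher {
    std::map<std::string, std::function<void()>> lua_globals;
    std::map<std::string, std::function<void()>> acs_scripts;

    // Returns which engine ran the script, or "" if neither knows the name.
    std::string run(const std::string& name) {
        if (auto it = lua_globals.find(name); it != lua_globals.end()) {
            it->second();
            return "lua";
        }
        if (auto it = acs_scripts.find(name); it != acs_scripts.end()) {
            it->second();
            return "acs";
        }
        return "";
    }
};
```

Because Lua wins ties, a Lua function named the same as an existing ACS script shadows it, which matches the “ACS isn’t consulted at all” behavior described above.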

Writing the bindings

I used the bare Lua API at first. While its API is definitely very simple, actually using it to define and expose a large API in practice is kind of repetitive and error-prone, and I was never confident I was doing it quite right. It’s plain C and it works entirely through stack manipulation and it relies on a lot of casting to/from void*, so virtually anything might go wrong at any time.

I was on the cusp of writing a bunch of gross macros to automate the boring parts, and then I found sol2, which is pretty great. It makes heavy use of basically every single C++11 feature, so it’s a nightmare when it breaks (and I’ve had to track down a few bugs), but it’s expressive as hell when it works:

lua.new_usertype<AActor>("zdoom.AActor",
    "__tostring", [](AActor& actor) { return "<actor>"; },
    // Pointer to an unbound method.  Sol automatically makes this an attribute
    // rather than a method because it takes no arguments, then wraps its
    // return value to pass it back to Lua, no manual wrapper code required.
    "class", &AActor::GetClass,
    "inventory", sol::property([](AActor& actor) -> ZLuaInventory { return ZLuaInventory(actor); }),
    // Pointers to unbound attributes.  Sol turns these into writable
    // attributes on the Lua side.
    "health", &AActor::health,
    "floorclip", &AActor::Floorclip,
    "weave_index_xy", &AActor::WeaveIndexXY,
    "weave_index_z", &AActor::WeaveIndexZ);

This is the type of the activator argument from the script above. It works via template shenanigans, so most of the work is done at compile time. AActor has a lot of properties of various types; wrapping them with the bare Lua API would’ve been awful, but wrapping them with Sol is fairly straightforward.

Lifetime

activator.inventory is a wrapper around a ZLuaInventory object, which I made up. It’s just a tiny proxy struct that tries to represent the inventory of a particular actor, because the engine itself doesn’t quite have such a concept — an actor’s “inventory” is a single item (itself an actor), and each item has a pointer to the next item in the inventory. Creating an intermediate type lets me hide that detail from Lua and pretend the inventory is a real container.

The inventory is thus not a real table; pairs() works on it because it provides the __pairs metamethod. It calls an iter method returning a closure, per Lua’s iteration API, which Sol makes just work:

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
struct ZLuaInventory {
    ...
    std::function<AInventory* ()> iter()
    {
        TObjPtr<AInventory> item = this->actor->Inventory;
        return [item]() mutable {
            AInventory* ret = item;
            if (ret)
                item = ret->NextInv();
            return ret;
        };
    }
}

C++’s closures are slightly goofy and it took me a few tries to land on this, but it works.

Well, sort of.

I don’t know how I got this idea in my head, but I was pretty sure that ZDoom’s TObjPtr did reference counting and would automatically handle the lifetime problems in the above code. Eventually Lua reaps the closure, then C++ reaps the closure, then the wrapped AInventorys refcount drops, and all is well.

Turns out TObjPtr doesn’t do reference counting. Rather, all the game objects participate in tracing garbage collection. The basic idea is to start from some root object and recursively traverse all the objects reachable from that root; whatever isn’t reached is garbage and can be deleted.

Unfortunately, the Lua interpreter is not reachable from ZDoom’s own object tree. If an object ends up only being held by Lua, ZDoom will think it’s garbage and delete it prematurely, leaving a dangling reference. Those are bad.

I think I can fix without too much trouble. Sol allows customizing how it injects particular types, so I can use that for the type tree that participates in this GC scheme and keep an unordered_set of all objects that are alive in Lua. The Lua interpreter itself is already wrapped in an object that participates in the GC, so when the GC descends to the wrapper, it’s easy to tell it that that set of objects is alive. I’ll probably need to figure out read/write barriers, too, but I haven’t looked too closely at how ZDoom uses those yet. I don’t know whether it’s possible for an object to be “dead” (as in no longer usable, not just 0 health) before being reaped, but if so, I’ll need to figure out something there too.

It’s a little ironic that I have to do this weird workaround when ZDoom’s tracing garbage collector is based on… Lua’s.

ZDoom does have types I want to expose that aren’t garbage collected, but those are all map structures like sectors, which are never created or destroyed at runtime. I will have to be careful with the Lua interpreter itself to make sure those can’t live beyond the current map, but I haven’t really dealt with map changes at all yet. The ACS approach is that everything is map-local, and there’s some limited storage for preserving values across maps; I could do something similar, perhaps only allowing primitive scalars.

Asynchronicity

Another critical property of ACS scripts is that they can pause themselves. They can either wait for a set number of tics with delay(), or wait for map geometry to stop being busy with something like tagwait(). So you can raise up some stairs, wait for the stairs to finish appearing, and then open the door they lead to. Or you can simulate game rules by running a script in an infinite loop that waits for a few tics between iterations. It’s pretty handy. It’s incredibly handy. It’s non-negotiable.

Luckily, Lua can emulate this using coroutines. I implemented the delay case yesterday:

1
2
3
4
5
function lua_test_script(activator, ...)
    zprint("hey it's me what's up", ...)
    coroutine.yield("delay", 70)
    zprint("i'm back again")
end

When I press the switch, I see the first message, then there’s a two-second pause (Doom is 35fps), then I see the second message.

A lot more details need to be hammered out before this is really equivalent to what ACS can do, but the basic functionality is there. And since these are full-stack coroutines, I can trivially wrap that yield gunk in a delay(70) function, so you never have to know the difference.

Determinism

ZDoom has demos and peer-to-peer multiplayer. Both features rely critically on the game state’s unfolding exactly the same way, given the same seed and sequence of inputs.

ACS goes to great lengths to preserve this. It executes deterministically. It has very, very few ways to make decisions based on anything but the current state of the game. Netplay and demos just work; modders and map authors never have to think about it.

I don’t know if I can guarantee the same about Lua. I’d think so, but I don’t know so. Will the order of keys in a table be exactly the same on every system, for example? That’s important! Even the ACS random-number generator is deterministic.

I hope this is the case. I know some games, like Starbound, implicitly assume for multiplayer purposes that scripts will execute the same way on every system. So it’s probably fine. I do wish Lua made some sort of guarantee here, though, especially since it’s such an obvious and popular candidate for game scripting.

Savegames

ZDoom allows you to quicksave at any time.

Any time.

Not while a script is running, mind you. Script execution blocks the gameplay thread, so only one thing can actually be happening at a time. But what happens if you save while a script is in the middle of a tagwait?

The coroutine needs to be persisted, somehow. More importantly, when the game is loaded, the coroutine needs to be restored to the same state: paused in the same place, with locals set to the same values. Even if those locals were wrapped pointers to C++ objects, which now have different addresses.

Vanilla Lua has no way to do this. Vanilla Lua has a pretty poor serialization story overall — nothing is built in — which is honestly kind of shocking. People use Lua for games, right? Like, a lot? How is this not an extremely common problem?

A potential solution exists in the form of Eris, a modified Lua that does all kinds of invasive things to allow absolutely anything to be serialized. Including coroutines!

So Eris makes this at least possible. I haven’t made even the slightest attempt at using it yet, but a few gotchas already stand out to me.

For one, Eris serializes everything. Even regular ol’ functions are serialized as Lua bytecode. A naïve approach would thus end up storing a copy of the entire game script in the save file.

Eris has a thing called the “permanent object table”, which allows giving names to specific Lua values. Those values are then serialized by name instead, and the names are looked up in the same table to deserialize. So I could walk the Lua namespace myself after the initial script load and stick all reachable functions in this table to avoid having them persisted. (That won’t catch if someone loads new code during play, but that sounds like a really bad idea anyway, and I’d like to prevent it if possible.) I have to do this to some extent anyway, since Eris can’t persist the wrapped C++ functions I’m exposing to Lua. Even if a script does some incredibly fancy dynamic stuff to replace global functions with closures at runtime, that’s okay; they’ll be different functions, so Eris will fall back to serializing them.

Then when the save is reloaded, Eris will replace any captured references to a global function with the copy that already exists in the map script. ZDoom doesn’t let you load saves across different mods, so the functions should be the same. I think. Hmm, maybe I should check on exactly what the load rules are. If you can load a save against a more recent copy of a map, you’ll want to get its updated scripts, but stored closures and coroutines might be old versions, and that is probably bad. I don’t know if there’s much I can do about that, though, unless Eris can somehow save the underlying code from closures/coros as named references too.

Eris also has a mechanism for storing wrapped native objects, so all I have to worry about is translating pointers, and that’s a problem Doom has already solved (somehow). Alas, that mechanism is also accessible to pure Lua code, and the docs warn that it’s possible to get into an infinite loop when loading. I’d rather not give modders the power to fuck up a save file, so I’ll have to disable that somehow.

Finally, since Eris loads bytecode, it’s possible to do nefarious things with a specially-crafted save file. But since the save file is already full of a web of pointers, I suspect it’s not too hard to segfault the game with a specially-crafted save file anyway. I’ll need to look into this. Or maybe I won’t, since I don’t seriously expect this to be merged in.

Runaway scripts

Speaking of which, ACS currently has detection for “runaway scripts”, i.e. those that look like they might be stuck in an infinite loop (or are just doing a ludicrous amount of work). Since scripts are blocking, the game does not actually progress while a script is running, and a very long script would appear to freeze the game.

I think ACS does this by counting instructions. I see Lua has its own mechanism for doing that, so limiting script execution “time” shouldn’t be too hard.
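Lua’s mechanism is a count hook via `debug.sethook`: the hook fires after every N VM instructions. A minimal sketch of aborting runaway scripts with it (the limit is an arbitrary number I picked for illustration):

```lua
-- Abort any script that executes more than a fixed number of VM
-- instructions, roughly mirroring ACS's runaway-script detection.
local INSTRUCTION_LIMIT = 1000000

local function run_limited(f, ...)
    debug.sethook(function()
        debug.sethook()  -- clear the hook before raising
        error("runaway script: instruction limit exceeded", 2)
    end, "", INSTRUCTION_LIMIT)
    local ok, err = pcall(f, ...)
    debug.sethook()  -- clear the hook on the success path too
    return ok, err
end
```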

Defining new actors

I want to be able to use Lua with (or instead of) DECORATE, too, but I’m a little hung up on syntax.

I do have something slightly working — I was able to create a variant imp class with a bunch more health from Lua, then spawn it and fight it. Also, I did it at runtime, which is probably bad — I don’t know that there’s any way to destroy an actor class, so having them be map-scoped makes no sense.

That could actually pose a bit of a problem. The Lua interpreter should be scoped to a single map, but actor classes are game-global. Do they live in separate interpreters? That seems inconvenient. I could load the game-global stuff, take an internal-only snapshot of the interpreter with Lua (bytecode and all), and then restore it at the beginning of each level? Hm, then what happens if you capture a reference to an actor method in a save file…? Christ.

I could consider making the interpreter global and doing black magic to replace all map objects with nil when changing maps, but I don’t think that can possibly work either. ZDoom has hubs — levels that can be left and later revisited, preserving their state just like with a save — and that seems at odds with having a single global interpreter whose state persists throughout the game.

Er, anyway. So, the problem with syntax is that DECORATE’s own syntax is extremely compact and designed for its very specific goal of state tables. Even ZScript appears to preserve the state table syntax, though it lets you write your own action functions or just provide a block of arbitrary code. Here’s a short chunk of the imp implementation again, for reference.

  States
  {
  Spawn:
    TROO AB 10 A_Look
    Loop
  See:
    TROO AABBCCDD 3 A_Chase
    Loop
  ...
  }

Some tricky parts that stand out to me:

  • Labels are important, since these are state tables, and jumping to a particular state is very common. It’s tempting to use Lua coroutines here somehow, but short of using a lot of goto in Lua code (yikes!), jumping around arbitrarily doesn’t work. Also, it needs to be possible to tell an actor to jump to a particular state from outside — that’s how A_Look works, and there’s even an ACS function to do it manually.

  • Aside from being shorthand, frames are fine. Though I do note that hacks like AABBCCDD 3 are relatively common. The actual animation that’s wanted here is ABCD 6, but because animation and behavior are intertwined, the frames need to be repeated to run the action function more often. I wonder if it’s desirable to be able to separate display and behavior?

  • The durations seem straightforward, but they can actually be a restricted kind of expression as well. So just defining them as data in a table doesn’t quite work.

  • This example doesn’t have any, but states can also have a number of flags, indicated by keywords after the duration. (Slightly ambiguous, since there’s nothing strictly distinguishing them from action functions.) Bright, for example, is a common flag on projectiles, weapons, and important pickups; it causes the sprite to be drawn fullbright during that frame.

  • Obviously, actor behavior is a big part of the game sim, so ideally it should require dipping into Lua-land as little as possible.
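The “jump to a particular state from outside” requirement in the first point can at least be prototyped by keeping one coroutine per actor and restarting it at the target label’s function. Everything here is hypothetical — none of these names exist in ZDoom:

```lua
-- Sketch: per-actor state machine where each label is a function, and an
-- outside jump abandons the current coroutine and restarts at the new label.
local Actor = {}
Actor.__index = Actor

function Actor.new(states)
    return setmetatable({ states = states, co = nil }, Actor)
end

function Actor:jump_to(label)
    -- Abandon whatever the old coroutine was doing and start fresh at the
    -- requested label, mirroring how an ACS-style state jump behaves.
    self.co = coroutine.create(self.states[label])
    coroutine.resume(self.co, self)
end

function Actor:tick()
    -- Called once per game tic; a label function yields to wait for the next.
    if self.co and coroutine.status(self.co) == "suspended" then
        coroutine.resume(self.co, self)
    end
end
```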

Ideas I’ve had include the following.

Emulate state tables with arguments? A very straightforward way to do the above would be to just, well, cram it into one big table.

define_actor{
    ...
    states = {
        'Spawn:',
        'TROO', 'AB', 10, A_Look,
        'loop',
        'See:',
        'TROO', 'AABBCCDD', 3, A_Chase,
        'loop',
        ...
    },
}

It would work, technically, I guess, except for non-literal durations, but I’d basically just be exposing the DECORATE parser from Lua and it would be pretty ridiculous.

Keep the syntax, but allow calling Lua from it? DECORATE is okay, for the most part. For simple cases, it’s great, even. Would it be good enough to be able to write new action functions in Lua? Maybe. Your behavior would be awkwardly split between Lua and DECORATE, though, which doesn’t seem ideal. But it would be the most straightforward approach, and it would completely avoid questions of how to emulate labels and state counts.

As an added benefit, this would keep DECORATE almost-purely declarative — which means editor tools could still reliably parse it and show you previews of custom objects.
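The Lua side of that approach is basically a name-to-function registry the DECORATE interpreter could consult. A sketch, with all names hypothetical:

```lua
-- Registry of Lua-defined action functions, looked up by the name that
-- appears in a DECORATE state table.
local action_functions = {}

local function register_action(name, fn)
    action_functions[name] = fn
end

-- The DECORATE interpreter would call this when it hits an action name
-- it doesn't recognize as a built-in.
local function call_action(name, actor)
    local fn = action_functions[name]
    assert(fn, "unknown action function: " .. name)
    return fn(actor)
end

-- Example modder-defined action.
register_action("A_ScreamTwice", function(actor)
    actor.screams = (actor.screams or 0) + 2
end)
```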

Split animation from behavior? This could go several ways, but the most obvious to me is something like:

define_actor{
    ...
    states = {
        spawn = function(self)
            self:set_animation('AB', 10)
            while true do
                A_Look(self)
                delay(10)
            end
        end,
        see = function(self)
            self:set_animation('ABCD', 6)
            while true do
                A_Chase(self)
                delay(3)
            end
        end,
    },
}

This raises plenty of other API questions, like how to wait until an animation has finished or how to still do work on a specific frame, but I think those are fairly solvable. The big problems are that it’s very much not declarative, and it ends up being rather wordier. It’s not all boilerplate, though; it’s fairly straightforward. I see some value in having state delays and level script delays work the same way, too. And in some cases, you have only an animation with no code at all, so the heavier use of Lua should balance out. I don’t know.

A more practical problem is that, currently, it’s possible to jump to an arbitrary number of states past a given label, and that would obviously make no sense with this approach. It’s pretty rare and pretty unreadable, so maybe that’s okay. Also, labels aren’t blocks, so it’s entirely possible to have labels that don’t end with a keyword like loop and instead carry straight on into the next label — but those are usually used for logic more naturally expressed as for or while, so again, maybe losing that ability is okay.

Or… perhaps it makes sense to do both of these last two approaches? Built-in classes should stay as DECORATE anyway, so that existing code can still inherit from them and perform jumps with offsets, but new code could go entirely Lua for very complex actors.

Alas, this is probably one of those questions that won’t have an obvious answer unless I just build several approaches and port some non-trivial stuff to them to see how they feel.

And further

An enduring desire among ZDoom nerds has been the ability to write custom “thinkers”. Thinkers are really anything that gets to act each tic, but the word also specifically refers to the logic responsible for moving floors, opening doors, changing light levels, and so on. Exposing those more directly to Lua, and letting you write your own, would be pretty interesting.

Anyway

I don’t know if I’ll do all of this. I somewhat doubt it, in fact. I pick it up for half a day every few weeks to see what more I can make it do, just because it’s interesting. It has virtually no chance of being upstreamed anyway (the only active maintainer hates Lua, and thinks poorly of dynamic languages in general; plus, it’s redundant with ZScript) and I don’t really want to maintain my own yet another Doom fork, so I don’t expect it to ever be a serious project.

The source code for what I’ve done so far is available, but it’s brittle and undocumented, so I’m not going to tell you where to find it. If it gets far enough along to be useful as more than a toy, I’ll make a slightly bigger deal about it.

Anti-Piracy Movie Competition Entries Are Terrifying

Post Syndicated from Andy original https://torrentfreak.com/anti-piracy-movie-competition-entries-are-terrifying-161113/

scary-pirateWhen it comes to delivering tough anti-piracy action and rhetoric Down Under, few can match the efforts of movie company Village Roadshow.

In addition to holding ISPs responsible for piracy and having sites blocked at the provider level, the company is also threatening to track down and fine regular Australian file-sharers.

Village Roadshow co-chief Graham Burke is well known for his outspoken views on piracy, and now he’s encouraging aspiring filmmakers to express theirs via the ‘Unscene‘ short film competition.

First aired during the summer, the competition is now nearing its end-November deadline. Filmmakers of all abilities are invited to participate by expressing their views on how piracy will impact their future in the industry.

The competition is open to anyone over 18 and films are limited to five minutes duration. For the winner, there’s a cash prize of AUS$10,000 and film equipment up for grabs, plus a chance for their entry to be played before movies in Village Cinemas.

After submissions close on November 30, online voting via Facebook begins on December 1 and continues for the rest of the month. Finalists are announced January 15 and the winner will be revealed during a gala event on January 30.

Entrants are invited to “impress, inspire or upset” the judges (who include Graham Burke) but thus far all entries are toeing the “piracy is evil” line, so the latter category will probably go unfulfilled.

Many of the filmmakers have been uploading their films to Vimeo without protection, so they can already be viewed. As can be seen from the handful embedded below, many follow a horror theme depicting a bleak future.

‘Echoes’ by Alessandro Frosali is particularly creative, but they all have something to offer in their own way. Thus far, no one has dared to put forward an entry that challenges the notion that piracy is destructive, but there are still three weeks left to go, so anything could happen.

Turn off the lights, close the curtains. Piracy has never been this scary (NSFW).

Echoes | Unscene Short Film Competition Entry from Alessandro Frosali on Vimeo.

You've Been Warned from Natalie Carbone on Vimeo.

CINEMA. from Zoe Leslie on Vimeo.

Blackspot from Troy Blackman on Vimeo.

DEMONS OF THE FILM INDUSTRY from jesse wakelin on Vimeo.

The Pirates from Andy Burkitt on Vimeo.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Deploying a Spring Boot Application on AWS Using AWS Elastic Beanstalk

Post Syndicated from Juan Villa original https://aws.amazon.com/blogs/devops/deploying-a-spring-boot-application-on-aws-using-aws-elastic-beanstalk/

In this blog post, I will show you how to deploy a sample Spring Boot application using AWS Elastic Beanstalk and how to customize the Spring Boot configuration through the use of environment variables.

Spring Boot is often described as a quick and easy way of building production-grade Spring Framework-based applications. To accomplish this, Spring Boot comes prepackaged with auto configuration modules for most libraries typically used with the Spring Framework. This is often referred to as “convention over configuration.”

AWS Elastic Beanstalk offers a similar approach to application deployment. It provides convention over configuration while still giving you the ability to dig under the hood to make adjustments, as needed. This makes Elastic Beanstalk a perfect match for Spring Boot.

The sample application used in this blog post is the gs-accessing-data-rest sample project provided as part of the Accessing JPA Data with REST topic in the Spring Getting Started Guide. The repository is located in GitHub at https://github.com/spring-guides/gs-accessing-data-rest.

Anatomy of the Sample Application

The sample application is a very simple Spring Boot-based application that leverages the spring-data and spring-data-rest projects. The default configuration uses the H2 in-memory database. For this post, I will modify the build steps to include the mysql-connector library, which is required for persisting data to MySQL.

The application exposes a REST-based API with features such as pagination, JSON Hypertext Application Language (HAL), Application-Level Profile Semantics (ALPS), and Hypermedia as the Engine of Application State (HATEOAS). It has defined one model named “Person” with the following properties: id, firstName, and lastName. The defined repository interface exposes a function to find a “Person” by last name. This function is called “findByLastName.”

A Few Words About Elastic Beanstalk

Elastic Beanstalk is a managed service designed for deploying and scaling web applications and services. It supports languages such as Java, .NET, PHP, Node.js, Python, Ruby, and Go. It also supports a variety of web/application servers such as Apache, Nginx, Passenger, Tomcat, and IIS. Elastic Beanstalk also supports deployments of web application and services using Docker.

In this blog post, I’ll leverage Elastic Beanstalk’s support for Java 8. I will not be using Java with Tomcat because Spring Boot bundles an embedded Tomcat server suitable for production workloads.

Building and Bundling the Sample Application

The first step is to clone the repository from GitHub, add “mysql-connector” to the build steps, compile it, and generate a “fat” JAR containing all of the required library dependencies. To accomplish this, I will use Git (https://git-scm.com/downloads) and Gradle (downloaded automatically through a wrapper script).

git clone https://github.com/spring-guides/gs-accessing-data-rest.git
cd gs-accessing-data-rest/complete

In the build.gradle file, replace compile("com.h2database:h2") with compile("mysql:mysql-connector-java:6.0.3"). This step will replace the use of H2 with the mysql-connector required for persisting data to MySQL using Amazon RDS.
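For reference, the dependencies block ends up looking roughly like this (the starter names are the ones I recall the sample project using; double-check against the repository):

```groovy
dependencies {
    compile("org.springframework.boot:spring-boot-starter-data-rest")
    compile("org.springframework.boot:spring-boot-starter-data-jpa")
    // compile("com.h2database:h2")                // removed
    compile("mysql:mysql-connector-java:6.0.3")    // added
}
```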

Build the project using the Gradle wrapper.

./gradlew bootRepackage

After Gradle finishes building the application, the JAR will be located in build/libs/gs-accessing-data-rest-0.1.0.jar.

Setting Up an Elastic Beanstalk Application

Sign in to the AWS Management Console, and then open the Elastic Beanstalk console. If this is your first time accessing this service, you will see a Welcome to AWS Elastic Beanstalk page. Otherwise, you’ll land on the Elastic Beanstalk dashboard, which lists all of your applications.

Welcome to AWS Elastic Beanstalk Screenshot

Choose Create New Application. This will open a wizard that will create your application and launch an appropriate environment.

An application is the top-level container in Elastic Beanstalk that contains one or more application environments (for example prod, qa, and dev or prod-web, prod-worker, qa-web, qa-worker).

Application Information Screenshot

The next step is to choose the environment tier. Elastic Beanstalk supports two environment tiers: Web server and Worker. For this blog post, set up a Web Server Environment tier.

New Environment Screenshot

When you choose Create web server, the wizard will display additional steps for setting up your new environment. Don’t be overwhelmed!

Now choose an environment configuration and environment type. For Predefined configuration, choose Java. For Environment type, choose Load Balancing, auto scaling.

Environment Type Screenshot

Specify the source for the application. Choose Upload your own, and then choose the JAR file built in a previous step. Leave the deployment preferences at their defaults.

Application Version Screenshot

Next, on the Environment Information page, configure the environment name and URL and provide an optional description. You can use any name for the environment, but I recommend something descriptive (for example, springbooteb-web-prod). You can use the same prefix as the environment name for the URL, but the URL must be globally unique. When you specify a URL, choose Check availability before you continue to the next step.

Environment information Screenshot

On the Additional Resources page, you’ll specify if you want to create an RDS instance with the web application environment. Select Create an RDS DB Instance with this environment and Create this environment inside a VPC.

Additional Resources Screenshot

For Instance type, choose t2.small. If you have an Amazon EC2 key pair and want to be able to remotely connect to the instance, choose your key pair now; otherwise, leave this field blank. Also, set the Application health check URL to “/”. Leave all of the other settings at their defaults.

Configuration Details Screenshot

On the Environment Tags page, you can specify up to seven environment tags. Although this step is optional, specifying tags allows you to document resources in your environment. For example, teams often use tags to specify things like environment or application for tracking purposes.

Environment Tags Screenshot

On the RDS Configuration page, configure a MySQL database with an Instance class of db.t2.small. Specify a Username and Password for database access. Choose something easy to remember because you’ll need them in a later step. Also, configure the Availability to Multiple availability zones. Leave all of the other settings at their defaults.

RDS Configuration Screenshot

The next step in the wizard is used to configure which VPC and subnets to use for environment resources. Specifying a VPC will give you full control over the network where the application will be deployed, which, in turn, gives you additional mechanisms for hardening your security posture.

For this deployment, specify the default VPC that comes with all recently created AWS accounts. Select the subnets Elastic Beanstalk will use to launch the Elastic Load Balancing load balancers and EC2 instances. Select at least two Availability Zones (AZ) for each service category (ELB and EC2) in order to achieve high availability.

Select Associate Public IP Address so that compute instances will be created in the public subnets of the selected VPC and will be assigned a public IP address. The default VPC created with most accounts contains only public subnets. Also, for the VPC security group choose the default security group already created for your default VPC.

VPC Configuration Screenshot

On the Permissions page, configure the instance profile and service role that the Elastic Beanstalk service will use to deploy all of the resources required to create the environment. If you have launched an environment with this wizard before, then the instance profile and service role have already been created and will be selected automatically; if not, the wizard will create them for you.

By default, AWS services don’t have permissions to access other services. The instance profile and service role give Elastic Beanstalk the permissions it needs to create, modify, and delete resources in other AWS services, such as EC2.

Permissions Screenshot

The final step in the wizard allows you to review all of the settings. Review the configuration and launch the environment! As your application is being launched, you’ll see something similar to this on the environment dashboard.

Overview Pending Screenshot

During the launch process, Elastic Beanstalk coordinates the creation and deployment of all AWS resources required to support the environment. This includes, but is not limited to, launching two EC2 instances, creating a Multi-AZ MySQL database using RDS, creating a load balancer, and creating a security group.

Once the environment has been created and the resources have been deployed, you’ll notice that the Health will be reported as Severe. This is because the Spring application still needs some configuration.

Overview Severe Screenshot

Configuring Spring Boot Through Environment Variables

By default, Spring Boot applications will listen on port 8080. Elastic Beanstalk assumes that the application will listen on port 5000. There are two ways to fix this discrepancy: change the port Elastic Beanstalk is configured to use, or change the port the Spring Boot application listens on. For this post, we will change the port the Spring Boot application listens on.

The easiest way to do this is to specify the SERVER_PORT environment variable in the Elastic Beanstalk environment and set the value to 5000. (The configuration property name is server.port, but Spring Boot allows you to specify a more environment variable-friendly name).

On the Configuration page in your environment, under Software Configuration, click the settings icon.

Configuration Dashboard Screenshot

On the Software Configuration page, you’ll see that there are already some environment variables set. They are set automatically by Elastic Beanstalk when it is configured to use the Java platform.

To change the port that Spring Boot listens on, add a new environment variable, SERVER_PORT, with the value 5000.

Environment Properties Screenshot

In addition to configuring the port the application listens on, you also need to specify environment variables to configure the database that the Spring Boot application will be using.

Before the Spring Boot application can be configured to use the RDS database, you’ll need to get the database endpoint URI. On the Environment Configuration page, under the Data Tier section, you’ll find the endpoint under RDS.

Data Tier Configuration Screenshot

Spring Boot bundles a series of AutoConfiguration classes that configure Spring resources automatically based on other classes available in the class path. Many of these auto configuration classes accept customizations through configuration, including environment variables. To configure the Spring Boot application to use the newly created MySQL database, specify the following environment variables:

SPRING_DATASOURCE_URL=jdbc:mysql://<url>/ebdb
SPRING_DATASOURCE_USERNAME=<username>
SPRING_DATASOURCE_PASSWORD=<password>
SPRING_JPA_HIBERNATE_DDL_AUTO=update
SPRING_JPA_DATABASE_PLATFORM=org.hibernate.dialect.MySQL5Dialect
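These environment variable names map onto Spring configuration properties through Spring Boot’s relaxed binding; the same settings in application.properties form would read as follows (placeholders left as-is):

```properties
spring.datasource.url=jdbc:mysql://<url>/ebdb
spring.datasource.username=<username>
spring.datasource.password=<password>
spring.jpa.hibernate.ddl-auto=update
spring.jpa.database-platform=org.hibernate.dialect.MySQL5Dialect
```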

Environment Properties Screenshot

As soon as you click Apply, the configuration change will be propagated to the application servers. The application will be restarted. When it restarts, it will pick up the new configuration through the environment variables. In about a minute, you’ll see a healthy application on the dashboard!

Overview Healthy Screenshot

Testing Spring Boot in the Cloud

Now test the deployed REST API endpoint!

Use the URL you configured on the environment to access the service. For this example, the specified URL is http://springbooteb-web-prod.us-east-1.elasticbeanstalk.com/.

For our first test, we’ll do an HTTP GET on the root of the URL:

curl -X GET -i http://springbooteb-web-prod.us-east-1.elasticbeanstalk.com/

HTTP/1.1 200 OK
Date: Fri, 15 Jul 2016 20:19:13 GMT
Server: nginx/1.8.1
Content-Length: 282
Content-Type: application/hal+json;charset=UTF-8
Connection: keep-alive

{
  "_links" : {
    "people" : {
      "href" : "http://springbooteb-web-prod.us-east-1.elasticbeanstalk.com/people{?page,size,sort}",
      "templated" : true
    },
    "profile" : {
      "href" : "http://springbooteb-web-prod.us-east-1.elasticbeanstalk.com/profile"
    }
  }
}

The service responded with a JSON HAL document. There’s a “people” repository you can access. Next, create a person!

curl -X POST -H "Content-Type: application/json" -d '{ "firstName": "John", "lastName": "Doe" }' http://springbooteb-web-prod.us-east-1.elasticbeanstalk.com/people

{
  "firstName" : "John",
  "lastName" : "Doe",
  "_links" : {
    "self" : {
      "href" : "http://springbooteb-web-prod.us-east-1.elasticbeanstalk.com/people/1"
    },
    "person" : {
      "href" : "http://springbooteb-web-prod.us-east-1.elasticbeanstalk.com/people/1"
    }
  }
}

You’ve successfully added a person. Now get a list of people.

curl -X GET http://springbooteb-web-prod.us-east-1.elasticbeanstalk.com/people

{
  "_embedded" : {
    "people" : [ {
      "firstName" : "John",
      "lastName" : "Doe",
      "_links" : {
        "self" : {
          "href" : "http://springbooteb-web-prod.us-east-1.elasticbeanstalk.com/people/1"
        },
        "person" : {
          "href" : "http://springbooteb-web-prod.us-east-1.elasticbeanstalk.com/people/1"
        }
      }
    } ]
  },
  "_links" : {
    "self" : {
      "href" : "http://springbooteb-web-prod.us-east-1.elasticbeanstalk.com/people"
    },
    "profile" : {
      "href" : "http://springbooteb-web-prod.us-east-1.elasticbeanstalk.com/profile/people"
    },
    "search" : {
      "href" : "http://springbooteb-web-prod.us-east-1.elasticbeanstalk.com/people/search"
    }
  },
  "page" : {
    "size" : 20,
    "totalElements" : 1,
    "totalPages" : 1,
    "number" : 0
  }
}

There’s the person you added! The response from the server is a HAL document with HATEOAS and pagination.

Conclusion

In just a few clicks you’ve deployed a simple, production-ready Spring Boot application with a MySQL database on AWS using Elastic Beanstalk.

As part of the launch and configuration of the environment, Elastic Beanstalk launched resources using other AWS services. These resources still remain under your control. They can be accessed through other AWS service consoles (for example, the EC2 console and the RDS console).

This is not the only way to deploy and manage applications on AWS, but it’s a powerful and easy way to deploy production-grade applications and services. Most of the configuration options you set during the setup process can be modified. There are many more options for customizing the deployment. I hope you found this post helpful. Feel free to leave feedback in the comments.


AWS Week in Review – October 31, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-october-31-2016/

Over 25 internal and external contributors helped out with pull requests and fresh content this week! Thank you all for your help and your support.

Monday

October 31

Tuesday

November 1

Wednesday

November 2

Thursday

November 3

Friday

November 4

Saturday

November 5

Sunday

November 6

New & Notable Open Source

New Customer Success Stories

  • Apposphere – Using AWS and bitfusion.io from the AWS Marketplace, Apposphere can scale 50 to 60 percent month-over-month while keeping customer satisfaction high. Based in Austin, Texas, the Apposphere mobile app delivers real-time leads from social media channels.
  • CADFEM – CADFEM uses AWS to make complex simulation software more accessible to smaller engineering firms, helping them compete with much larger ones. The firm specializes in simulation software and services for the engineering industry.
  • Mambu – Using AWS, Mambu helped one of its customers launch the United Kingdom’s first cloud-based bank, and the company is now on track for tenfold growth, giving it a competitive edge in the fast-growing fintech sector. Mambu is an all-in-one SaaS banking platform for managing credit and deposit products quickly, simply, and affordably.
  • Okta – Okta uses AWS to get new services into production in days instead of weeks. Okta creates products that use identity information to grant people access to applications on multiple devices at any time, while still enforcing strong security protections.
  • PayPlug – PayPlug is a startup created in 2013 that developed an online payment solution. It differentiates itself by the simplicity of its services and its ease of integration on e-commerce websites.
  • Rent-a-Center – Rent-a-Center is a leading renter of furniture, appliances, and electronics to customers in the United States, Canada, Puerto Rico, and Mexico. Rent-A-Center uses AWS to manage its new e-commerce website, scale to support a 1,000 percent spike in site traffic, and enable a DevOps approach.
  • UK Ministry of Justice – By going all in on the AWS Cloud, the UK Ministry of Justice (MoJ) can use technology to enhance the effectiveness and fairness of the services it provides to British citizens. The MoJ is a ministerial department of the UK government. MoJ had its own on-premises data center, but lacked the ability to change and adapt rapidly to the needs of its citizens. As it created more digital services, MoJ turned to AWS to automate, consolidate, and deliver constituent services.

New SlideShare Presentations

New YouTube Videos

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Research into IoT Security Is Finally Legal

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/research_into_i.html

For years, the DMCA has been used to stifle legitimate research into the security of embedded systems. Finally, the research exemption to the DMCA is in effect (for two years, but we can hope it’ll be extended forever).

Detecting landmines – with spinach

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/detecting-landmines-with-spinach/

Forget sniffer dogs…we need to talk about spinach.

The team at MIT (Massachusetts Institute of Technology) have been working to transform spinach plants into a means of detection in the fight against buried munitions such as landmines.

Plant-to-human communication

MIT engineers have transformed spinach plants into sensors that can detect explosives and wirelessly relay that information to a handheld device similar to a smartphone. (Learn more: http://news.mit.edu/2016/nanobionic-spinach-plants-detect-explosives-1031)

Nanoparticles, plus tiny tubes called carbon nanotubes, are embedded into the spinach leaves where they pick up nitro-aromatics, chemicals found in the hidden munitions.

It takes the spinach approximately ten minutes to absorb water from the ground, including the nitro-aromatics, which then bind to the polymer material wrapped around the nanotube.

But where does the Pi come into this?

The MIT team shine a laser onto the leaves, detecting the altered fluorescence of the light emitted by the newly bonded tubes. This light is then read by a Raspberry Pi fitted with an infrared camera, resulting in a precise map of where hidden landmines are located. This signal can currently be picked up within a one-mile radius, with plans to increase the reach in future.
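As a rough illustration of the detection step only (this is not MIT's actual pipeline; the threshold, frame sizes, and intensity values below are invented for the sketch), flagging a change in fluorescence captured by an infrared camera can be as simple as comparing the mean pixel intensity of a frame against a baseline recorded before excitation:

```python
import numpy as np

def fluorescence_changed(frame, baseline_mean, threshold=0.15):
    """Return True if the mean intensity of an infrared frame deviates
    from the baseline by more than `threshold` (as a fraction).

    `frame` is a 2-D array of pixel intensities, such as one captured
    from a Pi camera; `baseline_mean` is the mean intensity recorded
    before laser excitation. All numbers here are illustrative.
    """
    current = frame.mean()
    return abs(current - baseline_mean) / baseline_mean > threshold

# Simulated frames: a baseline and a brighter "fluorescing" frame.
baseline = np.full((480, 640), 100.0)
fluorescing = np.full((480, 640), 130.0)

print(fluorescence_changed(baseline, baseline.mean()))     # False
print(fluorescence_changed(fluorescing, baseline.mean()))  # True
```

A real deployment would of course work on a specific wavelength band and filter out ambient light, but the principle of comparing emitted light against a reference is the same.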

You can also physically hack a smartphone to replace the Raspberry Pi… but why would you want to do that?

The team at MIT have already used the tech to detect hydrogen peroxide, TNT, and sarin, while co-author Prof. Michael Strano advises that the same setup can be used to detect “virtually anything”.

“The plants could be used for defence applications, but also to monitor public spaces for terrorism-related activities, since we show both water and airborne detection.”

More information on the paper can be found at the MIT website.

The post Detecting landmines – with spinach appeared first on Raspberry Pi.

AWS Week in Review – October 24, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-october-24-2016/

Another busy week in AWS-land! Today’s post included submissions from 21 internal and external contributors, along with material from my RSS feeds, my inbox, and other things that come my way. To join in the fun, create (or find) some awesome AWS-related content and submit a pull request!

Monday, October 24

Tuesday, October 25

Wednesday, October 26

Thursday, October 27

Friday, October 28

Saturday, October 29

Sunday, October 30

New & Notable Open Source

  • aws-git-backed-static-website is a Git-backed static website generator powered entirely by AWS.
  • rds-pgbadger fetches log files from an Amazon RDS for PostgreSQL instance and generates a beautiful pgBadger report.
  • aws-lambda-redshift-copy is an AWS Lambda function that automates the copy command in Redshift.
  • VarnishAutoScalingCluster contains code and instructions for setting up a shared, horizontally scalable Varnish cluster that scales up and down using Auto Scaling groups.
  • aws-base-setup contains starter templates for developing AWS CloudFormation-based AWS stacks.
  • terraform_f5 contains Terraform scripts to instantiate a Big IP in AWS.
  • claudia-bot-builder creates chat bots for Facebook, Slack, Skype, Telegram, GroupMe, Kik, and Twilio and deploys them to AWS Lambda in minutes.
  • aws-iam-ssh-auth is a set of scripts used to authenticate users connecting to EC2 via SSH with IAM.
  • go-serverless sets up a go.cd server for serverless application deployment in AWS.
  • awsq is a helper script to run batch jobs on AWS using SQS.
  • respawn generates CloudFormation templates from YAML specifications.

New SlideShare Presentations

New Customer Success Stories

  • AbemaTV – AbemaTV is an Internet media-services company that operates one of Japan’s leading streaming platforms, FRESH! by AbemaTV. The company built its microservices platform on Amazon EC2 Container Service and uses an Amazon Aurora data store for its write-intensive microservices—such as timelines and chat—and a MySQL database on Amazon RDS for the remaining microservices APIs. By using AWS, AbemaTV has been able to quickly deploy its new platform at scale with minimal engineering effort.
  • Celgene – Celgene uses AWS to enable secure collaboration between internal and external researchers, allow individual scientists to launch hundreds of compute nodes, and reduce the time it takes to do computational jobs from weeks or months to less than a day. Celgene is a global biopharmaceutical company that creates drugs that fight cancer and other diseases and disorders. Celgene runs its high-performance computing research clusters, as well as its research collaboration environment, on AWS.
  • Under Armour – Under Armour can scale its Connected Fitness apps to meet the demands of more than 180 million global users, innovate and deliver new products and features more quickly, and expand internationally by taking advantage of the reliability and high availability of AWS. The company is a global leader in performance footwear, apparel, and equipment. Under Armour runs its growing Connected Fitness app platform on the AWS Cloud.

New YouTube Videos

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Conservancy’s First GPL Enforcement Feedback Session

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2016/10/27/gpl-feedback.html

[ This blog was crossposted on Software Freedom Conservancy’s website. ]

As I mentioned in an earlier blog post, I had the privilege of attending Embedded Linux Conference Europe (ELC EU) and the OpenWrt Summit in Berlin, Germany earlier this month. I gave a talk (for which the video is available below) at the OpenWrt Summit. I also had the opportunity to host the first of many conference sessions seeking feedback and input from the Linux developer community about Conservancy’s GPL Compliance Project for Linux Developers.

ELC EU has no “BoF Board” where you can post informal sessions. So, we scheduled the session by word of mouth over a lunch hour. We nevertheless got a good turnout of about 15 people (given that our session’s main competition was eating food 🙂).

Most notably and excitingly, Harald Welte, well-known Netfilter developer
and leader of gpl-violations.org,
was able to attend. Harald talked about his work with
gpl-violations.org enforcing his own copyrights in Linux, and
explained why this was important work for users of the violating devices.
He also pointed out that some of the companies that were sued during his
most active period of gpl-violations.org are now regular upstream
contributors.

Two people who work in the for-profit license compliance industry attended
as well. Some of the discussion focused on usual debates that charities
involved in compliance commonly have with the for-profit compliance
industry. Specifically, one of them asked: how much compliance is enough, by percentage? I responded to his question on two axes. First, I addressed the axis of how many enforcement matters the GPL Compliance Program for Linux Developers handles, by percentage of products violating the GPL. There are, at any given time, hundreds of documented GPL-violating products, and our coalition works on only a tiny percentage of those per year. It’s a sad fact that only that tiny percentage of the products that violate Linux are actually pursued to compliance.

On the other axis, I discussed the percentage on a per-product basis.
From that point of view, the question is really: Is there a ‘close
enough to compliance’ that we can as a community accept and forget
about the remainder?
From my point of view, we frequently compromise
anyway, since the GPL doesn’t require someone to prepare code properly for
upstream contribution. Thus, we all often accept compliance once someone
completes the bare minimum of obligations literally written in the GPL, but
give us a source release that cannot easily be converted to an upstream
contribution. So, from that point of view, we’re often accepting a
less-than-optimal outcome. The GPL by itself does not inspire upstreaming;
the other collaboration techniques that are enabled in our community
because of the GPL work to finish that job, and adherence to
the Principles assures
that process can work. Having many people who work with companies in
different ways assures that as a larger community, we try all the different
strategies to encourage participation, and inspire today’s violators to
become tomorrow’s upstream contributors — as Harald mentioned has already
often happened.

That same axis does include one rare but important compliance problem: when
a violator is particularly savvy, and refuses to release very specific
parts of their Linux code
(as VMware did),
even though the license requires it. In those cases, we certainly cannot
and should not accept anything less than required compliance — lest
companies begin holding back all the most interesting parts of the code
that GPL requires them to produce. If that happened, the GPL would cease
to function correctly for Linux.

After that part of the discussion, we turned to considerations of
corporate contributors, and how they responded to enforcement. Wolfram
Sang, one of the developers in Conservancy’s coalition, spoke up on this
point. He expressed that the focus on for-profit company contributions,
and the achievements of those companies, seemed unduly prioritized by some
in the community. As an independent contractor and individual developer,
Wolfram believes that contributions from people like him are essential to a
diverse developer base, that their opinions should be taken into account,
and their achievements respected.

I found Wolfram’s points particularly salient. My view is that Free
Software development, including for Linux, succeeds because both powerful
and wealthy entities and individuals contribute and collaborate
together on equal footing. While companies have typically only enforced the
GPL on their own copyrights for business reasons (e.g., there is at least
one example of a major Linux-contributing company using GPL enforcement
merely as a counter-punch in a patent lawsuit), individual developers who
join Conservancy’s coalition follow community principles and enforce to
defend the rights of their users.

At the end of the session, I asked two developers who hadn’t spoken during
the session, and who aren’t members of Conservancy’s coalition, their
opinion on how enforcement was historically carried out by
gpl-violations.org, and how it is currently carried out by Conservancy’s
GPL Compliance Program for Linux Developers. Both responded with a simple
response (paraphrased): it seems like a good thing to do; keep doing
it!

I finished up the session by inviting everyone to
join the principles-discuss
list, where public discussion about GPL
enforcement under the Principles has already begun. I also invited
everyone to attend my talk, that took place an hour later at the OpenWrt
Summit, which was co-located with ELC EU.

In that talk, I spoke about a specific example of community success in GPL
enforcement. As explained on the
OpenWrt history page,
OpenWrt was initially made possible thanks to GPL enforcement done by
BusyBox and Linux contributors in a coalition together. (Those who want to
hear more about the connection between GPL enforcement and OpenWrt can view
my talk.)

Since there weren’t opportunities to promote impromptu sessions on-site,
this event was a low-key (but still quite nice) start to Conservancy’s
planned year-long effort seeking feedback about GPL compliance and
enforcement. Our next
session is
an official BoF session at Linux Plumbers Conference
, scheduled for
next Thursday 3 November at 18:00. It will be led by my colleagues Karen
Sandler and Brett Smith.

Warner Bros. Claims Agency Ran its Own Pirate Movie Site

Post Syndicated from Andy original https://torrentfreak.com/warner-bros-claims-agency-ran-its-own-pirate-movie-site-161025/

When so-called DVD screeners of the latest movies leak to pirate sites, studios are among the first to highlight the damaging effects. Often the copies are of excellent quality, a gift to millions of file-sharers worldwide but a potential headache for subsequent official distribution efforts.

While studios have had the means to track screeners back to their sources for a long time, when compared to the number of leaks it is relatively rare for industry insiders to face civil legal action. That changed yesterday when Warner Bros. Entertainment sued talent agency Innovative Artists.

In a lawsuit filed in a California federal court, Warner accuses the agency of effectively setting up its own pirate site, stocked with rips of DVD screeners that should have been kept secure.

“Beginning in late 2015, Innovative Artists set up and operated an illegal digital distribution platform that copied movies and then distributed copies and streamed public performances of those movies to numerous people inside and outside of the agency,” the complaint (pdf) reads.

“Innovative Artists stocked its platform with copies of Plaintiff’s works, including copies that Innovative Artists made by ripping awards consideration ‘screener’ DVDs that Plaintiff sent to the agency to deliver to one of its clients.”

Given its position in the industry, Innovative Artists should have known better than to upload content, Warner’s lawyers write.

“The actions Plaintiff complains of are blatantly illegal. That illegality would be obvious to anyone, but especially to Innovative Artists, a talent agency that claims to promote the interests of actors, writers, directors and others whose livelihoods depend critically on respect for copyright,” the complaint adds.

Only making matters worse is the fact that some of the DVD screeners ripped by the agency leaked out beyond its platform, which was actually a shared folder on its Google Drive account.

According to the complaint, Warner Bros. discovered something was amiss when content security company Deluxe Entertainment Services advised that screener copies of Creed and In the Heart of the Sea had appeared on file-sharing sites.

Both movies were made available by Hive-CM8, a release outfit responsible for many leaks during December 2015. Crucially, both contained watermarks that enabled them to be tracked back to the source.

“Because the screeners were ‘watermarked’ — embedded with markers that identified their intended recipients — Plaintiff traced the copies to screeners that Plaintiff had sent to an Innovative Artists client, in care of the agency,” the complaint notes.

“Instead of forwarding the screeners directly to its client, Innovative Artists used illegal ripping software to bypass the technical measures that prevent access to and copying of the content on DVDs. Innovative Artists then copied the movies to its digital distribution platform, where those copies became available for immediate downloading and streaming along with infringing copies of many other copyrighted movies.”

While the allegations are damaging enough already, they don’t stop there. The complaint alleges that the agency also gave others access to the screeners stored on Google Drive in return for access to other titles not yet in its possession.

“Innovative Artists traded access to some of its unauthorized digital copies of movies in exchange for unauthorized copies of content possessed by third parties. For example, in one case, Innovative Artists granted an assistant at another company access to the digital distribution platform because the assistant had provided a screener to Innovative Artists for a title that was not already on the platform,” Warner writes.

For copyright infringement, Warner Bros. seeks actual or statutory damages, up to the maximum of $150,000 for willful infringement, attorneys’ fees, and an injunction. For the breaches of anti-circumvention provisions when Innovative ripped the DVDs, the studio claims the maximum statutory damages as permitted by the DMCA.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

How Different Stakeholders Frame Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/how_different_s.html

Josephine Wolff examines different Internet governance stakeholders and how they frame security debates.

Her conclusion:

The tensions that arise around issues of security among different groups of internet governance stakeholders speak to the many tangled notions of what online security is and whom it is meant to protect that are espoused by the participants in multistakeholder governance forums. What makes these debates significant and unique in the context of internet governance is not that the different stakeholders often disagree (indeed, that is a common occurrence), but rather that they disagree while all using the same vocabulary of security to support their respective stances. Government stakeholders advocate for limitations on WHOIS privacy/proxy services in order to aid law enforcement and protect their citizens from crime and fraud. Civil society stakeholders advocate against those limitations in order to aid activists and minorities and protect those online users from harassment. Both sides would claim that their position promotes a more secure internet and a more secure society — ­and in a sense, both would be right, except that each promotes a differently secure internet and society, protecting different classes of people and behaviour from different threats.

While vague notions of security may be sufficiently universally accepted as to appear in official documents and treaties, the specific details of individual decisions­ — such as the implementation of dotless domains, changes to the WHOIS database privacy policy, and proposals to grant government greater authority over how their internet traffic is routed­ — require stakeholders to disentangle the many different ideas embedded in that language. For the idea of security to truly foster cooperation and collaboration as a boundary object in internet governance circles, the participating stakeholders will have to more concretely agree on what their vision of a secure internet is and how it will balance the different ideas of security espoused by different groups. Alternatively, internet governance stakeholders may find it more useful to limit their discussions on security, as a whole, and try to force their discussions to focus on more specific threats and issues within that space as a means of preventing themselves from succumbing to a façade of agreement without grappling with the sources of disagreement that linger just below the surface.

The intersection of multistakeholder internet governance and definitional issues of security is striking because of the way that the multistakeholder model both reinforces and takes advantage of the ambiguity surrounding the idea of security explored in the security studies literature. That ambiguity is a crucial component of maintaining a functional multistakeholder model of governance because it lends itself well to high-level agreements and discussions, contributing to the sense of consensus building across stakeholders. At the same time, gathering those different stakeholders together to decide specific issues related to the internet and its infrastructure brings to a fore the vast variety of definitions of security they employ and forces them to engage in security-versus-security fights, with each trying to promote their own particular notion of security. Security has long been a contested concept, but rarely do these contestations play out as directly and dramatically as in the multistakeholder arena of internet governance, where all parties are able to face off on what really constitutes security in a digital world.

We certainly saw this in the “going dark” debate: e.g. the FBI vs. Apple and their iPhone security.

Cisco Develops System To Automatically Cut-Off Pirate Video Streams

Post Syndicated from Andy original https://torrentfreak.com/cisco-develops-system-automatically-cut-off-pirate-video-streams-161021/

While torrents continue to be one of the Internet’s major distribution methods for copyrighted content, it’s streaming that’s capturing the imagination of the pirating mainstream.

Easy to use via regular web browsers, modified Kodi installations, and fully-fledged IPTV services, streaming is now in the living rooms of millions of people. As such it is viewed as a threat to subscription and PPV TV providers worldwide, especially those offering live content such as sporting events.

Pirate services obtain content by capturing and restreaming feeds obtained from official sources, often from something as humble as a regular subscriber account. These streams can then be redistributed by thousands of other sites and services, many of which are easily found using a simple search.

Dedicated anti-piracy companies track down these streams and send takedown notices to the hosts carrying them. Sometimes this means that streams go down quickly but in other cases hosts can take a while to respond or may not comply at all. Networking company Cisco thinks it has found a solution to these problems.

The company’s claims center around its Streaming Piracy Prevention (SPP) platform, a system that aims to take down illicit streams in real-time. Perhaps most interestingly, Cisco says SPP functions without needing to send takedown notices to companies hosting illicit streams.

“Traditional takedown mechanisms such as sending legal notices (commonly referred to as ‘DMCA notices’) are ineffective where pirate services have put in place infrastructure capable of delivering video at tens and even hundreds of gigabits per second, as in essence there is nobody to send a notice to,” the company explains.

“Escalation to infrastructure providers works to an extent, but the process is often slow as the pirate services will likely provide the largest revenue source for many of the platform providers in question.”

To overcome these problems Cisco says it has partnered with Friend MTS (FMTS), a UK-based company specializing in content-protection.

Among its services, FMTS offers Distribution iD, which allows content providers to pinpoint which of their downstream distributors’ platforms are a current source of content leaks.

“Robust and unique watermarks are embedded into each distributor feed for identification. The code is invisible to the viewer but can be recovered by our specialist detector software,” FMTS explains.

“Once infringing content has been located, the service automatically extracts the watermark for accurate distributor identification.”
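As a toy illustration of the general idea only (Friend MTS’s actual watermarks are proprietary and designed to survive re-encoding, which this trivially does not), a recipient ID can be hidden in the least significant bits of pixel values, changing each marked pixel by at most one step out of 255, and recovered later to identify the distributor:

```python
import numpy as np

def embed_id(pixels, recipient_id, id_bits=16):
    """Hide `recipient_id` in the least significant bit of the first
    `id_bits` pixels. Each marked pixel changes by at most 1/255, so
    the mark is invisible to a viewer."""
    marked = pixels.copy()
    for i in range(id_bits):
        bit = (recipient_id >> i) & 1
        marked[i] = (marked[i] & 0xFE) | bit  # clear LSB, then set it
    return marked

def extract_id(pixels, id_bits=16):
    """Recover the hidden ID by reading back the least significant bits."""
    recipient_id = 0
    for i in range(id_bits):
        recipient_id |= (int(pixels[i]) & 1) << i
    return recipient_id

# A frame of random 8-bit pixel values, watermarked with a 16-bit ID.
frame = np.random.randint(0, 256, size=1000, dtype=np.uint8)
marked = embed_id(frame, recipient_id=0xBEEF)
print(hex(extract_id(marked)))  # 0xbeef
```

A forensic watermark like Friend MTS’s must instead spread the ID redundantly across the whole frame and through time, so it survives cropping, scaling, and the heavy compression a restreamed pirate feed goes through.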

Friend MTS also offers Advanced Subscriber iDentification (ASiD), a system that is able to identify legitimate subscribers who are subsequently re-transmitting content online.

According to Cisco, FMTS feeds the SPP service with pirate video streams it finds online. These are tracked back to the source of the leak (such as a particular distributor or specific pay TV subscriber account) which can then be shut-down in real time.

“The process is fully automated, ensuring a timely response to incidents of piracy. Gone are the days of sending a legal notice and waiting to see if anyone will answer,” Cisco says.

“SPP acts without the need to involve or gain cooperation from any third parties, enabling an unmatched level of cross-device retransmission prevention and allowing service providers to take back control of their channels, to maximize their revenue.”

Friend MTS and Cisco believe the problem is significant. During the last month alone the company says it uncovered 12,000 HD channels on pirate services that were being sourced from Pay TV providers.

How much of a dent the companies will be able to make in this market remains to be seen, but not having to rely on the efficiency of takedown requests certainly has the potential to shift the balance of power.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The Compute Module – now in an NEC display near you

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/compute-module-nec-display-near-you/

Back in April 2014, we launched the Compute Module to provide hardware developers with a way to incorporate Raspberry Pi technology into their own products. Since then we’ve seen it used to build home media players, industrial control systems, and everything in between.

Earlier this week, NEC announced that they would be adding Compute Module support to their next-generation large-format displays, starting with 40″, 48″ and 55″ models in January 2017 and eventually scaling all the way up to a monstrous 98″ (!!) by the end of the year. These are commercial-grade displays designed for use in brightly-lit public spaces such as schools, offices, shops and railway stations.

Believe it or not, these are the small ones.

NEC have already lined up a range of software partners in retail, airport information systems, education and corporate to provide presentation and signage software which runs on the Compute Module platform. You’ll be seeing these roll out in a lot of locations that you visit frequently.

Each display has an internal bay which accepts an adapter board loaded with either the existing Compute Module, or the upcoming Compute Module 3, which incorporates the BCM2837 application processor and 1GB of LPDDR2 memory found on the Raspberry Pi 3 Model B. We’re expecting to do a wider release of Compute Module 3 to everybody around the end of the year.

The Compute Module in situ

We’ve been working on this project with NEC for over a year now, and are very excited that it’s finally seeing the light of day. It’s an incredible vote of confidence in the Raspberry Pi Compute Module platform from a blue-chip hardware vendor, and will hopefully be the first of many.

Now, here’s some guy to tell you more about what’s going on behind the screens you walk past every day on your commute.

‘The Power to Surprise’ live stream at Display Trends Forum 2016 – NEC Teams Up With Raspberry Pi

NEC Display Solutions today announced that it will be sharing an open platform modular approach with Raspberry Pi, enabling a seamless integration of Raspberry Pi’s devices with NEC’s displays. NEC’s leading position in offering the widest product range of display solutions matches perfectly with the Raspberry Pi, the organisation responsible for developing the award-winning range of low-cost, high-performance computers.

The post The Compute Module – now in an NEC display near you appeared first on Raspberry Pi.

Insanely Popular YouTuber KSI Goes F****ing Ballistic at Movie Pirates

Post Syndicated from Andy original https://torrentfreak.com/insanely-popular-youtuber-ksi-goes-fng-ballistic-at-movie-pirates-161013/

If YouTube stars are the world’s next big celebrities, Olajide “JJ” Olatunji, Jr. plays in the big league.

Also known as KSI, the UK-based YouTubing, videogaming, rapper-come-comedian has amassed a staggering 14.76m subscribers. For perspective, giant music channel Vevo has two million fewer.

KSI is currently running the 22nd most-subscribed YouTube channel on the planet and with more than three billion views, he’s a truly influential global player.

All this fame has opened up plenty of opportunities for the millionaire 23-year-old. In addition to launching a rap career, KSI has just starred in a movie with fellow YouTuber Caspar Lee.

Laid in America is the story of two guys attempting to do what the title suggests and for those interested in assessing their success, the movie has just come out on Blu-ray. However, it’s also available on torrent sites and KSI is far from impressed.

In a seven-minute video addressed to fans of both kinds (those who pay and those who do not), the YouTube megastar goes completely off the deep end. For his fans his bad language and crazy antics aren’t anything new but as anti-piracy rants go, this one is pretty epic.

“Stop pirating the fucking movie. For fuck’s sake man!” he begins. “I get it, I get it why people pirate. If there’s no physical way of getting what you want then yeah, I get it.”

Highlighting a free music track called “For The Summer” that he previously released with Randolph, KSI says that it’s OK for people to consume that how they like. Torrenting his movie, on the other hand, is completely unacceptable.

“[Since it has a price tag], evidently this shit is not for free,” KSI says. “Trust me, there isn’t a shortage of places where you can go to get this fucking thing. So why the fuck are you watching it without paying for it? You stealing mother-fucking piece of shit.”

KSI then goes on to question whether people are pirating his movie because they don’t like him, specifically since he’s a “black guy doing alright for himself.” With almost 15 million subscribers, that hardly seems likely.

But while his delivery may be unorthodox, KSI’s anti-piracy message is pretty standard. In fact, his message is SO familiar it could’ve been penned by the PR department of any major Hollywood studio.

“I’m not the only one that made this movie. There were hundreds and hundreds and hundreds of people that made this movie. It’s not just me you’re fucking over, you’re fucking over so many people,” he explains.

“You’re fucking over the producer, director, actors, people who did the music, the cameramen, the lighting crew, the set crew. Seriously, it’s fucking ridiculous. Laid in America is going to be the most illegally downloaded movie of last month and this month as well, what the fuck?”

While that claim isn’t true, the thought that KSI’s Hollywood-style rant might be industry-sponsored in some way is a pretty tantalizing one as it would reach an audience the studios couldn’t easily reach by other means. Whether it would work is another matter, however.

But for what it’s worth, KSI says he doesn’t care about the money being lost through downloads, he only wants to get enough sales so he can make Laid in America 2. With that in mind, this kind of tweet only provokes him further.

Faced with torrent taunts, KSI turns his rant up to 11, threatening violence against pirates, calling them unrepeatable words, and promising to use them as a toilet. He says he simply can’t grasp the idea that after giving all of his videos away for free, the one time he asks people to pay, they won’t. Welcome to the Internet.

KSI is quite a character and while he comes over as wild, he certainly knows what he’s doing. No doubt he’s angry (or maybe his distributor Universal Studios is angry) that people are pirating his movie but he also knows his rants about the topic are making him money while attracting even more attention.

Thus far, the video embedded below has almost 1.9 million views but KSI’s change of tone halfway through suggests he really does care about his stuff getting pirated. Maybe that’s part of the show too, but it’s fairly convincing. (Warning: NSFW x2)

Finally, as any YouTube visitor knows, the comments on the site can be absolutely vicious and those under KSI’s video are no different. There are hundreds informing the YouTube star that his movie is going to get pirated anyway but there are also some pointing out that KSI has no right to ask other people not to pirate.

Apparently, some time ago, KSI was found out to be using a pirated version of Sony Vegas to create his YouTube videos. His ‘fans’ aren’t letting him forget that but in true rebellious fashion, KSI isn’t letting it bother him either.

Armed with an image depicting himself as a pirate, KSI fired right back.

Touché…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.