Tag Archives: iplayer

EC2 Instance Update – C5 Instances with Local NVMe Storage (C5d)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-instance-update-c5-instances-with-local-nvme-storage-c5d/

As you can see from my EC2 Instance History post, we add new instance types on a regular and frequent basis. Driven by increasingly powerful processors and designed to address an ever-widening set of use cases, the size and diversity of this list reflects the equally diverse group of EC2 customers!

Near the bottom of that list you will find the new compute-intensive C5 instances. With a 25% to 50% improvement in price-performance over the C4 instances, the C5 instances are designed for applications like batch and log processing, distributed and/or real-time analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding. Some of these applications can benefit from access to high-speed, ultra-low-latency local storage. For example, video encoding, image manipulation, and other forms of media processing often necessitate large amounts of I/O to temporary storage. While the input and output files are valuable assets and are typically stored as Amazon Simple Storage Service (S3) objects, the intermediate files are expendable. Similarly, batch and log processing runs in a race-to-idle model, flushing volatile data to disk as fast as possible in order to make full use of compute resources.

New C5d Instances with Local Storage
In order to meet this need, we are introducing C5 instances equipped with local NVMe storage. Available for immediate use in 5 regions, these instances are a great fit for the applications that I described above, as well as others that you will undoubtedly dream up! Here are the specs:

Instance Name | vCPUs | RAM     | Local Storage        | EBS Bandwidth   | Network Bandwidth
c5d.large     | 2     | 4 GiB   | 1 x 50 GB NVMe SSD   | Up to 2.25 Gbps | Up to 10 Gbps
c5d.xlarge    | 4     | 8 GiB   | 1 x 100 GB NVMe SSD  | Up to 2.25 Gbps | Up to 10 Gbps
c5d.2xlarge   | 8     | 16 GiB  | 1 x 225 GB NVMe SSD  | Up to 2.25 Gbps | Up to 10 Gbps
c5d.4xlarge   | 16    | 32 GiB  | 1 x 450 GB NVMe SSD  | 2.25 Gbps       | Up to 10 Gbps
c5d.9xlarge   | 36    | 72 GiB  | 1 x 900 GB NVMe SSD  | 4.5 Gbps        | 10 Gbps
c5d.18xlarge  | 72    | 144 GiB | 2 x 900 GB NVMe SSD  | 9 Gbps          | 25 Gbps

Other than the addition of local storage, the C5 and C5d share the same specs. Both are powered by 3.0 GHz Intel Xeon Platinum 8000-series processors, optimized for EC2 and with full control over C-states on the two largest sizes, giving you the ability to run two cores at up to 3.5 GHz using Intel Turbo Boost Technology.

You can use any AMI that includes drivers for the Elastic Network Adapter (ENA) and NVMe; this includes the latest Amazon Linux, Microsoft Windows (Server 2008 R2, Server 2012, Server 2012 R2 and Server 2016), Ubuntu, RHEL, SUSE, and CentOS AMIs.
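If you script your launches, starting a C5d is no different from any other instance type. Here is a minimal boto3 sketch; the AMI ID and key pair name are placeholders you would replace with your own, and any recent Amazon Linux AMI satisfies the ENA/NVMe requirement:

  import boto3

  ec2 = boto3.client("ec2", region_name="us-east-1")

  # Launch a single c5d.large. The AMI must include ENA and NVMe drivers;
  # the ImageId and KeyName below are placeholders.
  response = ec2.run_instances(
      ImageId="ami-0123456789abcdef0",
      InstanceType="c5d.large",
      MinCount=1,
      MaxCount=1,
      KeyName="my-key-pair",
  )
  print(response["Instances"][0]["InstanceId"])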

Here are a couple of things to keep in mind about the local NVMe storage:

Naming – You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted.

Encryption – Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated.

Lifetime – Local NVMe devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.

Available Now
C5d instances are available in On-Demand, Reserved Instance, and Spot form in the US East (N. Virginia), US West (Oregon), EU (Ireland), US East (Ohio), and Canada (Central) Regions. Prices vary by Region, and are just a bit higher than for the equivalent C5 instances.

Jeff;

PS – We will be adding local NVMe storage to other EC2 instance types in the months to come, so stay tuned!

Battle for Wesnoth 1.14 released

Post Syndicated from corbet original https://lwn.net/Articles/753984/rss

Version 1.14 of the Battle for Wesnoth turn-based strategy game — the first release in over three years — is available. “Along with the long-awaited debut on Steam, this new release series brings forth a vast number of additions and changes in all areas: a new single-player campaign, a visual and functional refresh of the multiplayer lobby and add-ons manager, a refurbished display engine, new unit graphics and animations, and much more.”

Amazon GameLift FleetIQ and Spot Instances – Save up to 90% On Game Server Hosting

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-gamelift-fleetiq-and-spot-instances-save-up-to-90-on-game-server-hosting/

Amazon GameLift is a scalable, cloud-based runtime environment for session-based multiplayer games. You simply upload a build of your game, tell Amazon GameLift which type of EC2 instances you’d like to host it on, and sit back while Amazon GameLift takes care of setting up sessions and maintaining a suitably-sized fleet of EC2 instances. This automatic scaling allows you to accommodate demand that varies over time without having to keep compute resources in reserve during quiet periods.

Use Spot Instances
Last week we added a new feature to further decrease your per-player, per-hour costs when you host your game on Amazon GameLift. Before that launch, Amazon GameLift instances were always launched in On-Demand form. Instances of this type are always billed at fixed prices, as detailed on the Amazon GameLift Pricing page.

You can now make use of Amazon GameLift Spot Instances in your GameLift fleets. These instances represent unused capacity and have prices that rise and fall over time. While your results will vary, you may see savings of up to 90% when compared to On-Demand Instances.

While you can use Spot Instances as a simple money-saving tool, there are other interesting use cases as well. Every game has a life cycle, along with a cadre of loyal players who want to keep on playing until you finally unplug and decommission the servers. You could create an Amazon GameLift fleet composed of low-cost Spot Instances and keep that beloved game up and running as long as possible without breaking the bank. Behind the scenes, an Amazon GameLift Queue will make use of both Spot and On-Demand Instances, balancing price and availability in an attempt to give you the best possible service at the lowest price.

As I mentioned earlier, Spot Instances represent capacity that is not in use by On-Demand Instances. When this capacity decreases, existing Spot Instances could be interrupted with two minutes of notification and then terminated. Fortunately, there’s a lot of capacity and terminations are, statistically speaking, quite rare. To reduce the frequency even further, Amazon GameLift Queues now include a new feature that we call FleetIQ.

FleetIQ is powered by historical pricing and termination data for Spot Instances. This data, in combination with a very conservative strategy for choosing instance types, further reduces the odds that any particular game will be notified and then interrupted. The onProcessTerminate callback in your game’s server process will be activated if the underlying Spot Instance is about to be interrupted. At that point you have two minutes to close out the game, save any logs, free up any resources, and otherwise wrap things up. While you are doing this, you can call GetTerminationTime to see how much time remains.

Creating a Fleet
To take advantage of Spot Instances and FleetIQ, you can use the Amazon GameLift console or API to set up Queues with multiple fleets of Spot and On-Demand Instances. By adding more fleets into each Queue, you give FleetIQ more options to improve latency, interruption rate, and cost. To start a new game session on an instance, FleetIQ first selects the region with the lowest latency for each player, then chooses the fleet with the lowest interruption rate and cost.

Let’s walk through the process. I’ll create a fleet of On-Demand Instances and a fleet of Spot Instances, in that order:

And:

I take a quick break while the fleets are validated and activated:

Then I create a queue for my game. I select the fleets as the destinations for the queue:

If I am building a game that will have a global user base, I can create fleets in additional AWS Regions and use a player latency policy so that game sessions will be created in a suitable region.
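If you prefer scripting to the console, the same setup can be sketched with boto3. This is a rough outline rather than the exact console steps; the build ID, instance type, launch path, and latency threshold are all placeholder assumptions:

  import boto3

  gamelift = boto3.client("gamelift", region_name="us-east-1")

  BUILD_ID = "build-12345678-aaaa-bbbb-cccc-123456789012"  # placeholder
  RUNTIME = {"ServerProcesses": [{"LaunchPath": "/local/game/MyServer",
                                  "ConcurrentExecutions": 1}]}

  # One On-Demand fleet and one Spot fleet, built from the same server build.
  on_demand = gamelift.create_fleet(
      Name="my-game-on-demand", BuildId=BUILD_ID, EC2InstanceType="c4.large",
      FleetType="ON_DEMAND", RuntimeConfiguration=RUNTIME)
  spot = gamelift.create_fleet(
      Name="my-game-spot", BuildId=BUILD_ID, EC2InstanceType="c4.large",
      FleetType="SPOT", RuntimeConfiguration=RUNTIME)

  # A queue that spans both fleets; FleetIQ balances price and availability.
  gamelift.create_game_session_queue(
      Name="my-game-queue",
      TimeoutInSeconds=60,
      Destinations=[
          {"DestinationArn": spot["FleetAttributes"]["FleetArn"]},
          {"DestinationArn": on_demand["FleetAttributes"]["FleetArn"]},
      ],
      # Only place sessions where every player sees under 100 ms latency.
      PlayerLatencyPolicies=[{"MaximumIndividualPlayerLatencyMilliseconds": 100}],
  )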

To learn more about how to use this feature, take a look at the Spot Fleet Integration Guide.

Now Available
You can use Amazon GameLift Spot Instance fleets to host your session-based games now! Take a look, give it a try, and let me know what you think.

If you are planning to attend GDC this year, be sure to swing by booth 1001. Check out our GDC 2018 site for more information on our dev day talks, classroom sessions, and in-booth demos.

Jeff;

 

Now Available – Compute-Intensive C5 Instances for Amazon EC2

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-compute-intensive-c5-instances-for-amazon-ec2/

I’m thrilled to announce that the new compute-intensive C5 instances are available today in six sizes for launch in three AWS regions!

These instances are designed for compute-heavy applications like batch processing, distributed analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding. The new instances offer a 25% price/performance improvement over the C4 instances, with over 50% for some workloads. They also have additional memory per vCPU and, for code that can make use of the new AVX-512 instructions, twice the performance for vector and floating-point workloads.

Over the years we have been working non-stop to provide our customers with the best possible networking, storage, and compute performance, with a long-term focus on offloading many types of work to dedicated hardware designed and built by AWS. The C5 instance type incorporates the latest generation of our hardware offloads, and also takes another big step forward with the addition of a new hypervisor that runs hand-in-glove with our hardware. The new hypervisor allows us to give you access to all of the processing power provided by the host hardware, while also making performance even more consistent and further raising the bar on security. We’ll be sharing many technical details about it at AWS re:Invent.

The New Instances
The C5 instances are available in six sizes:

Instance Name | vCPUs | RAM     | EBS Bandwidth   | Network Bandwidth
c5.large      | 2     | 4 GiB   | Up to 2.25 Gbps | Up to 10 Gbps
c5.xlarge     | 4     | 8 GiB   | Up to 2.25 Gbps | Up to 10 Gbps
c5.2xlarge    | 8     | 16 GiB  | Up to 2.25 Gbps | Up to 10 Gbps
c5.4xlarge    | 16    | 32 GiB  | 2.25 Gbps       | Up to 10 Gbps
c5.9xlarge    | 36    | 72 GiB  | 4.5 Gbps        | 10 Gbps
c5.18xlarge   | 72    | 144 GiB | 9 Gbps          | 25 Gbps

Each vCPU is a hardware hyperthread on a 3.0 GHz Intel Xeon Platinum 8000-series processor. This custom processor, optimized for EC2, gives you full control over the C-states on the two largest sizes, allowing you to run a single core at up to 3.5 GHz using Intel Turbo Boost Technology.

As you can see from the table, the four smallest instance sizes offer substantially more EBS and network bandwidth than the previous generation of compute-intensive instances.

Because all networking and storage functionality is implemented in hardware, C5 instances require HVM AMIs that include drivers for the Elastic Network Adapter (ENA) and NVMe. The latest Amazon Linux, Microsoft Windows, Ubuntu, RHEL, CentOS, SLES, Debian, and FreeBSD AMIs all support C5 instances. If you are doing machine learning inferencing, or other compute-intensive work, be sure to check out the most recent version of the Intel Math Kernel Library. It has been optimized for the Intel® Xeon® Platinum processor and has the potential to greatly accelerate your work.

In order to remain compatible with instances that use the Xen hypervisor, the device names for EBS volumes will continue to use the existing /dev/sd and /dev/xvd prefixes. The device name that you provide when you attach a volume to an instance is not used because the NVMe driver assigns its own device name (read Amazon EBS and NVMe to learn more):

The nvme command displays additional information about each volume (install it using sudo yum -y install nvme-cli if necessary):

The SN field in the output can be mapped to an EBS volume ID by inserting a “-” after the “vol” prefix (sadly, the NVMe SN field is not long enough to store the entire ID). Here’s a simple script that uses this information to create an EBS snapshot of each attached volume:

$ sudo nvme list | \
  awk '/dev/ {print(gensub("vol", "vol-", 1, $2))}' | \
  xargs -n 1 aws ec2 create-snapshot --volume-id

With a little more work (and a lot of testing), you could create a script that expands EBS volumes that are getting full.
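As a starting point, such a script might watch filesystem usage and call the Elastic Volumes ModifyVolume API when a volume crosses a threshold. A minimal sketch, assuming you already know which volume ID backs each mount point (and remembering that the filesystem itself still needs to be grown afterwards):

  import shutil
  import boto3

  ec2 = boto3.client("ec2")

  # Hypothetical mapping from mount point to EBS volume ID; in practice you
  # would derive this from the nvme list output shown above.
  VOLUMES = {"/data": "vol-0123456789abcdef0"}

  for mount_point, volume_id in VOLUMES.items():
      usage = shutil.disk_usage(mount_point)
      if usage.used / usage.total > 0.80:  # arbitrary 80% threshold
          size_gib = ec2.describe_volumes(VolumeIds=[volume_id])["Volumes"][0]["Size"]
          # Grow the volume by 50%; follow up with resize2fs or xfs_growfs.
          ec2.modify_volume(VolumeId=volume_id, Size=int(size_gib * 1.5))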

Getting to C5
As I mentioned earlier, our effort to offload work to hardware accelerators has been underway for quite some time. Here’s a recap:

CC1 – Launched in 2010, the CC1 was designed to support scale-out HPC applications. It was the first EC2 instance to support 10 Gbps networking and one of the first to support HVM virtualization. The network fabric that we designed for the CC1 (based on our own switch hardware) has become the standard for all AWS data centers.

C3 – Launched in 2013, the C3 introduced Enhanced Networking and uses dedicated hardware accelerators to support the software defined network inside of each Virtual Private Cloud (VPC). Hardware virtualization removes the I/O stack from the hypervisor in favor of direct access by the guest OS, resulting in higher performance and reduced variability.

C4 – Launched in 2015, the C4 instances are EBS Optimized by default via a dedicated network connection, and also offload EBS processing (including CPU-intensive crypto operations for encrypted EBS volumes) to a hardware accelerator.

C5 – Launched today, the C5 instances run on a new hypervisor that allows practically all of the resources of the host CPU to be devoted to customer instances. The ENA networking and the NVMe interface to EBS are both powered by hardware accelerators. The instances do not require (or support) the Xen paravirtual networking or block device drivers, both of which have been removed in order to increase efficiency.

Going forward, we’ll use this hypervisor to power other instance types and plan to share additional technical details in a set of AWS re:Invent sessions.

Launch a C5 Today
You can launch C5 instances today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) Regions in On-Demand and Spot form (Reserved Instances are also available), with additional Regions in the works.

One quick note before I go: The current NVMe driver is not optimized for high-performance sequential workloads and we don’t recommend the use of C5 instances in conjunction with sc1 or st1 volumes. We are aware of this issue and have been working to optimize the driver for this important use case.

Jeff;

Introducing Cost Allocation Tags for Amazon SQS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/introducing-cost-allocation-tags-for-amazon-sqs/

You have long had the ability to tag your AWS resources and to see cost breakouts on a per-tag basis. Cost allocation was launched in 2012 (see AWS Cost Allocation for Customer Bills) and we have steadily added support for additional services, most recently DynamoDB (Introducing Cost Allocation Tags for Amazon DynamoDB), Lambda (AWS Lambda Supports Tagging and Cost Allocations), and EBS (New – Cost Allocation for AWS Snapshots).

Today, we are launching tag-based cost allocation for Amazon Simple Queue Service (SQS). You can now assign tags to your queues and use them to manage your costs at any desired level: application, application stage (for a loosely coupled application that communicates via queues), project, department, or developer. After you have tagged your queues, you can use the AWS Tag Editor to search for queues that have tags of interest.

Here’s how I would add three tags (app, stage, and department) to one of my queues in the SQS console.
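The same operation is a one-liner from code. A minimal boto3 sketch, with a placeholder queue URL and made-up tag values:

  import boto3

  sqs = boto3.client("sqs")
  queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/MyQueue"  # placeholder

  # Attach the three cost-allocation tags to the queue.
  sqs.tag_queue(
      QueueUrl=queue_url,
      Tags={"app": "inventory", "stage": "prod", "department": "engineering"},
  )

  # Verify that the tags were applied.
  print(sqs.list_queue_tags(QueueUrl=queue_url)["Tags"])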

This feature is available now in all AWS Regions and you can start using it today! To learn more about tagging, read Tagging Your Amazon SQS Queues. To learn more about cost allocation via tags, read Using Cost Allocation Tags. To learn more about how to use message queues to build loosely coupled microservices for modern applications, read our blog post (Building Loosely Coupled, Scalable, C# Applications with Amazon SQS and Amazon SNS) and watch the recording of our recent webinar, Decouple and Scale Applications Using Amazon SQS and Amazon SNS.

If you are coming to AWS re:Invent, plan to attend session ARC 330: How the BBC Built a Massive Media Pipeline Using Microservices. In the talk you will find out how they used SNS and SQS to improve the elasticity and reliability of the BBC iPlayer architecture.

Jeff;

PlayerUnknown’s Battlegrounds on a Game Boy?!

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/playerunknowns-battlegrounds-game-boy/

My evenings spent watching the Polygon Awful Squad play PlayerUnknown’s Battlegrounds for hours on end have made me mildly obsessed with this record-breaking Steam game.


So when Michael Darby’s latest PUBG-inspired Game Boy build appeared in my notifications last week, I squealed with excitement and quickly sent the link to my team…while drinking a cocktail by a pool in Turkey ☀️🍹


PlayerUnknown’s Battlegrounds

For those unfamiliar with the game: PlayerUnknown’s Battlegrounds, or PUBG for short, is a Battle-Royale-style multiplayer online video game in which individuals or teams fight to the death on an island map. As players collect weapons, ammo, and transport, their ‘safe zone’ shrinks, forcing a final face-off until only one character remains.

The game has been an astounding success on Steam, the digital distribution platform which brings PUBG to the masses. It records daily player counts of over a million!


Yeah, I’d say one or two people seem to enjoy it!

PUBG on a Game Boy?!

As it’s a fairly complex game, let’s get this out of the way right now: no, Michael is not running the entire game on a Nintendo Game Boy. That would be impossible. Instead, he’s streaming the game from his home PC to a Raspberry Pi Zero W fitted within the hacked handheld console.

Michael removed the excess plastic inside an old Game Boy Color shell to make space for a Zero W, LiPo battery, and TFT screen. He then soldered the necessary buttons to GPIO pins, and wrote a Python script to control them.
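Michael’s full script is linked below; as a rough illustration of the approach, a button-to-GPIO loop on the Pi can be as small as this sketch (the pin assignments here are hypothetical, not his actual wiring):

  from gpiozero import Button
  from signal import pause

  # Hypothetical pin assignments; see Michael's tutorial for the real wiring.
  BUTTONS = {"a": Button(17), "b": Button(27), "up": Button(22), "down": Button(23)}

  def make_handler(name):
      def handler():
          # A real script would inject a key or gamepad event here.
          print(name, "pressed")
      return handler

  for name, button in BUTTONS.items():
      button.when_pressed = make_handler(name)

  pause()  # block forever, reacting to button presses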


The maker battleground

The full script can be found here, along with a more detailed tutorial for the build.

In order to stream PUBG to the Zero W, Michael uses Moonlight, an open-source implementation of NVIDIA’s GameStream protocol. He set his PC’s screen resolution to 800×600 and its frame rate to 30 fps so that streaming the game to the TFT screen works perfectly, albeit with no sound.


The end result is a rather impressive build that has confused YouTube commenters since Michael uploaded footage of it last week. The video has more than 60,000 views to date, so it appears we’re not the only ones impressed with his make.

314reactor

If you’re a regular reader of our blog, you may recognise Michael’s name from his recent Nerf blaster mod. And fans of Raspberry Pi may also have seen his Pi-powered Windows 98 wristwatch earlier in the year. He blogs at 314reactor, where you can read more about his digital making projects.


Player Two has entered the game

Now it’s your turn. Have you used a Raspberry Pi to create a gaming system? I’m not just talking arcades and RetroPie here. We want to see everything, from Pi-powered board games to tech on the football field.

Share your builds in the comments below and while you’re at it, what game would you like to stream to a handheld device?

The post PlayerUnknown’s Battlegrounds on a Game Boy?! appeared first on Raspberry Pi.

A few tidbits on networking in games

Post Syndicated from Eevee original https://eev.ee/blog/2017/05/22/a-few-tidbits-on-networking-in-games/

Nova Dasterin asks, via Patreon:

How about do something on networking code, for some kind of realtime game (platformer or MMORPG or something). 😀

Ah, I see. You’re hoping for my usual detailed exploration of everything I know about networking code in games.

Well, joke’s on you! I don’t know anything about networking.

Wait… wait… maybe I know one thing.

Doom

Surprise! The thing I know is, roughly, how multiplayer Doom works.

Doom is 100% deterministic. Its random number generator is really a list of shuffled values; each request for a random number produces the next value in the list. There is no seed, either; a game always begins at the first value in the list. Thus, if you play the game twice with exactly identical input, you’ll see exactly the same playthrough: same damage, same monster behavior, and so on.
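That design is small enough to sketch in a few lines. Here’s a toy Python version; the values are the first few entries of Doom’s real 256-entry table, truncated for brevity:

  # Doom-style RNG: a fixed table and a cursor, no seed.
  RND_TABLE = [0, 8, 109, 220, 222, 241, 149, 107,
               75, 248, 254, 140, 16, 66, 74, 21]
  rnd_index = 0

  def p_random():
      """Return the next 'random' byte; identical runs see identical streams."""
      global rnd_index
      rnd_index = (rnd_index + 1) % len(RND_TABLE)
      return RND_TABLE[rnd_index]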

And that’s exactly what a Doom demo is: a file containing a recording of player input. To play back a demo, Doom runs the game as normal, except that it reads input from a file rather than the keyboard.

Multiplayer works the same way. Rather than passing around the entirety of the world state, Doom sends the player’s input to all the other players. Once a node has received input from every connected player, it advances the world by one tic. There’s no client or server; every peer talks to every other peer.
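In sketch form, each peer’s main loop looks something like this (networking and the game simulation are elided; run_tic is a stand-in for Doom’s world update):

  from collections import defaultdict

  PEERS = {0, 1, 2, 3}         # player numbers, including ourselves
  pending = defaultdict(dict)  # tic number -> {player: input command}
  current_tic = 0

  def on_input(tic, player, tic_cmd):
      """Record an input command, whether local or received from a peer."""
      pending[tic][player] = tic_cmd

  def try_advance(world):
      """Advance the world only when every peer's input for the tic is here."""
      global current_tic
      while set(pending[current_tic]) == PEERS:
          world.run_tic(pending.pop(current_tic))  # same inputs on every node
          current_tic += 1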

You can read the code if you want to, but at a glance, I don’t think there’s anything too surprising here. Only sending input means there’s not that much to send, and the receiving end just has to queue up packets from every peer and then play them back once it’s heard from everyone. The underlying transport was pluggable (this being the days before we’d even standardized on IP), which complicated things a bit, but the Unix port that’s on GitHub just uses UDP. The Doom Wiki has some further detail.

This approach is very clever and has a few significant advantages. Bandwidth requirements are fairly low, which is important if it happens to be 1993. Bandwidth and processing requirements are also completely unaffected by the size of the map, since map state never touches the network.

Unfortunately, it has some drawbacks as well. The biggest is that, well, sometimes you want to get the world state back in sync. What if a player drops and wants to reconnect? Everyone has to quit and reconnect to one another. What if an extra player wants to join in? It’s possible to load a saved game in multiplayer, but because the saved game won’t have an actor for the new player, you can’t really load it; you’d have to start fresh from the beginning of a map.

It’s fairly fundamental that Doom allows you to save your game at any moment… but there’s no way to load in the middle of a network game. Everyone has to quit and restart the game, loading the right save file from the command line. And if some players load the wrong save file… I’m not actually sure what happens! I’ve seen ZDoom detect the inconsistency and refuse to start the game, but I suspect that in vanilla Doom, players would have mismatched world states and their movements would look like nonsense when played back in each other’s worlds.

Ah, yes. Having the entire game state be generated independently by each peer leads to another big problem.

Cheating

Maybe this wasn’t as big a deal with Doom, where you’d probably be playing with friends or acquaintances (or coworkers). Modern games have matchmaking that pits you against strangers, and the trouble with strangers is that a nontrivial number of them are assholes.

Doom is a very moddable game, and it doesn’t check that everyone is using exactly the same game data. As long as you don’t change anything that would alter the shape of the world or change the number of RNG rolls (since those would completely desynchronize you from other players), you can modify your own game however you like, and no one will be the wiser. For example, you might change the light level in a dark map so you can see more easily than the other players. Lighting doesn’t affect the game, only how it’s drawn, and it doesn’t go over the network, so the change is undetectable.

Or you could alter the executable itself! It knows everything about the game state, including the health and loadout of the other players; altering it to show you this information would give you an advantage. Also, all that’s sent is input; no one said the input had to come from a human. The game knows where all the other players are, so you could modify it to generate the right input to automatically aim at them. Congratulations; you’ve invented the aimbot.

I don’t know how you can reliably fix these issues. There seems to be an entire underground ecosystem built around playing cat and mouse with game developers. Perhaps the most infamous example is World of Warcraft, where people farm in-game gold as automatically as possible to sell to other players for real-world cash.

Egregious cheating in multiplayer really gets on my nerves; I couldn’t bear knowing that it was rampant in a game I’d made. So I will probably not be working on anything with random matchmaking anytime soon.

Starbound

Let’s jump to something a little more concrete and modern.

Starbound is a procedurally generated universe exploration game — like Terraria in space. Or, if you prefer, like Minecraft in space and also flat. Notably, it supports multiplayer, using the more familiar client/server approach. The server uses the same data files as single-player, but it runs as a separate process; if you want to run a server on your own machine, you run the server and then connect to localhost with the client.

I’ve run a server before, but that doesn’t tell me anything about how it works. Starbound is an interesting example because of the existence of StarryPy — a proxy server that can add some interesting extra behavior by intercepting packets going to and from the real server.

That means StarryPy necessarily knows what the protocol looks like, and perhaps we can glean some insights by poking around in it. Right off the bat there’s a list of all the packet types and rough shapes of their data.

I modded StarryPy to print out every single decoded packet it received (from either the client or the server), then connected and immediately disconnected. (Note that these aren’t necessarily TCP packets; they’re just single messages in the Starbound protocol.) Here is my quick interpretation of what happens:

  1. The client and server briefly negotiate a connection. The password, if any, is sent with a challenge and response.

  2. The client sends a full description of its “ship world” — the player’s ship, which they take with them to other servers. The server sends a partial description of the planet the player is either on, or orbiting.

  3. From here, the server and client mostly communicate world state in the form of small delta updates. StarryPy doesn’t delve into the exact format here, unfortunately. The world basically freezes around you during a multiplayer lag spike, though, so it’s safe to assume that the vast bulk of game simulation happens server-side, and the effects are broadcast to clients.

The protocol has specific message types for various player actions: damaging tiles, dropping items, connecting wires, collecting liquids, moving your ship, and so on. So the basic model is that the player can attempt to do stuff with the chunk of the world they’re looking at, and they’ll get a reaction whenever the server gets back to them.

(I’m dimly aware that some subset of object interactions can happen client-side, but I don’t know exactly which ones. The implications for custom scripted objects are… interesting. Actually, those are slightly hellish in general; Starbound is very moddable, but last I checked it has no way to send mods from the server to the client or anything similar, and by default the server doesn’t even enforce that everyone’s using the same set of mods… so it’s possible that you’ll have an object on your ship that’s only provided by a mod you have but the server lacks, and then who knows what happens.)
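Stripped of all the Starbound specifics, the server-authoritative delta model looks roughly like this toy sketch (the state keys and the trivial diffing are made up for illustration):

  # Toy model: the server simulates, diffs its state, and ships only changes.
  def diff(old, new):
      return {k: v for k, v in new.items() if old.get(k) != v}

  class Server:
      def __init__(self):
          self.state = {"player_x": 0, "player_y": 0, "tile_3_4": "dirt"}

      def step(self, client_requests):
          old = dict(self.state)
          for request in client_requests:  # e.g. {"tile_3_4": "air"}
              self.state.update(request)   # a real server validates first
          return diff(old, self.state)     # broadcast this delta to clients

  class Client:
      def __init__(self):
          self.state = {}                  # converges toward the server's view

      def apply_delta(self, delta):
          self.state.update(delta)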

IRC

Hang on, this isn’t a video game at all.

Starbound’s “fire and forget” approach reminds me a lot of IRC — a protocol I’ve even implemented, a little bit, kinda. IRC doesn’t have any way to match the messages you send to the responses you get back, and success is silent for some kinds of messages, so it’s impossible (in the general case) to know what caused an error. The most obvious fix for this would be to attach a message id to messages sent out by the client, and include the same id on responses from the server.
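The idea is simple enough to show in a few lines; it’s roughly what IRCv3’s labeled-response extension does. A toy sketch over a line-based JSON protocol (entirely made up, not Starbound’s or IRC’s actual wire format):

  import itertools
  import json

  _next_id = itertools.count(1)
  _pending = {}  # message id -> command awaiting a response

  def send_request(sock, command, **params):
      """Tag each outgoing request with a fresh id and remember it."""
      msg_id = next(_next_id)
      _pending[msg_id] = command
      sock.sendall(json.dumps({"id": msg_id, "cmd": command, **params}).encode() + b"\n")

  def handle_message(raw):
      """Match a server message back to its request, if it carries our id."""
      msg = json.loads(raw)
      command = _pending.pop(msg.get("id"), None)
      if command is None:
          print("unsolicited push:", msg)          # normal stream of state
      else:
          print("response to", command, ":", msg)  # errors now have a cause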

It doesn’t look like Starbound has message ids or any other solution to this problem — though StarryPy doesn’t document the protocol well enough for me to be sure. The server just sends a stream of stuff it thinks is important, and when it gets a request from the client, it queues up a response to that as well. It’s TCP, so the client should get all the right messages, eventually. Some of them might be slightly out of order depending on the order the client does stuff, but that’s not a big deal; anyway, the server knows the canonical state.

Some thoughts

I bring up IRC because I’m kind of at the limit of things that I know. But one of those things is that IRC is simultaneously very rickety and wildly successful: it’s a decade older than Google and still in use. (Some recent offerings are starting to eat its lunch, but that’s really because clients are inaccessible to new users and the protocol hasn’t evolved much. The problems with the fundamental design of the protocol are only obvious to server and client authors.)

Doom’s cheery assumption that the game will play out the same way for every player feels similarly rickety. Obviously it works — well enough that you can go play multiplayer Doom with exactly the same approach right now, 24 years later — but for something as complex as an FPS it really doesn’t feel like it should.

So while I don’t have enough experience writing multiplayer games to give you a run-down of how to do it, I think the lesson here is that you can get pretty far with simple ideas. Maybe your game isn’t deterministic like Doom — although there’s no reason it couldn’t be — but you probably still have to save the game, or at least restore the state of the world on death/loss/restart, right? There you go: you already have a fragment of a concept of entity state outside the actual entities. Codify that, stick it on the network, and see what happens.

I don’t know if I’ll be doing any significant multiplayer development myself; I don’t even play many multiplayer games. But I’d always assumed it would be a nigh-impossible feat of architectural engineering, and I’m starting to think that maybe it’s no more difficult than anything else in game dev. Easy to fudge, hard to do well, impossible to truly get right so give up that train of thought right now.

Also now I am definitely thinking about how a multiplayer puzzle-platformer would work.

In Case You Missed These: AWS Security Blog Posts from January, February, and March

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/in-case-you-missed-these-aws-security-blog-posts-from-january-february-and-march/


In case you missed any AWS Security Blog posts published so far in 2017, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from protecting dynamic web applications against DDoS attacks to monitoring AWS account configuration changes and API calls to Amazon EC2 security groups.

March

March 22: How to Help Protect Dynamic Web Applications Against DDoS Attacks by Using Amazon CloudFront and Amazon Route 53
Using a content delivery network (CDN) such as Amazon CloudFront to cache and serve static text and images or downloadable objects such as media files and documents is a common strategy to improve webpage load times, reduce network bandwidth costs, lessen the load on web servers, and mitigate distributed denial of service (DDoS) attacks. AWS WAF is a web application firewall that can be deployed on CloudFront to help protect your application against DDoS attacks by giving you control over which traffic to allow or block by defining security rules. When users access your application, the Domain Name System (DNS) translates human-readable domain names (for example, www.example.com) to machine-readable IP addresses (for example, 192.0.2.44). A DNS service, such as Amazon Route 53, can effectively connect users’ requests to a CloudFront distribution that proxies requests for dynamic content to the infrastructure hosting your application’s endpoints. In this blog post, I show you how to deploy CloudFront with AWS WAF and Route 53 to help protect dynamic web applications (with dynamic content such as a response to user input) against DDoS attacks. The steps shown in this post are key to implementing the overall approach described in AWS Best Practices for DDoS Resiliency and enable the built-in, managed DDoS protection service, AWS Shield.

March 21: New AWS Encryption SDK for Python Simplifies Multiple Master Key Encryption
The AWS Cryptography team is happy to announce a Python implementation of the AWS Encryption SDK. This new SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards. The SDK also includes ready-to-use examples. If you are a Java developer, you can refer to this blog post to see specific Java examples for the SDK. In this blog post, I show you how you can use the AWS Encryption SDK to simplify the process of encrypting data and how to protect your encryption keys in ways that help improve application availability by not tying you to a single region or key management solution.

March 21: Updated CJIS Workbook Now Available by Request
The need for guidance when implementing Criminal Justice Information Services (CJIS)–compliant solutions has become of paramount importance as more law enforcement customers and technology partners move to store and process criminal justice data in the cloud. AWS services allow these customers to easily and securely architect a CJIS-compliant solution when handling criminal justice data, creating a durable, cost-effective, and secure IT infrastructure that better supports local, state, and federal law enforcement in carrying out their public safety missions. AWS has created several documents (collectively referred to as the CJIS Workbook) to assist you in aligning with the FBI’s CJIS Security Policy. You can use the workbook as a framework for developing CJIS-compliant architecture in the AWS Cloud. The workbook helps you define and test the controls you operate, and document the dependence on the controls that AWS operates (compute, storage, database, networking, regions, Availability Zones, and edge locations).

March 9: New Cloud Directory API Makes It Easier to Query Data Along Multiple Dimensions
Today, we made available a new Cloud Directory API, ListObjectParentPaths, that enables you to retrieve all available parent paths for any directory object across multiple hierarchies. Use this API when you want to fetch all parent objects for a specific child object. The order of the paths and objects returned is consistent across iterative calls to the API, unless objects are moved or deleted. In case an object has multiple parents, the API allows you to control the number of paths returned by using a paginated call pattern. In this blog post, I use an example directory to demonstrate how this new API enables you to retrieve data across multiple dimensions to implement powerful applications quickly.

March 8: How to Access the AWS Management Console Using AWS Microsoft AD and Your On-Premises Credentials
AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, is a managed Microsoft Active Directory (AD) hosted in the AWS Cloud. Now, AWS Microsoft AD makes it easy for you to give your users permission to manage AWS resources by using on-premises AD administrative tools. With AWS Microsoft AD, you can grant your on-premises users permissions to resources such as the AWS Management Console instead of adding AWS Identity and Access Management (IAM) user accounts or configuring AD Federation Services (AD FS) with Security Assertion Markup Language (SAML). In this blog post, I show how to use AWS Microsoft AD to enable your on-premises AD users to sign in to the AWS Management Console with their on-premises AD user credentials to access and manage AWS resources through IAM roles.

March 7: How to Protect Your Web Application Against DDoS Attacks by Using Amazon Route 53 and an External Content Delivery Network
Distributed Denial of Service (DDoS) attacks are attempts by a malicious actor to flood a network, system, or application with more traffic, connections, or requests than it is able to handle. To protect your web application against DDoS attacks, you can use AWS Shield, a DDoS protection service that AWS provides automatically to all AWS customers at no additional charge. You can use AWS Shield in conjunction with DDoS-resilient web services such as Amazon CloudFront and Amazon Route 53 to improve your ability to defend against DDoS attacks. Learn more about architecting for DDoS resiliency by reading the AWS Best Practices for DDoS Resiliency whitepaper. You also have the option of using Route 53 with an externally hosted content delivery network (CDN). In this blog post, I show how you can help protect the zone apex (also known as the root domain) of your web application by using Route 53 to perform a secure redirect to prevent discovery of your application origin.


February

February 27: Now Generally Available – AWS Organizations: Policy-Based Management for Multiple AWS Accounts
Today, AWS Organizations moves from Preview to General Availability. You can use Organizations to centrally manage multiple AWS accounts, with the ability to create a hierarchy of organizational units (OUs). You can assign each account to an OU, define policies, and then apply those policies to an entire hierarchy, specific OUs, or specific accounts. You can invite existing AWS accounts to join your organization, and you can also create new accounts. All of these functions are available from the AWS Management Console, the AWS Command Line Interface (CLI), and through the AWS Organizations API. To read the full AWS Blog post about today’s launch, see AWS Organizations – Policy-Based Management for Multiple AWS Accounts.

February 23: s2n Is Now Handling 100 Percent of SSL Traffic for Amazon S3
Today, we’ve achieved another important milestone for securing customer data: we have replaced OpenSSL with s2n for all internal and external SSL traffic in Amazon Simple Storage Service (Amazon S3) commercial regions. This was implemented with minimal impact to customers, and multiple means of error checking were used to ensure a smooth transition, including client integration tests, catching potential interoperability conflicts, and identifying memory leaks through fuzz testing.

February 22: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials. IAM roles for EC2 make it easier for your applications to make API requests securely from an instance because they do not require you to manage AWS security credentials that the applications use. Recently, we enabled you to use temporary security credentials for your applications by attaching an IAM role to an existing EC2 instance by using the AWS CLI and SDK. To learn more, see New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI. Starting today, you can attach an IAM role to an existing EC2 instance from the EC2 console. You can also use the EC2 console to replace an IAM role attached to an existing instance. In this blog post, I will show how to attach an IAM role to an existing EC2 instance from the EC2 console.

February 22: How to Audit Your AWS Resources for Security Compliance by Using Custom AWS Config Rules
AWS Config Rules enables you to implement security policies as code for your organization and evaluate configuration changes to AWS resources against these policies. You can use Config rules to audit your use of AWS resources for compliance with external compliance frameworks such as CIS AWS Foundations Benchmark and with your internal security policies related to the US Health Insurance Portability and Accountability Act (HIPAA), the Federal Risk and Authorization Management Program (FedRAMP), and other regimes. AWS provides some predefined, managed Config rules. You also can create custom Config rules based on criteria you define within an AWS Lambda function. In this post, I show how to create a custom rule that audits AWS resources for security compliance by enabling VPC Flow Logs for an Amazon Virtual Private Cloud (VPC). The custom rule meets requirement 4.3 of the CIS AWS Foundations Benchmark: “Ensure VPC flow logging is enabled in all VPCs.”

February 13: AWS Announces CISPE Membership and Compliance with First-Ever Code of Conduct for Data Protection in the Cloud
I have two exciting announcements today, both showing AWS’s continued commitment to ensuring that customers can comply with EU Data Protection requirements when using our services.

February 13: How to Enable Multi-Factor Authentication for AWS Services by Using AWS Microsoft AD and On-Premises Credentials
You can now enable multi-factor authentication (MFA) for users of AWS services such as Amazon WorkSpaces and Amazon QuickSight and their on-premises credentials by using your AWS Directory Service for Microsoft Active Directory (Enterprise Edition) directory, also known as AWS Microsoft AD. MFA adds an extra layer of protection to a user name and password (the first “factor”) by requiring users to enter an authentication code (the second factor), which has been provided by your virtual or hardware MFA solution. These factors together provide additional security by preventing access to AWS services, unless users supply a valid MFA code.

February 13: How to Create an Organizational Chart with Separate Hierarchies by Using Amazon Cloud Directory
Amazon Cloud Directory enables you to create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries. Cloud Directory offers you the flexibility to create directories with hierarchies that span multiple dimensions. For example, you can create an organizational chart that you can navigate through separate hierarchies for reporting structure, location, and cost center. In this blog post, I show how to use Cloud Directory APIs to create an organizational chart with two separate hierarchies in a single directory. I also show how to navigate the hierarchies and retrieve data. I use the Java SDK for all the sample code in this post, but you can use other language SDKs or the AWS CLI.

February 10: How to Easily Log On to AWS Services by Using Your On-Premises Active Directory
AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as Microsoft AD, now enables your users to log on with just their on-premises Active Directory (AD) user name—no domain name is required. This new domainless logon feature makes it easier to set up connections to your on-premises AD for use with applications such as Amazon WorkSpaces and Amazon QuickSight, and it keeps the user logon experience free from network naming. This new interforest trusts capability is now available when using Microsoft AD with Amazon WorkSpaces and Amazon QuickSight Enterprise Edition. In this blog post, I explain how Microsoft AD domainless logon works with AD interforest trusts, and I show an example of setting up Amazon WorkSpaces to use this capability.

February 9: New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials that AWS creates, distributes, and rotates automatically. Using temporary credentials is an IAM best practice because you do not need to maintain long-term keys on your instance. Using IAM roles for EC2 also eliminates the need to use long-term AWS access keys that you have to manage manually or programmatically. Starting today, you can enable your applications to use temporary security credentials provided by AWS by attaching an IAM role to an existing EC2 instance. You can also replace the IAM role attached to an existing EC2 instance. In this blog post, I show how you can attach an IAM role to an existing EC2 instance by using the AWS CLI.

February 8: How to Remediate Amazon Inspector Security Findings Automatically
The Amazon Inspector security assessment service can evaluate the operating environments and applications you have deployed on AWS for common and emerging security vulnerabilities automatically. As an AWS-built service, Amazon Inspector is designed to exchange data and interact with other core AWS services not only to identify potential security findings but also to automate addressing those findings. Previous related blog posts showed how you can deliver Amazon Inspector security findings automatically to third-party ticketing systems and automate the installation of the Amazon Inspector agent on new Amazon EC2 instances. In this post, I show how you can automatically remediate findings generated by Amazon Inspector. To get started, you must first run an assessment and publish any security findings to an Amazon Simple Notification Service (SNS) topic. Then, you create an AWS Lambda function that is triggered by those notifications. Finally, the Lambda function examines the findings and then implements the appropriate remediation based on the type of issue.

February 6: How to Simplify Security Assessment Setup Using Amazon EC2 Systems Manager and Amazon Inspector
In a July 2016 AWS Blog post, I discussed how to integrate Amazon Inspector with third-party ticketing systems by using Amazon Simple Notification Service (SNS) and AWS Lambda. This AWS Security Blog post continues in the same vein, describing how to use Amazon Inspector to automate various aspects of security management. In this post, I show you how to install the Amazon Inspector agent automatically through the Amazon EC2 Systems Manager when a new Amazon EC2 instance is launched. In a subsequent post, I will show you how to update EC2 instances automatically that run Linux when Amazon Inspector discovers a missing security patch.


January

January 30: How to Protect Data at Rest with Amazon EC2 Instance Store Encryption
Encrypting data at rest is vital for regulatory compliance to ensure that sensitive data saved on disks is not readable by any user or application without a valid key. Some compliance regulations such as PCI DSS and HIPAA require that data at rest be encrypted throughout the data lifecycle. To this end, AWS provides data-at-rest options and key management to support the encryption process. For example, you can encrypt Amazon EBS volumes and configure Amazon S3 buckets for server-side encryption (SSE) using AES-256 encryption. Additionally, Amazon RDS supports Transparent Data Encryption (TDE). Instance storage provides temporary block-level storage for Amazon EC2 instances. This storage is located on disks attached physically to a host computer. Instance storage is ideal for temporary storage of information that frequently changes, such as buffers, caches, and scratch data. By default, files stored on these disks are not encrypted. In this blog post, I show a method for encrypting data on Linux EC2 instance stores by using Linux built-in libraries. This method encrypts files transparently, which protects confidential data. As a result, applications that process the data are unaware of the disk-level encryption.

January 27: How to Detect and Automatically Remediate Unintended Permissions in Amazon S3 Object ACLs with CloudWatch Events
Amazon S3 Access Control Lists (ACLs) enable you to specify permissions that grant access to S3 buckets and objects. When S3 receives a request for an object, it verifies whether the requester has the necessary access permissions in the associated ACL. For example, you could set up an ACL for an object so that only the users in your account can access it, or you could make an object public so that it can be accessed by anyone. If the number of objects and users in your AWS account is large, ensuring that you have attached correctly configured ACLs to your objects can be a challenge. For example, what if a user were to call the PutObjectAcl API call on an object that is supposed to be private and make it public? Or, what if a user were to call the PutObject with the optional Acl parameter set to public-read, therefore uploading a confidential file as publicly readable? In this blog post, I show a solution that uses Amazon CloudWatch Events to detect PutObject and PutObjectAcl API calls in near-real time and helps ensure that the objects remain private by making automatic PutObjectAcl calls, when necessary.

January 26: Now Available: Amazon Cloud Directory—A Cloud-Native Directory for Hierarchical Data
Today we are launching Amazon Cloud Directory. This service is purpose-built for storing large amounts of strongly typed hierarchical data. With the ability to scale to hundreds of millions of objects while remaining cost-effective, Cloud Directory is a great fit for all sorts of cloud and mobile applications.

January 24: New SOC 2 Report Available: Confidentiality
As with everything at Amazon, the success of our security and compliance program is primarily measured by one thing: our customers’ success. Our customers drive our portfolio of compliance reports, attestations, and certifications that support their efforts in running a secure and compliant cloud environment. As a result of our engagement with key customers across the globe, we are happy to announce the publication of our new SOC 2 Confidentiality report. This report is available now through AWS Artifact in the AWS Management Console.

January 18: Compliance in the Cloud for New Financial Services Cybersecurity Regulations
Financial regulatory agencies are focused more than ever on ensuring responsible innovation. Consequently, if you want to achieve compliance with financial services regulations, you must be increasingly agile and employ dynamic security capabilities. AWS enables you to achieve this by providing you with the tools you need to scale your security and compliance capabilities on AWS. The following breakdown of the most recent cybersecurity regulations, NY DFS Rule 23 NYCRR 500, demonstrates how AWS continues to focus on your regulatory needs in the financial services sector.

January 9: New Amazon GameDev Blog Post: Protect Multiplayer Game Servers from DDoS Attacks by Using Amazon GameLift
In online gaming, distributed denial of service (DDoS) attacks target a game’s network layer, flooding servers with requests until performance degrades considerably. These attacks can limit a game’s availability to players and limit the player experience for those who can connect. Today’s new Amazon GameDev Blog post uses a typical game server architecture to highlight DDoS attack vulnerabilities and discusses how to stay protected by using built-in AWS Cloud security, AWS security best practices, and the security features of Amazon GameLift. Read the post to learn more.

January 6: The Top 10 Most Downloaded AWS Security and Compliance Documents in 2016
The following list includes the 10 most downloaded AWS security and compliance documents in 2016. Using this list, you can learn about what other people found most interesting about security and compliance last year.

January 6: FedRAMP Compliance Update: AWS GovCloud (US) Region Receives a JAB-Issued FedRAMP High Baseline P-ATO for Three New Services
Three new services in the AWS GovCloud (US) region have received a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB) under the Federal Risk and Authorization Management Program (FedRAMP). JAB issued the authorization at the High baseline, which enables US government agencies and their service providers the capability to use these services to process the government’s most sensitive unclassified data, including Personal Identifiable Information (PII), Protected Health Information (PHI), Controlled Unclassified Information (CUI), criminal justice information (CJI), and financial data.

January 4: The Top 20 Most Viewed AWS IAM Documentation Pages in 2016
The following 20 pages were the most viewed AWS Identity and Access Management (IAM) documentation pages in 2016. I have included a brief description with each link to give you a clearer idea of what each page covers. Use this list to see what other people have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to research.

January 3: The Most Viewed AWS Security Blog Posts in 2016
The following 10 posts were the most viewed AWS Security Blog posts that we published during 2016. You can use this list as a guide to catch up on your blog reading or even read a post again that you found particularly useful.

January 3: How to Monitor AWS Account Configuration Changes and API Calls to Amazon EC2 Security Groups
You can use AWS security controls to detect and mitigate risks to your AWS resources. The purpose of each security control is defined by its control objective. For example, the control objective of an Amazon VPC security group is to permit only designated traffic to enter or leave a network interface. Let’s say you have an Internet-facing e-commerce website, and your security administrator has determined that only HTTP (TCP port 80) and HTTPS (TCP 443) traffic should be allowed access to the public subnet. As a result, your administrator configures a security group to meet this control objective. What if, though, someone were to inadvertently change this security group’s rules and enable FTP or other protocols to access the public subnet from any location on the Internet? That expanded access could weaken the security posture of your assets. Consequently, your administrator might need to monitor the integrity of your company’s security controls so that the controls maintain their desired effectiveness. In this blog post, I explore two methods for detecting unintended changes to VPC security groups. The two methods address not only control objectives but also control failures.

If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the forum identified near the end of each post.

– Craig

Launch: Amazon GameLift Now Supports All C++ and C# Game Engines

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/launch-amazon-gamelift-now-supports-all-c-and-c-game-engines/

Calling all Game Developers! GDC 2017 was a blast in San Francisco a couple of weeks ago, so there is no better time to be inspired and passionate about learning and building cool games.

Therefore, I am excited to share that Amazon GameLift is now available for all C++ and C# game engines, including Amazon Lumberyard, Unreal Engine, and Unity, all with enhanced game session matching capabilities. For those of you not familiar with Amazon GameLift, let me introduce this managed service designed to aid game developers in delivering fun and innovative online game experiences.

Amazon GameLift is a managed AWS service for hosting dedicated game servers, making it easier for game developers to scale their game capacity and match players into available game sessions. With Amazon GameLift, you can host servers, track game availability, defend game servers from distributed denial of service (DDoS) attacks, and deploy updates without taking your game offline. The Amazon GameLift service powers dedicated game servers for Amazon Game Studios, as well as external game development customers, and is designed to support session-based games with game loops that start and end within a specified time.

The latest Amazon GameLift release enhances the service’s current functionality and adds awesome new features to help simplify game development and deployment for developers. Let’s review some of the cool features of the Amazon GameLift service:

  • Multi-engine support: Initially, the Amazon GameLift service could only be used with the Amazon Lumberyard game engine. The service is now enhanced to integrate with popular game engines like Unreal Engine and Unity, as well as custom C# and C++ game engines.
  • New server SDK language support: In order to support a larger set of customers and developers, the service provides an Amazon GameLift Server SDK available for C# and C++. This includes an Unreal Engine plugin, which is a customized version of the C++ Server SDK that is compatible with the Unreal Engine API for Amazon GameLift.
  • Client SDK language support expansion: The Amazon GameLift Client SDK is bundled with the AWS SDK, which is available in many languages. This allows game developers to build game clients that integrate with the Amazon GameLift service in their language of choice.
  • Matchmaking: Amazon GameLift continually scans available game servers around the world and matches them against player requests to join games. If low-latency game servers are not available, you can configure the service to automatically add more capacity near your players. Amazon GameLift maintains a queue of waiting players until new games start or new instances launch, then places waiting players into the lowest latency game.
  • Player data handling: Game developers can now store custom player information and pass it directly to a game server. A game server or other game entity can then retrieve the player data from Amazon GameLift with an API call.
  • Console Support: Amazon GameLift supports games developed and architected for Xbox One and PS4.

Amazon GameLift does the heavy lifting once required to create session-based multiplayer games: it simplifies deploying, scaling, and maintaining game servers while reducing the time, cost, and risk of building the infrastructure from scratch.

The reference architecture of a gaming solution that uses Amazon GameLift looks as follows:

[Reference architecture diagram]

Integrating Amazon GameLift into Your Games

The process of integrating Amazon GameLift into your game build can be broken down into a few simple steps:

  1. Prepare your game server for hosting on Amazon GameLift by setting up your game server project with the Amazon GameLift Server SDK and adding communication code to the project.
  2. Package and upload your game server build to the AWS Region targeted for game deployment.
  3. Create and build a fleet of computing resources to host the game.
  4. Prepare your game client to connect to game sessions maintained by Amazon GameLift by using the AWS SDK with the Amazon GameLift APIs, and add code to the game client to call the Amazon GameLift service and identify the player region (see the client-side sketch after this list).
  5. Test your Amazon GameLift integration by connecting to an Amazon GameLift-hosted game session and verifying that game sessions are being created.
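
To make step 4 concrete, here is a minimal client-side sketch using the AWS SDK for C++ (aws-cpp-sdk-gamelift). The fleet ID and player ID are placeholders, error handling is trimmed, and in production these calls often come from a trusted backend service rather than the game client itself; treat this as an illustration, not the definitive integration.

#include <aws/core/Aws.h>
#include <aws/gamelift/GameLiftClient.h>
#include <aws/gamelift/model/CreateGameSessionRequest.h>
#include <aws/gamelift/model/CreatePlayerSessionRequest.h>
#include <iostream>

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::GameLift::GameLiftClient client;

        // Ask GameLift for a new game session on an existing fleet.
        Aws::GameLift::Model::CreateGameSessionRequest sessionReq;
        sessionReq.SetFleetId("fleet-EXAMPLE1234");          // placeholder
        sessionReq.SetMaximumPlayerSessionCount(8);
        auto sessionOutcome = client.CreateGameSession(sessionReq);

        if (sessionOutcome.IsSuccess()) {
            const auto& session = sessionOutcome.GetResult().GetGameSession();

            // Reserve a player slot; the game client then connects to the
            // returned IP and port using its own network protocol.
            Aws::GameLift::Model::CreatePlayerSessionRequest playerReq;
            playerReq.SetGameSessionId(session.GetGameSessionId());
            playerReq.SetPlayerId("player-1");               // placeholder
            auto playerOutcome = client.CreatePlayerSession(playerReq);
            if (playerOutcome.IsSuccess()) {
                const auto& ps = playerOutcome.GetResult().GetPlayerSession();
                std::cout << "Connect to " << ps.GetIpAddress()
                          << ":" << ps.GetPort() << std::endl;
            }
        }
    }
    Aws::ShutdownAPI(options);
    return 0;
}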

Let’s get started putting these steps into practice by setting up the Amazon GameLift Server SDK in a simple game server project using the Unreal game engine.

Unreal Engine (UE)

We start with Epic’s Unreal game engine. For simplicity, we will create the sample Shooter Game project, which has online multiplayer functionality built in, and save it locally on the computer.

Now that I have the Multiplayer Shooter Game sample downloaded and open locally on my machine, I need to be able to manipulate the C++ code to add the Amazon GameLift service to the UE Online Subsystem so it can manage the online game sessions. The Shooter Game sample leverages the Blueprints Visual Scripting system in Unreal Engine. Blueprints is a gameplay scripting system based on node-based interfaces in the UE editor, which enables game designers and content creators to create gameplay elements and functionality within the editor.

Since my goal is to use the Amazon GameLift C++ SDK to include the Amazon GameLift service in the game and alter the game code, I need to create a Visual Studio project solution that ties in the game and correlates the Shooter Game source code and binaries with the project. To accomplish this, I open the File menu and select the Generate Visual Studio Project Files option.

Once the project files have been generated, I only need to return to the File menu and select Open with Visual Studio to open the project and view the source code.

In preparation for adding the Amazon GameLift service to the Shooter Game for game session management, you need to enable the OnlineSubsystem module in your project. To do this, open the game build settings file in the Visual Studio project. Since this game project is named ShooterGame, the build file is named ShooterGame.Build.cs and is located in the Source/ShooterGame folder, as shown below.

Open your build file and uncomment the line for the OnlineSubsystemNull module. Since I am using a sample that already utilizes a multiplayer online system, my build options are set appropriately, and the code looks like this:

public class ShooterGame : ModuleRules
{
	public ShooterGame(TargetInfo Target)
	{
		PrivateIncludePaths.AddRange(
			new string[] {
				"ShooterGame/Classes/Player",
				"ShooterGame/Private",
				"ShooterGame/Private/UI",
				"ShooterGame/Private/UI/Menu",
				"ShooterGame/Private/UI/Style",
				"ShooterGame/Private/UI/Widgets",
			}
		);
		PublicDependencyModuleNames.AddRange(
			new string[] {
				"Core",
				"CoreUObject",
				"Engine",
				"OnlineSubsystem",
				"OnlineSubsystemUtils",
				"AssetRegistry",
				"AIModule",
				"GameplayTasks",
			}
		);
		PrivateDependencyModuleNames.AddRange(
			new string[] {
				"InputCore",
				"Slate",
				"SlateCore",
				"ShooterGameLoadingScreen",
				"Json"
			}
		);
		DynamicallyLoadedModuleNames.AddRange(
			new string[] {
				"OnlineSubsystemNull",
				"NetworkReplayStreaming",
				"NullNetworkReplayStreaming",
				"HttpNetworkReplayStreaming"
			}
		);
		PrivateIncludePathModuleNames.AddRange(
			new string[] {
				"NetworkReplayStreaming"
			}
		);
	}
}

Now that we are set with the Shooter Game project, let’s turn our attention to the Amazon GameLift SDK. I want to leverage the C++ SDK as a plugin for the Unreal Engine; therefore, I need to compile the SDK using a compilation directive that builds the binaries for this game engine.

With the SDK source downloaded, I can compile the SDK from the source based upon my operating system. Since I am using a Windows machine for this project, I will complete the following steps:

  • Make an out directory to hold the binaries generated from the code compilation:

mkdir out

  • Change to the previously created directory:

cd out

  • Use CMake to specify a build system generator for VS 2015 Win x64 and set the UE compilation flag:

cmake -DBUILD_FOR_UNREAL=1 -G "Visual Studio 14 2015 Win64" <source directory>

  • Build the C++ project to create the binaries using the selected build system (MSBuild for this project):

msbuild ALL_BUILD.vcxproj /p:Configuration=Release

With my libraries compiled, I should have the following binary files required to use the Amazon GameLift Unreal Engine plugin.

Linux:

* out/prefix/lib/aws-cpp-sdk-gamelift-server.so

Windows:

* out\prefix\bin\aws-cpp-sdk-gamelift-server.dll

* out\prefix\lib\aws-cpp-sdk-gamelift-server.lib

As you can see below, since I am on Windows, my compiled Amazon GameLift libraries, aws-cpp-sdk-gamelift-server.dll and aws-cpp-sdk-gamelift-server.lib, are located in the prefix\bin and prefix\lib folders respectively.

After copying the binaries to the GameLiftSDK Unreal Engine plugin folder, my Amazon GameLift plugin folder is configured and ready to be added to an Unreal Engine game project.

Given this, it is now time to add the Amazon GameLift plugin to the Unreal Engine ShooterGame project. I could use the Unreal Engine Editor to add the plugin, but instead, I will stay in the Visual Studio project and add the plugin by updating the game directory and project file.

In Windows Explorer, I add a folder called Plugins in the ShooterGame directory and copy my prepared GameLiftServerSDK folder into the directory as noted by the Unreal Engine documentation on plugins.

Now I will open up the ShooterGame.Build.cs file, which is a C# file that holds information about game dependencies.

Within the file I will add the following code:

PublicDependencyModuleNames.AddRange(
	new string[] {
		"Core",
		"CoreUObject",
		"Engine",
		"InputCore",
		"GameLiftServerSDK"
	}
);

Just to ensure all is in sync with the changes made thus far, I close Visual Studio, go back to the UE Editor, and select Refresh Visual Studio Project.

Upon completion, I select Open Visual Studio and the Plugins folder I added in the ShooterGame directory is now included in the project and able to be viewed in Solution Explorer.

Next, I rebuild my entire solution to get the Amazon GameLift SDK binaries integrated into the project.

I’ll go back to the UE Editor and select Build from the toolbar to ensure the Amazon GameLift plugin is included in my ShooterGame build. Once compilation is complete, a quick visit to the Settings toolbar and the Plugins option shows that the Amazon GameLift plugin is added and recognized in the project. I select the Enabled checkbox, which prompts me to restart the UE Editor. I select Restart Now and allow the Unreal Engine to rebuild the game code files.

Upon completion of the build, the editor will restart and reopen my ShooterGame.

Now things are set for the use of the Amazon GameLift SDK in the ShooterGame project.

With the Unreal editor open, I’ll use the Open Visual Studio menu option to get back to the ShooterGame code. This opens Visual Studio and the game code. With Visual Studio open, I go to the ShooterGameMode.cpp file to add the code to initialize the Amazon GameLift SDK. The key things I must do to correctly add the Amazon GameLift code to my Shooter game project are:

  1. Enclose the Amazon GameLift code within a preprocessor condition using the flag WITH_GAMELIFT=1 (a hedged sketch follows this list).
  2. Build a dedicated server in Unreal Engine for my targeted server OS (for example, Linux).
  3. Ensure my build target is a game server type, i.e. Type == TargetRules.TargetType.Server.
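
As a rough illustration of the first requirement, the guarded initialization might look like the following sketch, based on the Amazon GameLift Server SDK for C++. The callback bodies, port, and log path are placeholders, and your project’s real code should follow the documentation linked below.

#if WITH_GAMELIFT
#include <aws/gamelift/server/GameLiftServerAPI.h>

// Hedged sketch: initialize the Server SDK and tell GameLift this
// process is ready to host a game session.
void InitGameLift()
{
    if (!Aws::GameLift::Server::InitSDK().IsSuccess())
        return;

    Aws::GameLift::Server::ProcessParameters processParams(
        [](Aws::GameLift::Server::Model::GameSession gameSession) {
            // A game session was assigned to this process: load the map,
            // then signal that players may connect.
            Aws::GameLift::Server::ActivateGameSession();
        },
        []() {
            // GameLift asked this process to shut down gracefully.
            Aws::GameLift::Server::ProcessEnding();
        },
        []() { return true; },  // health check
        7777,                   // port players will connect to (placeholder)
        Aws::GameLift::Server::LogParameters({ "./logs/server.log" }));

    Aws::GameLift::Server::ProcessReady(processParams);
}
#endif  // WITH_GAMELIFT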

You can find an example of the code needed to add Amazon GameLift in your Unreal Engine project in the documentation here. In addition, you can learn how to build a dedicated server for Unreal Engine by following the Dedicated Server Guide for Windows and Linux provided in the Unreal Engine wiki. With these resources in hand, you should be well on your way to integrating Amazon GameLift into a game project.

I just did a quick review of incorporating the Amazon GameLift SDK into the Unreal Engine, but don’t forget that you can also add the Amazon GameLift SDK to C# engines like Unity by downloading the Amazon GameLift Server SDK and compiling the .NET Framework 3.5 solution, GameLiftServerSDKNet35.sln. That solution enables you to add the Amazon GameLift libraries to your Unity3D project. Review the Amazon GameLift SDK documentation, Using the C# Server SDK for Unity, to learn more about setting up and using the Amazon GameLift C# Server SDK plugin.

Summary

We reviewed just one of the new aspects of the Amazon GameLift managed service, but the service provides game developers and game studios with even more. Amazon GameLift enables the building of distributed games by making it easy to manage infrastructure, scale capacity, and match players into available game sessions while defending games from DDoS attacks.

You can learn more about the Amazon GameLift service by reviewing the Amazon GameLift documentation and the Amazon GameLift developer guide, or by checking out the Amazon GameLift tutorials on the Amazon GameDev tutorial page, to hit the ground running with the Amazon GameLift service.

Happy Gaming!

– Tara

New Amazon GameDev Blog Post: Protect Multiplayer Game Servers from DDoS Attacks by Using Amazon GameLift

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/new-amazon-gamedev-blog-post-protect-multiplayer-game-servers-from-ddos-attacks-by-using-amazon-gamelift/

In online gaming, distributed denial of service (DDoS) attacks target a game’s network layer, flooding servers with requests until performance degrades considerably. These attacks can limit a game’s availability to players and degrade the player experience for those who can connect.

Today’s new Amazon GameDev Blog post uses a typical game server architecture to highlight DDoS attack vulnerabilities and discusses how to stay protected by using built-in AWS Cloud security, AWS security best practices, and the security features of Amazon GameLift.

Read the post to learn more.

– Craig

Embedding Lua in ZDoom

Post Syndicated from Eevee original https://eev.ee/blog/2016/11/26/embedding-lua-in-zdoom/

I’ve spent a little time trying to embed a Lua interpreter in ZDoom. I didn’t get too far yet; it’s just an experimental thing I poke at every once in a while. The existing pile of constraints makes it an interesting problem, though.

Background

ZDoom is a “source port” (read: fork) of the Doom engine, with all the changes from the commercial forks merged in (mostly Heretic, Hexen, Strife), and a lot of internal twiddles exposed. It has a variety of mechanisms for customizing game behavior; two are major standouts.

One is ACS, a vaguely C-ish language inherited from Hexen. It’s mostly used to automate level behavior — at the simplest, by having a single switch perform multiple actions. It supports the usual loops and conditionals, it can store data persistently, and ZDoom exposes a number of functions to it for inspecting and altering the state of the world, so it can do some neat tricks. Here’s an arbitrary script from my DUMP2 map.

script "open_church_door" (int tag)
{
    // Open the door more quickly on easier skill levels, so running from the
    // arch-vile is a more viable option
    int skill = GameSkill();
    int speed;
    if (skill < SKILL_NORMAL)
        speed = 64;  // blazing door speed
    else if (skill == SKILL_NORMAL)
        speed = 16;  // normal door speed
    else
        speed = 8;  // very dramatic door speed

    Door_Raise(tag, speed, 68);  // double usual delay
}

However, ZDoom doesn’t actually understand the language itself; ACS is compiled to bytecode. There’s even at least one alternative language that compiles to the same bytecode, which is interesting.

The other big feature is DECORATE, a mostly-declarative mostly-interpreted language for defining new kinds of objects. It’s a fairly direct reflection of how Doom actors are implemented, which is in terms of states. In Doom and the other commercial games, actor behavior was built into the engine, but this language has allowed almost all actors to be extracted as text files instead. For example, the imp is implemented partly as follows:

  States
  {
  Spawn:
    TROO AB 10 A_Look
    Loop
  See:
    TROO AABBCCDD 3 A_Chase
    Loop
  Melee:
  Missile:
    TROO EF 8 A_FaceTarget
    TROO G 6 A_TroopAttack
    Goto See
  ...
  }

TROO is the name of the imp’s sprite “family”. A, B, and so on are individual frames. The numbers are durations in tics (35 per second). All of the A_* things (which are optional) are action functions, behavioral functions (built into the engine) that run when the actor switches to that frame. An actor starts out at its Spawn state, so an imp behaves as follows:

  • Spawn. Render as TROO frame A. (By default, action functions don’t run on the very first frame they’re spawned.)
  • Wait 10 tics.
  • Change to TROO frame B. Run A_Look, which checks to see if a player is within line of sight, and if so jumps to the See state.
  • Wait 10 tics.
  • Repeat. (This time, frame A will also run A_Look, since the imp was no longer just spawned.)

All monster and item behavior is one big state table. Even the player’s own weapons work this way, which becomes very confusing — at some points a weapon can be running two states simultaneously. Oh, and there’s A_CustomMissile for monster attacks but A_FireCustomMissile for weapon attacks, and the arguments are different, and if you mix them up you’ll get extremely confusing parse errors.

It’s a little bit of a mess. It’s fairly flexible for what it is, and has come a long way — for example, even original Doom couldn’t pass arguments to action functions (since they were just function pointers), so it had separate functions like A_TroopAttack for every monster; now that same function can be written generically. People have done some very clever things with zero-delay frames (to run multiple action functions in a row) and storing state with dummy inventory items, too. Still, it’s not quite a programming language, and it’s easy to run into walls and bizarre quirks.

When DECORATE lets you down, you have one interesting recourse: to call an ACS script!

Unfortunately, ACS also has some old limitations. The only type it truly understands is int, so you can’t manipulate an actor directly or even store one in a variable. Instead, you have to work with TIDs (“thing IDs”). Every actor has a TID (zero is special-cased to mean “no TID”), and most ACS actor-related functions are expressed in terms of TIDs. For level automation, this is fine, and probably even what you want — you can dump a group of monsters in a map, give them all a TID, and then control them as a group fairly easily.

But if you want to use ACS to enhance DECORATE, you have a bit of a problem. DECORATE defines individual actor behavior. Also, many DECORATE actors are designed independently of a map and intended to be reusable anywhere. DECORATE should thus not touch TIDs at all, because they’re really the map’s concern, and mucking with TIDs might break map behavior… but ACS can’t refer to actors any other way. A number of action functions can, but you can’t call action functions from ACS, only DECORATE. The workarounds for this are not pretty, especially for beginners, and they’re very easy to silently get wrong.

Also, ultimately, some parts of the engine are just not accessible to either ACS or DECORATE, and neither language is particularly amenable to having them exposed. Adding more native types to ACS is rather difficult without making significant changes to both the language and bytecode, and DECORATE is barely a language at all.

Some long-awaited work is finally being done on a “ZScript”, which purports to solve all of these problems by expanding DECORATE into an entire interpreted-C++-ish scripting language with access to tons of internals. I don’t know what I think of it, and it only seems to half-solve the problem, since it doesn’t replace ACS.

Trying out Lua

Lua is supposed to be easy to embed, right? That’s the one thing it’s famous for. Before ZScript actually started to materialize, I thought I’d take a little crack at embedding a Lua interpreter and exposing some API stuff to it.

It’s not very far along yet, but it can do one thing that’s always been completely impossible in both ACS and DECORATE: print out the player’s entire inventory. You can check how many of a given item the player has in either language, but neither has a way to iterate over a collection. In Lua, it’s pretty easy.

function lua_test_script(activator, ...)
    for item, amount in pairs(activator.inventory) do
        -- This is Lua's builtin print(), so it goes to stdout
        print(item.class.name, amount)
    end
end

I made a tiny test map with a switch that tries to run the ACS script named lua_test_script. I hacked the name lookup to first look for the name in Lua’s global scope; if the function exists, it’s called immediately, and ACS isn’t consulted at all. The code above is just a regular (global) function in a regular Lua file, embedded as a lump in the map. So that was a good start, and was pretty neat to see work.
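
For the curious, the lookup hack is shaped roughly like this. This is a sketch, not the actual patch; PushActorWrapper and Printf here stand in for the wrapper and logging plumbing and are hypothetical.

#include <lua.hpp>

class AActor;
void PushActorWrapper(lua_State* L, AActor* actor);  // hypothetical wrapper push
void Printf(const char* fmt, ...);                   // ZDoom-style console log

// Before ACS is consulted, check for a Lua global with the script's
// name; if it exists, call it with the activator and skip ACS entirely.
static bool TryRunLuaScript(lua_State* L, const char* name, AActor* activator)
{
    lua_getglobal(L, name);
    if (!lua_isfunction(L, -1)) {
        lua_pop(L, 1);
        return false;  // fall through to the normal ACS lookup
    }
    PushActorWrapper(L, activator);
    if (lua_pcall(L, 1, 0, 0) != LUA_OK) {
        Printf("Lua error: %s\n", lua_tostring(L, -1));
        lua_pop(L, 1);  // pop the error message
    }
    return true;
}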

Writing the bindings

I used the bare Lua API at first. While its API is definitely very simple, actually using it to define and expose a large API in practice is kind of repetitive and error-prone, and I was never confident I was doing it quite right. It’s plain C and it works entirely through stack manipulation and it relies on a lot of casting to/from void*, so virtually anything might go wrong at any time.

I was on the cusp of writing a bunch of gross macros to automate the boring parts, and then I found sol2, which is pretty great. It makes heavy use of basically every single C++11 feature, so it’s a nightmare when it breaks (and I’ve had to track down a few bugs), but it’s expressive as hell when it works:

lua.new_usertype<AActor>("zdoom.AActor",
    "__tostring", [](AActor& actor) { return "<actor>"; },
    // Pointer to an unbound method.  Sol automatically makes this an attribute
    // rather than a method because it takes no arguments, then wraps its
    // return value to pass it back to Lua, no manual wrapper code required.
    "class", &AActor::GetClass,
    "inventory", sol::property([](AActor& actor) -> ZLuaInventory { return ZLuaInventory(actor); }),
    // Pointers to unbound attributes.  Sol turns these into writable
    // attributes on the Lua side.
    "health", &AActor::health,
    "floorclip", &AActor::Floorclip,
    "weave_index_xy", &AActor::WeaveIndexXY,
    "weave_index_z", &AActor::WeaveIndexZ);

This is the type of the activator argument from the script above. It works via template shenanigans, so most of the work is done at compile time. AActor has a lot of properties of various types; wrapping them with the bare Lua API would’ve been awful, but wrapping them with Sol is fairly straightforward.

Lifetime

activator.inventory is a wrapper around a ZLuaInventory object, which I made up. It’s just a tiny proxy struct that tries to represent the inventory of a particular actor, because the engine itself doesn’t quite have such a concept — an actor’s “inventory” is a single item (itself an actor), and each item has a pointer to the next item in the inventory. Creating an intermediate type lets me hide that detail from Lua and pretend the inventory is a real container.

The inventory is thus not a real table; pairs() works on it because it provides the __pairs metamethod. It calls an iter method returning a closure, per Lua’s iteration API, which Sol makes just work:

struct ZLuaInventory {
    ...
    std::function<AInventory* ()> iter()
    {
        TObjPtr<AInventory> item = this->actor->Inventory;
        return [item]() mutable {
            AInventory* ret = item;
            if (ret)
                item = ret->NextInv();
            return ret;
        };
    }
};

C++’s closures are slightly goofy and it took me a few tries to land on this, but it works.

Well, sort of.

I don’t know how I got this idea in my head, but I was pretty sure that ZDoom’s TObjPtr did reference counting and would automatically handle the lifetime problems in the above code. Eventually Lua reaps the closure, then C++ reaps the closure, then the wrapped AInventory’s refcount drops, and all is well.

Turns out TObjPtr doesn’t do reference counting. Rather, all the game objects participate in tracing garbage collection. The basic idea is to start from some root object and recursively traverse all the objects reachable from that root; whatever isn’t reached is garbage and can be deleted.

Unfortunately, the Lua interpreter is not reachable from ZDoom’s own object tree. If an object ends up only being held by Lua, ZDoom will think it’s garbage and delete it prematurely, leaving a dangling reference. Those are bad.

I think I can fix this without too much trouble. Sol allows customizing how it injects particular types, so I can use that for the type tree that participates in this GC scheme and keep an unordered_set of all objects that are alive in Lua. The Lua interpreter itself is already wrapped in an object that participates in the GC, so when the GC descends to the wrapper, it’s easy to tell it that that set of objects is alive. I’ll probably need to figure out read/write barriers, too, but I haven’t looked too closely at how ZDoom uses those yet. I don’t know whether it’s possible for an object to be “dead” (as in no longer usable, not just 0 health) before being reaped, but if so, I’ll need to figure out something there too.
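
To sketch what I mean, here is the registry half, with the caveat that the ZDoom-side names are assumptions (DObject is the engine's GC'd base class; the mark callback stands in for whatever the real GC uses):

#include <unordered_set>

class DObject;  // ZDoom's garbage-collected base class

// Hypothetical registry of engine objects currently held by Lua.
// Sol's usertype construction/destruction hooks would call the
// first two methods as objects cross into and out of Lua.
class ZLuaObjectRegistry
{
public:
    void OnWrappedIntoLua(DObject* obj) { alive_.insert(obj); }
    void OnLuaFinalized(DObject* obj)   { alive_.erase(obj); }

    // Invoked when the GC marks the interpreter's wrapper object:
    // report every Lua-held object as reachable so it survives.
    template <typename MarkFn>
    void MarkAll(MarkFn mark)
    {
        for (DObject* obj : alive_)
            mark(obj);
    }

private:
    std::unordered_set<DObject*> alive_;
};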

It’s a little ironic that I have to do this weird workaround when ZDoom’s tracing garbage collector is based on… Lua’s.

ZDoom does have types I want to expose that aren’t garbage collected, but those are all map structures like sectors, which are never created or destroyed at runtime. I will have to be careful with the Lua interpreter itself to make sure those can’t live beyond the current map, but I haven’t really dealt with map changes at all yet. The ACS approach is that everything is map-local, and there’s some limited storage for preserving values across maps; I could do something similar, perhaps only allowing primitive scalars.

Asynchronicity

Another critical property of ACS scripts is that they can pause themselves. They can either wait for a set number of tics with delay(), or wait for map geometry to stop being busy with something like tagwait(). So you can raise up some stairs, wait for the stairs to finish appearing, and then open the door they lead to. Or you can simulate game rules by running a script in an infinite loop that waits for a few tics between iterations. It’s pretty handy. It’s incredibly handy. It’s non-negotiable.

Luckily, Lua can emulate this using coroutines. I implemented the delay case yesterday:

function lua_test_script(activator, ...)
    zprint("hey it's me what's up", ...)
    coroutine.yield("delay", 70)
    zprint("i'm back again")
end

When I press the switch, I see the first message, then there’s a two-second pause (Doom is 35fps), then I see the second message.

A lot more details need to be hammered out before this is really equivalent to what ACS can do, but the basic functionality is there. And since these are full-stack coroutines, I can trivially wrap that yield gunk in a delay(70) function, so you never have to know the difference.
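
To sketch the engine side of that contract (assuming Lua 5.3's C API; the SuspendedScript record and the tic bookkeeping are my own invention, not anything that exists in ZDoom):

#include <lua.hpp>
#include <cstring>

// Hypothetical record for one suspended script coroutine.
struct SuspendedScript {
    lua_State* co;   // the coroutine thread
    int wake_tic;    // gametic at which to resume it
};

// Called once per gametic for each suspended script. Resumes the
// coroutine when its delay expires, and re-parks it if it yields
// another ("delay", n) pair. Returns false once the script is done.
bool TickScript(SuspendedScript& s, int current_tic)
{
    if (current_tic < s.wake_tic)
        return true;  // still waiting

    int status = lua_resume(s.co, nullptr, 0);  // Lua 5.3 signature
    if (status == LUA_YIELD &&
        lua_gettop(s.co) >= 2 &&
        lua_isstring(s.co, -2) &&
        std::strcmp(lua_tostring(s.co, -2), "delay") == 0)
    {
        s.wake_tic = current_tic + (int)lua_tointeger(s.co, -1);
        lua_settop(s.co, 0);  // clear the yielded values
        return true;
    }
    return false;  // finished, errored, or yielded something unknown
}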

Determinism

ZDoom has demos and peer-to-peer multiplayer. Both features rely critically on the game state’s unfolding exactly the same way, given the same seed and sequence of inputs.

ACS goes to great lengths to preserve this. It executes deterministically. It has very, very few ways to make decisions based on anything but the current state of the game. Netplay and demos just work; modders and map authors never have to think about it.

I don’t know if I can guarantee the same about Lua. I’d think so, but I don’t know so. Will the order of keys in a table be exactly the same on every system, for example? That’s important! Even the ACS random-number generator is deterministic.

I hope this is the case. I know some games, like Starbound, implicitly assume for multiplayer purposes that scripts will execute the same way on every system. So it’s probably fine. I do wish Lua made some sort of guarantee here, though, especially since it’s such an obvious and popular candidate for game scripting.

Savegames

ZDoom allows you to quicksave at any time.

Any time.

Not while a script is running, mind you. Script execution blocks the gameplay thread, so only one thing can actually be happening at a time. But what happens if you save while a script is in the middle of a tagwait?

The coroutine needs to be persisted, somehow. More importantly, when the game is loaded, the coroutine needs to be restored to the same state: paused in the same place, with locals set to the same values. Even if those locals were wrapped pointers to C++ objects, which now have different addresses.

Vanilla Lua has no way to do this. Vanilla Lua has a pretty poor serialization story overall — nothing is built in — which is honestly kind of shocking. People use Lua for games, right? Like, a lot? How is this not an extremely common problem?

A potential solution exists in the form of Eris, a modified Lua that does all kinds of invasive things to allow absolutely anything to be serialized. Including coroutines!

So Eris makes this at least possible. I haven’t made even the slightest attempt at using it yet, but a few gotchas already stand out to me.

For one, Eris serializes everything. Even regular ol’ functions are serialized as Lua bytecode. A naïve approach would thus end up storing a copy of the entire game script in the save file.

Eris has a thing called the “permanent object table”, which allows giving names to specific Lua values. Those values are then serialized by name instead, and the names are looked up in the same table to deserialize. So I could walk the Lua namespace myself after the initial script load and stick all reachable functions in this table to avoid having them persisted. (That won’t catch if someone loads new code during play, but that sounds like a really bad idea anyway, and I’d like to prevent it if possible.) I have to do this to some extent anyway, since Eris can’t persist the wrapped C++ functions I’m exposing to Lua. Even if a script does some incredibly fancy dynamic stuff to replace global functions with closures at runtime, that’s okay; they’ll be different functions, so Eris will fall back to serializing them.
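
The walk itself only needs the stock Lua C API. Here's a sketch that handles top-level globals only, using the perms format Eris documents for persisting (keys are the permanent objects, values are their names); a real version would recurse into nested tables and modules:

#include <lua.hpp>

// Record every function in the global table in an Eris-style permanent
// object table: perms[function] = "name".
static void RegisterGlobalFunctions(lua_State* L, int perms_index)
{
    perms_index = lua_absindex(L, perms_index);
    lua_pushglobaltable(L);            // push _G
    lua_pushnil(L);                    // first key
    while (lua_next(L, -2) != 0) {     // pops key, pushes key and value
        if (lua_isfunction(L, -1) && lua_type(L, -2) == LUA_TSTRING) {
            lua_pushvalue(L, -1);      // perms key: the function itself
            lua_pushvalue(L, -3);      // perms value: its global name
            lua_settable(L, perms_index);
        }
        lua_pop(L, 1);                 // pop value, keep key for lua_next
    }
    lua_pop(L, 1);                     // pop _G
}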

Then when the save is reloaded, Eris will replace any captured references to a global function with the copy that already exists in the map script. ZDoom doesn’t let you load saves across different mods, so the functions should be the same. I think. Hmm, maybe I should check on exactly what the load rules are. If you can load a save against a more recent copy of a map, you’ll want to get its updated scripts, but stored closures and coroutines might be old versions, and that is probably bad. I don’t know if there’s much I can do about that, though, unless Eris can somehow save the underlying code from closures/coros as named references too.

Eris also has a mechanism for storing wrapped native objects, so all I have to worry about is translating pointers, and that’s a problem Doom has already solved (somehow). Alas, that mechanism is also accessible to pure Lua code, and the docs warn that it’s possible to get into an infinite loop when loading. I’d rather not give modders the power to fuck up a save file, so I’ll have to disable that somehow.

Finally, since Eris loads bytecode, it’s possible to do nefarious things with a specially-crafted save file. But since the save file is already full of a web of pointers, I suspect it’s not too hard to segfault the game with a specially-crafted save file regardless. I’ll need to look into this. Or maybe I won’t, since I don’t seriously expect this to be merged in.

Runaway scripts

Speaking of which, ACS currently has detection for “runaway scripts”, i.e. those that look like they might be stuck in an infinite loop (or are just doing a ludicrous amount of work). Since scripts are blocking, the game does not actually progress while a script is running, and a very long script would appear to freeze the game.

I think ACS does this by counting instructions. I see Lua has its own mechanism for doing that, so limiting script execution “time” shouldn’t be too hard.
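
Lua's hook mechanism makes the counting part easy. A minimal sketch (the instruction budget is a number I made up):

#include <lua.hpp>

// Abort a script that has executed too many VM instructions, roughly
// analogous to ACS's runaway-script detection.
static void InstructionLimitHook(lua_State* L, lua_Debug*)
{
    luaL_error(L, "runaway script: instruction limit exceeded");
}

static void InstallScriptLimit(lua_State* L)
{
    // Call the hook after every 500,000 instructions.
    lua_sethook(L, InstructionLimitHook, LUA_MASKCOUNT, 500000);
}

Because luaL_error longjmps out of the running script, the hook kills the offending script on the spot; a friendlier version might only terminate the current coroutine and log a warning.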

Defining new actors

I want to be able to use Lua with (or instead of) DECORATE, too, but I’m a little hung up on syntax.

I do have something slightly working — I was able to create a variant imp class with a bunch more health from Lua, then spawn it and fight it. Also, I did it at runtime, which is probably bad — I don’t know that there’s any way to destroy an actor class, so having them be map-scoped makes no sense.

That could actually pose a bit of a problem. The Lua interpreter should be scoped to a single map, but actor classes are game-global. Do they live in separate interpreters? That seems inconvenient. I could load the game-global stuff, take an internal-only snapshot of the interpreter with Lua (bytecode and all), and then restore it at the beginning of each level? Hm, then what happens if you capture a reference to an actor method in a save file…? Christ.

I could consider making the interpreter global and doing black magic to replace all map objects with nil when changing maps, but I don’t think that can possibly work either. ZDoom has hubs — levels that can be left and later revisited, preserving their state just like with a save — and that seems at odds with having a single global interpreter whose state persists throughout the game.

Er, anyway. So, the problem with syntax is that DECORATE’s own syntax is extremely compact and designed for its very specific goal of state tables. Even ZScript appears to preserve the state table syntax, though it lets you write your own action functions or just provide a block of arbitrary code. Here’s a short chunk of the imp implementation again, for reference.

  States
  {
  Spawn:
    TROO AB 10 A_Look
    Loop
  See:
    TROO AABBCCDD 3 A_Chase
    Loop
  ...
  }

Some tricky parts that stand out to me:

  • Labels are important, since these are state tables, and jumping to a particular state is very common. It’s tempting to use Lua coroutines here somehow, but short of using a lot of goto in Lua code (yikes!), jumping around arbitrarily doesn’t work. Also, it needs to be possible to tell an actor to jump to a particular state from outside — that’s how A_Look works, and there’s even an ACS function to do it manually.

  • Aside from being shorthand, frames are fine. Though I do note that hacks like AABBCCDD 3 are relatively common. The actual animation that’s wanted here is ABCD 6, but because animation and behavior are intertwined, the labels need to be repeated to run the action function more often. I wonder if it’s desirable to be able to separate display and behavior?

  • The durations seem straightforward, but they can actually be a restricted kind of expression as well. So just defining them as data in a table doesn’t quite work.

  • This example doesn’t have any, but states can also have a number of flags, indicated by keywords after the duration. (Slightly ambiguous, since there’s nothing strictly distinguishing them from action functions.) Bright, for example, is a common flag on projectiles, weapons, and important pickups; it causes the sprite to be drawn fullbright during that frame.

  • Obviously, actor behavior is a big part of the game sim, so ideally it should require dipping into Lua-land as little as possible.

Ideas I’ve had include the following.

Emulate state tables with arguments? A very straightforward way to do the above would be to just, well, cram it into one big table.

define_actor{
    ...
    states = {
        'Spawn:',
        'TROO', 'AB', 10, A_Look,
        'loop',
        'See:',
        'TROO', 'AABBCCDD', 3, A_Chase,
        'loop',
        ...
    },
}

It would work, technically, I guess, except for non-literal durations, but I’d basically just be exposing the DECORATE parser from Lua and it would be pretty ridiculous.

Keep the syntax, but allow calling Lua from it? DECORATE is okay, for the most part. For simple cases, it’s great, even. Would it be good enough to be able to write new action functions in Lua? Maybe. Your behavior would be awkwardly split between Lua and DECORATE, though, which doesn’t seem ideal. But it would be the most straightforward approach, and it would completely avoid questions of how to emulate labels and state counts.

As an added benefit, this would keep DECORATE almost-purely declarative — which means editor tools could still reliably parse it and show you previews of custom objects.

Split animation from behavior? This could go several ways, but the most obvious to me is something like:

define_actor{
    ...
    states = {
        spawn = function(self)
            self:set_animation('AB', 10)
            while true do
                A_Look(self)
                delay(10)
            end
        end,
        see = function(self)
            self:set_animation('ABCD', 6)
            while true do
                A_Chase(self)
                delay(3)
            end
        end,
    },
}

This raises plenty of other API questions, like how to wait until an animation has finished or how to still do work on a specific frame, but I think those are fairly solvable. The big problems are that it’s very much not declarative, and it ends up being rather wordier. It’s not all boilerplate, though; it’s fairly straightforward. I see some value in having state delays and level script delays work the same way, too. And in some cases, you have only an animation with no code at all, so the heavier use of Lua should balance out. I don’t know.

A more practical problem is that, currently, it’s possible to jump to an arbitrary number of states past a given label, and that would obviously make no sense with this approach. It’s pretty rare and pretty unreadable, so maybe that’s okay. Also, labels aren’t blocks, so it’s entirely possible to have labels that don’t end with a keyword like loop and instead carry straight on into the next label — but those are usually used for logic more naturally expressed as for or while, so again, maybe losing that ability is okay.

Or… perhaps it makes sense to do both of these last two approaches? Built-in classes should stay as DECORATE anyway, so that existing code can still inherit from them and perform jumps with offsets, but new code could go entirely Lua for very complex actors.

Alas, this is probably one of those questions that won’t have an obvious answer unless I just build several approaches and port some non-trivial stuff to them to see how they feel.

And further

An enduring desire among ZDoom nerds has been the ability to write custom “thinkers”. Thinkers are really anything that gets to act each tic, but the word also specifically refers to the logic responsible for moving floors, opening doors, changing light levels, and so on. Exposing those more directly to Lua, and letting you write your own, would be pretty interesting.

Anyway

I don’t know if I’ll do all of this. I somewhat doubt it, in fact. I pick it up for half a day every few weeks to see what more I can make it do, just because it’s interesting. It has virtually no chance of being upstreamed anyway (the only active maintainer hates Lua, and thinks poorly of dynamic languages in general; plus, it’s redundant with ZScript) and I don’t really want to maintain my own yet another Doom fork, so I don’t expect it to ever be a serious project.

The source code for what I’ve done so far is available, but it’s brittle and undocumented, so I’m not going to tell you where to find it. If it gets far enough along to be useful as more than a toy, I’ll make a slightly bigger deal about it.

Embedding Lua in ZDoom

Post Syndicated from Eevee original https://eev.ee/blog/2016/11/26/embedding-lua-in-zdoom/

I’ve spent a little time trying to embed a Lua interpreter in ZDoom. I didn’t get too far yet; it’s just an experimental thing I poke at every once and a while. The existing pile of constraints makes it an interesting problem, though.

Background

ZDoom is a “source port” (read: fork) of the Doom engine, with all the changes from the commercial forks merged in (mostly Heretic, Hexen, Strife), and a lot of internal twiddles exposed. It has a variety of mechanisms for customizing game behavior; two are major standouts.

One is ACS, a vaguely C-ish language inherited from Hexen. It’s mostly used to automate level behavior — at the simplest, by having a single switch perform multiple actions. It supports the usual loops and conditionals, it can store data persistently, and ZDoom exposes a number of functions to it for inspecting and altering the state of the world, so it can do some neat tricks. Here’s an arbitrary script from my DUMP2 map.

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
script "open_church_door" (int tag)
{
    // Open the door more quickly on easier skill levels, so running from the
    // arch-vile is a more viable option
    int skill = GameSkill();
    int speed;
    if (skill < SKILL_NORMAL)
        speed = 64;  // blazing door speed
    else if (skill == SKILL_NORMAL)
        speed = 16;  // normal door speed
    else
        speed = 8;  // very dramatic door speed

    Door_Raise(tag, speed, 68);  // double usual delay
}

However, ZDoom doesn’t actually understand the language itself; ACS is compiled to bytecode. There’s even at least one alternative language that compiles to the same bytecode, which is interesting.

The other big feature is DECORATE, a mostly-declarative mostly-interpreted language for defining new kinds of objects. It’s a fairly direct reflection of how Doom actors are implemented, which is in terms of states. In Doom and the other commercial games, actor behavior was built into the engine, but this language has allowed almost all actors to be extracted as text files instead. For example, the imp is implemented partly as follows:

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
  States
  {
  Spawn:
    TROO AB 10 A_Look
    Loop
  See:
    TROO AABBCCDD 3 A_Chase
    Loop
  Melee:
  Missile:
    TROO EF 8 A_FaceTarget
    TROO G 6 A_TroopAttack
    Goto See
  ...
  }

TROO is the name of the imp’s sprite “family”. A, B, and so on are individual frames. The numbers are durations in tics (35 per second). All of the A_* things (which are optional) are action functions, behavioral functions (built into the engine) that run when the actor switches to that frame. An actor starts out at its Spawn state, so an imp behaves as follows:

  • Spawn. Render as TROO frame A. (By default, action functions don’t run on the very first frame they’re spawned.)
  • Wait 10 tics.
  • Change to TROO frame B. Run A_Look, which checks to see if a player is within line of sight, and if so jumps to the See state.
  • Wait 10 tics.
  • Repeat. (This time, frame A will also run A_Look, since the imp was no longer just spawned.)

All monster and item behavior is one big state table. Even the player’s own weapons work this way, which becomes very confusing — at some points a weapon can be running two states simultaneously. Oh, and there’s A_CustomMissile for monster attacks but A_FireCustomMissile for weapon attacks, and the arguments are different, and if you mix them up you’ll get extremely confusing parse errors.

It’s a little bit of a mess. It’s fairly flexible for what it is, and has come a long way — for example, even original Doom couldn’t pass arguments to action functions (since they were just function pointers), so it had separate functions like A_TroopAttack for every monster; now that same function can be written generically. People have done some very clever things with zero-delay frames (to run multiple action functions in a row) and storing state with dummy inventory items, too. Still, it’s not quite a programming language, and it’s easy to run into walls and bizarre quirks.

When DECORATE lets you down, you have one interesting recourse: to call an ACS script!

Unfortunately, ACS also has some old limitations. The only type it truly understands is int, so you can’t manipulate an actor directly or even store one in a variable. Instead, you have to work with TIDs (“thing IDs”). Every actor has a TID (zero is special-cased to mean “no TID”), and most ACS actor-related functions are expressed in terms of TIDs. For level automation, this is fine, and probably even what you want — you can dump a group of monsters in a map, give them all a TID, and then control them as a group fairly easily.

But if you want to use ACS to enhance DECORATE, you have a bit of a problem. DECORATE defines individual actor behavior. Also, many DECORATE actors are designed independently of a map and intended to be reusable anywhere. DECORATE should thus not touch TIDs at all, because they’re really the map‘s concern, and mucking with TIDs might break map behavior… but ACS can’t refer to actors any other way. A number of action functions can, but you can’t call action functions from ACS, only DECORATE. The workarounds for this are not pretty, especially for beginners, and they’re very easy to silently get wrong.

Also, ultimately, some parts of the engine are just not accessible to either ACS or DECORATE, and neither language is particularly amenable to having them exposed. Adding more native types to ACS is rather difficult without making significant changes to both the language and bytecode, and DECORATE is barely a language at all.

Some long-awaited work is finally being done on a “ZScript”, which purports to solve all of these problems by expanding DECORATE into an entire interpreted-C++-ish scripting language with access to tons of internals. I don’t know what I think of it, and it only seems to half-solve the problem, since it doesn’t replace ACS.

Trying out Lua

Lua is supposed to be easy to embed, right? That’s the one thing it’s famous for. Before ZScript actually started to materialize, I thought I’d take a little crack at embedding a Lua interpreter and exposing some API stuff to it.

It’s not very far along yet, but it can do one thing that’s always been completely impossible in both ACS and DECORATE: print out the player’s entire inventory. You can check how many of a given item the player has in either language, but neither has a way to iterate over a collection. In Lua, it’s pretty easy.

1
2
3
4
5
6
function lua_test_script(activator, ...)
    for item, amount in pairs(activator.inventory) do
        -- This is Lua's builtin print(), so it goes to stdout
        print(item.class.name, amount)
    end
end

I made a tiny test map with a switch that tries to run the ACS script named lua_test_script. I hacked the name lookup to first look for the name in Lua’s global scope; if the function exists, it’s called immediately, and ACS isn’t consulted at all. The code above is just a regular (global) function in a regular Lua file, embedded as a lump in the map. So that was a good start, and was pretty neat to see work.

Writing the bindings

I used the bare Lua API at first. While its API is definitely very simple, actually using it to define and expose a large API in practice is kind of repetitive and error-prone, and I was never confident I was doing it quite right. It’s plain C and it works entirely through stack manipulation and it relies on a lot of casting to/from void*, so virtually anything might go wrong at any time.

I was on the cusp of writing a bunch of gross macros to automate the boring parts, and then I found sol2, which is pretty great. It makes heavy use of basically every single C++11 feature, so it’s a nightmare when it breaks (and I’ve had to track down a few bugs), but it’s expressive as hell when it works:

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
lua.new_usertype<AActor>("zdoom.AActor",
    "__tostring", [](AActor& actor) { return "<actor>"; },
    // Pointer to an unbound method.  Sol automatically makes this an attribute
    // rather than a method because it takes no arguments, then wraps its
    // return value to pass it back to Lua, no manual wrapper code required.
    "class", &AActor::GetClass,
    "inventory", sol::property([](AActor& actor) -> ZLuaInventory { return ZLuaInventory(actor); }),
    // Pointers to unbound attributes.  Sol turns these into writable
    // attributes on the Lua side.
    "health", &AActor::health,
    "floorclip", &AActor::Floorclip,
    "weave_index_xy", &AActor::WeaveIndexXY,
    "weave_index_z", &AActor::WeaveIndexZ);

This is the type of the activator argument from the script above. It works via template shenanigans, so most of the work is done at compile time. AActor has a lot of properties of various types; wrapping them with the bare Lua API would’ve been awful, but wrapping them with Sol is fairly straightforward.

Lifetime

activator.inventory is a wrapper around a ZLuaInventory object, which I made up. It’s just a tiny proxy struct that tries to represent the inventory of a particular actor, because the engine itself doesn’t quite have such a concept — an actor’s “inventory” is a single item (itself an actor), and each item has a pointer to the next item in the inventory. Creating an intermediate type lets me hide that detail from Lua and pretend the inventory is a real container.

The inventory is thus not a real table; pairs() works on it because it provides the __pairs metamethod. It calls an iter method returning a closure, per Lua’s iteration API, which Sol makes just work:

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
struct ZLuaInventory {
    ...
    std::function<AInventory* ()> iter()
    {
        TObjPtr<AInventory> item = this->actor->Inventory;
        return [item]() mutable {
            AInventory* ret = item;
            if (ret)
                item = ret->NextInv();
            return ret;
        };
    }
}

C++’s closures are slightly goofy and it took me a few tries to land on this, but it works.

Well, sort of.

I don’t know how I got this idea in my head, but I was pretty sure that ZDoom’s TObjPtr did reference counting and would automatically handle the lifetime problems in the above code. Eventually Lua reaps the closure, then C++ reaps the closure, then the wrapped AInventorys refcount drops, and all is well.

Turns out TObjPtr doesn’t do reference counting. Rather, all the game objects participate in tracing garbage collection. The basic idea is to start from some root object and recursively traverse all the objects reachable from that root; whatever isn’t reached is garbage and can be deleted.

Unfortunately, the Lua interpreter is not reachable from ZDoom’s own object tree. If an object ends up only being held by Lua, ZDoom will think it’s garbage and delete it prematurely, leaving a dangling reference. Those are bad.

I think I can fix without too much trouble. Sol allows customizing how it injects particular types, so I can use that for the type tree that participates in this GC scheme and keep an unordered_set of all objects that are alive in Lua. The Lua interpreter itself is already wrapped in an object that participates in the GC, so when the GC descends to the wrapper, it’s easy to tell it that that set of objects is alive. I’ll probably need to figure out read/write barriers, too, but I haven’t looked too closely at how ZDoom uses those yet. I don’t know whether it’s possible for an object to be “dead” (as in no longer usable, not just 0 health) before being reaped, but if so, I’ll need to figure out something there too.

It’s a little ironic that I have to do this weird workaround when ZDoom’s tracing garbage collector is based on… Lua’s.

ZDoom does have types I want to expose that aren’t garbage collected, but those are all map structures like sectors, which are never created or destroyed at runtime. I will have to be careful with the Lua interpreter itself to make sure those can’t live beyond the current map, but I haven’t really dealt with map changes at all yet. The ACS approach is that everything is map-local, and there’s some limited storage for preserving values across maps; I could do something similar, perhaps only allowing primitive scalars.

Asynchronicity

Another critical property of ACS scripts is that they can pause themselves. They can either wait for a set number of tics with delay(), or wait for map geometry to stop being busy with something like tagwait(). So you can raise up some stairs, wait for the stairs to finish appearing, and then open the door they lead to. Or you can simulate game rules by running a script in an infinite loop that waits for a few tics between iterations. It’s pretty handy. It’s incredibly handy. It’s non-negotiable.

Luckily, Lua can emulate this using coroutines. I implemented the delay case yesterday:

1
2
3
4
5
function lua_test_script(activator, ...)
    zprint("hey it's me what's up", ...)
    coroutine.yield("delay", 70)
    zprint("i'm back again")
end

When I press the switch, I see the first message, then there’s a two-second pause (Doom is 35fps), then I see the second message.

A lot more details need to be hammered out before this is really equivalent to what ACS can do, but the basic functionality is there. And since these are full-stack coroutines, I can trivially wrap that yield gunk in a delay(70) function, so you never have to know the difference.

Determinism

ZDoom has demos and peer-to-peer multiplayer. Both features rely critically on the game state’s unfolding exactly the same way, given the same seed and sequence of inputs.

ACS goes to great lengths to preserve this. It executes deterministically. It has very, very few ways to make decisions based on anything but the current state of the game. Netplay and demos just work; modders and map authors never have to think about it.

I don’t know if I can guarantee the same about Lua. I’d think so, but I don’t know so. Will the order of keys in a table be exactly the same on every system, for example? That’s important! Even the ACS random-number generator is deterministic.

I hope this is the case. I know some games, like Starbound, implicitly assume for multiplayer purposes that scripts will execute the same way on every system. So it’s probably fine. I do wish Lua made some sort of guarantee here, though, especially since it’s such an obvious and popular candidate for game scripting.

Savegames

ZDoom allows you to quicksave at any time.

Any time.

Not while a script is running, mind you. Script execution blocks the gameplay thread, so only one thing can actually be happening at a time. But what happens if you save while a script is in the middle of a tagwait?

The coroutine needs to be persisted, somehow. More importantly, when the game is loaded, the coroutine needs to be restored to the same state: paused in the same place, with locals set to the same values. Even if those locals were wrapped pointers to C++ objects, which now have different addresses.

Vanilla Lua has no way to do this. Vanilla Lua has a pretty poor serialization story overall — nothing is built in — which is honestly kind of shocking. People use Lua for games, right? Like, a lot? How is this not an extremely common problem?

A potential solution exists in the form of Eris, a modified Lua that does all kinds of invasive things to allow absolutely anything to be serialized. Including coroutines!

So Eris makes this at least possible. I haven’t made even the slightest attempt at using it yet, but a few gotchas already stand out to me.

For one, Eris serializes everything. Even regular ol’ functions are serialized as Lua bytecode. A naïve approach would thus end up storing a copy of the entire game script in the save file.

Eris has a thing called the “permanent object table”, which allows giving names to specific Lua values. Those values are then serialized by name instead, and the names are looked up in the same table to deserialize. So I could walk the Lua namespace myself after the initial script load and stick all reachable functions in this table to avoid having them persisted. (That won’t catch if someone loads new code during play, but that sounds like a really bad idea anyway, and I’d like to prevent it if possible.) I have to do this to some extent anyway, since Eris can’t persist the wrapped C++ functions I’m exposing to Lua. Even if a script does some incredibly fancy dynamic stuff to replace global functions with closures at runtime, that’s okay; they’ll be different functions, so Eris will fall back to serializing them.

Then when the save is reloaded, Eris will replace any captured references to a global function with the copy that already exists in the map script. ZDoom doesn’t let you load saves across different mods, so the functions should be the same. I think. Hmm, maybe I should check on exactly what the load rules are. If you can load a save against a more recent copy of a map, you’ll want to get its updated scripts, but stored closures and coroutines might be old versions, and that is probably bad. I don’t know if there’s much I can do about that, though, unless Eris can somehow save the underlying code from closures/coros as named references too.

Eris also has a mechanism for storing wrapped native objects, so all I have to worry about is translating pointers, and that’s a problem Doom has already solved (somehow). Alas, that mechanism is also accessible to pure Lua code, and the docs warn that it’s possible to get into an infinite loop when loading. I’d rather not give modders the power to fuck up a save file, so I’ll have to disable that somehow.
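
As I understand the docs, that mechanism is a __persist metamethod returning a fixup closure that gets serialized in the object’s place. Everything in this sketch except __persist itself is invented:

    -- hypothetical: persist a wrapped actor by a stable index, not a pointer
    local actor_mt = {
        __persist = function(actor)
            local id = actor:get_stable_index()      -- made-up accessor
            return function()
                -- runs at load time; must return the restored object
                return lookup_actor_by_index(id)     -- made-up lookup
            end
        end,
    }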

Finally, since Eris loads bytecode, it’s possible to do nefarious things with a specially-crafted save file. But since the save file is full of a web of pointers, I suspect it’s not too hard to segfault the game with a specially-crafted save file anyway. I’ll need to look into this. Or maybe I won’t, since I don’t seriously expect this to be merged in.

Runaway scripts

Speaking of which, ACS currently has detection for “runaway scripts”, i.e. those that look like they might be stuck in an infinite loop (or are just doing a ludicrous amount of work). Since scripts are blocking, the game does not actually progress while a script is running, and a very long script would appear to freeze the game.

I think ACS does this by counting instructions. I see Lua has its own mechanism for doing that, so limiting script execution “time” shouldn’t be too hard.
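
Specifically, debug hooks can fire every n VM instructions. A sketch, with the limit picked out of a hat:

    -- abort any script that executes more than ten million instructions
    local INSTRUCTION_LIMIT = 1e7

    local script = coroutine.create(function()
        while true do end                -- a runaway script
    end)

    debug.sethook(script, function()
        error("runaway script terminated", 2)
    end, "", INSTRUCTION_LIMIT)          -- empty mask + count: a pure count hook

    print(coroutine.resume(script))      --> false, plus the error message
    -- caveat: a pcall inside the script could swallow the error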

Defining new actors

I want to be able to use Lua with (or instead of) DECORATE, too, but I’m a little hung up on syntax.

I do have something slightly working — I was able to create a variant imp class with a bunch more health from Lua, then spawn it and fight it. Also, I did it at runtime, which is probably bad — I don’t know that there’s any way to destroy an actor class, so having them be map-scoped makes no sense.

That could actually pose a bit of a problem. The Lua interpreter should be scoped to a single map, but actor classes are game-global. Do they live in separate interpreters? That seems inconvenient. I could load the game-global stuff, take an internal-only snapshot of the interpreter with Lua (bytecode and all), and then restore it at the beginning of each level? Hm, then what happens if you capture a reference to an actor method in a save file…? Christ.

I could consider making the interpreter global and doing black magic to replace all map objects with nil when changing maps, but I don’t think that can possibly work either. ZDoom has hubs — levels that can be left and later revisited, preserving their state just like with a save — and that seems at odds with having a single global interpreter whose state persists throughout the game.

Er, anyway. So, the problem with syntax is that DECORATE’s own syntax is extremely compact and designed for its very specific goal of state tables. Even ZScript appears to preserve the state table syntax, though it lets you write your own action functions or just provide a block of arbitrary code. Here’s a short chunk of the imp implementation again, for reference.

  States
  {
  Spawn:
    TROO AB 10 A_Look
    Loop
  See:
    TROO AABBCCDD 3 A_Chase
    Loop
  ...
  }

Some tricky parts that stand out to me:

  • Labels are important, since these are state tables, and jumping to a particular state is very common. It’s tempting to use Lua coroutines here somehow, but short of using a lot of goto in Lua code (yikes!), jumping around arbitrarily doesn’t work. Also, it needs to be possible to tell an actor to jump to a particular state from outside — that’s how A_Look works, and there’s even an ACS function to do it manually.

  • Aside from being shorthand, frames are fine. Though I do note that hacks like AABBCCDD 3 are relatively common. The actual animation that’s wanted here is ABCD 6, but because animation and behavior are intertwined, the frames need to be repeated to run the action function more often. I wonder if it’s desirable to be able to separate display and behavior?

  • The durations seem straightforward, but they can actually be a restricted kind of expression as well. So just defining them as data in a table doesn’t quite work.

  • This example doesn’t have any, but states can also have a number of flags, indicated by keywords after the duration. (Slightly ambiguous, since there’s nothing strictly distinguishing them from action functions.) Bright, for example, is a common flag on projectiles, weapons, and important pickups; it causes the sprite to be drawn fullbright during that frame.

  • Obviously, actor behavior is a big part of the game sim, so ideally it should require dipping into Lua-land as little as possible.

Ideas I’ve had include the following.

Emulate state tables with arguments? A very straightforward way to do the above would be to just, well, cram it into one big table.

define_actor{
    ...
    states = {
        'Spawn:',
        'TROO', 'AB', 10, A_Look,
        'loop',
        'See:',
        'TROO', 'AABBCCDD', 3, A_Chase,
        'loop',
        ...
    },
}

It would work, technically, I guess, except for non-literal durations, but I’d basically just be exposing the DECORATE parser from Lua and it would be pretty ridiculous.

Keep the syntax, but allow calling Lua from it? DECORATE is okay, for the most part. For simple cases, it’s great, even. Would it be good enough to be able to write new action functions in Lua? Maybe. Your behavior would be awkwardly split between Lua and DECORATE, though, which doesn’t seem ideal. But it would be the most straightforward approach, and it would completely avoid questions of how to emulate labels and state counts.
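
Something like the following, say, where register_action_function and the actor methods are all invented names:

    -- invented API: define a new action function in Lua...
    register_action_function("A_LeapAtTarget", function(actor)
        local target = actor.target
        if target then
            actor:thrust_towards(target, 12)   -- made-up movement helper
        end
    end)
    -- ...which DECORATE state tables could then call like any built-in:
    --   Missile:
    --     TROO EF 8 A_LeapAtTarget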

As an added benefit, this would keep DECORATE almost-purely declarative — which means editor tools could still reliably parse it and show you previews of custom objects.

Split animation from behavior? This could go several ways, but the most obvious to me is something like:

define_actor{
    ...
    states = {
        spawn = function(self)
            self:set_animation('AB', 10)
            while true do
                A_Look(self)
                delay(10)
            end
        end,
        see = function(self)
            self:set_animation('ABCD', 6)
            while true do
                A_Chase(self)
                delay(3)
            end
        end,
    },
}

This raises plenty of other API questions, like how to wait until an animation has finished or how to still do work on a specific frame, but I think those are fairly solvable. The big problems are that it’s very much not declarative, and it ends up being rather wordier. It’s not all boilerplate, though; it’s fairly straightforward. I see some value in having state delays and level script delays work the same way, too. And in some cases, you have only an animation with no code at all, so the heavier use of Lua should balance out. I don’t know.

A more practical problem is that, currently, it’s possible to jump to an arbitrary number of states past a given label, and that would obviously make no sense with this approach. It’s pretty rare and pretty unreadable, so maybe that’s okay. Also, labels aren’t blocks, so it’s entirely possible to have labels that don’t end with a keyword like loop and instead carry straight on into the next label — but those are usually used for logic more naturally expressed as for or while, so again, maybe losing that ability is okay.

Or… perhaps it makes sense to do both of these last two approaches? Built-in classes should stay as DECORATE anyway, so that existing code can still inherit from them and perform jumps with offsets, but new code could go entirely Lua for very complex actors.

Alas, this is probably one of those questions that won’t have an obvious answer unless I just build several approaches and port some non-trivial stuff to them to see how they feel.

And further

An enduring desire among ZDoom nerds has been the ability to write custom “thinkers”. Thinkers are really anything that gets to act each tic, but the word also specifically refers to the logic responsible for moving floors, opening doors, changing light levels, and so on. Exposing those more directly to Lua, and letting you write your own, would be pretty interesting.
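
I have no idea what the API should look like, but the appeal is easy to sketch; every name below is invented:

    -- invented API: a custom thinker that flickers a sector's light each tic
    register_thinker("FlickerLight", {
        init = function(self, sector)
            self.sector = sector
            self.base = sector.light_level
        end,
        think = function(self)
            -- called once per tic (1/35 of a second), like built-in thinkers
            self.sector.light_level = self.base + math.random(-16, 16)
        end,
    })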

Anyway

I don’t know if I’ll do all of this. I somewhat doubt it, in fact. I pick it up for half a day every few weeks to see what more I can make it do, just because it’s interesting. It has virtually no chance of being upstreamed anyway (the only active maintainer hates Lua, and thinks poorly of dynamic languages in general; plus, it’s redundant with ZScript), and I don’t really want to maintain yet another Doom fork of my own, so I don’t expect it to ever be a serious project.

The source code for what I’ve done so far is available, but it’s brittle and undocumented, so I’m not going to tell you where to find it. If it gets far enough along to be useful as more than a toy, I’ll make a slightly bigger deal about it.

32 Security and Compliance Sessions Now Live in the re:Invent 2016 Session Catalog

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/32-security-and-compliance-sessions-now-live-in-the-reinvent-2016-session-catalog/

re:Invent 2016 logo

AWS re:Invent 2016 begins November 28, and now, the live session catalog includes 32 security and compliance sessions. 19 of these sessions are in the Security & Compliance track and 13 are in the re:Source Mini Con for Security Services. All 32 titles and abstracts are included below.

Security & Compliance Track sessions

As in past years, the sessions in the Security & Compliance track will take place in The Venetian | Palazzo in Las Vegas. Here’s what you have to look forward to!

SAC201 – Lessons from a Chief Security Officer: Achieving Continuous Compliance in Elastic Environments

Does meeting stringent compliance requirements keep you up at night? Do you worry about having the right audit trails in place as proof?
Cengage Learning’s Chief Security Officer, Robert Hotaling, shares his organization’s journey to AWS, and how they enabled continuous compliance for their dynamic environment with automation. When Cengage shifted from publishing to digital education and online learning, they needed a secure elastic infrastructure for their data-intensive and cyclical business, and workload layer security tools that would help them meet compliance requirements (e.g., PCI).
In this session, you will learn why building security in from the beginning saves you time (and painful retrofits) later, how to gather and retain audit evidence for instances that are only up for minutes or hours, and how Cengage used Trend Micro Deep Security to meet many compliance requirements and ensured instances were instantly protected as they came online in a hybrid cloud architecture. Session sponsored by Trend Micro, Inc.

 

SAC302 – Automating Security Event Response, from Idea to Code to Execution

With security-relevant services such as AWS Config, VPC Flow Logs, Amazon CloudWatch Events, and AWS Lambda, you now have the ability to programmatically wrangle security events that may occur within your AWS environment, including prevention, detection, response, and remediation. This session covers the process of automating security event response with various AWS building blocks, taking several ideas from drawing board to code, and gaining confidence in your coverage by proactively testing security monitoring and response effectiveness before anyone else does.

 

SAC303 – Become an AWS IAM Policy Ninja in 60 Minutes or Less

Are you interested in learning how to control access to your AWS resources? Have you ever wondered how to best scope down permissions to achieve least privilege permissions access control? If your answer to these questions is “yes,” this session is for you. We take an in-depth look at the AWS Identity and Access Management (IAM) policy language. We start with the basics of the policy language and how to create and attach policies to IAM users, groups, and roles. As we dive deeper, we explore policy variables, conditions, and other tools to help you author least privilege policies. Throughout the session, we cover some common use cases, such as granting a user secure access to an Amazon S3 bucket or to launch an Amazon EC2 instance of a specific type.

 

SAC304 – Predictive Security: Using Big Data to Fortify Your Defenses

In a rapidly changing IT environment, detecting and responding to new threats is more important than ever. This session shows you how to build a predictive analytics stack on AWS, which harnesses the power of Amazon Machine Learning in conjunction with Amazon Elasticsearch Service, AWS CloudTrail, and VPC Flow Logs to perform tasks such as anomaly detection and log analysis. We also demonstrate how you can use AWS Lambda to act on this information in an automated fashion, such as performing updates to AWS WAF and security groups, leading to an improved security posture and alleviating operational burden on your security teams.

 

SAC305 – Auditing a Cloud Environment in 2016: What Tools Can Internal and External Auditors Leverage to Maintain Compliance?

With the rapid increase of complexity in managing security for distributed IT and cloud computing, security and compliance managers can innovate to ensure a high level of security when managing AWS resources. In this session, Chad Woolf, director of compliance for AWS, discusses which AWS service features to leverage to achieve a high level of security assurance over AWS resources, giving you more control of the security of your data and preparing you for a wide range of audits. You can now implement point-in-time audits and continuous monitoring in system architecture. Internal and external auditors can learn about emerging tools for monitoring environments in real time. Follow use case examples and demonstrations of services like Amazon Inspector, Amazon CloudWatch Logs, AWS CloudTrail, and AWS Config. Learn firsthand what some AWS customers have accomplished by leveraging AWS features to meet specific industry compliance requirements.

 

SAC306 – Encryption: It Was the Best of Controls, It Was the Worst of Controls

Encryption is a favorite of security and compliance professionals everywhere. Many compliance frameworks actually mandate encryption. Though encryption is important, it is also treacherous. Cryptographic protocols are subtle, and researchers are constantly finding new and creative flaws in them. Using encryption correctly, especially over time, also is expensive because you have to stay up to date.
AWS wants to encrypt data. And our customers, including Amazon, want to encrypt data. In this talk, we look at some of the challenges with using encryption, how AWS thinks internally about encryption, and how that thinking has informed the services we have built, the features we have vended, and our own usage of AWS.

 

SAC307 – The Psychology of Security Automation

Historically, relationships between developers and security teams have been challenging. Security teams sometimes see developers as careless and ignorant of risk, while developers might see security teams as dogmatic barriers to productivity. Can technologies and approaches such as the cloud, APIs, and automation lead to happier developers and more secure systems? Netflix has had success pursuing this approach, by leaning into the fundamental cloud concept of self-service, the Netflix cultural value of transparency in decision making, and the engineering efficiency principle of facilitating a “paved road.” This session explores how security teams can use thoughtful tools and automation to improve relationships with development teams while creating a more secure and manageable environment. Topics include Netflix’s approach to IAM entity management, Elastic Load Balancing and certificate management, and general security configuration monitoring.

 

SAC308 – Hackproof Your Cloud: Responding to 2016 Threats

In this session, CloudCheckr CTO Aaron Newman highlights effective strategies and tools that AWS users can employ to improve their security posture. Specific emphasis is placed upon leveraging native AWS services. He covers concrete steps that users can begin employing immediately. Session sponsored by CloudCheckr.

 

SAC309 – You Can’t Protect What You Can’t See: AWS Security Monitoring & Compliance Validation from Adobe

Ensuring security and compliance across a globally distributed, large-scale AWS deployment requires a scalable process and a comprehensive set of technologies. In this session, Adobe will deep-dive into the AWS native monitoring and security services and some Splunk technologies leveraged globally to perform security monitoring across a large number of AWS accounts. You will learn about Adobe’s collection plumbing, including components of S3, Kinesis, CloudWatch, SNS, DynamoDB, and Lambda, as well as the tooling and processes used at Adobe to deliver scalable monitoring without managing an unwieldy number of API keys and input stanzas. Session sponsored by Splunk.

 

SAC310 – Securing Serverless Architectures, and API Filtering at Layer 7

AWS serverless architecture components such as Amazon S3, Amazon SQS, Amazon SNS, CloudWatch Logs, DynamoDB, Amazon Kinesis, and Lambda can be tightly constrained in their operation. However, it may still be possible to use some of them to propagate payloads that could be used to exploit vulnerabilities in some consuming endpoints or user-generated code. This session explores techniques for enhancing the security of these services, from assessing and tightening permissions in IAM to integrating tools and mechanisms for inline and out-of-band payload analysis that are more typically applied to traditional server-based architectures.

 

SAC311 – Evolving an Enterprise-level Compliance Framework with Amazon CloudWatch Events and AWS Lambda

Johnson & Johnson is in the process of doing a proof of concept to rewrite the compliance framework that they presented at re:Invent 2014. This framework leverages the newest AWS services and abandons the need for continual describes and master rules servers. Instead, Johnson & Johnson plans to use a distributed, event-based architecture that not only reduces costs but also assigns costs to the appropriate projects rather than central IT.

 

SAC312 – Architecting for End-to-End Security in the Enterprise

This session tells how our most mature, security-minded Fortune 500 customers adopt AWS while improving end-to-end protection of their sensitive data. Learn about the enterprise security architecture decisions made during actual sensitive workload deployments as told by the AWS professional services and the solution architecture team members who lived them. In this very prescriptive, technical walkthrough, we share lessons learned from the development of enterprise security strategy, security use-case development, security configuration decisions, and the creation of AWS security operations playbooks to support customer architectures.

 

SAC313 – Enterprise Patterns for Payment Card Industry Data Security Standard (PCI DSS)

Professional services has completed five deep PCI engagements with enterprise customers over the last year. Common patterns were identified and codified in various artifacts. This session introduces the patterns that help customers address PCI requirements in a standard manner that also meets AWS best practices. Hear customers speak about their side of the journey and the solutions that they used to deploy a PCI compliance workload.

 

SAC314 – GxP Compliance in the Cloud

GxP is an acronym that refers to the regulations and guidelines applicable to life sciences organizations that make food and medical products such as drugs, medical devices, and medical software applications. The overall intent of GxP requirements is to ensure that food and medical products are safe for consumers and to ensure the integrity of data used to make product-related safety decisions.

 

The term GxP encompasses a broad range of compliance-related activities such as Good Laboratory Practices (GLP), Good Clinical Practices (GCP), Good Manufacturing Practices (GMP), and others, each of which has product-specific requirements that life sciences organizations must implement based on the 1) type of products they make and 2) country in which their products are sold. When life sciences organizations use computerized systems to perform certain GxP activities, they must ensure that the computerized GxP system is developed, validated, and operated appropriately for the intended use of the system.

 

For this session, co-presented with Merck, services such as Amazon EC2, Amazon CloudWatch Logs, AWS CloudTrail, AWS CodeCommit, Amazon Simple Storage Service (S3), and AWS CodePipeline will be discussed with an emphasis on implementing GxP-compliant systems in the AWS Cloud.

 

SAC315 – Scaling Security Operations: Using AWS Services to Automate Governance of Security Controls and Remediate Violations

This session enables security operators to use data provided by AWS services such as AWS CloudTrail, AWS Config, Amazon CloudWatch Events, and VPC Flow Logs to reduce vulnerabilities, and when required, execute timely security actions that fix the violation or gather more information about the vulnerability and attacker. We look at security practices for compliance with PCI, CIS Security Controls, and HIPAA. We dive deep into an example from an AWS customer, Siemens AG, which has automated governance and implemented automated remediation using CloudTrail, AWS Config Rules, and AWS Lambda. A prerequisite for this session is knowledge of software development with Java, Python, or Node.

 

SAC316 – Security Automation: Spend Less Time Securing Your Applications

As attackers become more sophisticated, web application developers need to constantly update their security configurations. Static firewall rules are no longer good enough. Developers need a way to deploy automated security that can learn from the application behavior and identify bad traffic patterns to detect bad bots or bad actors on the Internet. This session showcases some of the real-world customer use cases that use machine learning and AWS WAF (a web application firewall) to automatically identify bad actors affecting multiplayer gaming applications. We also present tutorials and code samples that show how customers can analyze traffic patterns and deploy new AWS WAF rules on the fly.

 

SAC317 – IAM Best Practices to Live By

This session covers AWS Identity and Access Management (IAM) best practices that can help improve your security posture. We cover how to manage users and their security credentials. We also explain why you should delete your root access keys—or at the very least, rotate them regularly. Using common use cases, we demonstrate when to choose between using IAM users and IAM roles. Finally, we explore how to set permissions to grant least privilege access control in one or more of your AWS accounts.

 

SAC318 – Life Without SSH: Immutable Infrastructure in Production

This session covers what a real-world production deployment of a fully automated deployment pipeline looks like with instances that are deployed without SSH keys. By leveraging AWS CodeDeploy and Docker, we will show how we achieved semi-immutable and fully immutable infrastructures, and what the challenges and remediations were.

 

SAC401 – 5 Security Automation Improvements You Can Make by Using Amazon CloudWatch Events and AWS Config Rules

This session demonstrates 5 different security and compliance validation actions that you can perform using Amazon CloudWatch Events and AWS Config rules. This session focuses on the actual code for the various controls, actions, and remediation features, and how to use various AWS services and features to build them. The demos in this session include CIS Amazon Web Services Foundations validation; host-based AWS Config rules validation using AWS Lambda, SSH, and VPC-E; automatic creation and assigning of MFA tokens when new users are created; and automatic instance isolation based on SSH logons or VPC Flow Logs deny logs. This session focuses on code and live demos.

 

re:Source Mini Con for Security Services sessions

The re:Source Mini Con for Security Services offers you an opportunity to dive even deeper into security and compliance topics. Think of it as a one-day, fully immersive mini-conference. The Mini Con will take place in The Mirage in Las Vegas.

SEC301 – Audit Your AWS Account Against Industry Best Practices: The CIS AWS Benchmarks

Audit teams can consistently evaluate the security of an AWS account. Best practices greatly reduce complexity when managing risk and auditing the use of AWS for critical, audited, and regulated systems. You can integrate these security checks into your security and audit ecosystem. Center for Internet Security (CIS) benchmarks are incorporated into products developed by 20 security vendors, are referenced by PCI 3.1 and FedRAMP, and are included in the National Vulnerability Database (NVD) National Checklist Program (NCP). This session shows you how to implement foundational security measures in your AWS account. The prescribed best practices help make implementation of core AWS security measures more straightforward for security teams and AWS account owners.

 

SEC302 – WORKSHOP: Working with AWS Identity and Access Management (IAM) Policies and Configuring Network Security Using VPCs and Security Groups

In this 2.5-hour workshop, we will show you how to manage permissions by drafting AWS IAM policies that adhere to the principle of least privilege–granting the least permissions required to achieve a task. You will learn all the ins and outs of drafting and applying IAM policies appropriately to help secure your AWS resources. In addition, we will show you how to configure network security using VPCs and security groups.

 

SEC303 – Get the Most from AWS KMS: Architecting Applications for High Security

AWS Key Management Service provides an easy and cost-effective way to secure your data in AWS. In this session, you learn about leveraging the latest features of the service to minimize risk for your data. We also review the recently released Import Key feature that gives you more control over the encryption process by letting you bring your own keys to AWS.

 

SEC304 – Reduce Your Blast Radius by Using Multiple AWS Accounts Per Region and Service

This session shows you how to reduce your blast radius by using multiple AWS accounts per region and service, which helps limit the impact of a critical event such as a security breach. Using multiple accounts helps you define boundaries and provides blast-radius isolation.

 

SEC305 – Scaling Security Resources for Your First 10 Million Customers

Cloud computing offers many advantages, such as the ability to scale your web applications or website on demand. But how do you scale your security and compliance infrastructure along with the business? Join this session to understand best practices for scaling your security resources as you grow from zero to millions of users. Specifically, you learn the following:
  • How to scale your security and compliance infrastructure to keep up with a rapidly expanding threat base.
  • The security implications of scaling for numbers of users and numbers of applications, and how to satisfy both needs.
  • How agile development with integrated security testing and validation leads to a secure environment.
  • Best practices and design patterns of a continuous delivery pipeline and the appropriate security-focused testing for each.
  • The necessity of treating your security as code, just as you would do with infrastructure.
The services covered in this session include AWS IAM, Auto Scaling, Amazon Inspector, AWS WAF, and Amazon Cognito.

 

SEC306 – WORKSHOP: How to Implement a General Solution for Federated API/CLI Access Using SAML 2.0

AWS supports identity federation using SAML (Security Assertion Markup Language) 2.0. Using SAML, you can configure your AWS accounts to integrate with your identity provider (IdP). Once configured, your federated users are authenticated and authorized by your organization’s IdP, and then can use single sign-on (SSO) to sign in to the AWS Management Console. This not only obviates the need for your users to remember yet another user name and password, but it also streamlines identity management for your administrators. This is great if your federated users want to access the AWS Management Console, but what if they want to use the AWS CLI or programmatically call AWS APIs?
In this 2.5-hour workshop, we will show you how you can implement federated API and CLI access for your users. The examples provided use the AWS Python SDK and some additional client-side integration code. If you have federated users that require this type of access, implementing this solution should earn you more than one high five on your next trip to the water cooler.

 

SEC307 – Microservices, Macro Security Needs: How Nike Uses a Multi-Layer, End-to-End Security Approach to Protect Microservice-Based Solutions at Scale

Microservice architectures provide numerous benefits but also have significant security challenges. This session presents how Nike uses layers of security to protect consumers and business. We show how network topology, network security primitives, identity and access management, traffic routing, secure network traffic, secrets management, and host-level security (antivirus, intrusion prevention system, intrusion detection system, file integrity monitoring) all combine to create a multilayer, end-to-end security solution for our microservice-based premium consumer experiences. Technologies to be covered include Amazon Virtual Private Cloud, access control lists, security groups, IAM roles and profiles, AWS KMS, NAT gateways, ELB load balancers, and Cerberus (our cloud-native secrets management solution).

 

SEC308 – Securing Enterprise Big Data Workloads on AWS

Security of big data workloads in a hybrid IT environment often comes as an afterthought. This session discusses how enterprises can architect secure big data workloads on AWS. We cover the application of authentication, authorization, encryption, and additional security principles and mechanisms to workloads leveraging Amazon Elastic MapReduce and Amazon Redshift.

 

SEC309 – Proactive Security Testing in AWS: From Early Implementation to Deployment Security Testing

Attend this session to learn about security testing your applications in AWS. Effective security testing is challenging, but multiple features and services within AWS make security testing easier. This session covers common approaches to testing, including how we think about testing within AWS, how to apply AWS services to your test setup, remediating findings, and automation.

 

SEC310 – Mitigating DDoS Attacks on AWS: Five Vectors and Four Use Cases

Distributed denial of service (DDoS) attack mitigation has traditionally been a challenge for those hosting on fixed infrastructure. In the cloud, users can build applications on elastic infrastructure that is capable of mitigating and absorbing DDoS attacks. What once required overprovisioning, additional infrastructure, or third-party services is now an inherent capability of many cloud-based applications. This session explains common DDoS attack vectors and how AWS customers with different use cases are addressing these challenges. As part of the session, we show you how to build applications that are resilient to DDoS and demonstrate how they work in practice.

 

SEC311 – How to Automate Policy Validation

Managing permissions across a growing number of identities and resources can be time consuming and complex. Testing, validating, and understanding permissions before and after policy changes are deployed is critical to ensuring that your users and systems have the appropriate level of access. This session walks through the tools that are available to test, validate, and understand the permissions in your account. We demonstrate how to use these tools and how to automate them to continually validate the permissions in your accounts. The tools demonstrated in this session help you answer common questions such as:
  • How does a policy change affect the overall permissions for a user, group, or role?
  • Who has access to perform powerful actions?
  • Which services can this role access?
  • Can a user access a specific Amazon S3 bucket?

 

SEC312 – State of the Union for re:Source Mini Con for Security Services

AWS CISO Steve Schmidt presents the state of the union for re:Source Mini Con for Security Services. He addresses the state of the security and compliance ecosystem; large enterprise customer additions in key industries; the vertical view: maturing spaces for AWS security assurance (GxP, IoT, CIS foundations); and the international view: data privacy protections and data sovereignty. The state of the union also addresses a number of new identity, directory, and access services, and closes by looking at what’s on the horizon.

 

SEC401 – Automated Formal Reasoning About AWS Systems

Automatic and semiautomatic mechanical theorem provers are now being used within AWS to find proofs in mathematical logic that establish desired properties of key AWS components. In this session, we outline these efforts and discuss how mechanical theorem provers are used to replay found proofs of desired properties when software artifacts or networks are modified, thus helping provide security throughout the lifetime of the AWS system. We consider these use cases:
  • Using constraint solving to show that VPCs have desired safety properties, and maintaining this continuously at each change to the VPC.
  • Using automatic mechanical theorem provers to prove that s2n’s HMAC is correct and maintaining this continuously at each change to the s2n source code.
  • Using semiautomatic mechanical theorem provers to prove desired safety properties of Sassy protocol.

– Craig
