Tag Archives: games

Fooling an AI Article Writer

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/fooling-an-ai-article-writer.html

World of Warcraft players wrote about a fictional game element, “Glorbo,” on a subreddit for the game, trying to entice an AI bot to write an article about it. It worked:

And it…worked. Zleague auto-published a post titled “World of Warcraft Players Excited For Glorbo’s Introduction.”

[…]

That is…all essentially nonsense. The article was left online for a while but has finally been taken down (here’s a mirror, it’s hilarious). All the authors listed as having bylines on the site are fake. It appears this entire thing is run with close to zero oversight.

Expect lots more of this sort of thing in the future. Also, expect the AI bots to get better at detecting this sort of thing. It’s going to be an arms race.

Practice Your Security Prompting Skills

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2023/07/practice-your-security-prompting-skills.html

Gandalf is an interactive LLM game where the goal is to get the chatbot to reveal its password. There are eight levels of difficulty, as the chatbot gets increasingly restrictive instructions as to how it will answer. It’s a great teaching tool.

I am stuck on Level 7.

Feel free to give hints and discuss strategy in the comments below. I probably won’t look at them until I’ve cracked the last level.

How gaming companies can use Amazon Redshift Serverless to build scalable analytical applications faster and easier

Post Syndicated from Satesh Sonti original https://aws.amazon.com/blogs/big-data/how-gaming-companies-can-use-amazon-redshift-serverless-to-build-scalable-analytical-applications-faster-and-easier/

This post provides guidance on how to build scalable analytical solutions for gaming industry use cases using Amazon Redshift Serverless. It covers how to use a conceptual, logical architecture for some of the most popular gaming industry use cases like event analysis, in-game purchase recommendations, measuring player satisfaction, telemetry data analysis, and more. This post also discusses the art of the possible with newer innovations in AWS services around streaming, machine learning (ML), data sharing, and serverless capabilities.

Our gaming customers tell us that their key business objectives include the following:

  • Increased revenue from in-app purchases
  • High average revenue per user and lifetime value
  • Improved stickiness with better gaming experience
  • Improved event productivity and high ROI

Our gaming customers also tell us that while building analytics solutions, they want the following:

  • Low-code or no-code model – Out-of-the-box solutions are preferred to building customized solutions.
  • Decoupled and scalable – Serverless, auto scaled, and fully managed services are preferred over manually managed services. Each service should be easy to replace or enhance with little or no dependency on the others. Solutions should be flexible enough to scale up and down.
  • Portability to multiple channels – Solutions should be compatible with most endpoint channels, such as PC, mobile, and gaming platforms.
  • Flexible and easy to use – Solutions should provide data that is easy to access, ready to use, and minimally restricted. They should also deliver optimal performance with little or no tuning.

Analytics reference architecture for gaming organizations

In this section, we discuss how gaming organizations can use a data hub architecture to address the analytical needs of an enterprise that requires the same data at multiple levels of granularity and in different formats, standardized for faster consumption. A data hub is a center of data exchange: a hub of data repositories supported by data engineering, data governance, security, and monitoring services.

A data hub contains data at multiple levels of granularity and is often not integrated. It differs from a data lake by offering data that is pre-validated and standardized, allowing for simpler consumption by users. Data hubs and data lakes can coexist in an organization, complementing each other. Data hubs are focused on enabling businesses to consume standardized data quickly and easily; data lakes are focused on storing and maintaining all of an organization’s data in one place. And unlike data warehouses, which are primarily analytical stores, a data hub is a combination of all types of repositories—analytical, transactional, operational, reference—and data I/O services, along with governance processes. A data warehouse is one of the components in a data hub.

The following diagram is a conceptual analytics data hub reference architecture. This architecture resembles a hub-and-spoke approach. Data repositories represent the hub. External processes are the spokes feeding data to and from the hub. This reference architecture partly combines a data hub and data lake to enable comprehensive analytics services.

Let’s look at the components of the architecture in more detail.

Sources

Data can be loaded from multiple sources, such as systems of record, data generated from applications, operational data stores, enterprise-wide reference data and metadata, data from vendors and partners, machine-generated data, social sources, and web sources. The source data is usually in either structured or semi-structured formats, which are highly and loosely formatted, respectively.

Data inbound

This section consists of components to process and load data from multiple sources into the data repositories. Loading can happen in batch mode, continuously, via pub/sub, or through any other custom integration. ETL (extract, transform, and load) technologies, streaming services, APIs, and data exchange interfaces are the core components of this pillar. Unlike plain ingestion processes, data can be transformed according to business rules before loading. You can apply technical or business data quality rules and load raw data as well. Essentially, this pillar provides the flexibility to get data into the repositories in its most usable form.

Data repositories

This section consists of a group of data stores, including data warehouses, transactional or operational data stores, reference data stores, domain data stores housing purpose-built business views, and enterprise datasets (file storage). The file storage component is usually shared between a data hub and a data lake to avoid data duplication and provide comprehensiveness. Data can also be shared among all these repositories without physically moving it, using features such as data sharing and federated queries. However, data copies and duplication are allowed to serve different consumption needs in terms of formats and latency.

Data outbound

Data is often consumed using structured queries for analytical needs. Datasets are also accessed for ML, data export, and publishing needs. This section consists of components to query, export, and exchange data, along with APIs. In terms of implementation, the same technologies may be used for both inbound and outbound, but the functions are different; it’s not mandatory to use the same technologies for both. These processes aren’t transformation heavy because the data is already standardized and almost ready to consume. The focus is on ease of consumption and integration with consuming services.

Consumption

This pillar consists of various consumption channels for enterprise analytical needs. It includes business intelligence (BI) users, canned and interactive reports, dashboards, data science workloads, Internet of Things (IoT), web apps, and third-party data consumers. Popular consumption entities in many organizations are queries, reports, and data science workloads. Because there are multiple data stores maintaining data at different granularity and formats to service consumer needs, these consumption components depend on data catalogs for finding the right source.

Data governance

Data governance is key to the success of a data hub reference architecture. It constitutes components like metadata management, data quality, lineage, masking, and stewardship, which are required for organized maintenance of the data hub. Metadata management helps organize the technical and business metadata catalog, and consumers can reference this catalog to know what data is available in which repository and at what granularity, format, owners, refresh frequency, and so on. Along with metadata management, data quality is important to increase confidence for consumers. This includes data cleansing, validation, conformance, and data controls.

Security and monitoring

User and application access should be controlled at multiple levels. It starts with authentication, followed by authorization of who can access what, policy management, encryption, and applying data compliance rules. This pillar also includes monitoring components that log activity for auditing and analysis.

Analytics data hub solution architecture on AWS

The following reference architecture provides an AWS stack for the solution components.

Let’s look at each component again and the relevant AWS services.

Data inbound services

AWS Glue and Amazon EMR are ideal for batch processing. They scale automatically and can process most industry-standard data formats. Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, and Amazon Managed Streaming for Apache Kafka (Amazon MSK) enable you to build streaming applications. These streaming services integrate well with the Amazon Redshift streaming ingestion feature, which helps you process real-time sources, IoT data, and data from online channels. You can also ingest data with third-party tools like Informatica, dbt, and Matillion.
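
To make the streaming path concrete, here is a minimal sketch (not from the original post) that uses the Amazon Redshift Data API to set up streaming ingestion from a Kinesis data stream into an auto-refreshing materialized view. The stream, IAM role, and workgroup names are illustrative placeholders.

```python
# A minimal sketch, assuming a Kinesis stream named "game-events", a
# pre-created IAM role, and a Redshift Serverless workgroup named
# "gaming-analytics". None of these names come from the original post.
import boto3

client = boto3.client("redshift-data")

# Map the Kinesis stream into Redshift as an external schema.
create_schema = """
CREATE EXTERNAL SCHEMA IF NOT EXISTS kinesis_events
FROM KINESIS
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftStreamingRole';
"""

# Materialize the stream; AUTO REFRESH keeps it near real time.
create_mv = """
CREATE MATERIALIZED VIEW game_events_stream AUTO REFRESH YES AS
SELECT approximate_arrival_timestamp,
       JSON_PARSE(FROM_VARBYTE(kinesis_data, 'utf-8')) AS event_payload
FROM kinesis_events."game-events";
"""

for sql in (create_schema, create_mv):
    client.execute_statement(
        WorkgroupName="gaming-analytics",
        Database="dev",
        Sql=sql,
    )
```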

You can build RESTful APIs and WebSocket APIs using Amazon API Gateway and AWS Lambda, which enable real-time two-way communication with web, social, and IoT sources. AWS Data Exchange helps with subscribing to third-party data in AWS Marketplace; data subscription and access is fully managed with this service. Refer to the respective service documentation for further details.
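
As a hedged illustration of this inbound API path, the following Lambda handler (the stream name and payload fields are assumptions, not from the original post) sits behind API Gateway and forwards gameplay events to a Kinesis data stream:

```python
# Sketch of an inbound endpoint: API Gateway invokes this Lambda, which
# parses the JSON body and forwards it to a Kinesis stream. The stream
# name and payload fields are assumptions for illustration.
import json
import boto3

kinesis = boto3.client("kinesis")

def handler(event, context):
    # API Gateway proxy integration delivers the request body as a string.
    body = json.loads(event.get("body") or "{}")
    kinesis.put_record(
        StreamName="game-events",                         # hypothetical stream
        PartitionKey=str(body.get("player_id", "anon")),  # shard by player
        Data=json.dumps(body).encode("utf-8"),
    )
    return {"statusCode": 202, "body": json.dumps({"accepted": True})}
```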

Data repository services

Amazon Redshift is the recommended data storage service for OLAP (Online Analytical Processing) workloads such as cloud data warehouses, data marts, and other analytical data stores. This service is the core of this reference architecture on AWS and can address most analytical needs out of the box. You can use simple SQL to analyze structured and semi-structured data across data warehouses, data marts, operational databases, and data lakes to deliver the best price performance at any scale. The Amazon Redshift data sharing feature provides instant, granular, and high-performance access without data copies and data movement across multiple Amazon Redshift data warehouses in the same or different AWS accounts, and across Regions.
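
For example, a producer warehouse can expose a schema to a consumer namespace without copying data. The following sketch issues the datashare DDL through the Data API; the share, schema, and workgroup names and the consumer namespace GUID are placeholders:

```python
# Sketch of producer-side datashare DDL issued via the Data API. The share,
# schema, workgroup names, and the consumer namespace GUID are placeholders.
import boto3

client = boto3.client("redshift-data")

statements = [
    "CREATE DATASHARE analytics_share;",
    "ALTER DATASHARE analytics_share ADD SCHEMA game_analytics;",
    "ALTER DATASHARE analytics_share ADD ALL TABLES IN SCHEMA game_analytics;",
    # Grant to the consumer warehouse's namespace (GUID is illustrative).
    "GRANT USAGE ON DATASHARE analytics_share "
    "TO NAMESPACE '11111111-2222-3333-4444-555555555555';",
]

for sql in statements:
    client.execute_statement(
        WorkgroupName="producer-wg",
        Database="dev",
        Sql=sql,
    )
```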

For ease of use, Amazon Redshift offers a serverless option. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver fast performance for even the most demanding and unpredictable workloads, and you pay only for what you use. Just load your data and start querying right away in Amazon Redshift Query Editor or in your favorite BI tool and continue to enjoy the best price performance and familiar SQL features in an easy-to-use, zero administration environment.
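
A minimal provisioning sketch with boto3 follows; the names and base capacity are illustrative assumptions, not recommendations from the post:

```python
# Sketch: creating a Redshift Serverless namespace and workgroup with boto3.
# Names and base capacity are illustrative assumptions.
import boto3

rs = boto3.client("redshift-serverless")

rs.create_namespace(
    namespaceName="gaming-analytics",
    dbName="dev",
)

rs.create_workgroup(
    workgroupName="gaming-analytics",
    namespaceName="gaming-analytics",
    baseCapacity=32,  # base Redshift Processing Units; scales with demand
)
```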

Amazon Relational Database Service (Amazon RDS) is a fully managed service for building transactional and operational data stores. You can choose from many popular engines such as MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server. With the Amazon Redshift federated query feature, you can query transactional and operational data in place without moving the data. The federated query feature currently supports Amazon RDS for PostgreSQL, Amazon Aurora PostgreSQL-Compatible Edition, Amazon RDS for MySQL, and Amazon Aurora MySQL-Compatible Edition.
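
The following hedged sketch maps a hypothetical Aurora PostgreSQL database into Redshift as a federated external schema so it can be joined with warehouse tables in place; the endpoint, ARNs, and table names are assumptions:

```python
# Hedged sketch: register an Aurora PostgreSQL database as a federated
# external schema. The endpoint, role ARN, and Secrets Manager ARN are
# placeholders, not values from the original post.
import boto3

boto3.client("redshift-data").execute_statement(
    WorkgroupName="gaming-analytics",
    Database="dev",
    Sql="""
    CREATE EXTERNAL SCHEMA IF NOT EXISTS ops
    FROM POSTGRES
    DATABASE 'gamedb' SCHEMA 'public'
    URI 'ops-cluster.cluster-abc123.us-east-1.rds.amazonaws.com' PORT 5432
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftFederatedRole'
    SECRET_ARN 'arn:aws:secretsmanager:us-east-1:123456789012:secret:gamedb-creds';
    """,
)
# Warehouse queries can then join live operational rows in place, e.g.:
#   SELECT s.player_id, s.lifetime_spend, o.last_login
#   FROM game_analytics.player_spend s JOIN ops.players o USING (player_id);
```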

Amazon Simple Storage Service (Amazon S3) is the recommended service for multi-format storage layers in the architecture. It offers industry-leading scalability, data availability, security, and performance. Organizations typically store data in Amazon S3 using open file formats. Open file formats enable analysis of the same Amazon S3 data using multiple processing and consumption layer components. Data in Amazon S3 can be easily queried in place using SQL with Amazon Redshift Spectrum. It helps you query and retrieve structured and semi-structured data from files in Amazon S3 without having to load the data. Multiple Amazon Redshift data warehouses can concurrently query the same datasets in Amazon S3 without the need to make copies of the data for each data warehouse.
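
As a sketch, the external schema below exposes Glue Data Catalog tables over S3 to Redshift via Spectrum; the database, role, and table names are placeholders:

```python
# Sketch: expose Glue Data Catalog tables over Amazon S3 to Redshift through
# a Spectrum external schema. Database and role names are placeholders.
import boto3

boto3.client("redshift-data").execute_statement(
    WorkgroupName="gaming-analytics",
    Database="dev",
    Sql="""
    CREATE EXTERNAL SCHEMA IF NOT EXISTS lake
    FROM DATA CATALOG
    DATABASE 'game_lake'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;
    """,
)
# S3-backed tables are then queryable in place without loading, e.g.:
#   SELECT event_type, COUNT(*) FROM lake.raw_events GROUP BY 1;
```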

Data outbound services

Amazon Redshift comes with the web-based analytics workbench Query Editor V2.0, which helps you run queries, explore data, create SQL notebooks, and collaborate on data with your teams in SQL through a common interface. AWS Transfer Family helps securely transfer files using SFTP, FTPS, FTP, and AS2 protocols. It supports thousands of concurrent users and is a fully managed, low-code service. Similar to inbound processes, you can utilize Amazon API Gateway and AWS Lambda for data pull using the Amazon Redshift Data API. And AWS Data Exchange helps publish your data to third parties for consumption through AWS Marketplace.
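
Because the Data API is asynchronous, an outbound pull typically submits a statement, polls for completion, and then fetches the result set. A minimal sketch with an assumed table and workgroup name:

```python
# Sketch of an outbound pull with the asynchronous Data API: submit, poll,
# fetch. The table and workgroup names are illustrative assumptions.
import time
import boto3

client = boto3.client("redshift-data")

resp = client.execute_statement(
    WorkgroupName="gaming-analytics",
    Database="dev",
    Sql="SELECT event_name, players_engaged FROM game_analytics.event_kpis;",
)

# Poll until the statement reaches a terminal state.
while True:
    desc = client.describe_statement(Id=resp["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(0.5)

if desc["Status"] == "FINISHED":
    records = client.get_statement_result(Id=resp["Id"])["Records"]
```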

Consumption services

Amazon QuickSight is the recommended service for creating reports and dashboards. It enables you to create interactive dashboards, visualizations, and advanced analytics with ML insights. Amazon SageMaker is the ML platform for all your data science workload needs. It helps you build, train, and deploy models consuming the data from repositories in the data hub. You can use Amazon front-end web and mobile services and AWS IoT services to build web, mobile, and IoT endpoint applications to consume data out of the data hub.

Data governance services

The AWS Glue Data Catalog and AWS Lake Formation are the core data governance services AWS currently offers. These services help manage metadata centrally for all the data repositories and manage access controls. They also help with data classification and can automatically handle schema changes. You can use Amazon DataZone to discover and share data at scale across organizational boundaries with built-in governance and access controls. AWS is investing in this space to provide a more unified experience across AWS services. You can also use partner products such as Collibra, Alation, Amorphic, and Informatica for data governance functions alongside AWS services.
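
For instance, granting a consumer role SELECT on a cataloged table can be done through Lake Formation. In this sketch the account ID, role, database, and table names are placeholders:

```python
# Sketch: grant a consumer role SELECT on a cataloged table through AWS Lake
# Formation. Account ID, role, database, and table names are placeholders.
import boto3

lf = boto3.client("lakeformation")

lf.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/AnalystRole"
    },
    Resource={
        "Table": {
            "DatabaseName": "game_lake",
            "Name": "raw_events",
        }
    },
    Permissions=["SELECT"],
)
```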

Security and monitoring services

AWS Identity and Access Management (IAM) manages identities for AWS services and resources. You can define users, groups, roles, and policies for fine-grained access management of your workforce and workloads. AWS Key Management Service (AWS KMS) manages AWS managed keys and customer managed keys for your applications. Amazon CloudWatch and AWS CloudTrail provide monitoring and auditing capabilities. You can collect metrics and events and analyze them for operational efficiency.

In this post, we’ve discussed the most common AWS services for the respective solution components. However, you aren’t limited to these services; there are many other AWS services for specific use cases that may be more appropriate for your needs. You can reach out to AWS Analytics Solutions Architects for guidance.

Example architectures for gaming use cases

In this section, we discuss example architectures for two gaming use cases.

Game event analysis

In-game events (also called timed or live events) encourage player engagement through excitement and anticipation. Events entice players to interact with the game, increasing player satisfaction and revenue through in-game purchases. Events have become more and more important as games shift from static pieces of entertainment, played as is, to dynamic and changing content driven by services that make decisions about gameplay as the game is being played. This lets a game evolve as players play, reveals what works and what doesn’t, and gives any game a potentially infinite lifespan.

This capability of in-game events to offer fresh content and activities within a familiar framework is how you keep players engaged and playing for months to years. Players can enjoy new experiences and challenges within the familiar framework or world that they have grown to love.

The following example shows how such an architecture might appear, including changes to support various sections of the process like breaking the data into separate containers to accommodate scalability, charge-back, and ownership.

Fully understanding how players view events, and making decisions about future ones, requires information on how the latest event actually performed. This means gathering a lot of data as players play in order to build key performance indicators (KPIs) that measure the effectiveness of, and player satisfaction with, each event. This requires analytics that capture, analyze, report on, and measure the player experience for each event. These KPIs include the following:

  • Initial user flow interactions – What actions users are taking after they first receive or download an event update in a game. Are there any clear drop-off points or bottlenecks that are turning people off the event?
  • Monetization – When, where, and what users spend money on during the event, whether buying in-game currencies, responding to ads, specials, and so on.
  • Game economy – How users earn and spend virtual currencies or goods during an event, using in-game money, trades, or barter.
  • In-game activity – Player wins, losses, leveling up, competition wins, or player achievements within the event.
  • User-to-user interactions – Invitations, gifting, chats (private and group), challenges, and so on during an event.

These are just some of the KPIs and metrics that are key for predictive modeling of events as the game acquires new players while keeping existing users involved, engaged, and playing. One way such a metric might be computed is sketched below.
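
As one hedged example of such a metric (all table, column, and event names are invented for illustration), a daily monetization KPI for an event window might be computed like this:

```python
# A hedged example KPI: daily revenue, paying players, and average revenue
# per paying user (ARPPU) during one event. All table, column, and event
# names here are invented for illustration.
import boto3

boto3.client("redshift-data").execute_statement(
    WorkgroupName="gaming-analytics",
    Database="dev",
    Sql="""
    SELECT DATE_TRUNC('day', purchase_ts)              AS day,
           COUNT(DISTINCT player_id)                   AS paying_players,
           SUM(amount_usd)                             AS revenue_usd,
           SUM(amount_usd) / COUNT(DISTINCT player_id) AS arppu
    FROM game_analytics.purchases
    WHERE event_id = 'summer-festival'  -- hypothetical event
    GROUP BY 1
    ORDER BY 1;
    """,
)
```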

In-game activity analysis

In-game activity analysis essentially looks at any meaningful, purposeful activity the player might show, with the goal of understanding what actions are taken, their timing, and their outcomes. This includes situational information about the players, such as where they are playing (both geographically and culturally), how often, for how long, what they undertake on each login, and other activities.

The following example shows how such an architecture might appear, including changes to support various sections of the process like breaking the data into separate warehouses. The multi-cluster warehouse approach helps scale the workload independently, provides flexibility to the implemented charge-back model, and supports decentralized data ownership.

The solution essentially logs information that helps you understand the behavior of your players, which can lead to insights that increase retention of existing players and acquisition of new ones. This provides the ability to do the following:

  • Provide in-game purchase recommendations
  • Measure player trends in the short term and over time
  • Plan events the players will engage in
  • Understand what parts of your game are most successful and which are less so

You can use this understanding to make decisions about future game updates, make in-game purchase recommendations, determine when and how your game economy may need to be balanced, and even allow players to change their character or play as the game progresses by injecting this information and accompanying decisions back into the game.

Conclusion

This reference architecture, while showing examples of only a few analysis types, provides a faster technology path for enabling game analytics applications. The decoupled, hub-and-spoke approach brings the agility and flexibility to implement different approaches to analytics and to understanding the performance of game applications. The purpose-built AWS services described in this architecture provide comprehensive capabilities to collect, store, measure, analyze, and report game and event metrics. This helps you efficiently perform in-game and event analysis, measure player satisfaction, provide tailor-made recommendations to players, organize events, and increase retention rates.

Thanks for reading the post. If you have any feedback or questions, please leave them in the comments.


About the authors

Satesh Sonti is a Sr. Analytics Specialist Solutions Architect based out of Atlanta, specialized in building enterprise data platforms, data warehousing, and analytics solutions. He has over 16 years of experience in building data assets and leading complex data platform programs for banking and insurance clients across the globe.

Tanya Rhodes is a Senior Solutions Architect based out of San Francisco, focused on games customers with emphasis on analytics, scaling, and performance enhancement of games and supporting systems. She has over 25 years of experience in enterprise and solutions architecture specializing in very large business organizations across multiple lines of business including games, banking, healthcare, higher education, and state governments.

Monday Night Itch #1: Mystery Trap Adventure

Post Syndicated from Eevee original https://eev.ee/blog/2022/01/31/monday-night-itch-1-mystery-trap-adventure/

Welcome to Monday Night Itch, a harebrained scheme to encourage folks to play more non-AAA games by adding a touch of social gamification. I thought I would be tweeting my adventures here, but I just had an experience so profound it can only be captured within a blog post.

The rules

“Rules” is a strong word, but nevertheless:

  • Every Monday, find a game on itch.io, and pay at least $2 for it.

    You can buy a game with a price tag, or download a free game and leave a tip, but the point of this endeavor is to put money into more places in the ecosystem. (Note that it is possible, though uncommon, for a developer to disable payments altogether.)

  • Play it.

  • Leave a nice comment.

  • Tell at least one person what you played, and what you thought about it.

That’s it. Buy a game, play it, tell someone about it. You can stream it, tweet it, screenshot it, or just tell your boyfriend about it. You don’t have to like it.

Your score is how many times you’ve done this, and your streak is how many weeks you’ve done it in a row.

Some other quick tips about itch

The itch app is cool. It’s a pretty thin wrapper around the website, but it adds automatic updating and big red “Launch” buttons and other stuff to make it feel a bit more like a Steam-ish thing. Do keep in mind that devs can upload whatever they want, and sometimes the itch app gets confused.

If you’re not a fan of running mystery software you downloaded from the Internet, you can just play web games and leave tips on those.

There are a lot of NSFW games on itch, but they’re hidden from the main browse pages by default. You can enable them site-wide in your user settings, or add /nsfw to the end of a browse page URL (for example, https://itch.io/games becomes https://itch.io/games/nsfw) to force a list of only NSFW games.

The main event

I decided I wanted to reward Linux releases, and also chip a few bucks towards games with a price tag that aren’t necessarily getting much exposure, so I went to the full list of recent paid Linux games. This is how I discovered Mystery Trap Adventure.

I found myself very much wanting to play this, but I also found myself wondering what sort of impact I should be trying for as the very first iteration of this project. Would I torpedo it if I played a game made by a less experienced dev? Are people looking to this expecting me to uncover unknown indie gems, like I’m wandering a beach with a metal detector?

I checked the dev’s itch profile and this is their ninth project. Every single previous work of theirs has only a single comment: from them, announcing that comments can be left below. That’s heartbreaking to me, and what made me absolutely sure I wanted to play this. I want to make their day.

And then, dear reader, I felt ashamed. Because who the fuck cares. The world already has enough people who believe that indie games are only valuable if they create the illusion of an eight-digit budget, and I am not here to enable them. Creative work does not need to be polished, mass-appeal, least common denominator stuff handed down from heaven by a billion-dollar international corporation in order to be interesting or worthwhile.

But more importantly, it’s my thing and I’m gonna do whatever the hell I want.

The title screen for Mystery Trap Adventure: a collage of mismatched artwork on a nearly cyan background

And so, Mystery Trap Adventure.

The first thing to note is that the game does not, in fact, have a Linux release. I did strongly suspect this, since a single download is flagged as all of Windows, Mac, and Linux, but the only way to be sure was to buy it. (They’re asking $4; I paid them $10.) Even Wine had trouble with it, for some reason, so I had to play it on our Windows media center.

It’s a sidescrolling platformer where you play as a dragon; you can jump about one tile high (roughly your own height) and shoot fireballs (useful for destroying bricks and defeating the boss). The main obstacle is spikes, which kill you instantly.

Right at the beginning, there’s a block you have to jump on top of, and it was very obvious that I sort of “stuck” to the side of it if I touched it. I thought at first that this was the result of a common platforming gotcha: if you model the player as a dynamic body and implement movement (including air control) as a force on them, then they will stick to walls as long as the corresponding direction is held. This happens because forces on dynamic bodies are external, as though a giant ghost hand were pushing them — so if a player is trying to air control into a wall, the friction against the wall will hold them in place, just as if you were holding a book against a wall with your hand.
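
To see the shape of the problem, here’s a toy sketch of my own (not the game’s actual code), assuming unit mass and a constant air-control force: friction can resist sliding up to the friction coefficient times the normal force, so pushing into the wall hard enough cancels gravity entirely.

```python
# Toy sketch (mine, not the game's code): unit mass, constant air-control
# force pushed into a wall. Friction resists sliding up to MU times the
# normal force, so a hard enough push cancels gravity and the player sticks.
MU = 0.8            # assumed wall friction coefficient
GRAVITY = 9.8       # downward acceleration
AIR_CONTROL = 30.0  # horizontal force while holding a direction (unit mass)
DT = 1 / 60

vy = 0.0
for _ in range(60):  # simulate one second of holding "into the wall"
    normal_force = AIR_CONTROL        # wall pushes back with equal force
    max_friction = MU * normal_force  # friction available to resist sliding
    # While falling, friction opposes the slide; it can at most cancel gravity.
    net_down = max(GRAVITY - max_friction, 0.0)
    vy += net_down * DT

# With these numbers max_friction (24.0) exceeds GRAVITY (9.8), so net_down
# is zero and the player never starts falling: they "stick" to the wall.
print(f"fall speed after 1s of pushing into the wall: {vy:.2f}")
```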

(Solving that problem is beyond the scope of this post, sorry.)

Okay, common pitfall, no big deal. I wander ahead a bit. I encounter a slice of watermelon, which allows me to teleport a short distance once. I screw this up the first time while messing with the controls — there’s a wall directly in front of it, so the teleport must be used to skip past that wall — and have to restart.

Now something interesting happens. I’m in a pit with walls on both sides. I can’t teleport again, and even if I could, there are spikes beyond the next wall, so that would kill me immediately.

A screenshot of the situation just described

It dawns on me that this microscopic game has walljumping.

I’m still fairly certain that the player character is a dynamic body, but now I wonder: is the wall stickiness actually due to the friction interaction, or is it a deliberate feature to enable walljumping?

Or, perhaps more likely, is it both? Did the developer trip over this pitfall, and decide to make a gameplay feature out of it? It almost seems unbelievable. I wouldn’t consider walljumping a basic platforming ability, and it’s not obvious how to solve the friction problem, but it seems that this relatively new developer may have solved both problems by simply smashing them together.

And if that’s the case, dearest reader: I fucking love it. That is the true spirit of game development, I think — you have a big complicated simulation you want to make, and you have a big complicated engine that you want to make do it, and you have to kinda mold both of them into fitting better with the other.

I don’t know. I could be completely wrong about how this came to be. Or they could have copy/pasted from someone else who had this idea. Either way, it made me smile to see.

The walljumping controls are, ahem, not exactly intuitive, which is why it took me nonzero time to realize it was an ability at all. But honestly, I liked that too. Nowadays, everyone knows exactly how every platforming ability is “supposed” to work, because devs are all copying the same ideas from each other that have been refined over a thousand different iterations. This reminded me of playing games in the early and mid 90s, before everything had standardized as much, when part of the game itself was just working out the right muscle memory to make the right things happen. It’s surprising to find nostalgia in a game because it’s not like others I’ve played before, but there it was. Working out the right timing without any visual cues felt like a puzzle in itself, and getting out of the pit without landing in the spikes was remarkably satisfying. (If it helps: I used different hands for movement and jumping, and I landed on top of the right wall before trying to jump over the spikes.)

Beyond this, the tone changes somewhat to IWBTG-esque traps with no telegraphing. Walking directly to the right will cause spikes to appear from the ground, killing you instantly. Thankfully there aren’t too many of these, and the game is very short, so simply memorizing the handful of places they appear is easy enough.

I have less to say about the rest of the game; you get another quirky powerup you only use once, dodge another couple surprise traps, and face a single boss. The boss is a very large human warrior dude who walks straight at you and swings his sword, which kills you. There’s another fruit above you, but it seems out of reach. He is definitely too tall to jump over. The only solution I found is to simply spam fireballs at him before he can reach you, but I don’t know if this is intended. It seems like it can’t be, since his “health bar” takes the form of a grid of his face behind him, and from where you enter the area, you can’t actually see the whole grid? So surely I’m supposed to be able to get further to the right? But I don’t know.


I finished the game and came back to the following reply to my original thread about this whole concept:

most, i.e. all, small Indy games are terrible.

What a snotty, entitled, mean-spirited sentiment. As if the very existence of a game with lower production values than Resident Evil 8 were a personal offense. It seems to be fairly common, too, and I just do not understand it. Small indie games aren’t trying to squeeze you for more money, lure you in with gambling, exploit your friendships, make your entire life revolve around them. They’re just there.

This attitude is like showing up to everyone who mentions YouTube just to proclaim that everything on it sucks, because Paramount movies are better. That’s great, no one asked! Sometimes I just want to see a seven-second clip of a kitten filmed in a dark room by a $20 phone, because dammit, kittens are still fun to watch. No one makes a point of dunking on videos like that, so I don’t know why anyone is so harsh on amateur games either. Especially when making games is so much more difficult!

Mystery Trap Adventure is that video. Someone had an idea, worked out how to express it, and put it out into the world just because they wanted to. I don’t expect anyone else to buy it or play it; I just want you to know that I did, and it made me smile for a few minutes.

Including Hackers in NATO Wargames

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/including-hackers-in-nato-wargames.html

This essay makes the point that actual computer hackers would be a useful addition to NATO wargames:

The international information security community is filled with smart people who are not in a military structure, many of whom would be excited to pose as independent actors in any upcoming wargames. Including them would increase the reality of the game and the skills of the soldiers building and training on these networks. Hackers and cyberwar experts would demonstrate how industrial control systems such as power supply for refrigeration and temperature monitoring in vaccine production facilities are critical infrastructure; they’re easy targets and should be among NATO’s priorities at the moment.

Diversity of thought leads to better solutions. We in the information security community strongly support the involvement of acknowledged nonmilitary experts in the development and testing of future cyberwar scenarios. We are confident that independent experts, many of whom see sharing their skills as public service, would view participation in these cybergames as a challenge and an honor.