Tag Archives: BASIC

Eevee mugshot set for Doom

Post Syndicated from Eevee original https://eev.ee/release/2017/11/23/eevee-mugshot-set-for-doom/

Screenshot of Industrial Zone from Doom II, with an Eevee face replacing the usual Doom marine in the status bar

A full replacement of Doomguy’s vast array of 42 expressions.

You can get it yourself if you want to play Doom as me, for some reason? It does nothing but replace a few sprites, so it works with any Doom flavor (including vanilla) on 1, 2, or Final. Just run Doom with -file eeveemug.wad. With GZDoom, you can load it automatically.


I don’t entirely know why I did this. I drew the first one on a whim, then realized there was nothing really stopping me from making a full set, so I spent a day doing that.

The funny thing is that I usually play Doom with ZDoom’s “alternate” HUD. It’s a full-screen overlay rather than a huge bar, and — crucially — it does not show the mugshot. It can’t even be configured to show the mugshot. As far as I’m aware, it can’t even be modded to show the mugshot. So I have to play with the OG status bar if I want to actually use the thing I made.

Preview of the Eevee mugshot sprites arranged in a grid, where the Eevee becomes more beaten up in each subsequent column

I’m pretty happy with the results overall! I think I did a decent job emulating the Doom “surreal grit” style. I did the shading with Aseprite’s shading mode — instead of laying down a solid color, it shifts pixels along a ramp of colors you select every time you draw over them. Doom’s palette has a lot of browns, so I made a ramp out of all of them and kept going over furry areas, nudging pixels into being lighter or darker, until I liked the texture. It was a lot like making a texture in a sketch with a lot of scratchy pencil strokes.

I also gleaned some interesting things about smoothness and how the eye interprets contours? I tried to explain this on Twitter and had a hell of a time putting it into words, but the short version is that it’s amazing to see the difference a single misplaced pixel can make, especially as you slide that pixel between dark and light.


Doom's palette of 256 colors, many of which are very long gradients of reds and browns

Speaking of which, Doom’s palette is incredibly weird to work with. Thank goodness Eevees are brown! The game does have to draw arbitrary levels of darkness all with the same palette, which partly explains the number of dark colors and gradients — but I believe a number of the colors are exact duplicates, close enough that they might as well be duplicates, or completely unused in stock Doom assets. I guess they had no reason to optimize for people trying to add arbitrary art to the game 25 years later, though. (And nowadays, GZDoom includes a truecolor software renderer, so the palette is becoming less and less important.)

I originally wanted the god mode sprite to be a Sylveon, but Sylveon is made of pink and azure and blurple, and I don’t think I could’ve pulled it off with this set of colors. I even struggled with the color of the mane a bit — I usually color it with pretty pale colors, but Doom only has a couple of those, and they’re very saturated. I ended up using a lot more dark yellows than I would normally, and thankfully it worked out pretty well.

The most significant change I made between the original sprite and the final set was the eye color:

A comparison between an original Doom mugshot sprite, the first sprite I drew, and how it ended up

(This is STFST20, a frame from the default three-frame “glancing around” animation that plays when the player has between 40 and 59 health. Doom Wiki has a whole article on the mugshot if you’re interested.)

The blue eyes in my original just do not work at all. The Doom palette doesn’t have a lot of subtle colors, and its blues in particular are incredibly bad. In the end, I made the eyes basically black, though with a couple pixels of very dark blue in them.

After I decided to make the full set, I started by making a neutral and completely healthy front pose, then derived the others from that (with a very complicated system of layers). You can see some of the side effects of that here: the face doesn’t actually turn when glancing around, because hoo boy that would’ve been a lot of work, and so the cheek fluff is visible on both sides.

I also notice that there are two columns of identical pixels in each eye! I fixed that in the glance to the right, but must’ve forgotten about it here. Oh, well; I didn’t even notice until I zoomed in just now.

A general comparison between the Doom mugshots and my Eevee ones, showing each pose in its healthy state plus the neutral pose in every state of deterioration

The original sprites might not be quite aligned correctly in the above image. The available space in the status bar is 35×31, of which a couple pixels go to an inset border, leaving 33×30. I drew all of my sprites at that size, but the originals are all cropped and have varying offsets (part of the Doom sprite format). I extremely can’t be assed to check all of those offsets for over a dozen sprites, so I just told ImageMagick to center them. (I only notice right now that some of the original sprites are even a full 31 pixels tall and draw over the top border that I was so careful to stay out of!)

Anyway, this is a representative sample of the Doom mugshot poses.

The top row shows all eight frames at full health. The first three are the “idle” state, drawn when nothing else is going on; the sprite usually faces forwards, but glances around every so often at random. The forward-facing sprite is the one I finalized first.

I tried to take a lot of cues from the original sprite, seeing as I wanted to match the style. I’d never tried drawing a sprite with a large palette and a small resolution before, and the first thing that struck me was Doomguy’s lips — the upper lip, lips themselves, and shadow under the lower lip are all created with only one row of pixels each. I thought that was amazing. Now I even kinda wish I’d exaggerated that effect a bit more, but I was wary of going too dark when there’s a shadow only a couple pixels away. I suppose Doomguy has the advantage of having, ah, a chin.

I did much the same for the eyebrows, which was especially necessary because Doomguy has more of a forehead than my Eevee does. I probably could’ve exaggerated those a bit more, as well! Still, I love how they came out — especially in the simple looking-around frames, where even a two-pixel eyebrow raise is almost comically smug.

The fourth frame is a wild-ass grin (even named STFEVL0), which shows for a short time after picking up a new weapon. Come to think of it, that’s a pretty rare occurrence when playing straight through one of the Doom games; you keep your weapons between levels.

The fifth through seventh are also a set. If the player takes damage, the status bar will briefly show one of these frames to indicate where the damage is coming from. You may notice that where Doomguy bravely faces the source of the pain, I drew myself wincing and recoiling away from it.

The middle frame of that set also appears while the player is firing continuously (regardless of damage), so I couldn’t really make it match the left and right ones. I like the result anyway. It was also great fun figuring out the expressions with the mouth — that’s another place where individual pixels make a huge difference.

Finally, the eighth column is the legendary “ouch” face, which appears when the player takes more than 20 damage at once. It may look completely alien to you, because vanilla Doom has a bug that only shows this face when the player gains 20 or more health while taking damage. This is vanishingly rare (though possible!), so the frame virtually never appears in vanilla Doom. Lots of source ports have fixed this bug, making the ouch face a bit better known, but I usually play without the mugshot visible so it still looks super weird to me. I think my own spin on it is a bit less, ah, body horror?

The second row shows deterioration. It is pretty weird drawing yourself getting beaten up.

A lot of Doomguy’s deterioration is in the form of blood dripping from under his hair, which I didn’t think would translate terribly well to a character without hair. Instead, I went a little cartoony with it, adding bandages here and there. I had a little bit of a hard time with the bloodshot eyes at this resolution, which I realize as I type it is a very poor excuse when I had eyes three times bigger than Doomguy’s. I do love the drooping ears, with the possible exception of the fifth state, which I’m not sure is how that would actually look…? Oh well. I also like the bow becoming gradually unravelled, eventually falling off entirely when you die.

Oh, yes, the sixth frame there (before the gap) is actually for a dead player. Doomguy’s bleeding becomes markedly more extreme here, but again that didn’t really work for me, so I went a little sillier with it. A little. It’s still pretty weird drawing yourself dead.

That leaves only god mode, which is incredible. I love that glow. I love the faux whisker shapes it makes. I love how it fades into the background. I love that 100% pure “oh this is pretty good” smile. It all makes me want to just play Doom in god mode forever.

Now that I’ve looked closely at these sprites again, I spy a good half dozen little inconsistencies and nitpicks, which I’m going to refrain from spelling out. I did do this in only a day, and I think it came out pretty dang well considering.

Maybe I’ll try something else like this in the future. Not quite sure what, though; there aren’t many small and self-contained sets of sprites like this in Doom. Monsters are several times bigger and have a zillion different angles. Maybe some pickups, which only have one frame?


Hmm. Parting thought: I’m not quite sure where I should host this sort of one-off thing. It arguably belongs on Itch, but seems really out of place alongside entire released games. It also arguably belongs on the idgames archive, but I’m hesitant to put it there because it’s such an obscure thing of little interest to a general audience. At the moment it’s just a file I’ve uploaded to wherever on my own space, but I now have three little Doom experiments with no real permanent home.

NetNeutrality vs. Verizon censoring Naral

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/11/netneutrality-vs-verizon-censoring-naral.html

In response to my anti-NetNeutrality blogs/tweets, people ask about Verizon’s recent blocking of Naral’s text-messaging campaign. In this post, I address that question.

Firstly, it’s not a NetNeutrality issue (which applies only to the Internet), but an issue with text-messages. In other words, it’s something that will continue to happen even with NetNeutrality rules. People relate this to NetNeutrality as an analogy, not because it actually is such an issue.

Secondly, it’s an edge/content issue, not a transit issue. The details in this case are that Verizon provides a program for sending bulk messages to its customers from the edge of the network. Verizon isn’t censoring text messages in transit, but from the edge. You can send a text message to your friend on the Verizon network, and it won’t be censored. Thus the analogy is incorrect — the correct analogy would be with content providers like Twitter and Facebook, not ISPs like Comcast.

Like all cell phone vendors, Verizon polices this content, canceling accounts that abuse the system, like spammers. We all agree such censorship is a good thing, and that such censorship of content providers is not remotely a NetNeutrality issue. Content providers do this not because they disapprove of the content of spam so much as because of the distaste their customers have for spam.

Content providers that are political, rather than neutral to politics, are indeed a worry. It’s not a NetNeutrality issue per se, but it is a general “neutrality” issue. We free-speech activists want all content providers (Twitter, Facebook, Verizon mass-texting programs) to be free of political censorship — though we don’t want government to mandate such neutrality.

But even here, Verizon may be off the hook. They appear not to be censoring one political view over another, but rather the controversial/unsavory way Naral expresses its views. Presumably, Verizon would be okay with less controversial political content.

In other words, as Verizon expresses its principles, it wants to block content that drives away customers, but is otherwise neutral to the content. While this may unfairly target controversial political content, it’s at least basically neutral.

So in conclusion, while activists portray this as a NetNeutrality issue, it isn’t. It’s not even close.

What We’re Thankful For

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/what-were-thankful-for/

All of us at Backblaze hope you have a wonderful Thanksgiving, and that you can enjoy it with family and friends. We asked everyone at Backblaze to express what they are thankful for. Here are their responses.

Fall leaves

What We’re Thankful For

Aside from friends, family, hobbies, health, etc. I’m thankful for my home. It’s not much, but it’s mine, and allows me to indulge in everything listed above. Or not, if I so choose. And coffee.

— Tony

I’m thankful for my wife Jen, and my other friends. I’m thankful that I like my coworkers and can call them friends too. I’m thankful for my health. I’m thankful that I was born into a middle class family in the US and that I have been very, very lucky because of that.

— Adam

Besides the most important things which are being thankful for my family, my health and my friends, I am very thankful for Backblaze. This is the first job I’ve ever had where I truly feel like I have a great work/life balance. With having 3 kids ages 8, 6 and 4, a husband that works crazy hours and my tennis career on the rise (kidding but I am on 4 teams) it’s really nice to feel like I have balance in my life. So cheers to Backblaze – where a girl can have it all!

— Shelby

I am thankful to work at a high-tech company that recognizes the contributions of engineers in their 40s and 50s.

— Jeannine

I am thankful for the music, the songs I’m singing. Thankful for all the joy they’re bringing. Who can live without it, I ask in all honesty? What would life be? Without a song or a dance what are we? So I say thank you for the music. For giving it to me!

— Yev

I’m thankful that I don’t look anything like the portrait my son draws of me…seriously.

— Natalie

I am thankful to work for a company that puts its people and product ahead of profits.

— James

I am thankful that even in the middle of disasters, turmoil, and violence, there are always people who commit amazing acts of generosity, courage, and kindness that restore my faith in mankind.

— Roderick

The future.

— Ahin

The Future

I am thankful for the current state of modern inexpensive broadband networking that allows me to stay in touch with friends and family that are far away, allows Backblaze to exist and pay my salary so I can live comfortably, and allows me to watch cat videos for free. The internet makes this an amazing time to be alive.

— Brian

Other than being thankful for family & good health, I’m quite thankful that through the years I’ve avoided losing any of my 12+TB photo archive. 20 years of photoshoots, family photos, and cell phone photos kept safe through changing storage media (floppy drives, flopticals, ZIP, JAZ, DVD-RAM, CD, DVD, and hard drives), not to mention various technology/software solutions. It’s a data minefield out there, especially in the long run with changing media formats.

— Jim

I am thankful for non-profit organizations and their volunteers, such as IMAlive. Possibly the greatest gift you can give someone is empowerment, and an opportunity for them to recognize their own resilience and strength.

— Emily

I am thankful for my loving family, friends who make me laugh, a cool company to work for, talented co-workers who make me a better engineer, and beautiful Fall days in Wisconsin!

— Marjorie

Marjorie Wisconsin

I’m thankful for preschool drawings about thankfulness.

— Adam

I am thankful for new friends and working for a company that allows us to be ourselves.

— Annalisa

I’m thankful for my dog as I always find a reason to smile at him every day. Yes, he still smells from his skunkin’ last week and he tracks mud into my house, but he came from the San Quentin puppy-prisoner program and I’m thankful I found him and that he found me! My vet is thankful as well.

— Terry

I’m thankful that my colleagues are also my friends outside of the office and that the rain season has started in California.

— Aaron

I’m thankful for family, friends, and beer. Mostly for family and friends, but beer is really nice too!

— Ken

There are so many amazing blessings that make up my daily life that I thank God for, so here I go – my basic needs of food, water and shelter, my husband and 2 daughters and the rest of the family (here and abroad) — their love, support, health, and safety, waking up to a new day every day, friends, music, my job, funny things, hugs and more hugs (who does not like hugs?).

— Cecilia

I am thankful to be blessed with a close-knit extended family, and for everything they do for my new, growing family. With a toddler and a second child on the way, it helps having so many extra sets of hands around to help with the kids!

— Zack

I’m thankful for family and friends, the opportunities my parents gave me by moving to the U.S., and that all of us together at Backblaze have built a place to be proud of.

— Gleb

Aside from being thankful for family and friends, I am also thankful I live in a place with such natural beauty. Being so close to mountains and the ocean, and everything in between, is something that I don’t take for granted!

— Sona

I’m thankful for my wonderful wife, family, friends, and co-workers. I’m thankful for having a happy and healthy son, and the chance to watch him grow on a daily basis.

— Ariel

I am thankful for a dog-friendly workplace.

— LeAnn

I’m thankful for my amazing new wife and that she’s as much of a nerd as I am.

— Troy

I am thankful for every reunion with my siblings and families.

— Cecilia

I am thankful for my funny, strong-willed, happy daughter, my awesome husband, my family, and amazing friends. I am also thankful for the USA and all the opportunities that come with living here. Finally, I am thankful for Backblaze, a truly great place to work and for all of my co-workers/friends here.

— Natasha

I am thankful that I do not need to hunt and gather every day to put food on the table, but at the same time I feel that I don’t appreciate the food that sits before me as much as I should. So I use Thanksgiving to think about the people and the animals that put food on my family’s table.

— KC

I am thankful for my cat, Catnip. She’s been with me for 18 years and seen me through so many ups and downs. She’s been by my side through two long-term relationships, several moves, and one marriage. I know we don’t have much time together and feel blessed every day she’s here.

— JC

I am thankful for imperfection and misshapen candies. The imperceptible romance of sunsets through bus windows. The dream that family, friends, co-workers, and strangers are connected by love. I am thankful to my ancestors for enduring so much hardship so that I could be here enjoying Bay Area burritos.

— Damon

Autumn leaves

The post What We’re Thankful For appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How AWS Managed Microsoft AD Helps to Simplify the Deployment and Improve the Security of Active Directory–Integrated .NET Applications

Post Syndicated from Peter Pereira original https://aws.amazon.com/blogs/security/how-aws-managed-microsoft-ad-helps-to-simplify-the-deployment-and-improve-the-security-of-active-directory-integrated-net-applications/

Companies using .NET applications to access sensitive user information, such as employee salary, Social Security Number, and credit card information, need an easy and secure way to manage access for users and applications.

For example, let’s say that your company has a .NET payroll application. You want your Human Resources (HR) team to manage and update the payroll data for all the employees in your company. You also want your employees to be able to see their own payroll information in the application. To meet these requirements in a user-friendly and secure way, you want to manage access to the .NET application by using your existing Microsoft Active Directory identities. This enables you to provide users with single sign-on (SSO) access to the .NET application and to manage permissions using Active Directory groups. You also want the .NET application to authenticate itself to access the database, and to limit access to the data in the database based on the identity of the application user.

Microsoft Active Directory supports these requirements through group Managed Service Accounts (gMSAs) and Kerberos constrained delegation (KCD). AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, enables you to manage gMSAs and KCD through your administrative account, helping you to migrate and develop .NET applications that need these native Active Directory features.

In this blog post, I give an overview of how to use AWS Managed Microsoft AD to manage gMSAs and KCD and demonstrate how you can configure a gMSA and KCD in six steps for a .NET application:

  1. Create your AWS Managed Microsoft AD.
  2. Create your Amazon RDS for SQL Server database.
  3. Create a gMSA for your .NET application.
  4. Deploy your .NET application.
  5. Configure your .NET application to use the gMSA.
  6. Configure KCD for your .NET application.

Solution overview

The following diagram shows the components of a .NET application that uses Amazon RDS for SQL Server with a gMSA and KCD. The diagram also illustrates authentication and access and is numbered to show the six key steps required to use a gMSA and KCD. To deploy this solution, the AWS Managed Microsoft AD directory must be in the same Amazon Virtual Private Cloud (VPC) as RDS for SQL Server. For this example, my company name is Example Corp., and my directory uses the domain name, example.com.

Diagram showing the components of a .NET application that uses Amazon RDS for SQL Server with a gMSA and KCD

Deploy the solution

The following six steps (numbered to correlate with the preceding diagram) walk you through configuring and using a gMSA and KCD.

1. Create your AWS Managed Microsoft AD directory

Using the Directory Service console, create your AWS Managed Microsoft AD directory in your Amazon VPC. In my example, my domain name is example.com.

Image of creating an AWS Managed Microsoft AD directory in an Amazon VPC
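If you prefer to script this step instead of using the console, here is a minimal boto3 sketch of the same operation. The domain name matches the example, but the password, VPC ID, and subnet IDs are placeholder assumptions, not values from this post.

    import boto3

    ds = boto3.client("ds")  # AWS Directory Service

    # Create an AWS Managed Microsoft AD directory inside an existing VPC.
    # The password, VPC ID, and subnet IDs below are placeholders.
    response = ds.create_microsoft_ad(
        Name="example.com",                 # fully qualified domain name
        ShortName="example",                # NetBIOS name
        Password="Sup3rSecret!Passw0rd",    # password for the directory Admin account (placeholder)
        Description="Example Corp. managed AD",
        VpcSettings={
            "VpcId": "vpc-0123456789abcdef0",
            "SubnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
        },
        Edition="Standard",
    )
    print("Directory ID:", response["DirectoryId"])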

2. Create your Amazon RDS for SQL Server database

Using the RDS console, create your Amazon RDS for SQL Server database instance in the same Amazon VPC where your directory is running, and enable Windows Authentication. To enable Windows Authentication, select your directory in the Microsoft SQL Server Windows Authentication section in the Configure Advanced Settings step of the database creation workflow (see the following screenshot).

In my example, I create my Amazon RDS for SQL Server db-example database, and enable Windows Authentication to allow my db-example database to authenticate against my example.com directory.

Screenshot of configuring advanced settings
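The same database can be created with boto3; the Domain and DomainIAMRoleName parameters are what enable Windows Authentication against the directory. The directory ID, IAM role name, credentials, and instance sizing below are assumptions for illustration only.

    import boto3

    rds = boto3.client("rds")

    # Create a SQL Server instance joined to the AWS Managed Microsoft AD directory.
    rds.create_db_instance(
        DBInstanceIdentifier="db-example",
        Engine="sqlserver-se",
        DBInstanceClass="db.m4.large",          # placeholder instance class
        AllocatedStorage=200,
        LicenseModel="license-included",
        MasterUsername="admin",
        MasterUserPassword="Sup3rSecret!Passw0rd",   # placeholder
        Domain="d-1234567890",                       # your directory ID (placeholder)
        DomainIAMRoleName="rds-directoryservice-access-role",  # role that lets RDS join the domain (placeholder)
    )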

3. Create a gMSA for your .NET application

Now that you have deployed your directory, database, and application, you can create a gMSA for your .NET application.

To perform the next steps, you must install the Active Directory administration tools on a Windows server that is joined to your AWS Managed Microsoft AD directory domain. If you do not have a Windows server joined to your directory domain, you can deploy a new Amazon EC2 for Microsoft Windows Server instance and join it to your directory domain.

To create a gMSA for your .NET application:

  1. Log on to the instance on which you installed the Active Directory administration tools by using a user that is a member of the Admins security group or the Managed Service Accounts Admins security group in your organizational unit (OU). For my example, I use the Admin user in the example OU.

Screenshot of logging on to the instance on which you installed the Active Directory administration tools

  2. Identify which .NET application servers (hosts) will run your .NET application. Create a new security group in your OU and add your .NET application servers as members of this new group. This allows a group of application servers to use a single gMSA, instead of creating one gMSA for each server. In my example, I create a group, App_server_grp, in my example OU. I also add Appserver1, which is my .NET application server computer name, as a member of this new group.

Screenshot of creating a new security group

  3. Create a gMSA in your directory by running Windows PowerShell from the Start menu. The basic syntax to create the gMSA at the Windows PowerShell command prompt follows.
    PS C:\Users\admin> New-ADServiceAccount -name [gMSAname] -DNSHostName [domainname] -PrincipalsAllowedToRetrieveManagedPassword [AppServersSecurityGroup] -TrustedForDelegation $true <Enter>

    In my example, the gMSAname is gMSAexample, the DNSHostName is example.com, and the PrincipalsAllowedToRetrieveManagedPassword is the recently created security group, App_server_grp.

    PS C:\Users\admin> New-ADServiceAccount -name gMSAexample -DNSHostName example.com -PrincipalsAllowedToRetrieveManagedPassword App_server_grp -TrustedForDelegation $true <Enter>

    To confirm you created the gMSA, you can run the Get-ADServiceAccount command from the PowerShell command prompt.

    PS C:\Users\admin> Get-ADServiceAccount gMSAexample <Enter>
    
    DistinguishedName : CN=gMSAexample,CN=Managed Service Accounts,DC=example,DC=com
    Enabled           : True
    Name              : gMSAexample
    ObjectClass       : msDS-GroupManagedServiceAccount
    ObjectGUID        : 24d8b68d-36d5-4dc3-b0a9-edbbb5dc8a5b
    SamAccountName    : gMSAexample$
    SID               : S-1-5-21-2100421304-991410377-951759617-1603
    UserPrincipalName :

    You also can confirm you created the gMSA by opening the Active Directory Users and Computers utility located in your Administrative Tools folder, expanding the domain (example.com in my case), and expanding the Managed Service Accounts folder.
    Screenshot of confirming the creation of the gMSA

4. Deploy your .NET application

Deploy your .NET application on IIS on Amazon EC2 for Windows Server instances. For this step, I assume you are the application’s expert and already know how to deploy it. Make sure that all of your instances are joined to your directory.

5. Configure your .NET application to use the gMSA

You can configure your .NET application to use the gMSA to enforce a strong password security policy and ensure password rotation of your service account. This helps to improve the security and simplify the management of your .NET application. Configure your .NET application in two steps:

  1. Grant the gMSA the required permissions to run your .NET application in the respective application folders. This is a critical step because when you change the application pool identity account to use the gMSA, downtime can occur if the gMSA does not have the application’s required permissions. Therefore, make sure you first test the configurations in your development and test environments.
  2. Configure your application pool identity on IIS to use the gMSA as the service account. When you configure a gMSA as the service account, you include the $ at the end of the gMSA name. You do not need to provide a password because AWS Managed Microsoft AD automatically creates and rotates the password. In my example, my service account is gMSAexample$, as shown in the following screenshot.

Screenshot of configuring application pool identity

You have completed all the steps to use gMSA to create and rotate your .NET application service account password! Now, you will configure KCD for your .NET application.

6. Configure KCD for your .NET application

You now are ready to allow your .NET application to have access to other services by using the user identity’s permissions instead of the application service account’s permissions. Note that KCD and gMSA are independent features, which means you do not have to create a gMSA to use KCD. For this example, I am using both features to show how you can use them together. To configure a regular service account such as a user or local built-in account, see the Kerberos constrained delegation with ASP.NET blog post on MSDN.

In my example, my goal is to delegate to the gMSAexample account the ability to enforce the user’s permissions to my db-example SQL Server database, instead of the gMSAexample account’s permissions. For this, I have to update the msDS-AllowedToDelegateTo gMSA attribute. The value for this attribute is the service principal name (SPN) of the service instance that you are targeting, which in this case is the db-example Amazon RDS for SQL Server database.

The SPN format for the msDS-AllowedToDelegateTo attribute is a combination of the service class, the Kerberos authentication endpoint, and the port number. The Amazon RDS for SQL Server Kerberos authentication endpoint format is [database_name].[domain_name]. The value for my msDS-AllowedToDelegateTo attribute is MSSQLSvc/db-example.example.com:1433, where MSSQLSvc and 1433 are the SQL Server Database service class and port number standards, respectively.

Follow these steps to perform the msDS-AllowedToDelegateTo gMSA attribute configuration:

  1. Log on to your Active Directory management instance with a user identity that is a member of the Kerberos Delegation Admins security group. In this case, I will use admin.
  2. Open the Active Directory Users and Computers utility located in your Administrative Tools folder, choose View, and then choose Advanced Features.
  3. Expand your domain name (example.com in this example), and then choose the Managed Service Accounts security group. Right-click the gMSA account for the application pool you want to enable for Kerberos delegation, choose Properties, and choose the Attribute Editor tab.
  4. Search for the msDS-AllowedToDelegateTo attribute on the Attribute Editor tab and choose Edit.
  5. Enter the MSSQLSvc/db-example.example.com:1433 value and choose Add.
    Screenshot of entering the value of the multi-valued string
  6. Choose OK and Apply, and your KCD configuration is complete.

Congratulations! At this point, your application is using a gMSA rather than an embedded static user identity and password, and the application is able to access SQL Server using the identity of the application user. The gMSA eliminates the need for you to rotate the application’s password manually, and it allows you to better scope permissions for the application. When you use KCD, you can enforce access to your database consistently based on user identities at the database level, which prevents improper access that might otherwise occur because of an application error.

Summary

In this blog post, I demonstrated how to simplify the deployment and improve the security of your .NET application by using a group Managed Service Account and Kerberos constrained delegation with your AWS Managed Microsoft AD directory. I also outlined the main steps to get your .NET environment up and running on a managed Active Directory and SQL Server infrastructure. This approach will make it easier for you to build new .NET applications in the AWS Cloud or migrate existing ones in a more secure way.

For additional information about using group Managed Service Accounts and Kerberos constrained delegation with your AWS Managed Microsoft AD directory, see the AWS Directory Service documentation.

To learn more about AWS Directory Service, see the AWS Directory Service home page. If you have questions about this post or its solution, start a new thread on the Directory Service forum.

– Peter

The Truth Behind the “Kodi Boxes Can Kill Their Owners” Headlines

Post Syndicated from Andy original https://torrentfreak.com/the-truth-behind-the-kodi-boxes-can-kill-their-owners-headlines-171118/

Another week, another batch of ‘Kodi Box Armageddon’ stories. This time it hasn’t been directly about the content they can provide but the physical risks they pose to their owners.

After being primed in advance, the usual British tabloids jumped into action early Thursday, noting that following tests carried out on “illicit streaming devices” (aka Android set-top devices), 100% of them failed to meet UK national electrical safety regulations.

The tests were carried out by Electrical Safety First, a charity which was prompted into action by anti-piracy outfit Federation Against Copyright Theft.

“A series of product safety tests on popular illicit streaming devices entering the UK have found that 100% fail to meet national electrical safety regulations,” a FACT statement reads.

“The news is all the more significant as the Intellectual Property Office (IPO) estimates that more than one million of these illegal devices have been sold in the UK in the last two years, representing a significant risk to the general public.”

After reading many sensational headlines stating that “Kodi Boxes Might Kill Their Owners”, please excuse us for groaning. This story has absolutely nothing – NOTHING – to do with Kodi or any other piece of software. Quite obviously, software doesn’t catch fire.

So, suspecting that there might be more to this than meets the eye, we decided to look beyond the press releases into the actual Electrical Safety First (ESF) report. While we have no doubt that ESF is extremely competent in its field (it is, no question), the front page of its report is disappointing.

Despite the items sent for testing being straightforward Android-based media players, the ESF report clearly describes itself as examining “illicit streaming devices”. It’s terminology that doesn’t describe the subject matter from an electrical, safety or technical perspective but is pretty convenient for FACT clients Sky and the Premier League.

Nevertheless, the full picture reveals rather more than most of the headlines suggest.

First of all, it’s important to know that ESF tested just nine devices out of the million or so allegedly sold in the UK during the past two years. Even more importantly, every single one of those devices was supplied to ESF by FACT.

Now, we’re not suggesting they were hand-picked to fail but it’s clear that the samples weren’t provided from a neutral source. Also, as we’ll learn shortly, it’s possible to determine in advance if an item will fail to meet UK standards simply by looking at its packaging and casing.

But perhaps even more intriguing is that the electrical testing carried out by ESF related primarily not to the set-top boxes themselves, but to their power supplies. ESF say so themselves.

“The product review relates primarily to the switched mode power supply units for the connection to the mains supply, which were supplied with the devices, to identify any potential risks to consumers such as electric shocks, heating and resistance to fire,” ESF reports.

The set-top boxes themselves were only assessed “in terms of any faults in the marking, warnings and instructions,” the group adds.

So, what we’re really talking about here isn’t dangerous “illicit streaming devices” (read: set-top boxes) themselves, but the power supply units that come with them. It might seem like a small detail but we’ll come to the vast importance of this later on.

Firstly, however, we should note that none of the equipment supplied by FACT complied with Schedule 1 of the Electrical Equipment (Safety) Regulations 1994. This means that they failed to have the “Conformité Européene” or CE logo present. That’s unacceptable.

In addition, none of them lived up to the requirements of Schedule 3 of the Electrical Equipment (Safety) Regulations 1994 either, which in part requires the manufacturer’s brand name or trademark to be “clearly printed on the electrical equipment or, where that is not possible, on the packaging.” (That’s how you can tell they’ll definitely fail UK standards before sending them for testing.)

Also, none of the samples were supplied with “sufficient safety or warning information to ensure the safe and correct use, assembly, installation or maintenance of the equipment.” This represents ‘a technical breach’ of the regulations, ESF reports.

Finally, several of the samples were considered to be a potential risk to their users, either via electric shock and/or fire. That’s an important finding and people who suspect they have such devices at home should definitely take note.

However, the really important point isn’t mentioned in the tabloids, probably since it distracts from the “Kodi Armageddon” narrative which underlies the whole study and subsequent reports.

ESF says that one of the key issues is that the set-top boxes come unbranded, something which breaches safety regulations while making it difficult for consumers to assess whether they’re buying a quality product. Crucially, this is not exclusively a set-top box problem, it is much, MUCH bigger.

“Issues with power supply units or unbranded and counterfeit chargers go beyond illicit streaming devices. In the last year, issues have been reported with other consumer electrical devices, such as laptop chargers and counterfeit phone chargers,” the same ESF report reveals.

“The total annual online sales of mains plug-in chargers is estimated to be in the region of 1.8 million and according to Electrical Safety First, it is likely that most of these sales involve cheap, unbranded chargers.”

So, we looked into this issue of problem power supplies and chargers generally, to see where this report fits into the bigger picture. It transpires it’s a massive problem, all over the UK, across a wide range of products. In fact, Trading Standards reports that 99% of non-genuine Apple chargers bought online “fail a basic safety test”.

But buying from reputable High Street retailers doesn’t help either.

During the past year, Poundworld was fined for selling – wait for it – 72,000 dangerous chargers. Home Bargains was also fined for selling “thousands” of power adaptors that fail to meet UK standards.

“All samples provided failed to comply with Electrical Equipment Safety Regulations and were not marked with the manufacturer’s name,” Trading Standards reports.

That sounds familiar.

So, there you have it. Far from this being an isolated “Kodi Box Crisis” as some have proclaimed, this is a broad issue affecting imported electrical items in general. On this basis, one can’t help but think the tabloids missed a trick here. Think of the power of this headline:

ALL UNBRANDED ELECTRICAL EQUIPMENT CAN KILL, DISCONNECT EVERYTHING

or, alternatively:

PIRATES URGED TO SWITCH TO BRANDED AMAZON FIRESTICKS, SAFER FOR KODI

Perhaps not….

The ESF report can be found here (pdf)


Event-Driven Computing with Amazon SNS and AWS Compute, Storage, Database, and Networking Services

Post Syndicated from Christie Gifrin original https://aws.amazon.com/blogs/compute/event-driven-computing-with-amazon-sns-compute-storage-database-and-networking-services/

Contributed by Otavio Ferreira, Manager, Software Development, AWS Messaging

Like other developers around the world, you may be tackling increasingly complex business problems. A key success factor, in that case, is the ability to break down a large project scope into smaller, more manageable components. A service-oriented architecture guides you toward designing systems as a collection of loosely coupled, independently scaled, and highly reusable services. Microservices take this even further. To improve performance and scalability, they promote fine-grained interfaces and lightweight protocols.

However, the communication among isolated microservices can be challenging. Services are often deployed onto independent servers and don’t share any compute or storage resources. Also, you should avoid hard dependencies among microservices, to preserve maintainability and reusability.

If you apply the pub/sub design pattern, you can effortlessly decouple and independently scale out your microservices and serverless architectures. A pub/sub messaging service, such as Amazon SNS, promotes event-driven computing that statically decouples event publishers from subscribers, while dynamically allowing for the exchange of messages between them. An event-driven architecture also introduces the responsiveness needed to deal with complex problems, which are often unpredictable and asynchronous.
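As a concrete illustration of that decoupling, the minimal boto3 sketch below creates a topic, attaches an SQS subscriber, and publishes a message; the queue ARN and topic name are placeholder assumptions.

    import boto3

    sns = boto3.client("sns")

    # Publishers only need the topic; they never learn who is subscribed.
    topic_arn = sns.create_topic(Name="order-events")["TopicArn"]

    # Subscribers attach themselves independently (queue ARN is a placeholder).
    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="sqs",
        Endpoint="arn:aws:sqs:us-west-2:123456789012:order-worker-queue",
    )

    # Publishing fans the message out to every current subscriber.
    sns.publish(TopicArn=topic_arn, Message='{"orderId": "1234", "status": "created"}')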

What is event-driven computing?

Given the context of microservices, event-driven computing is a model in which subscriber services automatically perform work in response to events triggered by publisher services. This paradigm can be applied to automate workflows while decoupling the services that collectively and independently work to fulfil these workflows. Amazon SNS is an event-driven computing hub, in the AWS Cloud, that has native integration with several AWS publisher and subscriber services.
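On the subscriber side, an event-driven worker can be as small as a Lambda function that unpacks the SNS envelope. The handler below is a generic sketch (it assumes the published messages are JSON), not code taken from this post.

    import json

    def lambda_handler(event, context):
        """Triggered by SNS: each record carries one published message."""
        for record in event["Records"]:
            # Message bodies are assumed to be JSON here; plain text also works.
            message = json.loads(record["Sns"]["Message"])
            # React to the event -- this sketch just logs it.
            print("Received from", record["Sns"]["TopicArn"], ":", message)
        return {"processed": len(event["Records"])}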

Which AWS services publish events to SNS natively?

Several AWS services have been integrated as SNS publishers and, therefore, can natively trigger event-driven computing for a variety of use cases. In this post, I specifically cover AWS compute, storage, database, and networking services, as depicted below.

Compute services

  • Auto Scaling: Helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You can configure Auto Scaling lifecycle hooks to trigger events as Auto Scaling resizes your EC2 cluster. As an example, you may want to warm up the local cache store on newly launched EC2 instances, and also download log files from other EC2 instances that are about to be terminated. To make this happen, set an SNS topic as your Auto Scaling group’s notification target, then subscribe two Lambda functions to this SNS topic. The first function is responsible for handling scale-out events (to warm up cache upon provisioning), whereas the second is in charge of handling scale-in events (to download logs upon termination). A sketch of this lifecycle-hook wiring appears after this list.

  • AWS Elastic Beanstalk: An easy-to-use service for deploying and scaling web applications and web services developed in a number of programming languages. You can configure event notifications for your Elastic Beanstalk environment so that notable events can be automatically published to an SNS topic, then pushed to topic subscribers. As an example, you may use this event-driven architecture to coordinate your continuous integration pipeline (such as Jenkins CI). That way, whenever an environment is created, Elastic Beanstalk publishes this event to an SNS topic, which triggers a subscribing Lambda function, which then kicks off a CI job against your newly created Elastic Beanstalk environment.

  • Elastic Load Balancing: Automatically distributes incoming application traffic across Amazon EC2 instances, containers, or other resources identified by IP addresses. You can configure CloudWatch alarms on Elastic Load Balancing metrics, to automate the handling of events derived from Classic Load Balancers. As an example, you may leverage this event-driven design to automate latency profiling in an Amazon ECS cluster behind a Classic Load Balancer. In this example, whenever your ECS cluster breaches your load balancer latency threshold, an event is posted by CloudWatch to an SNS topic, which then triggers a subscribing Lambda function. This function runs a task on your ECS cluster to trigger a latency profiling tool, hosted on the cluster itself. This can enhance your latency troubleshooting exercise by making it timely.
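To make the Auto Scaling scenario in the first bullet concrete, the sketch below registers lifecycle hooks that notify an SNS topic on launch and termination. The group, topic, and role names are placeholder assumptions; the two Lambda subscribers described above would be attached to the topic separately.

    import boto3

    autoscaling = boto3.client("autoscaling")

    TOPIC_ARN = "arn:aws:sns:us-west-2:123456789012:asg-lifecycle-events"   # placeholder
    ROLE_ARN = "arn:aws:iam::123456789012:role/asg-sns-publish-role"        # role allowed to publish to the topic

    # One hook for scale-out, one for scale-in; both publish to the same SNS topic,
    # where separate Lambda subscribers handle cache warm-up and log collection.
    for name, transition in [
        ("warm-cache-on-launch", "autoscaling:EC2_INSTANCE_LAUNCHING"),
        ("collect-logs-on-terminate", "autoscaling:EC2_INSTANCE_TERMINATING"),
    ]:
        autoscaling.put_lifecycle_hook(
            AutoScalingGroupName="web-asg",      # placeholder group name
            LifecycleHookName=name,
            LifecycleTransition=transition,
            NotificationTargetARN=TOPIC_ARN,
            RoleARN=ROLE_ARN,
            HeartbeatTimeout=300,
        )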

Storage services

  • Amazon S3: Object storage built to store and retrieve any amount of data. You can enable S3 event notifications, and automatically get them posted to SNS topics, to automate a variety of workflows. For instance, imagine that you have an S3 bucket to store incoming resumes from candidates, and a fleet of EC2 instances to encode these resumes from their original format (such as Word or text) into a portable format (such as PDF). In this example, whenever new files are uploaded to your input bucket, S3 publishes these events to an SNS topic, which in turn pushes these messages into subscribing SQS queues. Then, encoding workers running on EC2 instances poll these messages from the SQS queues; retrieve the original files from the input S3 bucket; encode them into PDF; and finally store them in an output S3 bucket. A sketch of this bucket-notification setup appears after this list.

  • Amazon EFS: Provides simple and scalable file storage, for use with Amazon EC2 instances, in the AWS Cloud. You can configure CloudWatch alarms on EFS metrics, to automate the management of your EFS systems. For example, consider a highly parallelized genomics analysis application that runs against an EFS system. By default, this file system is instantiated on the “General Purpose” performance mode. Although this performance mode allows for lower latency, it might eventually impose a scaling bottleneck. Therefore, you may leverage an event-driven design to handle it automatically. Basically, as soon as the EFS metric “Percent I/O Limit” breaches 95%, CloudWatch could post this event to an SNS topic, which in turn would push this message into a subscribing Lambda function. This function automatically creates a new file system, this time on the “Max I/O” performance mode, then switches the genomics analysis application to this new file system. As a result, your application starts experiencing higher I/O throughput rates.

  • Amazon Glacier: A secure, durable, and low-cost cloud storage service for data archiving and long-term backup. You can set a notification configuration on an Amazon Glacier vault so that when a job completes, a message is published to an SNS topic. Retrieving an archive from Amazon Glacier is a two-step asynchronous operation, in which you first initiate a job, and then download the output after the job completes. Therefore, SNS helps you eliminate polling your Amazon Glacier vault to check whether your job has been completed, or not. As usual, you may subscribe SQS queues, Lambda functions, and HTTP endpoints to your SNS topic, to be notified when your Amazon Glacier job is done.

  • AWS Snowball: A petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data. You can leverage Snowball notifications to automate workflows related to importing data into and exporting data from AWS. More specifically, whenever your Snowball job status changes, Snowball can publish this event to an SNS topic, which in turn can broadcast the event to all its subscribers. As an example, imagine a Geographic Information System (GIS) that distributes high-resolution satellite images to users via Web browser. In this example, the GIS vendor could capture up to 80 TB of satellite images; create a Snowball job to import these files from an on-premises system to an S3 bucket; and provide an SNS topic ARN to be notified upon job status changes in Snowball. After Snowball changes the job status from “Importing” to “Completed”, Snowball publishes this event to the specified SNS topic, which delivers this message to a subscribing Lambda function, which finally creates a CloudFront web distribution for the target S3 bucket, to serve the images to end users.
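For the resume-encoding example in the Amazon S3 bullet, the notification wiring might look like the boto3 sketch below. The bucket and topic names are assumptions, and the topic’s access policy must already allow S3 to publish to it.

    import boto3

    s3 = boto3.client("s3")

    # Send "object created" events from the input bucket to an SNS topic,
    # which fans them out to the SQS queues polled by the encoding workers.
    s3.put_bucket_notification_configuration(
        Bucket="resume-input-bucket",   # placeholder bucket name
        NotificationConfiguration={
            "TopicConfigurations": [
                {
                    "TopicArn": "arn:aws:sns:us-west-2:123456789012:resume-uploaded",  # placeholder
                    "Events": ["s3:ObjectCreated:*"],
                }
            ]
        },
    )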

Database services

  • Amazon RDS: Makes it easy to set up, operate, and scale a relational database in the cloud. RDS leverages SNS to broadcast notifications when RDS events occur. As usual, these notifications can be delivered via any protocol supported by SNS, including SQS queues, Lambda functions, and HTTP endpoints. As an example, imagine that you own a social network website that has experienced organic growth, and needs to scale its compute and database resources on demand. In this case, you could provide an SNS topic to listen to RDS DB instance events. When the “Low Storage” event is published to the topic, SNS pushes this event to a subscribing Lambda function, which in turn leverages the RDS API to increase the storage capacity allocated to your DB instance. The provisioning itself takes place within the specified DB maintenance window. A sketch of this event subscription appears after this list.

  • Amazon ElastiCache: A web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. ElastiCache can publish messages using Amazon SNS when significant events happen on your cache cluster. This feature can be used to refresh the list of servers on client machines connected to individual cache node endpoints of a cache cluster. For instance, an ecommerce website fetches product details from a cache cluster, with the goal of offloading a relational database and speeding up page load times. Ideally, you want to make sure that each web server always has an updated list of cache servers to which to connect. To automate this node discovery process, you can get your ElastiCache cluster to publish events to an SNS topic. Thus, when ElastiCache event “AddCacheNodeComplete” is published, your topic then pushes this event to all subscribing HTTP endpoints that serve your ecommerce website, so that these HTTP servers can update their list of cache nodes.

  • Amazon Redshift: A fully managed data warehouse that makes it simple to analyze data using standard SQL and BI (Business Intelligence) tools. Amazon Redshift uses SNS to broadcast relevant events so that data warehouse workflows can be automated. As an example, imagine a news website that sends clickstream data to a Kinesis Firehose stream, which then loads the data into Amazon Redshift, so that popular news and reading preferences might be surfaced on a BI tool. At some point though, this Amazon Redshift cluster might need to be resized, and the cluster enters a read-only mode. Hence, this Amazon Redshift event is published to an SNS topic, which delivers this event to a subscribing Lambda function, which finally deletes the corresponding Kinesis Firehose delivery stream, so that clickstream data uploads can be put on hold. At a later point, after Amazon Redshift publishes the event that the maintenance window has been closed, SNS notifies a subscribing Lambda function accordingly, so that this function can re-create the Kinesis Firehose delivery stream, and resume clickstream data uploads to Amazon Redshift.

  • AWS DMS: Helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. DMS also uses SNS to provide notifications when DMS events occur, which can automate database migration workflows. As an example, you might create data replication tasks to migrate an on-premises MS SQL database, composed of multiple tables, to MySQL. Thus, if replication tasks fail due to incompatible data encoding in the source tables, these events can be published to an SNS topic, which can push these messages into a subscribing SQS queue. Then, encoders running on EC2 can poll these messages from the SQS queue, encode the source tables into a compatible character set, and restart the corresponding replication tasks in DMS. This is an event-driven approach to a self-healing database migration process.
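For the “Low Storage” scenario in the Amazon RDS bullet, an event subscription along these lines would route matching events to the topic. The names and the exact event category string are my assumptions, not values from this post.

    import boto3

    rds = boto3.client("rds")

    # Subscribe an SNS topic to storage-related events from one DB instance.
    # A Lambda function subscribed to the topic can then grow the allocated storage.
    rds.create_event_subscription(
        SubscriptionName="db-low-storage-events",
        SnsTopicArn="arn:aws:sns:us-west-2:123456789012:rds-events",   # placeholder
        SourceType="db-instance",
        SourceIds=["social-network-db"],   # placeholder DB instance identifier
        EventCategories=["low storage"],   # assumed name of the low-storage event category
        Enabled=True,
    )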

Networking services

  • Amazon Route 53: A highly available and scalable cloud-based DNS (Domain Name System). Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. You can set CloudWatch alarms and get automated Amazon SNS notifications when the status of your Route 53 health check changes. As an example, imagine an online payment gateway that reports the health of its platform to merchants worldwide, via a status page. This page is hosted on EC2 and fetches platform health data from DynamoDB. In this case, you could configure a CloudWatch alarm for your Route 53 health check, so that when the alarm threshold is breached, and the payment gateway is no longer considered healthy, then CloudWatch publishes this event to an SNS topic, which pushes this message to a subscribing Lambda function, which finally updates the DynamoDB table that populates the status page. This event-driven approach avoids any kind of manual update to the status page visited by merchants. A sketch of this alarm wiring appears after this list.

  • AWS Direct Connect (AWS DX): Makes it easy to establish a dedicated network connection from your premises to AWS, which can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. You can monitor physical DX connections using CloudWatch alarms, and send SNS messages when alarms change their status. As an example, when a DX connection state shifts to 0 (zero), indicating that the connection is down, this event can be published to an SNS topic, which can fan out this message to impacted servers through HTTP endpoints, so that they might reroute their traffic through a different connection instead. This is an event-driven approach to connectivity resilience.
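For the Route 53 health-check scenario above, the CloudWatch alarm that publishes to SNS could be created roughly as follows. The health check ID and topic ARN are placeholders, and note that Route 53 health-check metrics are published in the us-east-1 Region.

    import boto3

    # Route 53 health-check metrics live in us-east-1.
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="payment-gateway-unhealthy",
        Namespace="AWS/Route53",
        MetricName="HealthCheckStatus",
        Dimensions=[{"Name": "HealthCheckId",
                     "Value": "11111111-2222-3333-4444-555555555555"}],  # placeholder health check ID
        Statistic="Minimum",
        Period=60,
        EvaluationPeriods=1,
        Threshold=1,
        ComparisonOperator="LessThanThreshold",   # status drops below 1 when the check fails
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:gateway-status-events"],  # placeholder topic
    )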

More event-driven computing on AWS

In addition to SNS, event-driven computing is also addressed by Amazon CloudWatch Events, which delivers a near real-time stream of system events that describe changes in AWS resources. With CloudWatch Events, you can route each event type to one or more targets, including AWS Lambda functions, Amazon SNS topics, Amazon SQS queues, and Amazon Kinesis streams.

Many AWS services publish events to CloudWatch. As an example, you can get CloudWatch Events to capture events on your ETL (Extract, Transform, Load) jobs running on AWS Glue and push failed ones to an SQS queue, so that you can retry them later.
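As a sketch of that Glue example, a CloudWatch Events rule and target could be wired up as shown below. The event pattern strings (source, detail-type, and state) are my assumption of the Glue event format, and the queue ARN is a placeholder.

    import json
    import boto3

    events = boto3.client("events")

    # Match failed AWS Glue job runs (detail-type and state strings are assumptions).
    rule_name = "glue-job-failures"
    events.put_rule(
        Name=rule_name,
        EventPattern=json.dumps({
            "source": ["aws.glue"],
            "detail-type": ["Glue Job State Change"],
            "detail": {"state": ["FAILED"]},
        }),
        State="ENABLED",
    )

    # Send matching events to an SQS queue so the jobs can be retried later.
    events.put_targets(
        Rule=rule_name,
        Targets=[{"Id": "retry-queue",
                  "Arn": "arn:aws:sqs:us-west-2:123456789012:glue-retry-queue"}],  # placeholder
    )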

Conclusion

Amazon SNS is a pub/sub messaging service that can be used as an event-driven computing hub to AWS customers worldwide. By capturing events natively triggered by AWS services, such as EC2, S3 and RDS, you can automate and optimize all kinds of workflows, namely scaling, testing, encoding, profiling, broadcasting, discovery, failover, and much more. Business use cases presented in this post ranged from recruiting websites, to scientific research, geographic systems, social networks, retail websites, and news portals.

Start now by visiting Amazon SNS in the AWS Management Console, or by trying the AWS 10-Minute Tutorial, Send Fan-out Event Notifications with Amazon SNS and Amazon SQS.

 

Capturing Custom, High-Resolution Metrics from Containers Using AWS Step Functions and AWS Lambda

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/capturing-custom-high-resolution-metrics-from-containers-using-aws-step-functions-and-aws-lambda/

Contributed by Trevor Sullivan, AWS Solutions Architect

When you deploy containers with Amazon ECS, are you gathering all of the key metrics so that you can correctly monitor the overall health of your ECS cluster?

By default, ECS writes metrics to Amazon CloudWatch in 5-minute increments. For complex or large services, this may not be sufficient to make scaling decisions quickly. You may want to respond immediately to changes in workload or to identify application performance problems. Last July, CloudWatch announced support for high-resolution metrics, up to a per-second basis.

These high-resolution metrics can be used to give you a clearer picture of the load and performance for your applications, containers, clusters, and hosts. In this post, I discuss how you can use AWS Step Functions, along with AWS Lambda, to cost effectively record high-resolution metrics into CloudWatch. You implement this solution using a serverless architecture, which keeps your costs low and makes it easier to troubleshoot the solution.

To show how this works, you retrieve some useful metric data from an ECS cluster running in the same AWS account and region (Oregon, us-west-2) as the Step Functions state machine and Lambda function. However, you can use this architecture to retrieve any custom application metrics from any resource in any AWS account and region.

Why Step Functions?

Step Functions enables you to orchestrate multi-step tasks in the AWS Cloud that run for any period of time, up to a year. Effectively, you’re building a blueprint for an end-to-end process. After it’s built, you can execute the process as many times as you want.

For this architecture, you gather metrics from an ECS cluster every five seconds, and then write the metric data to CloudWatch. After your ECS cluster metrics are stored in CloudWatch, you can create CloudWatch alarms to notify you. When a metric exceeds a threshold that you define, an alarm can also trigger an automated remediation activity, such as scaling ECS services.
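As a hedged sketch of such an alarm, the snippet below assumes the custom ECS namespace and RunningTask metric created later in this post, with a placeholder SNS topic as the alarm action; the 10-second period makes it a high-resolution alarm.

import boto3

cloudwatch = boto3.client('cloudwatch')

# High-resolution alarm on the custom RunningTask metric created later in this post.
# The cluster name matches the walkthrough; the SNS topic ARN is a placeholder.
cloudwatch.put_metric_alarm(
    AlarmName='ecs-running-tasks-low',
    Namespace='ECS',
    MetricName='RunningTask',
    Dimensions=[{'Name': 'ClusterName', 'Value': 'ECSEsgaroth'}],
    Statistic='Average',
    Period=10,                      # 10-second period requires a high-resolution alarm
    EvaluationPeriods=3,
    Threshold=1,
    ComparisonOperator='LessThanThreshold',
    AlarmActions=['arn:aws:sns:us-west-2:123456789012:ecs-alerts']
)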

When you build a Step Functions state machine, you define the different states inside it as JSON objects. The bulk of the work in Step Functions is handled by the common task state, which invokes Lambda functions or Step Functions activities. There is also a built-in library of other useful states that allow you to control the execution flow of your program.

One of the most useful state types in Step Functions is the parallel state. Each parallel state in your state machine can have one or more branches, each of which is executed in parallel. Another useful state type is the wait state, which waits for a period of time before moving to the next state.

In this walkthrough, you combine these three states (parallel, wait, and task) to create a state machine that triggers a Lambda function, which then gathers metrics from your ECS cluster.

Step Functions pricing

This state machine is executed every minute, resulting in 60 executions per hour, and 1,440 executions per day. Step Functions is billed per state transition, including the Start and End state transitions, giving you approximately 37,440 state transitions per day. To reach this number, I’m using this estimated math:

26 state transitions per-execution x 60 minutes x 24 hours

Based on current pricing, at $0.000025 per state transition, the daily cost of this metric gathering state machine would be $0.936.

Step Functions offers a free tier of 4,000 state transitions every month, and it does not expire. This benefit is available to all customers, not just customers who are still under the 12-month AWS Free Tier. For more information and cost example scenarios, see Step Functions pricing.

Why Lambda?

The goal is to capture metrics from an ECS cluster and write the metric data to CloudWatch. This is a straightforward, short-running process that makes Lambda the perfect place to run your code. Lambda is one of the key services that make up “Serverless” application architectures. It enables you to consume compute capacity only when your code is actually executing.

The process of gathering metric data from ECS and writing it to CloudWatch takes a short period of time. In fact, while developing this post, my Lambda function execution time averaged only about 250 milliseconds. For every five-second interval that occurs, I’m only using 1/20th of the compute time that I’d otherwise be paying for.

Lambda pricing

For billing purposes, Lambda execution time is rounded up to the nearest 100-ms interval. Based on the metrics that I observed during development, a 250-ms runtime is billed as 300 ms. Here, I calculate the approximate monthly cost of this Lambda function.

Assuming 31 days in each month, there would be 535,680 five-second intervals (31 days x 24 hours x 60 minutes x 12 five-second intervals = 535,680). The Lambda function is invoked every five-second interval, by the Step Functions state machine, and runs for a 300-ms period. At current Lambda pricing, for a 128-MB function, you would be paying approximately the following:

Total compute

Total executions = 535,680
Total compute = total executions x (3 x $0.000000208 per 100 ms) = $0.334 per month

Total requests

Total requests = (535,680 / 1,000,000) x $0.20 per million requests = $0.11 per month

Total Lambda Cost

$0.11 requests + $0.334 compute time = $0.444 per month

Similar to Step Functions, Lambda offers an indefinite free tier. For more information, see Lambda Pricing.
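If you want to sanity-check these figures, the short script below reproduces the arithmetic using the per-unit prices quoted in this post; treat it as a rough estimate only and confirm against the current pricing pages.

# Back-of-the-envelope cost check, using the per-unit prices quoted in this post
# (always confirm against the current Step Functions and Lambda pricing pages).
STATE_TRANSITION_PRICE = 0.000025        # USD per state transition
LAMBDA_COMPUTE_PRICE = 0.000000208       # USD per 100 ms at 128 MB
LAMBDA_REQUEST_PRICE = 0.20 / 1000000    # USD per request

# One state machine execution per minute, 12 Lambda invocations per execution
executions_per_day = 60 * 24
transitions_per_day = 26 * executions_per_day
invocations_per_month = 12 * executions_per_day * 31

step_functions_daily = transitions_per_day * STATE_TRANSITION_PRICE
lambda_monthly = invocations_per_month * (3 * LAMBDA_COMPUTE_PRICE + LAMBDA_REQUEST_PRICE)

print(f'Step Functions: ${step_functions_daily:.3f} per day')                  # ~$0.936
print(f'Lambda:         ${lambda_monthly:.3f} per month')                      # ~$0.44
print(f'Combined:       ${step_functions_daily * 31 + lambda_monthly:.2f} per month')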

Walkthrough

In the following sections, I step through the process of configuring the solution just discussed. If you follow along, at a high level, you will:

  • Configure an IAM role and policy
  • Create a Step Functions state machine to control metric gathering execution
  • Create a metric-gathering Lambda function
  • Configure a CloudWatch Events rule to trigger the state machine
  • Validate the solution

Prerequisites

You should already have an AWS account with a running ECS cluster. If you don’t have one running, you can easily deploy a Docker container on an ECS cluster using the AWS Management Console. In the example produced for this post, I use an ECS cluster running Windows Server (currently in beta), but either a Linux or Windows Server cluster works.

Create an IAM role and policy

First, create an IAM role and policy that enables Step Functions, Lambda, and CloudWatch to communicate with each other.

  • The CloudWatch Events rule needs permissions to trigger the Step Functions state machine.
  • The Step Functions state machine needs permissions to trigger the Lambda function.
  • The Lambda function needs permissions to query ECS and then write to CloudWatch Logs and metrics.

When you create the state machine, Lambda function, and CloudWatch Events rule, you assign this role to each of those resources. Upon execution, each of these resources assumes the specified role and executes using the role’s permissions.

  1. Open the IAM console.
  2. Choose Roles, Create New Role.
  3. For Role Name, enter WriteMetricFromStepFunction.
  4. Choose Save.

Create the IAM role trust relationship

The trust relationship (also known as the assume role policy document) for your IAM role looks like the following JSON document. As you can see from the document, your IAM role needs to trust the Lambda, CloudWatch Events, and Step Functions services. By configuring your role to trust these services, they can assume this role and inherit the role permissions.

  1. Open the IAM console.
  2. Choose Roles and select the IAM role previously created.
  3. Choose Trust Relationships, Edit Trust Relationships.
  4. Enter the following trust policy text and choose Save.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "events.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "states.us-west-2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Create an IAM policy

After you’ve finished configuring your role’s trust relationship, grant the role access to the other AWS resources that make up the solution.

The IAM policy is what gives your IAM role permissions to access various resources. You must explicitly whitelist the specific resources to which your role has access, because the default IAM behavior is to deny access to any AWS resources.

I’ve tried to keep this policy document as generic as possible, without allowing permissions to be too open. If the name of your ECS cluster is different from the one in the example policy below, make sure that you update the policy document before attaching it to your IAM role. You can attach this policy as an inline policy instead of creating it separately first; either approach is valid. (A scripted alternative to the console steps appears after the policy document.)

  1. Open the IAM console.
  2. Select the IAM role, and choose Permissions.
  3. Choose Add in-line policy.
  4. Choose Custom Policy and then enter the following policy. The inline policy name does not matter.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [ "logs:*" ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [ "cloudwatch:PutMetricData" ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [ "states:StartExecution" ],
            "Resource": [
                "arn:aws:states:*:*:stateMachine:WriteMetricFromStepFunction"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [ "lambda:InvokeFunction" ],
            "Resource": "arn:aws:lambda:*:*:function:WriteMetricFromStepFunction"
        },
        {
            "Effect": "Allow",
            "Action": [ "ecs:Describe*" ],
            "Resource": "arn:aws:ecs:*:*:cluster/ECSEsgaroth"
        }
    ]
}
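If you prefer to script these IAM steps instead of using the console, a minimal boto3 sketch might look like the following, assuming you have saved the two JSON documents above as local files.

import boto3

iam = boto3.client('iam')

# Assumes trust-policy.json and inline-policy.json contain the two JSON
# documents shown above, saved in the current directory.
with open('trust-policy.json') as f:
    trust_policy = f.read()
with open('inline-policy.json') as f:
    inline_policy = f.read()

# Create the role with the trust relationship in one step
iam.create_role(
    RoleName='WriteMetricFromStepFunction',
    AssumeRolePolicyDocument=trust_policy
)

# Attach the permissions as an inline policy
iam.put_role_policy(
    RoleName='WriteMetricFromStepFunction',
    PolicyName='WriteMetricFromStepFunctionPolicy',
    PolicyDocument=inline_policy
)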

Create a Step Functions state machine

In this section, you create a Step Functions state machine that invokes the metric-gathering Lambda function every five seconds, for a one-minute period. If you divide a minute (60 seconds) into five-second intervals, you get 12. Based on this math, you create 12 branches, in a single parallel state, in the state machine. Each branch triggers the metric-gathering Lambda function at a different five-second marker throughout the one-minute period. After all of the parallel branches finish executing, the Step Functions execution completes and another begins.

Follow these steps to create your Step Functions state machine:

  1. Open the Step Functions console.
  2. Choose Dashboard, Create State Machine.
  3. For State Machine Name, enter WriteMetricFromStepFunction.
  4. Enter the state machine code below into the editor. Make sure that you insert your own AWS account ID for every instance of “676655494xxx”.
  5. Choose Create State Machine.
  6. Select the WriteMetricFromStepFunction IAM role that you previously created.
{
    "Comment": "Writes ECS metrics to CloudWatch every five seconds, for a one-minute period.",
    "StartAt": "ParallelMetric",
    "States": {
      "ParallelMetric": {
        "Type": "Parallel",
        "Branches": [
          {
            "StartAt": "WriteMetricLambda",
            "States": {
             	"WriteMetricLambda": {
                  "Type": "Task",
				  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
    	  {
            "StartAt": "WaitFive",
            "States": {
            	"WaitFive": {
            		"Type": "Wait",
            		"Seconds": 5,
            		"Next": "WriteMetricLambdaFive"
          		},
             	"WriteMetricLambdaFive": {
                  "Type": "Task",
				  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
    	  {
            "StartAt": "WaitTen",
            "States": {
            	"WaitTen": {
            		"Type": "Wait",
            		"Seconds": 10,
            		"Next": "WriteMetricLambda10"
          		},
             	"WriteMetricLambda10": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
    	  {
            "StartAt": "WaitFifteen",
            "States": {
            	"WaitFifteen": {
            		"Type": "Wait",
            		"Seconds": 15,
            		"Next": "WriteMetricLambda15"
          		},
             	"WriteMetricLambda15": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait20",
            "States": {
            	"Wait20": {
            		"Type": "Wait",
            		"Seconds": 20,
            		"Next": "WriteMetricLambda20"
          		},
             	"WriteMetricLambda20": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait25",
            "States": {
            	"Wait25": {
            		"Type": "Wait",
            		"Seconds": 25,
            		"Next": "WriteMetricLambda25"
          		},
             	"WriteMetricLambda25": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait30",
            "States": {
            	"Wait30": {
            		"Type": "Wait",
            		"Seconds": 30,
            		"Next": "WriteMetricLambda30"
          		},
             	"WriteMetricLambda30": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait35",
            "States": {
            	"Wait35": {
            		"Type": "Wait",
            		"Seconds": 35,
            		"Next": "WriteMetricLambda35"
          		},
             	"WriteMetricLambda35": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait40",
            "States": {
            	"Wait40": {
            		"Type": "Wait",
            		"Seconds": 40,
            		"Next": "WriteMetricLambda40"
          		},
             	"WriteMetricLambda40": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait45",
            "States": {
            	"Wait45": {
            		"Type": "Wait",
            		"Seconds": 45,
            		"Next": "WriteMetricLambda45"
          		},
             	"WriteMetricLambda45": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait50",
            "States": {
            	"Wait50": {
            		"Type": "Wait",
            		"Seconds": 50,
            		"Next": "WriteMetricLambda50"
          		},
             	"WriteMetricLambda50": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait55",
            "States": {
            	"Wait55": {
            		"Type": "Wait",
            		"Seconds": 55,
            		"Next": "WriteMetricLambda55"
          		},
             	"WriteMetricLambda55": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          }
        ],
        "End": true
      }
  }
}

Now you’ve got a shiny new Step Functions state machine! However, you might ask yourself, “After the state machine has been created, how does it get executed?” Before I answer that question, create the Lambda function that writes the custom metric, so that you can get the end-to-end process moving.

Create a Lambda function

The meaty part of the solution is a Lambda function, written for the Python 3.6 runtime, that retrieves metric values from ECS and then writes them to CloudWatch. This Lambda function is what the Step Functions state machine triggers every five seconds, via the Task states. Key points to remember:

The Lambda function needs permission to:

  • Write CloudWatch metrics (PutMetricData API).
  • Retrieve metrics from ECS clusters (DescribeClusters API).
  • Write StdOut to CloudWatch Logs.

Boto3, the AWS SDK for Python, is included in the Lambda execution environment for Python 2.x and 3.x.

Because Lambda includes the AWS SDK, you don’t have to worry about packaging it up and uploading it to Lambda. You can focus on writing code and automatically take a dependency on boto3.

As for permissions, you’ve already created the IAM role and attached a policy to it that enables your Lambda function to access the necessary API actions. When you create your Lambda function, make sure that you select the correct IAM role, to ensure it is invoked with the correct permissions.

The following Lambda function code is generic. So how does the Lambda function know which ECS cluster to gather metrics for? Your Step Functions state machine automatically passes in its state to the Lambda function. When you create your CloudWatch Events rule, you specify a simple JSON object that passes the desired ECS cluster name into your Step Functions state machine, which then passes it to the Lambda function.

Use the following property values as you create your Lambda function:

Function Name: WriteMetricFromStepFunction
Description: This Lambda function retrieves metric values from an ECS cluster and writes them to Amazon CloudWatch.
Runtime: Python3.6
Memory: 128 MB
IAM Role: WriteMetricFromStepFunction

import boto3

def handler(event, context):
    # Create API clients for CloudWatch and ECS
    cw = boto3.client('cloudwatch')
    ecs = boto3.client('ecs')
    print('Got boto3 client objects')

    # The cluster name arrives in the event payload, passed through from the
    # CloudWatch Events rule input via the Step Functions state machine
    Dimension = {
        'Name': 'ClusterName',
        'Value': event['ECSClusterName']
    }

    cluster = get_ecs_cluster(ecs, Dimension['Value'])

    # Build four high-resolution (1-second) custom metrics from the cluster counters
    cw_args = {
       'Namespace': 'ECS',
       'MetricData': [
           {
               'MetricName': 'RunningTask',
               'Dimensions': [ Dimension ],
               'Value': cluster['runningTasksCount'],
               'Unit': 'Count',
               'StorageResolution': 1
           },
           {
               'MetricName': 'PendingTask',
               'Dimensions': [ Dimension ],
               'Value': cluster['pendingTasksCount'],
               'Unit': 'Count',
               'StorageResolution': 1
           },
           {
               'MetricName': 'ActiveServices',
               'Dimensions': [ Dimension ],
               'Value': cluster['activeServicesCount'],
               'Unit': 'Count',
               'StorageResolution': 1
           },
           {
               'MetricName': 'RegisteredContainerInstances',
               'Dimensions': [ Dimension ],
               'Value': cluster['registeredContainerInstancesCount'],
               'Unit': 'Count',
               'StorageResolution': 1
           }
        ]
    }
    # Write all four metrics to CloudWatch in a single API call
    cw.put_metric_data(**cw_args)
    print('Finished writing metric data')

def get_ecs_cluster(client, cluster_name):
    # Retrieve the cluster's task, service, and container instance counts from ECS
    cluster = client.describe_clusters(clusters = [ cluster_name ])
    print('Retrieved cluster details from ECS')
    return cluster['clusters'][0]
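Before wiring up the schedule, you can smoke-test the function with the same input shape that the state machine passes in. A minimal sketch, assuming the function name above:

import boto3
import json

lambda_client = boto3.client('lambda')

# Invoke the function once with a sample event, just as the state machine would.
response = lambda_client.invoke(
    FunctionName='WriteMetricFromStepFunction',
    Payload=json.dumps({'ECSClusterName': 'ECSEsgaroth'}).encode('utf-8')
)

print(response['StatusCode'])       # 200 on success
print(response['Payload'].read())   # "null", because the handler returns nothing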

Create the CloudWatch Events rule

Now you’ve created an IAM role and policy, Step Functions state machine, and Lambda function. How do these components actually start communicating with each other? The final step in this process is to set up a CloudWatch Events rule that triggers your metric-gathering Step Functions state machine every minute. You have two choices for your CloudWatch Events rule expression: rate or cron. In this example, use the cron expression.

A couple of key learning points from creating the CloudWatch Events rule:

  • You can specify one or more targets, of different types (for example, Lambda function, Step Functions state machine, SNS topic, and so on).
  • You’re required to specify an IAM role with permissions to trigger your target.
    NOTE: This applies only to certain types of targets, including Step Functions state machines.
  • Each target that supports IAM roles can be triggered using a different IAM role, in the same CloudWatch Events rule.
  • Optional: You can provide custom JSON that is passed to your target Step Functions state machine as input.

Follow these steps to create the CloudWatch Events rule in the console (a scripted alternative follows the list):

  1. Open the CloudWatch console.
  2. Choose Events, Rules, Create Rule.
  3. Select Schedule, Cron Expression, and then enter the following rule:
    0/1 * * * ? *
  4. Choose Add Target, Step Functions State Machine, WriteMetricFromStepFunction.
  5. For Configure Input, select Constant (JSON Text).
  6. Enter the following JSON input, which is passed to Step Functions, while changing the cluster name accordingly:
    { "ECSClusterName": "ECSEsgaroth" }
  7. Choose Use Existing Role, WriteMetricFromStepFunction (the IAM role that you previously created).
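If you’d rather script this step, a minimal boto3 sketch might look like the following; the account ID in the ARNs is a placeholder.

import boto3
import json

events = boto3.client('events')

# Run the state machine once per minute
rule_name = 'WriteMetricFromStepFunctionEveryMinute'
events.put_rule(
    Name=rule_name,
    ScheduleExpression='cron(0/1 * * * ? *)'
)

# Target the state machine, passing the cluster name as the execution input.
# The account ID in the ARNs below is a placeholder.
events.put_targets(
    Rule=rule_name,
    Targets=[{
        'Id': 'metric-state-machine',
        'Arn': 'arn:aws:states:us-west-2:123456789012:stateMachine:WriteMetricFromStepFunction',
        'RoleArn': 'arn:aws:iam::123456789012:role/WriteMetricFromStepFunction',
        'Input': json.dumps({'ECSClusterName': 'ECSEsgaroth'})
    }]
)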

After you’ve completed these console steps, your screen should look similar to this:

Validate the solution

Now that you have finished implementing the solution to gather high-resolution metrics from ECS, validate that it’s working properly.

  1. Open the CloudWatch console.
  2. Choose Metrics.
  3. Choose custom and select the ECS namespace.
  4. Choose the ClusterName metric dimension.

You should see your metrics listed below.
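You can also verify the resolution programmatically. The following sketch queries the last few minutes of the RunningTask metric at a one-second period (high-resolution data can only be queried at that period for roughly the most recent three hours); adjust the cluster name to match yours.

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client('cloudwatch')

# Query the last five minutes of the custom metric at a 1-second period.
stats = cloudwatch.get_metric_statistics(
    Namespace='ECS',
    MetricName='RunningTask',
    Dimensions=[{'Name': 'ClusterName', 'Value': 'ECSEsgaroth'}],
    StartTime=datetime.utcnow() - timedelta(minutes=5),
    EndTime=datetime.utcnow(),
    Period=1,
    Statistics=['Average']
)

for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Average'])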

Troubleshoot configuration issues

If you aren’t receiving the expected ECS cluster metrics in CloudWatch, check for the following common configuration issues. Review the earlier procedures to make sure that the resources were properly configured.

  • The IAM role’s trust relationship is incorrectly configured.
    Make sure that the IAM role trusts Lambda, CloudWatch Events, and Step Functions in the correct region.
  • The IAM role does not have the correct policies attached to it.
    Make sure that you have copied the IAM policy correctly as an inline policy on the IAM role.
  • The CloudWatch Events rule is not triggering new Step Functions executions.
    Make sure that the target configuration on the rule has the correct Step Functions state machine and IAM role selected.
  • The Step Functions state machine is being executed, but failing part way through.
    Examine the detailed error message on the failed state within the failed Step Functions execution. It’s possible that the IAM role does not have permissions to trigger the target Lambda function, that the target Lambda function does not exist, or that the Lambda function failed to complete successfully due to invalid permissions.

Although the above list covers several common configuration issues, it is not comprehensive. Make sure that you understand how the services are connected to each other, how permissions are granted through IAM policies, and how IAM trust relationships work.

Conclusion

In this post, you implemented a Serverless solution to gather and record high-resolution application metrics from containers running on Amazon ECS into CloudWatch. The solution consists of a Step Functions state machine, Lambda function, CloudWatch Events rule, and an IAM role and policy. The data that you gather from this solution helps you rapidly identify issues with an ECS cluster.

To gather high-resolution metrics from any service, modify your Lambda function to gather the correct metrics from your target. If you prefer not to use Python, you can implement a Lambda function using one of the other supported runtimes, including Node.js, Java, or .NET Core. However, this post should give you the fundamentals of capturing high-resolution metrics in CloudWatch.

If you found this post useful, or have questions, please comment below.

Building a Multi-region Serverless Application with Amazon API Gateway and AWS Lambda

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/building-a-multi-region-serverless-application-with-amazon-api-gateway-and-aws-lambda/

This post written by: Magnus Bjorkman – Solutions Architect

Many customers are looking to run their services at global scale, deploying their backend to multiple regions. In this post, we describe how to deploy a Serverless API into multiple regions and how to leverage Amazon Route 53 to route the traffic between regions. We use latency-based routing and health checks to achieve an active-active setup that can fail over between regions in case of an issue. We leverage the new regional API endpoint feature in Amazon API Gateway to make this a seamless process for the API client making the requests. This post does not cover the replication of your data, which is another aspect to consider when deploying applications across regions.

Solution overview

Currently, the default API endpoint type in API Gateway is the edge-optimized API endpoint, which enables clients to access an API through an Amazon CloudFront distribution. This typically improves connection time for geographically diverse clients. By default, a custom domain name is globally unique and the edge-optimized API endpoint would invoke a Lambda function in a single region in the case of Lambda integration. You can’t use this type of endpoint with a Route 53 active-active setup and fail-over.

The new regional API endpoint in API Gateway moves the API endpoint into the region and the custom domain name is unique per region. This makes it possible to run a full copy of an API in each region and then use Route 53 to use an active-active setup and failover. The following diagram shows how you do this:

Active/active multi region architecture

  • Deploy your Rest API stack, consisting of API Gateway and Lambda, in two regions, such as us-east-1 and us-west-2.
  • Choose the regional API endpoint type for your API.
  • Create a custom domain name and choose the regional API endpoint type for that one as well. In both regions, you are configuring the custom domain name to be the same, for example, helloworldapi.replacewithyourcompanyname.com
  • Use the host name of the custom domain names from each region, for example, xxxxxx.execute-api.us-east-1.amazonaws.com and xxxxxx.execute-api.us-west-2.amazonaws.com, to configure record sets in Route 53 for your client-facing domain name, for example, helloworldapi.replacewithyourcompanyname.com

The above solution provides an active-active setup for your API across the two regions, but you are not doing failover yet. For that to work, set up a health check in Route 53:

Route 53 Health Check

A Route 53 health check must have an endpoint to call to check the health of a service. You could do a simple ping of your actual Rest API methods, but instead provide a specific method on your Rest API that does a deep ping: a Lambda function that checks the status of all the dependencies.

In the case of the Hello World API, you don’t have any other dependencies. In a real-world scenario, you could check on dependencies such as databases, other APIs, and external dependencies. Route 53 health checks themselves cannot use your custom domain name endpoint’s DNS address, so you are going to directly call the API endpoints via their region-unique endpoint’s DNS address.

Walkthrough

The following sections describe how to set up this solution. You can find the complete solution at the blog-multi-region-serverless-service GitHub repo. Clone or download the repository locally to be able to do the setup as described.

Prerequisites

You need the following resources to set up the solution described in this post:

  • AWS CLI
  • An S3 bucket in each region in which to deploy the solution, which can be used by the AWS Serverless Application Model (SAM). You can use the following CloudFormation templates to create buckets in us-east-1 and us-west-2:
    • us-east-1:
    • us-west-2:
  • A hosted zone registered in Amazon Route 53. This is used for defining the domain name of your API endpoint, for example, helloworldapi.replacewithyourcompanyname.com. You can use a third-party domain name registrar and then configure the DNS in Amazon Route 53, or you can purchase a domain directly from Amazon Route 53.

Deploy API with health checks in two regions

Start by creating a small “Hello World” Lambda function that sends back a message in the region in which it has been deployed.


"""Return message."""
import logging

logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """Lambda handler for getting the hello world message."""

    region = context.invoked_function_arn.split(':')[3]

    logger.info("message: " + "Hello from " + region)
    
    return {
        "message": "Hello from " + region
    }

Also create a Lambda function for doing a health check that returns a value based on another environment variable (either “ok” or “fail”) to allow for ease of testing:


"""Return health."""
import logging
import os

logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """Lambda handler for getting the health."""

    logger.info("status: " + os.environ['STATUS'])
    
    return {
        "status": os.environ['STATUS']
    }

Deploy both of these using an AWS Serverless Application Model (SAM) template. SAM is a CloudFormation extension that is optimized for serverless, and provides a standard way to create a complete serverless application. You can find the full helloworld-sam.yaml template in the blog-multi-region-serverless-service GitHub repo.

A few things to highlight:

  • You are using inline Swagger to define your API so you can substitute the current region in the x-amazon-apigateway-integration section.
  • Most of the Swagger template covers CORS to allow you to test this from a browser.
  • You are also using substitution to populate the environment variable used by the “Hello World” method with the region into which it is being deployed.

The Swagger allows you to use the same SAM template in both regions.

You can only use SAM from the AWS CLI, so do the following from the command prompt. First, deploy the SAM template in us-east-1 with the following commands, replacing “<your bucket in us-east-1>” with a bucket in your account:


> cd helloworld-api
> aws cloudformation package --template-file helloworld-sam.yaml --output-template-file /tmp/cf-helloworld-sam.yaml --s3-bucket <your bucket in us-east-1> --region us-east-1
> aws cloudformation deploy --template-file /tmp/cf-helloworld-sam.yaml --stack-name multiregionhelloworld --capabilities CAPABILITY_IAM --region us-east-1

Second, do the same in us-west-2:


> aws cloudformation package --template-file helloworld-sam.yaml --output-template-file /tmp/cf-helloworld-sam.yaml --s3-bucket <your bucket in us-west-2> --region us-west-2
> aws cloudformation deploy --template-file /tmp/cf-helloworld-sam.yaml --stack-name multiregionhelloworld --capabilities CAPABILITY_IAM --region us-west-2

The API was created with the default endpoint type of Edge Optimized. Switch it to Regional. In the Amazon API Gateway console, select the API that you just created and choose the wheel-icon to edit it.

API Gateway edit API settings

In the edit screen, select the Regional endpoint type and save the API. Do the same in both regions.

Grab the URL for the API in the console by navigating to the method in the prod stage.

API Gateway endpoint link

You can now test this with curl:


> curl https://2wkt1cxxxx.execute-api.us-west-2.amazonaws.com/prod/helloworld
{"message": "Hello from us-west-2"}

Write down the domain name for the URL in each region (for example, 2wkt1cxxxx.execute-api.us-west-2.amazonaws.com), as you need that later when you deploy the Route 53 setup.

Create the custom domain name

Next, create an Amazon API Gateway custom domain name endpoint. As part of using this feature, you must have a hosted zone and domain available to use in Route 53 as well as an SSL certificate that you use with your specific domain name.

You can create the SSL certificate by using AWS Certificate Manager. In the ACM console, choose Get started (if you have no existing certificates) or Request a certificate. Fill out the form with the domain name to use for the custom domain name endpoint, which is the same across the two regions:

Amazon Certificate Manager request new certificate

Go through the remaining steps and validate the certificate for each region before moving on.

You are now ready to create the endpoints. In the Amazon API Gateway console, choose Custom Domain Names, Create Custom Domain Name.

API Gateway create custom domain name

A few things to highlight:

  • The domain name is the same as what you requested earlier through ACM.
  • The endpoint configuration should be regional.
  • Select the ACM Certificate that you created earlier.
  • You need to create a base path mapping that connects back to your earlier API Gateway endpoint. Set the base path to v1 so you can version your API, and then select the API and the prod stage.

Choose Save. You should see your newly created custom domain name:

API Gateway custom domain setup

Note the value for Target Domain Name as you need that for the next step. Do this for both regions.
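If you want to script the custom domain name setup instead of clicking through the console, a minimal boto3 sketch for one region might look like the following; the certificate ARN and REST API ID are placeholders, and you repeat the same calls in the second region with that region's values.

import boto3

apigateway = boto3.client('apigateway', region_name='us-east-1')

# The domain name, ACM certificate ARN, and REST API ID below are placeholders;
# repeat the same calls in us-west-2 with that region's certificate and API ID.
domain = apigateway.create_domain_name(
    domainName='helloworldapi.replacewithyourcompanyname.com',
    regionalCertificateArn='arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE',
    endpointConfiguration={'types': ['REGIONAL']}
)

# Map the v1 base path to the prod stage of the existing REST API
apigateway.create_base_path_mapping(
    domainName='helloworldapi.replacewithyourcompanyname.com',
    basePath='v1',
    restApiId='abc123defg',
    stage='prod'
)

# This is the target domain name value to plug into the Route 53 record set
print(domain.get('regionalDomainName'))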

Deploy Route 53 setup

Use the global Route 53 service to provide DNS lookup for the Rest API, distributing the traffic in an active-active setup based on latency. You can find the full CloudFormation template in the blog-multi-region-serverless-service GitHub repo.

The template sets up health checks, for example, for us-east-1:


HealthcheckRegion1:
  Type: "AWS::Route53::HealthCheck"
  Properties:
    HealthCheckConfig:
      Port: "443"
      Type: "HTTPS_STR_MATCH"
      SearchString: "ok"
      ResourcePath: "/prod/healthcheck"
      FullyQualifiedDomainName: !Ref Region1HealthEndpoint
      RequestInterval: "30"
      FailureThreshold: "2"

Use the health check when you set up the record set and the latency routing, for example, for us-east-1:


Region1EndpointRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    Region: us-east-1
    HealthCheckId: !Ref HealthcheckRegion1
    SetIdentifier: "endpoint-region1"
    HostedZoneId: !Ref HostedZoneId
    Name: !Ref MultiregionEndpoint
    Type: CNAME
    TTL: 60
    ResourceRecords:
      - !Ref Region1Endpoint

You can create the stack by using the following link, copying in the domain names from the previous section, your existing hosted zone name, and the main domain name that is created (for example, hellowordapi.replacewithyourcompanyname.com):

The following screenshot shows what the parameters might look like:
Serverless multi region Route 53 health check

Specifically, the domain names that you collected earlier map as follows:

  • The domain names from the API Gateway “prod”-stage go into Region1HealthEndpoint and Region2HealthEndpoint.
  • The domain names from the custom domain names’ target domain names go into Region1Endpoint and Region2Endpoint.

Using the Rest API from server-side applications

You are now ready to use your setup. First, demonstrate the use of the API from server-side clients by using curl from the command line:


> curl https://hellowordapi.replacewithyourcompanyname.com/v1/helloworld/
{"message": "Hello from us-east-1"}

Testing failover of Rest API in browser

Here’s how you can use this from the browser and test the failover. Find all of the files for this test in the browser-client folder of the blog-multi-region-serverless-service GitHub repo.

Use this html file:


<!DOCTYPE HTML>
<html>
<head>
    <meta charset="utf-8"/>
    <meta http-equiv="X-UA-Compatible" content="IE=edge"/>
    <meta name="viewport" content="width=device-width, initial-scale=1"/>
    <title>Multi-Region Client</title>
</head>
<body>
<div>
   <h1>Test Client</h1>

    <p id="client_result">

    </p>

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
    <script src="settings.js"></script>
    <script src="client.js"></script>
</div>
</body>
</html>

The html file uses this JavaScript file to repeatedly call the API and print the history of messages:


var messageHistory = "";

(function call_service() {

   $.ajax({
      url: helloworldMultiregionendpoint+'v1/helloworld/',
      dataType: "json",
      cache: false,
      success: function(data) {
         messageHistory+="<p>"+data['message']+"</p>";
         $('#client_result').html(messageHistory);
      },
      complete: function() {
         // Schedule the next request when the current one's complete
         setTimeout(call_service, 10000);
      },
      error: function(xhr, status, error) {
         $('#client_result').html('ERROR: '+status);
      }
   });

})();

Also, make sure to update the settings in settings.js to match the API Gateway endpoints for the DNS-proxy and the multi-regional endpoint for the Hello World API: var helloworldMultiregionendpoint = "https://hellowordapi.replacewithyourcompanyname.com/";

You can now open the HTML file in the browser (you can do this directly from the file system) and you should see something like the following screenshot:

Serverless multi region browser test

You can test failover by changing the environment variable in your health check Lambda function. In the Lambda console, select your health check function and scroll down to the Environment variables section. For the STATUS key, modify the value to fail.

Lambda update environment variable
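If you prefer to flip the flag from a script rather than the console, a minimal boto3 sketch might look like this; the function name is a placeholder, and note that the Environment parameter replaces the function's entire variable set.

import boto3

lambda_client = boto3.client('lambda', region_name='us-east-1')

# The function name is a placeholder; Environment replaces the function's
# entire set of variables, so include everything the function needs.
lambda_client.update_function_configuration(
    FunctionName='multiregionhelloworld-HealthCheckFunction-EXAMPLE',
    Environment={'Variables': {'STATUS': 'fail'}}
)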

You should see the region switch in the test client:

Serverless multi region broker test switchover

During an emulated failure like this, the browser might take some additional time to switch over due to connection keep-alive functionality. If you are using a browser like Chrome, you can kill all the connections to see a more immediate fail-over: chrome://net-internals/#sockets

Summary

You have implemented a simple way to do multi-regional serverless applications that fail over seamlessly between regions, whether accessed from the browser or from other applications and services. You achieved this by using the capabilities of Amazon Route 53 to do latency-based routing and health checks for failover. You unlocked the use of these features in a serverless application by leveraging the new regional endpoint feature of Amazon API Gateway.

The setup was fully scripted using CloudFormation, the AWS Serverless Application Model (SAM), and the AWS CLI, and it can be integrated into deployment tools to push the code across the regions to make sure it is available in all the needed regions. For more information about cross-region deployments, see Building a Cross-Region/Cross-Account Code Deployment Solution on AWS on the AWS DevOps blog.

I Still Prefer Eclipse Over IntelliJ IDEA

Post Syndicated from Bozho original https://techblog.bozho.net/still-prefer-eclipse-intellij-idea/

Over the years I’ve observed an inevitable shift from Eclipse to IntelliJ IDEA. Last year they were almost equal in usage, and I have the feeling things are swaying even more towards IDEA.

IDEA is like the iPhone of IDEs – its users tell you that “you will feel how much better it is once you get used to it”, “are you STILL using Eclipse??”, “IDEA is so much better, I thought everyone has switched”, etc.

I’ve been using mostly Eclipse for the past 12 years, but in some cases I did use IDEA – when I was writing Scala, when I was writing Android, and most recently – when Eclipse failed to be ready for the Java 9 release, so after half a day of trying to get it working, I just switched to IDEA until Eclipse finally gets a working Java 9 version (with Maven and the rest of the stuff).

But I will get back to Eclipse again, soon. And I still prefer it. Not just because of all the key combinations I’ve internalized (you can reuse those in IDEA), but because there are still things I find worse in IDEA. Of course, IDEA has so much more cool features like code improvement suggestions and actually working plugins for everything. But at least some of the problems I see have to do with the more basic development workflow and experience. And you can’t compensate for those with sugarcoating. So here they are:

  • Projects are not automatically built (by default), so you can end up with compilation errors that you don’t see until you open a non-compiling file or run a build. And turning the autobuild on makes my machine crawl. I know I need an upgrade, but that’s not the point – not having “build on change” was a huge surprise to me the first time I tried IDEA. I recently complained about that on twitter and it turns out “it’s a feature”. The rationale seems to be that if you use refactoring, that shouldn’t happen. Well, there are dozens of cases when it does happen. Refactoring by adding a method parameter, by changing the type of a parameter, by removing a parameter (where the IDE can’t infer which parameter is removed based on the types), by changing return types. Also, a change in maven/gradle dependencies may introduce compilation issues that you don’t get to see. This is not a reasonable default at all, and I think the performance issues are the only reason it’s still the default. I think this makes the experience much worse.
  • You can have only one project per screen. Maybe there are those small companies with greenfield projects where you only need one. But I’ve never been in a situation, where you don’t at least occasionally need a separate project. Be it an “experiments” one, a “tools” one, or whatever. And no, multi-module maven projects (which IDEA handles well) are not sufficient. So each time you need to step out of your main project, you launch another screen. Apart from the bad usability, it’s double the memory, double the fun.
  • Speaking of memory, it seems to be taking more memory than Eclipse. I don’t have representative benchmarks of that, and I know that my 8 GB RAM home machine is way too small for development nowadays, but still.
  • It feels less responsive and clunky. There is some minor delay that I can’t define well, but “I feel it”. I read somewhere that they were excessively repainting the screen elements, so that might be the explanation. Eclipse feels smoother (I know that’s not a proper argument, but I can’t be more precise).
  • Due to some extra cleverness, I have “unused methods” and “never assigned fields” all around the project. It uses spring, so these methods and fields are controller methods and autowired fields. Maybe some spring plugin would take care of that, but spring is not the only framework that uses reflection. Even getters and setters on POJOs get the unused warnings. What’s the problem with those warnings? That warnings are devalued. They don’t mean anything now. There isn’t a “yellow” indicator on the class either, so you don’t actually see the amount of warnings you have. Eclipse displays warnings better, and the false positives are much less.
  • The call hierarchy is slightly worse. But since that’s the most important IDE feature for me (alongside refactoring), it matters. It doesn’t give you the call hierarchy of default constructors that are not explicitly defined. Also, from what I’ve seen IDEA users don’t often use the call hierarchy feature. “Find usage” I think predates the call hierarchy, and is also much more visible through the UI, so some of the IDEA users don’t even know what a call hierarchy is. And repeatedly do “find usage”. That’s only partly the IDE’s fault.
  • No search in the output console. Come on, why do I have an IDE where I have to copy the output and paste it in a text editor in order to search? Now, to clarify, the console does have search. But when I run my (spring-boot) application, it outputs stuff in a panel at the bottom that is not the console and doesn’t have search.
  • CTRL+arrows by default jumps over whole words, and not camel cased words. This is configurable, but is yet another odd default. You almost always want to be able to traverse your variables word by word (in camel case), rather than skipping over the whole variable (method/class) name.
  • A few years ago when I used it for Scala, the project never actually compiled. But I guess that’s more Scala’s fault than the IDE’s.

Apart from the first two, the rest are not major issues, I agree. But they add up. Ultimately, it’s a matter of personal choice whether you can turn a blind eye to these issues. But I’m getting back to Eclipse again. At some point I will propose improvements in the IntelliJ IDEA backlog and will check it again in a few years, I guess.

The post I Still Prefer Eclipse Over IntelliJ IDEA appeared first on Bozho's tech blog.

Ethereum Parity Bug Destroys Over $250 Million In Tokens

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/11/ethereum-parity-bug-destroys-250-million-tokens/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

Ethereum Parity Bug Destroys Over $250 Million In Tokens

If you are into cryptocurrency or blockchain at all, you will have heard about the Ethereum Parity Bug that has basically thrown $280 million or more worth of Ethereum tokens in the bin.

It’s a bit of a mess really, and a mistake by the developers who introduced it after fixing another bug back in July to do with multisig wallets (wallets where multiple people have to agree to transactions).

You can see the thread on Github here: anyone can kill your contract #6995

There’s a lot of hair-pulling among Ethereum alt-coin hoarders today – after a programming blunder in Parity’s wallet software let one person bin $280m of the digital currency belonging to scores of strangers, probably permanently.

Read the rest of Ethereum Parity Bug Destroys Over $250 Million In Tokens now! Only available at Darknet.

Me on the Equifax Breach

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/me_on_the_equif.html

Testimony and Statement for the Record of Bruce Schneier
Fellow and Lecturer, Belfer Center for Science and International Affairs, Harvard Kennedy School
Fellow, Berkman Center for Internet and Society at Harvard Law School

Hearing on “Securing Consumers’ Credit Data in the Age of Digital Commerce”

Before the

Subcommittee on Digital Commerce and Consumer Protection
Committee on Energy and Commerce
United States House of Representatives

1 November 2017
2125 Rayburn House Office Building
Washington, DC 20515

Mister Chairman and Members of the Committee, thank you for the opportunity to testify today concerning the security of credit data. My name is Bruce Schneier, and I am a security technologist. For over 30 years I have studied the technologies of security and privacy. I have authored 13 books on these subjects, including Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (Norton, 2015). My popular newsletter CryptoGram and my blog Schneier on Security are read by over 250,000 people.

Additionally, I am a Fellow and Lecturer at the Harvard Kennedy School of Government –where I teach Internet security policy — and a Fellow at the Berkman-Klein Center for Internet and Society at Harvard Law School. I am a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of Electronic Privacy Information Center and VerifiedVoting.org. I am also a special advisor to IBM Security and the Chief Technology Officer of IBM Resilient.

I am here representing none of those organizations, and speak only for myself based on my own expertise and experience.

I have eleven main points:

1. The Equifax breach was a serious security breach that puts millions of Americans at risk.

Equifax reported that 145.5 million US customers, about 44% of the population, were impacted by the breach. (That’s the original 143 million plus the additional 2.5 million disclosed a month later.) The attackers got access to full names, Social Security numbers, birth dates, addresses, and driver’s license numbers.

This is exactly the sort of information criminals can use to impersonate victims to banks, credit card companies, insurance companies, cell phone companies and other businesses vulnerable to fraud. As a result, all 143 million US victims are at greater risk of identity theft, and will remain at risk for years to come. And those who suffer identity theft will have problems for months, if not years, as they work to clean up their name and credit rating.

2. Equifax was solely at fault.

This was not a sophisticated attack. The security breach was a result of a vulnerability in the software for their websites: a program called Apache Struts. The particular vulnerability was fixed by Apache in a security patch that was made available on March 6, 2017. This was not a minor vulnerability; the computer press at the time called it “critical.” Within days, it was being used by attackers to break into web servers. Equifax was notified by Apache, US CERT, and the Department of Homeland Security about the vulnerability, and was provided instructions to make the fix.

Two months later, Equifax had still failed to patch its systems. It eventually got around to it on July 29. The attackers used the vulnerability to access the company’s databases and steal consumer information on May 13, over two months after Equifax should have patched the vulnerability.

The company’s incident response after the breach was similarly damaging. It waited nearly six weeks before informing victims that their personal information had been stolen and they were at increased risk of identity theft. Equifax opened a website to help aid customers, but the poor security around that — the site was at a domain separate from the Equifax domain — invited fraudulent imitators and even more damage to victims. At one point, the official Equifax communications even directed people to that fraudulent site.

This is not the first time Equifax failed to take computer security seriously. It confessed to another data leak in January 2017. In May 2016, one of its websites was hacked, resulting in 430,000 people having their personal information stolen. Also in 2016, a security researcher found and reported a basic security vulnerability in its main website. And in 2014, the company reported yet another security breach of consumer information. There are more.

3. There are thousands of data brokers with similarly intimate information, similarly at risk.

Equifax is more than a credit reporting agency. It’s a data broker. It collects information about all of us, analyzes it all, and then sells those insights. It might be one of the biggest, but there are 2,500 to 4,000 other data brokers that are collecting, storing, and selling information about us — almost all of them companies you’ve never heard of and have no business relationship with.

The breadth and depth of information that data brokers have is astonishing. Data brokers collect and store billions of data elements covering nearly every US consumer. Just one of the data brokers studied holds information on more than 1.4 billion consumer transactions and 700 billion data elements, and another adds more than 3 billion new data points to its database each month.

These brokers collect demographic information: names, addresses, telephone numbers, e-mail addresses, gender, age, marital status, presence and ages of children in household, education level, profession, income level, political affiliation, cars driven, and information about homes and other property. They collect lists of things we’ve purchased, when we’ve purchased them, and how we paid for them. They keep track of deaths, divorces, and diseases in our families. They collect everything about what we do on the Internet.

4. These data brokers deliberately hide their actions, and make it difficult for consumers to learn about or control their data.

If there were a dozen people who stood behind us and took notes of everything we purchased, read, searched for, or said, we would be alarmed at the privacy invasion. But because these companies operate in secret, inside our browsers and financial transactions, we don’t see them and we don’t know they’re there.

Regarding Equifax, few consumers have any idea what the company knows about them, who they sell personal data to or why. If anyone knows about them at all, it’s about their business as a credit bureau, not their business as a data broker. Their website lists 57 different offerings for business: products for industries like automotive, education, health care, insurance, and restaurants.

In general, options to “opt-out” don’t work with data brokers. It’s a confusing process, and doesn’t result in your data being deleted. Data brokers will still collect data about consumers who opt out. It will still be in those companies’ databases, and will still be vulnerable. It just won’t be included individually when they sell data to their customers.

5. The existing regulatory structure is inadequate.

Right now, there is no way for consumers to protect themselves. Their data has been harvested and analyzed by these companies without their knowledge or consent. They cannot improve the security of their personal data, and have no control over how vulnerable it is. They only learn about data breaches when the companies announce them — which can be months after the breaches occur — and at that point the onus is on them to obtain credit monitoring services or credit freezes. And even those only protect consumers from some of the harms, and only those suffered after Equifax admitted to the breach.

Right now, the press is reporting “dozens” of lawsuits against Equifax from shareholders, consumers, and banks. Massachusetts has sued Equifax for violating state consumer protection and privacy laws. Other states may follow suit.

If any of these plaintiffs win in court, it will be a rare victory for victims of privacy breaches against the companies that have our personal information. Current law is too narrowly focused on people who have suffered financial losses directly traceable to a specific breach. Proving this is difficult. If you are the victim of identity theft in the next month, is it because of Equifax or does the blame belong to another of the thousands of companies who have your personal data? As long as one can’t prove it one way or the other, data brokers remain blameless and liability free.

Additionally, much of this market in our personal data falls outside the protections of the Fair Credit Reporting Act. And in order for the Federal Trade Commission to levy a fine against Equifax, it needs to have a consent order and then a subsequent violation. Any fines will be limited to credit information, which is a small portion of the enormous amount of information these companies know about us. In reality, this is not an effective enforcement regime.

Although the FTC is investigating Equifax, it is unclear if it has a viable case.

6. The market cannot fix this because we are not the customers of data brokers.

The customers of these companies are people and organizations who want to buy information: banks looking to lend you money, landlords deciding whether to rent you an apartment, employers deciding whether to hire you, companies trying to figure out whether you’d be a profitable customer — everyone who wants to sell you something, even governments.

Markets work because buyers choose from a choice of sellers, and sellers compete for buyers. None of us are Equifax’s customers. None of us are the customers of any of these data brokers. We can’t refuse to do business with the companies. We can’t remove our data from their databases. With few limited exceptions, we can’t even see what data these companies have about us or correct any mistakes.

We are the product that these companies sell to their customers: those who want to use our personal information to understand us, categorize us, make decisions about us, and persuade us.

Worse, the financial markets reward bad security. Given the choice between increasing their cybersecurity budget by 5%, or saving that money and taking the chance, a rational CEO chooses to save the money. Wall Street rewards those whose balance sheets look good, not those who are secure. And if senior management gets unlucky and a public breach happens, they end up okay. Equifax’s CEO didn’t get his $5.2 million severance pay, but he did keep his $18.4 million pension. Any company that spends more on security than absolutely necessary is immediately penalized by shareholders when its profits decrease.

Even the negative PR that Equifax is currently suffering will fade. Unless we expect data brokers to put public interest ahead of profits, the security of this industry will never improve without government regulation.

7. We need effective regulation of data brokers.

In 2014, the Federal Trade Commission recommended that Congress require data brokers be more transparent and give consumers more control over their personal information. That report contains good suggestions on how to regulate this industry.

First, Congress should help plaintiffs in data breach cases by authorizing and funding empirical research on the harm individuals receive from these breaches.

Congress should also move forward with legislative proposals that establish a nationwide “credit freeze” — which is better described as changing the default for disclosure from opt-out to opt-in — and free lifetime credit monitoring services. By this I do not mean giving customers free credit-freeze options, as proposed by Senators Warren and Schatz, but that the default should be a credit freeze.

The credit card industry routinely notifies consumers when there are suspicious charges. It is obvious that credit reporting agencies should have a similar obligation to notify consumers when there is suspicious activity concerning their credit report.

On the technology side, more could be done to limit the amount of personal data companies are allowed to collect. Increasingly, privacy safeguards impose “data minimization” requirements to ensure that only the data that is actually needed is collected. On the other hand, Congress should not create a new national identifier to replace Social Security numbers. That would make the system of identification even more brittle. Better is to reduce dependence on systems of identification and to create contextual identification where necessary.

Finally, Congress needs to give the Federal Trade Commission the authority to set minimum security standards for data brokers and to give consumers more control over their personal information. This is essential as long as consumers are these companies’ products and not their customers.

8. Resist complaints from the industry that this is “too hard.”

The credit bureaus and data brokers, and their lobbyists and trade-association representatives, will claim that many of these measures are too hard. They’re not telling you the truth.

Take one example: credit freezes. This is an effective security measure that protects consumers, but the process of getting one and of temporarily unfreezing credit is made deliberately onerous by the credit bureaus. Why isn’t there a smartphone app that alerts me when someone wants to access my credit rating, and lets me freeze and unfreeze my credit at the touch of the screen? Too hard? Today, you can have an app on your phone that does something similar if you try to log into a computer network, or if someone tries to use your credit card at a physical location different from where you are.
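
As a purely illustrative sketch, and nothing any bureau actually offers, the logic such an app needs is minimal. The CreditFile class and its methods below are inventions for the sake of the example, not a real API:

```python
from dataclasses import dataclass, field


@dataclass
class CreditFile:
    """Hypothetical consumer credit file, frozen by default (disclosure is opt-in)."""
    owner: str
    frozen: bool = True
    pending: list = field(default_factory=list)

    def request_access(self, requester: str) -> str:
        """A lender asks to pull the report; the consumer gets an alert, not silence."""
        self.pending.append(requester)
        return f"Push notification to {self.owner}: {requester} wants your credit report."

    def respond(self, requester: str, approve: bool) -> bool:
        """One tap approves a single pull or denies it; the file stays frozen either way."""
        if requester in self.pending:
            self.pending.remove(requester)
            return approve
        return False


f = CreditFile(owner="alice")
print(f.request_access("Example Bank"))
print("report released" if f.respond("Example Bank", approve=True) else "pull denied")
```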

Moreover, any credit bureau or data broker operating in Europe is already obligated to follow the EU’s more rigorous privacy laws. The EU General Data Protection Regulation will come into force in May 2018, requiring even more security and privacy controls for companies collecting and storing the personal data of EU citizens. Those companies have already demonstrated that they can comply with those more stringent regulations.

Credit bureaus, and data brokers in general, are deliberately not implementing these 21st-century security solutions, because they want their services to be as easy and useful as possible for their actual customers: those who are buying your information. Similarly, companies that use this personal information to open accounts are not implementing more stringent security because they want their services to be as easy-to-use and convenient as possible.

9. This has foreign trade implications.

The Canadian Broadcasting Corporation reported that 100,000 Canadians had their data stolen in the Equifax breach. The British Broadcasting Corporation originally reported that 400,000 UK consumers were affected; Equifax has since revised that figure to 15.2 million.

Many American Internet companies have significant numbers of European users and customers, and rely on negotiated safe harbor agreements to legally collect and store personal data of EU citizens.

The European Union is in the middle of a massive regulatory shift in its privacy laws, and those agreements are coming under renewed scrutiny. Breaches such as Equifax give these European regulators a powerful argument that US privacy regulations are inadequate to protect their citizens’ data, and that they should require that data to remain in Europe. This could significantly harm American Internet companies.

10. This has national security implications.

Although it is still unknown who compromised the Equifax database, it could easily have been a foreign adversary that routinely attacks the servers of US companies and US federal agencies with the goal of exploiting security vulnerabilities and obtaining personal data.

When the Fair Credit Reporting Act was passed in 1970, the concern was that the credit bureaus might misuse our data. That is still a concern, but the world has changed since then. Credit bureaus and data brokers have far more intimate data about all of us. And it is valuable not only to companies wanting to advertise to us, but to foreign governments as well. In 2015, the Chinese breached the database of the Office of Personnel Management and stole the detailed security clearance information of 21 million Americans. North Korea routinely engages in cybercrime as a way to fund its other activities. In a world where foreign governments use cyber capabilities to attack US assets, requiring data brokers to limit collection of personal data, securely store the data they collect, and delete data about consumers when it is no longer needed is a matter of national security.

11. We need to do something about it.

Yes, this breach is a huge black eye for Equifax, and a temporary dip in its stock price — this month. Soon, another company will have suffered a massive data breach and few will remember Equifax’s problem. Does anyone remember last year when Yahoo admitted that it exposed personal information of a billion users in 2013 and another half billion in 2014?

Unless Congress acts to protect consumer information in the digital age, these breaches will continue.

Thank you for the opportunity to testify today. I will be pleased to answer your questions.

Concerning a Statement by the Conservancy (Software Freedom Law Center Blog)

Post Syndicated from jake original https://lwn.net/Articles/738279/rss

The Software Freedom Law Center (SFLC) has responded to a recent blog post from the Software Freedom Conservancy (SFC) regarding the SFC’s trademark. SFLC has asked the US Patent and Trademark Office (PTO) to cancel the SFC trademark due to a likelihood of confusion between the two marks; SFC posted about the action on its blog. Now, SFLC is telling its side of the story: “At the end of September, SFLC notified the US Patent and Trademark Office that we have an actual confusion problem caused by the trademark ‘Software Freedom Conservancy,’ which is confusingly similar to our own pre-existing trademark. US trademark law is all about preventing confusion among sources and suppliers of goods and services in the market. Trademark law acts to provide remedies against situations that create likelihood of, as well as actual, confusion. When you are a trademark holder, if a recent mark junior to yours causes likelihood of or actual confusion, you have a right to inform the PTO that the mark has issued in error, because that’s not supposed to happen. This act of notifying the PTO of a subsequently-issued mark that is causing actual confusion is called a petition to cancel the trademark. That’s not some more aggressive choice that the holder has made; it is not an attack, let alone a ‘bizarre’ attack, on anybody. That’s the name of the process by which the trademark holder gets the most basic value of the trademark, which is the right to abate confusion caused by the PTO itself.”

MPAA: Almost 70% of 38 Million Kodi Users Are Pirates

Post Syndicated from Andy original https://torrentfreak.com/mpaa-almost-70-of-38-million-kodi-users-are-pirates-171104/

As torrents and other forms of file-sharing resolutely simmer away in the background, it is the streaming phenomenon that’s taking the Internet by storm.

This Tuesday, in a report by Canadian broadband management company Sandvine, it was revealed that IPTV traffic has grown to massive proportions.

Sandvine found that 6.5% of households in North America are now communicating with known TV piracy services. This translates to seven million subscribers and many more potential viewers. There’s little doubt that IPTV and all its variants, Kodi streaming included, are here to stay.

The topic was raised again Wednesday during a panel discussion hosted by the Copyright Alliance in conjunction with the Creative Rights Caucus. Titled “Copyright Pirates’ New Strategies”, the discussion’s promotional graphic indicates some of the industry heavyweights in attendance.

The Copyright Alliance tweeted points from the discussion throughout the day and soon the conversation turned to the streaming phenomenon that has transformed piracy in recent times.

The MPAA’s Senior Vice President of Government and Regulatory Affairs, Neil Fried, was present to describe streaming devices and apps, which the MPAA has previously dubbed Piracy 3.0, as the latest development in TV and movie piracy.

Like many before him, Fried explained that the Kodi platform in its basic form is legal. However, he noted that many of the add-ons for the media player provide access to pirated content, a point proven in a big screen demo.

Kodi demo by the MPAA via Copyright Alliance

According to the Copyright Alliance, Fried then delivered some interesting stats. The MPAA believes that there are around 38 million users of Kodi in the world, which sounds like a reasonable figure given that the system has been around for 15 years in various guises, including during its XBMC branding.

However, he also claimed that of those 38 million, a substantial 26 million users have piracy addons installed. That suggests around 68.4%, or roughly seven out of ten, of all Kodi users are pirating movies, TV shows, and other media. Taking the MPAA’s statement to its conclusion, only 12 million Kodi users are operating the software legitimately.

TorrentFreak contacted XBMC Foundation President Nathan Betzen for his stance on the figures but he couldn’t shine much light on usage.

“Unfortunately I do not have an up to date number on users, and because we don’t watch what our users are doing, we have no way of knowing how many do what with regards to streaming. [The MPAA’s] numbers could be completely correct or totally made up. We have no real way to know,” Betzen said.

That being said, the team does have the capability to monitor overall Kodi usage, even if they don’t publish the stats. This was revealed back in June 2011 when Kodi was still called XBMC.

“The addon system gives us the opportunity to measure the popularity of addons, measure user base, estimate the frequency that people update their systems, and even, ultimately, help users find the more popular addons,” the team wrote.

“Most interestingly, for the purposes of this post, is that we can get a pretty good picture of how many active XBMC installs there are without having to track what each individual user does.”

Using this system, the team concluded there were roughly 435,000 active XBMC instances around the globe in April 2011, but that figure was to swell dramatically. Just three months later, 789,000 XBMC installations had been active in the previous six weeks.
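
The team has never published how that counting works, so the sketch below is only a guess at the mechanics: if each install checks a mirror for addon updates using a stable anonymous identifier, counting distinct identifiers inside a time window gives an active-install figure without logging what anyone watches. The function and data here are made up for illustration.

```python
from datetime import datetime, timedelta


def active_installs(ping_log, window_days=42, now=None):
    """ping_log: iterable of (install_id, timestamp) update checks.

    Counts distinct installs seen in the last window_days -- roughly the
    "active in the previous six weeks" figure quoted above.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)
    return len({install_id for install_id, ts in ping_log if ts >= cutoff})


log = [("a", datetime(2011, 6, 20)), ("b", datetime(2011, 6, 28)),
       ("a", datetime(2011, 6, 30)), ("c", datetime(2011, 1, 1))]
print(active_installs(log, now=datetime(2011, 7, 1)))  # 2 -- "c" has gone quiet
```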

What’s staggering is that in 2017, the MPAA claims that there are now 38 million users of Kodi, of which 26 million are pirates. In the absence of any figures from the Kodi team, TF asked Kodi addon repository TVAddons what they thought of the MPAA’s stats.

“We’ve always banned the use of analytics within Kodi addons, so it’s really impossible to make such an estimate. It seems like the MPAA is throwing around numbers without much statistical evidence while mislabelling Kodi users as ‘pirate’ in the same way that they have mislabelled legitimate services like CloudFlare,” a spokesperson said.

“As far as general addon use goes, before our repository server (which contained hundreds of legitimate addons) was unlawfully seized, it had about 39 million active users per month, but even we don’t know how many users downloaded which addons. We never allowed for addon statistics for users because they are invasive to privacy and breed unhealthy competition.”

So, it seems that while there is some dispute over the number of potential pirates, there does at least appear to be some consensus on the number of users overall. The big question, however, is how groups like the MPAA will deal with this kind of unauthorized infringement in future.

At the moment the big push is to paint pirate platforms as dangerous places to be. Indeed, during the discussion this week, Copyright Alliance CEO Keith Kupferschmid claimed that users of pirate services are “28 times more likely” to be infected with malware.

Whether that strategy will pay off remains unclear but it’s obvious that at least for now, Piracy 3.0 is a massive deal, one that few people saw coming half a decade ago but is destined to keep growing.


Gladys Project: a Raspberry Pi home assistant

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/gladys-project-home-assistant/

If, like me, you’re a pretty poor time-keeper with the uncanny ability to never get up when your alarm goes off and yet still somehow make it to work just in time — a little dishevelled, brushing your teeth in the office bathroom — then you too need Gladys.

Raspberry Pi home assistant

Over the last year, we’ve seen off-the-shelf home assistants make their way onto the Raspberry Pi. With the likes of Amazon Alexa, Google Home, and Siri, it’s becoming ever easier to tell the air around you to “Turn off the bathroom light” or “Resume my audiobook”, and it happens without you lifting a finger. It’s quite wonderful. And alongside these big names are several home-brew variants, such as Jarvis and Jasper, which were developed to run on a Pi in order to perform home automation tasks.

So do we need another such service? Sure! And here’s why…

A Romantic Mode with your Home Assistant Gladys !

A simple romantic mode in Gladys! See https://gladysproject.com for more information about the project 🙂 Devices used: a $5 Xiaomi Switch Button, a Raspberry Pi 3 with Gladys on it, and connected lights (works with Philips Hue, Milight lamps, etc.).

Gladys Project

According to the Gladys creators’ website, Gladys Project is ‘an open-source program which runs on your Raspberry Pi. It communicates with all your devices and checks your calendar to help you in your everyday life’.

Gladys does the basic day-to-day life maintenance tasks that I need handled in order to exist without my mum there to remind me to wake up in time for work. And, as you can see from the video above, it also plays some mean George Michael.

A screenshot of a mobile phone showing the Gladys app - Gladys Project home assistant

Gladys can help run your day from start to finish, taking into consideration road conditions and travel time to ensure you’re never late, regardless of external influences. It takes you 30 minutes to get ready and another 30 minutes to drive to work for 9.00? OK, but today there’s a queue on the motorway, and now your drive time is looking to be closer to an hour. Thankfully, Gladys has woken you up a half hour earlier, so you’re still on time. Isn’t that nice of her? And while you’re showering and mourning those precious stolen minutes of sleep, she’s opening the blinds and brewing coffee for you. Thanks, mum!
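
The scheduling logic behind that behaviour is pleasingly simple. Here is a rough sketch of the idea, my own simplification rather than Gladys’s actual code, working backwards from the first calendar event of the day with a commute estimate that would come from a live traffic source:

```python
from datetime import datetime, timedelta


def wake_up_time(first_event: datetime, prep_minutes: int, commute_minutes: int) -> datetime:
    """Alarm time = first event of the day minus preparation and commute time.

    commute_minutes would come from a live traffic API, so a jam on the
    motorway automatically pulls the alarm earlier.
    """
    return first_event - timedelta(minutes=prep_minutes + commute_minutes)


meeting = datetime(2017, 11, 6, 9, 0)
print(wake_up_time(meeting, prep_minutes=30, commute_minutes=30))  # 08:00 on a normal day
print(wake_up_time(meeting, prep_minutes=30, commute_minutes=60))  # 07:30 when traffic is bad
```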

A screenshot of the Gladys hub on the Raspberry Pi - Gladys Project home assistant

Set the parameters of your home(s) using the dedicated hub.

Detecting your return home at the end of the day, Gladys runs your pre-set evening routine. Then, once you place your phone on an NFC tag to indicate bedtime, she turns off the lights and, if your nighttime preferences dictate it, starts the whale music playlist, sending you into a deep, stressless slumber.

A screenshot of Etcher showing the install process of the Gladys image - Gladys Project home assistant

Gladys comes as a pre-built Raspbian image, ready to be cloned to an SD card.

Gladys is free to download from the Gladys Project website and is compatible with smart devices such as Philips Hue lightbulbs, WeMo Insight Switches, and the ever-tricky-to-control-without-the-official-app Sonos speakers!

Automate and chill

Which tasks and devices in your home do you control with a home assistant? Do you love sensor-controlled lighting which helps you save on electricity? How about working your way through an audiobook as you do your housework, requesting a pause every time you turn on the vacuum cleaner?

Share your experiences with us in the comments below, and if you’ve built a home assistant for Raspberry Pi, or use an existing setup to run your household, share that too.

And, as ever, if you want to keep up to date with Raspberry Pi projects from across the globe, be sure to follow us on social media, sign up to our weekly newsletter, the Raspberry Pi Weekly, and check out The MagPi, the official magazine of the Raspberry Pi community, available in stores or as a free PDF download.

The post Gladys Project: a Raspberry Pi home assistant appeared first on Raspberry Pi.

Fraud Detection in Pokémon Go

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/fraud_detection.html

I play Pokémon Go. (There, I’ve admitted it.) One of the interesting aspects of the game I’ve been watching is how its publisher, Niantic, deals with cheaters.

There are three basic types of cheating in Pokémon Go. The first is botting, where a computer plays the game instead of a person. The second is spoofing, which is faking GPS to convince the game that you’re somewhere you’re not. These two cheats are often used together — and you see the results in the many high-level accounts for sale on the Internet. The third type of cheating is the use of third-party apps like trackers to get extra information about the game.

None of this would matter if everyone played independently. The only reason any player cares about whether other players are cheating is that there is a group aspect of the game: gym battling. Everyone’s enjoyment of that part of the game is affected by cheaters who can pretend to be where they’re not, especially if they have lots of powerful Pokémon that they collected effortlessly.

Niantic has been trying to deal with this problem since the game debuted, mostly by banning accounts when it detects cheating. Its initial strategy was basic — algorithmically detecting impossibly fast travel between physical locations or superhuman amounts of playing, and then banning those accounts — with limited success. The limiting factor in all of this is false positives. While Niantic wants to stop cheating, it doesn’t want to block or limit any legitimate players. This makes it a very difficult problem, and contributes to the balance in the attacker/defender arms race.
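
That first-generation check is easy to picture. The sketch below is not Niantic’s real code, just a minimal illustration of the idea: flag any account whose reported GPS fixes imply a speed no human could plausibly manage.

```python
from math import radians, sin, cos, asin, sqrt

MAX_PLAUSIBLE_KMH = 900  # faster than an airliner: almost certainly GPS spoofing


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))


def looks_like_spoofing(fixes):
    """fixes: chronological list of (timestamp_seconds, lat, lon) for one account."""
    for (t1, la1, lo1), (t2, la2, lo2) in zip(fixes, fixes[1:]):
        hours = max(t2 - t1, 1) / 3600
        if haversine_km(la1, lo1, la2, lo2) / hours > MAX_PLAUSIBLE_KMH:
            return True
    return False


# New York to Paris in ten minutes is not a bus ride.
print(looks_like_spoofing([(0, 40.71, -74.00), (600, 48.85, 2.35)]))  # True
```

The hard part, as noted above, isn’t writing the rule; it’s picking a threshold that never snags a legitimate player on a plane or a fast train.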

Recently, Niantic implemented two new anti-cheating measures. The first is machine learning to detect cheaters. About this, we know little. The second is to limit the functionality of cheating accounts rather than ban them outright, making it harder for cheaters to know when they’ve been discovered.

“This may very well be the beginning of Niantic’s machine learning approach to active bot countering,” user Dronpes writes on The Silph Road subreddit. “If the parameters for a shadowban are constantly adjusted server-side, as they can now easily be, then Niantic’s machine learning engineers can train their detection (classification) algorithms in ever-improving, ever more aggressive ways, and botters will constantly be forced to re-evaluate what factors may be triggering the detection.”

One of the expected future features in the game is trading. Creating a market for rare or powerful Pokémon would add a huge additional financial incentive to cheat. Unless Niantic can effectively prevent botting and spoofing, it’s unlikely to implement that feature.

Cheating detection in augmented reality games is going to be a constant problem as these games become more popular, especially if there are ways to monetize the results of cheating. This means that cheater detection will continue to be a critical component of these games’ success. Anything Niantic learns in Pokémon Go will be useful in whatever games come next.

Mystic, level 39 — if you must know.

And, yes, I know the game works by tracking your location. I’m all right with that. As I repeatedly say, Internet privacy is all about trade-offs.

Weekly roundup: Odyssey, you see

Post Syndicated from Eevee original https://eev.ee/dev/2017/11/01/weekly-roundup-odyssey-you-see/

Dammit, another video game came out.

  • fox flux: Some nitpicks to the landing frames, and copying them to every other form (augh). Finished up another form entirely, hallelujah. Very little left now. I think last week is also when I pixeled out a few more experimental characters.

  • cc: More sprite animation UI work, which is incredibly tedious oh my goodness. I spent a day investigating Mecanim’s suitability for sprite animation again, and ultimately concluded… no. Good use of time.

  • blog: I, ah, started on my final October post. Should be done shortly.

  • art: The doodling continues! The best results are NSFW, alas, but I did make this quick relatable comic. Also this good face.

  • writing: I have begun work on a Twine. Okay, well, last week I basically just wrote a bunch of custom JavaScript for it and zero actual prose, but it’s still work.

Kügler: Plasma Mobile Roadmap

Post Syndicated from jake original https://lwn.net/Articles/737828/rss

On his blog, Sebastian Kügler sets out a roadmap for Plasma Mobile, which is a project that “aims to become a complete and open software system for mobile devices”. There is already a prototype version available; the next step is the “feature phone” milestone (which will be followed by the “basic smartphone” and “featured smartphone” milestones). “The feature phone milestone is what we’re working on right now. This involves taking the prototype and fixing all the basic things to turn it into something usable. Usable doesn’t mean ‘usable for everyone’, but it should at least be workable for a subset of people that only rely on basic features — ‘simple’ things.
Core features should work flawlessly once this milestone is achieved. With core features, we’re thinking along the lines of making phone calls, using the address book, manage hardware functions such as network connectivity, volume, screen, time, language, etc.. Aside from these very core things for a phone, we want to provide decent integration with a webbrowser (or provide our own), app store integration likely using store.kde.org, so you can get apps on and off the device, taking photos, recording videos and watching these media. Finally, we want to settle for an SDK which allows third party developers to build apps to run on Plasma Mobile devices.
Getting this to work is no small feat, but it allows us to receive real-world feedback and provide a stable base for third-party products. It makes Plasma Mobile a viable target for future product development.”

Assassin’s Creed Origins DRM Hammers Gamers’ CPUs

Post Syndicated from Andy original https://torrentfreak.com/assassins-creed-origin-drm-hammers-gamers-cpus-171030/

There’s a war taking place on the Internet. On one side: gaming companies, publishers, and anti-piracy outfits. On the other: people who, for varying reasons, want to play and/or test games for free.

While these groups are free to battle it out in a manner of their choosing, innocent victims are getting caught up in the crossfire. People who pay for their games without question should be considered part of the solution, not the problem, but whether they like it or not, they’re becoming collateral damage in an increasingly desperate conflict.

For the past several days, some players of the recently-released Assassin’s Creed Origins have emerged as what appear to be examples of this phenomenon.

“What is the normal CPU usage for this game?” a user asked on Steam forums. “I randomly get between 60% to 90% and I’m wondering if this is too high or not.”

The individual reported running an i7 processor, which is no slouch. However, for those running a CPU with less oomph, matters are even worse. Another gamer, running an i5, reported a 100% load on all four cores of his processor, even when lower graphics settings were selected in an effort to free up resources.

“It really doesn’t seem to matter what kind of GPU you are using,” another complained. “The performance issues most people here are complaining about are tied to CPU getting maxed out 100 percent at all times. This results in FPS [frames per second] drops and stutter. As far as I know there is no workaround.”

So what could be causing these problems? Badly configured machines? Terrible coding on the part of the game maker?

According to Voksi, whose ‘Revolt’ team cracked Wolfenstein II: The New Colossus before its commercial release last week, it’s none of these. The entire problem is directly connected to desperate anti-piracy measures.

As widely reported (1,2), the infamous Denuvo anti-piracy technology has been taking a beating lately. Cracking groups are dismantling it in a matter of days, sometimes just hours, making the protection almost pointless. For Assassin’s Creed Origins, however, Ubisoft decided to double up, Voksi says.

“Basically, Ubisoft have implemented VMProtect on top of Denuvo, tanking the game’s performance by 30-40%, demanding that people have a more expensive CPU to play the game properly, only because of the DRM. It’s anti-consumer and a disgusting move,” he told TorrentFreak.

Voksi says he knows all of this because he got an opportunity to review the code after obtaining the binaries for the game. Here’s how it works.

While Denuvo sits underneath doing its thing, it’s clearly vulnerable to piracy, given recent advances in anti-anti-piracy technology. So, in a belt-and-braces approach, Ubisoft opted to deploy another technology – VMProtect – on top.

VMProtect is software that protects other software against reverse engineering and cracking. Although the technicalities are different, its aims appear to be somewhat similar to Denuvo, in that both seek to protect underlying systems from being subverted.

“VMProtect protects code by executing it on a virtual machine with non-standard architecture that makes it extremely difficult to analyze and crack the software. Besides that, VMProtect generates and verifies serial numbers, limits free upgrades and much more,” the company’s marketing reads.
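
To see why that costs CPU time, consider a deliberately tiny, made-up bytecode interpreter. Real VMProtect is vastly more elaborate, but the principle is the same: every original instruction becomes several interpreted steps, each paying dispatch overhead.

```python
import time

# A made-up three-opcode "protected" version of x += 1.
PUSH, ADD, STORE = range(3)
PROGRAM = [(PUSH, 1), (ADD, None), (STORE, None)]


def native(iterations):
    x = 0
    for _ in range(iterations):
        x += 1
    return x


def virtualized(iterations):
    x, stack = 0, []
    for _ in range(iterations):
        for op, arg in PROGRAM:        # dispatch loop: several steps per original op
            if op == PUSH:
                stack.append(arg)
            elif op == ADD:
                stack.append(stack.pop() + x)
            else:                      # STORE
                x = stack.pop()
    return x


n = 1_000_000
t0 = time.perf_counter(); native(n); t1 = time.perf_counter()
virtualized(n); t2 = time.perf_counter()
print(f"native: {t1 - t0:.3f}s  virtualized: {t2 - t1:.3f}s")
```

In this toy, the interpreted loop does identical work to the native one but takes several times longer, which is the kind of overhead legitimate players are reporting here.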

VMProtect and Denuvo didn’t appear to be getting on all that well earlier this year but they later settled their differences. Now their systems are working together, to try and solve the anti-piracy puzzle.

“It seems that Ubisoft decided that Denuvo is not enough to stop pirates in the crucial first days [after release] anymore, so they have implemented an iteration of VMProtect over it,” Voksi explains.

“This is great if you are looking to save your game from those pirates, because this layer of VMProtect will make Denuvo a lot more harder to trace and keygen than without it. But if you are a legit customer, well, it’s not that great for you since this combo could tank your performance by a lot, especially if you are using a low-mid range CPU. That’s why we are seeing 100% CPU usage on 4 core CPUs right now for example.”

The situation is reportedly so bad that some users are getting the dreaded BSOD (blue screen of death) due to their machines overheating after just an hour or two’s play. It remains unclear whether these crashes are indeed due to the VMProtect/Denuvo combination but the perception is that these anti-piracy measures are at the root of users’ CPU utilization problems.

While gaming companies can’t be blamed for wanting to protect their products, there’s no sense in punishing legitimate consumers with an inferior experience. The great irony, of course, is that when Assassin’s Creed gets cracked (if that indeed happens anytime soon), pirates will be the only ones playing it without the hindrance of two lots of anti-piracy tech battling over resources.

The big question now, however, is whether the anti-piracy wall will stand firm. If it does, it raises the bizarre proposition that future gamers might need to buy better hardware in order to accommodate anti-piracy technology.

And people worry about bitcoin mining…?
