DevOps Cafe Episode 77 – Damon interviews John

Post Syndicated from DevOpsCafeAdmin original http://devopscafe.org/show/2018/6/20/devops-cafe-episode-77-damon-interviews-john.html

Can we just go with Dev*Ops?

A new season of DevOps Cafe is here. The topic of this episode is “DevSecOps.” Damon interviews John about what this term means, why it matters now, and the overall state of security.

 

  

Direct download

Follow John Willis on Twitter: @botchagalupe
Follow Damon Edwards on Twitter: @damonedwards 

Notes:

 

Please tweet or leave comments or questions below and we’ll read them on the show!

[$] Getting along in the Python community

Post Syndicated from jake original https://lwn.net/Articles/757714/rss

In a session with a title that used a common misquote of Rodney
King (“can’t we all just get along?”), several
Python developers wanted to discuss an incident that had recently occurred
on the
python-dev mailing list. A rude posting to the list led to a thread that
got somewhat out of control. Some short tempers among the members of the
Python developer community likely escalated things unnecessarily. The
incident in question was brought up as something of an object lesson;
people should take some time to simmer down before firing off that quick,
but perhaps needlessly confrontational, reply.

[$] PEP 572 and decision-making in Python

Post Syndicated from jake original https://lwn.net/Articles/757713/rss

The “PEP 572 mess” was the topic of a 2018 Python Language Summit session
led by benevolent dictator for life (BDFL) Guido van Rossum. PEP 572 seeks to add
assignment expressions (or “inline assignments”) to the language, but it
has seen a prolonged
discussion over multiple huge threads on the python-dev mailing list—even
after multiple rounds on python-ideas.
Those threads were often contentious and were clearly voluminous to the
point where many probably just tuned them out.
At the summit, Van Rossum gave an overview of the
feature proposal, which he seems inclined toward accepting, but he also
wanted to
discuss how to avoid this kind of thread explosion in the future.
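
For anyone who tuned those threads out, here is a minimal sketch of what PEP 572 proposes. The := syntax below is taken from the PEP; it was still only a proposal at the time of the summit, and running it requires a Python that implements the PEP.

import re

pattern = re.compile(r"\d+")
data = "answer: 42"

# Status quo: bind the result on one line, test it on the next.
match = pattern.search(data)
if match:
    print(match.group(0))

# With PEP 572's assignment expression, the binding happens
# inline, inside the condition itself.
if (match := pattern.search(data)) is not None:
    print(match.group(0))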

Welcome to Fedora CoreOS

Post Syndicated from ris original https://lwn.net/Articles/757878/rss

Matthew Miller looks at how Red Hat’s acquisition of CoreOS will affect the
Fedora project. “This isn’t the place for technical details — see
“what next?” at the bottom of this message for more. I expect that over the
next year or so, Fedora Atomic Host will be replaced by a new thing
combining the best from Container Linux and Project Atomic. This
new thing will be “Fedora CoreOS” and serve as the upstream to Red
Hat CoreOS.”

SCADA Hacking – Industrial Systems Woefully Insecure

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/06/scada-hacking-industrial-systems-woefully-insecure/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

SCADA Hacking – Industrial Systems Woefully Insecure

It seems like SCADA hacking is still a topic at hacker conferences, and it should be, with SCADA systems still driving power stations, manufacturing plants, refineries, and all kinds of other powerful and dangerous things.

The latest talk given on the subject shows that, with just four lines of code and a small hardware drop device, a SCADA-based facility can be effectively DoSed by sending repeated shutdown commands to susceptible systems.

Read the rest of SCADA Hacking – Industrial Systems Woefully Insecure now! Only available at Darknet.

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/757876/rss

Security updates have been issued by Arch Linux (pass), Debian (xen), Fedora (chromium, cobbler, gnupg, kernel, LibRaw, mariadb, mingw-libtiff, nikto, and timidity++), Gentoo (chromium, curl, and transmission), Mageia (gnupg, gnupg2, librsvg, poppler, roundcubemail, and xdg-utils), Red Hat (ansible and glusterfs), Slackware (gnupg), SUSE (cobbler, dwr, java-1_8_0-ibm, kernel, microcode_ctl, pam-modules, salt, slf4j, and SMS3.1), and Ubuntu (libgcrypt11, libgcrypt11, libgcrypt20, and mozjs52).

New guide helps explain cloud security with AWS for public sector customers in India

Post Syndicated from Meng Chow Kang original https://aws.amazon.com/blogs/security/new-guide-helps-explain-cloud-security-with-aws-for-public-sector-customers-in-india/

Our teams are continuing to focus on compliance enablement around the world and now that includes a new guide for public sector customers in India. The User Guide for Government Departments and Agencies in India provides information that helps government users at various central, state, district, and municipal agencies understand security and controls available with AWS. It also explains how to implement appropriate information security, risk management, and governance programs using AWS Services, which are offered in India by Amazon Internet Services Private Limited (AISPL).

The guide focuses on the Ministry of Electronics and Information Technology (MeitY) requirements that are detailed in Guidelines for Government Departments for Adoption/Procurement of Cloud Services, addressing common issues that public sector customers encounter.

Our newest guide is part of a series diving into customer compliance issues across industries and jurisdictions, such as financial services guides for Singapore, Australia, and Hong Kong. We’ll be publishing additional guides this year to help you understand other regulatory requirements around the world.

Want more AWS Security news? Follow us on Twitter.

New data classification whitepaper available

Post Syndicated from Momena Cheema original https://aws.amazon.com/blogs/security/new-data-classification-whitepaper-available/

We’ve published a new whitepaper, Secure Cloud Adoption: Data Classification, to help governments address data classification. Data classification is a foundational step in cybersecurity risk management. It involves identifying the types of data that are being processed and stored in an information system owned or operated by an organization. It also involves making a determination about the sensitivity of the data and the likely impact arising from compromise, loss, or misuse.

While data classification has been used for decades to help organizations safeguard sensitive or critical data with appropriate levels of protection, some traditional classification approaches lacked specificity and would place large amounts of data of differing sensitivity under the same strict tier. Regardless of whether data is processed or stored in traditional, on-premises systems or in the cloud, data classification is a starting point for maintaining the confidentiality—and potentially the integrity and availability—of data based on the data’s risk impact level, so setting the right level of specificity matters.

This whitepaper is focused on best practices and the models governments can use to classify their data so they can more quickly move their computing workloads to the cloud. It describes the practices and models that have been implemented by early adopters, and it recommends practices to meet internationally recognized standards and frameworks.

If you have questions or want to learn more, contact your account executive or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

How AWS uses automated reasoning to help you achieve security at scale

Post Syndicated from Andrew Gacek original https://aws.amazon.com/blogs/security/protect-sensitive-data-in-the-cloud-with-automated-reasoning-zelkova/

At AWS, we focus on achieving security at scale to diminish risks to your business. Fundamental to this approach is ensuring your policies are configured in a way that helps protect your data, and the Automated Reasoning Group (ARG), an advanced innovation team at AWS, is using automated reasoning to do it.

What is automated reasoning, you ask? It’s a method of formal verification that automatically generates and checks mathematical proofs which help to prove the correctness of systems; that is, fancy math that proves things are working as expected. If you want a deeper understanding of automated reasoning, check out this re:Invent session. While the applications of this methodology are vast, in this post I’ll explore one specific aspect: analyzing policies using an internal Amazon service named Zelkova.

What is Zelkova? How will it help me?

Zelkova uses automated reasoning to analyze policies and the future consequences of policies. This includes AWS Identity and Access Management (IAM) policies, Amazon Simple Storage Service (S3) policies, and other resource policies. These policies dictate who can (or can’t) do what to which resources. Because Zelkova uses automated reasoning, you no longer need to think about what questions you need to ask about your policies. Using fancy math, as mentioned above, Zelkova will automatically derive the questions and answers you need to be asking about your policies, improving confidence in your security configuration(s).

How does it work?

Zelkova translates policies into precise mathematical language and then uses automated reasoning tools to check properties of the policies. These tools include automated reasoners called Satisfiability Modulo Theories (SMT) solvers, which use a mix of numbers, strings, regular expressions, dates, and IP addresses to prove and disprove logical formulas. Zelkova has a deep understanding of the semantics of the IAM policy language and builds upon a solid mathematical foundation. While tools like the IAM Policy Simulator let you test individual requests, Zelkova is able to use mathematics to talk about all possible requests. Other techniques guess and check, but Zelkova knows.

You may have noticed, as an example, the new “Public / Not public” checks in S3. These are powered by Zelkova:
 

Figure 1: the "public/Not public" checks in S3

Figure 1: the “Public/Not public” checks in S3

S3 uses Zelkova to check each bucket policy and warns you if an unauthorized user is able to read or write to your bucket. When a bucket is flagged as “Public”, there are some public requests that are allowed to access the bucket. However, when a bucket is flagged as “Not public”, all public requests are denied. Zelkova is able to make such statements because it has a precise mathematical representation of IAM policies. In fact, it creates a formula for each policy and proves a theorem about that formula.

Consider the following S3 bucket policy statement where my goal is to disallow a certain principal from accessing the bucket:


{
    "Effect": "Allow",
    "NotPrincipal": { "AWS": "111122223333" },
    "Action": "*",
    "Resource": "arn:aws:s3:::test-bucket"
}

Unfortunately, this policy statement does not capture my intentions. Instead, it allows access for everybody in the world who is not the given principal. This means almost everybody now has access to my bucket, including anonymous unauthorized users. Fortunately, as soon as I attach this policy, S3 flags my bucket as “Public”—warning me that there’s something wrong with the policy I wrote. How did it know?

Zelkova translates this policy into a mathematical formula:

(Resource = “arn:aws:s3:::test-bucket”) ∧ (Principal ≠ 111122223333)

Here, ∧ is the mathematical symbol for “and” which is true only when both its left and right side are true. Resource and Principal are variables just like you would use x and y in algebra class. The above formula is true exactly when my policy allows a request. The precise meaning of my policy has now been defined in the universal language of mathematics. The next step is to decide if this policy formula allows public access, but this is a hard problem. Now Zelkova really goes to work.

A counterintuitive trick sometimes used by mathematicians is to make a problem harder in order to make finding a solution easier. That is, solving a more difficult problem can sometimes lead to a simpler solution. In this case, Zelkova solves the harder problem of comparing two policies against each other to decide which is more permissive. If P1 and P2 are policy formulas, then suppose formula P1 ⇒ P2 is true. This arrow symbol is an implication that means whenever P1 is true, P2 must also be true. So, whenever policy 1 accepts a request, policy 2 must also accept the request. Thus, policy 2 is at least as permissive as policy 1. Suppose also that the converse formula P2 ⇒ P1 is not true. That means there’s a request which makes P2 true and P1 false. This request is allowed by policy 2 and is denied by policy 1. Combining all these results, policy 1 is strictly less permissive than policy 2.

How does this solve the “Public / Not public” problem? Zelkova has a special policy that allows anonymous, unauthorized users to access an S3 resource. It compares your policy against this policy. If your policy is more permissive, then Zelkova says your policy allows public access. If you restrict access—for example, based on source VPC endpoint (aws:SourceVpce) or source IP address (aws:SourceIp)—then your policy is not more permissive than the special policy, and Zelkova says your policy does not allow public access.
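
Zelkova itself is internal to AWS, but the shape of this comparison can be sketched with an off-the-shelf SMT solver. Here is a rough Python sketch using the open-source Z3 solver; modeling a request as just two strings and the "anonymous user" reference policy are illustrative assumptions, not Zelkova's actual encoding.

from z3 import And, Implies, Not, Solver, Strings, unsat

principal, resource = Strings("principal resource")

# The NotPrincipal policy statement from the example above, as a formula.
policy = And(resource == "arn:aws:s3:::test-bucket",
             principal != "111122223333")

# A stand-in for the special policy that allows anonymous access.
public = And(resource == "arn:aws:s3:::test-bucket",
             principal == "anonymous")

def implied(p1, p2):
    # p1 => p2 is valid iff its negation has no satisfying assignment.
    s = Solver()
    s.add(Not(Implies(p1, p2)))
    return s.check() == unsat

# Every request the anonymous policy allows, our policy also allows,
# so the bucket would be flagged "Public".
print(implied(public, policy))   # True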

For all this to work, Zelkova uses SMT solvers. Using mathematical language, these tools take a formula and either prove it is true for all possible values of the variables, or they return a counterexample that makes the formula false.

To understand SMT solvers better, you can play with them yourself. For example, if asked to prove x + y > x, an SMT solver will quickly find a counterexample such as x=5, y=-1. To fix this, you could strengthen your formula to assume that y is positive:

(y > 0) ⇒ (x + y > x)

The SMT solver will now respond that your formula is true for all values of the variables x and y. It does this using the rules of algebra and logic. This same idea carries over into theories like strings. You can ask the SMT solver to prove the formula length(append(a,b)) > length(a) where a and b are string variables. It will find a counterexample such as a=”hello” and b=”” where b is the empty string. This time, you could fix your formula by changing from greater-than to greater-than-or-equal-to:

length(append(a, b)) ≥ length(a)

The SMT solver will now respond that the formula is true for all values of the variables a and b. Here, the solver has combined reasoning about strings (length, append) with reasoning about numbers (greater-than-or-equal-to). SMT solvers are designed for exactly this sort of theory composition.
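
Both experiments can be reproduced with an off-the-shelf solver. The post doesn't name one, but the open-source Z3 solver's Python bindings express them almost verbatim:

from z3 import Concat, Implies, Ints, Length, Strings, prove

x, y = Ints("x y")
prove(x + y > x)                    # counterexample, e.g. y = -1
prove(Implies(y > 0, x + y > x))    # proved

a, b = Strings("a b")
prove(Length(Concat(a, b)) > Length(a))    # counterexample: b = ""
prove(Length(Concat(a, b)) >= Length(a))   # proved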

What about my original policy? Once I see that my bucket is public, I can fix my policy using an explicit “Deny”:


{
    "Effect": "Deny"
    "Principal": { "AWS": "111122223333" },
    "Action": "*",
    "Resource": "arn:aws:s3:::test-bucket"
}

With this policy statement attached, S3 correctly reports my bucket as “Not public”. Zelkova has translated this policy into a mathematical formula, compared it against a special policy, and proved that my policy is less permissive. Fancy math has proved that things are working (or in this case, not working) as expected.

Where else is Zelkova being used?

In addition to S3, several AWS services are using Zelkova:

We have also engaged with a number of enterprise and regulated customers who have adopted Zelkova for their use cases:

“Bridgewater, like many other security-conscious AWS customers, needs to quickly reason about the security posture of our AWS infrastructure, and an integral part of that posture is IAM policies. These govern permissions on everything from individual users, to S3 buckets, KMS keys, and even VPC endpoints, among many others. Bridgewater uses Zelkova to verify and provide assurances that our policies do not allow data exfiltration, misconfigurations, and many other malicious and accidental undesirable behaviors. Zelkova allows our security experts to encode their understanding once and then mechanically apply it to any relevant policies, avoiding error-prone and slow human reviews, while at the same time providing us high confidence in the correctness and security of our IAM policies.”
Dan Peebles, Lead Cloud Security Architect at Bridgewater Associates

Summary

AWS services such as S3 use Zelkova to precisely represent policies and prove that they are secure—improving confidence in your security configurations. Zelkova can make broad statements about all resource requests because it’s based on mathematics and proofs instead of heuristics, pattern matching, or simulation. The ubiquity of policies in AWS means that the value of Zelkova and its benefits will continue to grow as it serves to protect more customers every day.

Want more AWS Security news? Follow us on Twitter.

Perverse Vulnerability from Interaction between 2-Factor Authentication and iOS AutoFill

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/perverse_vulner.html

Apple is rolling out an iOS security usability feature called Security code AutoFill. The basic idea is that the OS scans incoming SMS messages for security codes and suggests them in AutoFill, so that people can use them without having to memorize or type them.

Sounds like a really good idea, but Andreas Gutmann points out an application where this could become a vulnerability: when authenticating transactions:

Transaction authentication, as opposed to user authentication, is used to attest the correctness of the intention of an action rather than just the identity of a user. It is most widely known from online banking, where it is an essential tool to defend against sophisticated attacks. For example, an adversary can try to trick a victim into transferring money to a different account than the one intended. To achieve this the adversary might use social engineering techniques such as phishing and vishing and/or tools such as Man-in-the-Browser malware.

Transaction authentication is used to defend against these adversaries. Different methods exist but in the one of relevance here — which is among the most common methods currently used — the bank will summarise the salient information of any transaction request, augment this summary with a TAN tailored to that information, and send this data to the registered phone number via SMS. The user, or bank customer in this case, should verify the summary and, if this summary matches with his or her intentions, copy the TAN from the SMS message into the webpage.

This new iOS feature creates problems for the use of SMS in transaction authentication. Applied to 2FA, the user would no longer need to open and read the SMS from which the code has already been conveniently extracted and presented. Unless this feature can reliably distinguish between OTPs in 2FA and TANs in transaction authentication, we can expect that users will also have their TANs extracted and presented without context of the salient information, e.g. amount and destination of the transaction. Yet, precisely the verification of this salient information is essential for security. Examples of where this scenario could apply include a Man-in-the-Middle attack on the user accessing online banking from their mobile browser, or where a malicious website or app on the user’s phone accesses the bank’s legitimate online banking service.

This is an interesting interaction between two security systems. Security code AutoFill eliminates the need for the user to view the SMS or memorize the one-time code. Transaction authentication assumes the user read and approved the additional information in the SMS message before using the one-time code.

#LinkTax, #CensorshipMashine – backed in a European Parliament committee. What comes next?

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/06/20/linktax-censorshipmashine/

 

 

How to build a competition-ready Raspberry Pi robot

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/how-to-build-raspberry-pi-robot/

With the recent announcement of the 2019 Pi Wars dates, we’ve collected some essential online resources to help you get started in the world of competitive robots.


Robotics 101

Before you can strap chainsaws and flamethrowers to your robot, you need to learn some basics. Sorry.

As part of our mission to put digital making into the hands of people across the globe, the Raspberry Pi Foundation creates free project tutorials for hardware builds, Scratch projects, Python games, and more. And to get you started with robot building, we’ve put together a series of buggy-centric projects!



Begin with our Build a robot buggy project, where you’ll put together a simple buggy using motors, a Raspberry Pi 3, and a few other vital ingredients. From there, move on to the Remotely control your buggy tutorial to learn how to command your robot using an Android phone, a Google AIY Projects Voice Kit, or a home-brew controller. Lastly, train your robot to think for itself using our new Build a line-following robot project.
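
As a taste of what those projects build up to, driving a two-motor buggy from Python takes only a few lines with the gpiozero library. The pin numbers below are placeholders; use whichever GPIO pins your motor controller is actually wired to.

from time import sleep
from gpiozero import Robot

# (forward, backward) GPIO pin pairs for each motor -- placeholders.
buggy = Robot(left=(4, 14), right=(17, 18))

buggy.forward(speed=0.5)   # drive forward at half speed
sleep(2)
buggy.left()               # spin left
sleep(1)
buggy.stop()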

Prepare your buggy for battle

Put down the chainsaw — we’re not there yet!


For issue 51, The MagPi commissioned ace robot builder Brian Corteil to create a Build a remote control robot feature. The magazine then continued the feature in issue 52, adding a wealth of sensors to the robot. You can download both issues as free PDFs from The MagPi website.

Pi Wars

To test robot makers’ abilities, previous Pi Wars events have included a series of non-destructive challenges: the balloon-popping Pi Noon, the minimal maze, and an obstacle course. Each challenge calls for makers to equip their robot with various abilities, such as speed, manoeuvrability, or line-following functionality.

Tanya Fish on Twitter

Duck shoot, 81 points! Nice one bub. #piwars https://t.co/UCSWaEOJh8

The Pi Wars team has shared a list of hints and tips from Brian Corteil that offer a great place to start your robotics journey. Moreover, many Pi Wars competitors maintain blogs about their build process to document the skills they learn, and the disasters along the way.


This year’s blog category winner, David Pride’s Pi and Chips website, has a wealth of robot-making information.

If you’d like to give your robot a robust, good-looking body, check out PiBorg, robot-makers extraordinaire. Their robot chassis selection can help you get started if you don’t have access to a laser cutter or 3D printer, or if you don’t want to part with one of your Tupperware boxes to house your robot.

And now for the chainsaws!

Robot-building is a great way to learn lots of new skills, and we encourage everyone to give it a go, regardless of your digital making abilities. But please don’t strap chainsaws to your Raspberry Pi–powered robot unless you are trained in the ways of chainsaw-equipped robot building. The same goes for flamethrowers, cattle prods, and anything else that could harm another person, animal, or robot.


Pi Wars 2019 will be taking place on 30 and 31 March in the Cambridge Computer Laboratory William Gates Building. If you’d like to take part, you can find more information here.

The post How to build a competition-ready Raspberry Pi robot appeared first on Raspberry Pi.

Query for the latest Amazon Linux AMI IDs using AWS Systems Manager Parameter Store

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/query-for-the-latest-amazon-linux-ami-ids-using-aws-systems-manager-parameter-store/

Want a simpler way to query for the latest Amazon Linux AMI? AWS Systems Manager Parameter Store already allows for querying the latest Windows AMI. Now, support has been expanded to include the latest Amazon Linux AMI. Each Amazon Linux AMI now has its own Parameter Store namespace that is public and describable. Upon querying, an AMI namespace returns only its regional ImageID value.

The namespace is made up of two parts:

  • Parameter Store Prefix (tree): /aws/service/ami-amazon-linux-latest/
  • AMI name alias: (example) amzn-ami-hvm-x86_64-gp2

You can determine an Amazon Linux AMI alias by taking the full AMI name property of an Amazon Linux public AMI and removing the date-based version identifier. A list of these AMI name properties can be seen by running one of the following Amazon EC2 queries.

Using the AWS CLI:

aws ec2 describe-images --owners amazon --filters "Name=name,Values=amzn*" --query 'sort_by(Images, &CreationDate)[].Name'

Using PowerShell:

Get-EC2ImageByName -Name amzn* | Sort-Object CreationDate | Select-Object Name

For example, amzn2-ami-hvm-2017.12.0.20171208-x86_64-gp2 without the date-based version becomes amzn2-ami-hvm-x86_64-gp2.

When you add the public Parameter Store prefix namespace to the AMI alias, you have the Parameter Store name of “/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2”.

Each unique AMI namespace always remains the same. You no longer need to pattern match on name filters, and you no longer need to sort through CreationDate AMI properties. As Amazon Linux AMIs are patched and new versions are released to the public, AWS updates the Parameter Store value with the latest ImageID value for each AMI namespace in all supported Regions.

Before this release, finding the latest regional ImageID for an Amazon Linux AMI involved a three-step process. First, using an API call to search the list of available public AMIs. Second, filtering the results by a given partial string name. Third, sorting the matches by CreationDate property and selecting the newest ImageID. Querying AWS Systems Manager greatly simplifies this process.

Querying for the latest AMI using public parameters

After you have your target namespace, your query can be created to retrieve the latest Amazon Linux AMI ImageID value. Each Region has an exact replica namespace containing its Region-specific ImageID value.

Using the AWS CLI:

aws ssm get-parameters --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 --region us-east-1 

Using PowerShell:

Get-SSMParameter -Name /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 -region us-east-1

Always launch new instances with the latest ImageID

After you have created the query, you can embed the command as a command substitution into your new instance launches.

Using the AWS CLI:

 aws ec2 run-instances --image-id $(aws ssm get-parameters --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 --query 'Parameters[0].[Value]' --output text) --count 1 --instance-type m4.large

Using PowerShell:

New-EC2Instance -ImageId ((Get-SSMParameterValue -Name /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2).Parameters[0].Value) -InstanceType m4.large -AssociatePublicIp $true

This new instance launch always results in the latest publicly available Amazon Linux AMI for amzn2-ami-hvm-x86_64-gp2. Similar embedding can be used in a number of automation processes, documents, and programming languages.
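
For instance, here is a rough Python equivalent using boto3; the Region and instance type are arbitrary choices for the example.

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# Resolve the latest Amazon Linux 2 AMI ID from the public namespace.
ami_id = ssm.get_parameter(
    Name="/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
)["Parameter"]["Value"]

# Launch a single instance from the resolved AMI.
ec2.run_instances(ImageId=ami_id, InstanceType="m4.large",
                  MinCount=1, MaxCount=1)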

Display a complete list of all available Public Parameter Amazon Linux AMIs

You can also query for the complete list of AWS Amazon Linux Parameter Store namespaces available.

Using the AWS CLI:

aws ssm get-parameters-by-path --path "/aws/service/ami-amazon-linux-latest" --region us-east-1

Using PowerShell:

Get-SSMParametersByPath -Path "/aws/service/ami-amazon-linux-latest" -region us-east-1

Here’s an example list retrieved from a get-parameters-by-path call:

 /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
 /aws/service/ami-amazon-linux-latest/amzn2-ami-minimal-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-gp2
 /aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-s3
 /aws/service/ami-amazon-linux-latest/amzn-ami-minimal-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn-ami-minimal-hvm-x86_64-s3

Launching latest Amazon Linux AMI in an AWS CloudFormation stack

AWS CloudFormation also supports Parameter Store. For more information, see Integrating AWS CloudFormation with AWS Systems Manager Parameter Store. Here’s an example of how you would reference the latest Amazon Linux AMI in a CloudFormation template.

 # Use public Systems Manager Parameter
 Parameters:
   LatestAmiId:
     Type: 'AWS::SSM::Parameter::Value<String>'
     Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'

 Resources:
   Instance:
     Type: 'AWS::EC2::Instance'
     Properties:
       ImageId: !Ref LatestAmiId

 

About the Author

Arend Castelein is a software development engineer on the Amazon Linux team. Most of his work relates to making Amazon Linux updates available sooner while also reducing the workload for his teammates. Outside of work, he enjoys rock climbing and playing indie games.

Cheezball Rising: A new Game Boy Color game

Post Syndicated from Eevee original https://eev.ee/blog/2018/06/19/cheezball-rising-a-new-game-boy-color-game/

This is a series about Star Anise Chronicles: Cheezball Rising, an expansive adventure game about my cat for the Game Boy Color. Follow along as I struggle to make something with this bleeding-edge console!

source code • prebuilt ROMs (a week early for $4) • works best with mGBA

In this issue, I figure out how to put literally anything on the goddamn screen, then add a splash of color.

The plan

I’m making a Game Boy Color game!

I have no— okay, not much idea what I’m doing, so I’m going to document my progress as I try to forge a 90s handheld game out of nothing.

I do usually try to keep tech stuff accessible, but this is going to get so arcane that that might be a fool’s errand. Think of this as less of an extended tutorial, more of a long-form Twitter.

Also, I’ll be posting regular builds on Patreon for $4 supporters, which will be available a week later for everyone else. I imagine they’ll generally stay in lockstep with the posts, unless I fall behind on the writing part. But when has that ever happened?

Your very own gamedev legend is about to unfold! A world of dreams and adventures with gbz80 assembly awaits! Let’s go!

Prerequisites

First things first. I have a teeny bit of experience with Game Boy hacking, so I know I need:

  • An emulator. I have no way to run arbitrary code on an actual Game Boy Color, after all. I like mGBA, which strives for accuracy and has some debug tools built in.

    There’s already a serious pitfall here: emulators are generally designed to run games that would work correctly on the actual hardware, but they won’t necessarily reject games that wouldn’t work on actual hardware. In other words, something that works in an emulator might still not work on a real GBC. I would of course prefer that this game work on the actual console it’s built for, but I’ll worry about that later.

  • An assembler, which can build Game Boy assembly code into a ROM. I pretty much wrote one of these myself already for the Pokémon shenanigans, but let’s go with something a little more robust here. I’m using RGBDS, which has a couple nice features like macros and a separate linking step. It compiles super easily, too.

    I also hunted down a vim syntax file, uh, somewhere. I can’t remember which one it was now, and it’s kind of glitchy anyway.

  • Some documentation. I don’t know exactly how this surfaced, but the actual official Game Boy programming manual is on archive.org. It glosses over some things and assumes some existing low-level knowledge, but for the most part it’s a very solid reference.

For everything else, there’s Google, and also the curated awesome-gbdev list of resources.

That list includes several skeleton projects for getting started, but I’m not going to use them. I want to be able to account for every byte of whatever I create. I will, however, refer to them if I get stuck early on. (Spoilers: I get stuck early on.)

And that’s it! The rest is up to me.

Making nothing from nothing

Might as well start with a Makefile. The rgbds root documentation leads me to the following incantation:

all:
        rgbasm -o main.o main.rgbasm
        rgblink -o gamegirl.gb main.o
        rgbfix -v -p 0 gamegirl.gb

(I, uh, named this project “gamegirl” before I figured out what it was going to be. It’s a sort of witticism, you see.)

This works basically like every C compiler under the sun, as you might expect: every source file compiles to an object file, then a linker bundles all the object files into a ROM. If I only change one source file, I only have to rebuild one object file.

Of course, this Makefile is terrible garbage and will rebuild the entire project unconditionally every time, but at the moment that takes a fraction of a second so I don’t care.

The extra rgbfix step is new, though — it adds the Nintendo logo (the one you see when you start up a Game Boy) to the header at the beginning of the ROM. Without this, the console will assume the cartridge is dirty or missing or otherwise unreadable, and will refuse to do anything at all. (I could also bake the logo into the source itself, but given that it’s just a fixed block of bytes and rgbfix is bundled with the assembler, I see no reason to bother with that.)

All I need now is a source file, main.rgbasm, which I populate with:


Nothing! I don’t know what I expect from this, but I’m curious to see what comes out. And what comes out is a working ROM!

A completely blank screen

Maybe “working” is a strong choice of word, given that it doesn’t actually do anything.

Doing something

It would be fantastic to put something on the screen. This turned out to be harder than expected.

First attempt. I know that the Game Boy starts running code at $0150, immediately after the end of the header. So I’ll put some code there.

A brief Game Boy graphics primer: there are two layers, the background and objects. (There’s also a third layer, the window, which I don’t entirely understand yet.) The background is a grid of 8×8 tiles, two bits per pixel, for a total of four shades of gray. Objects can move around freely, but they lose color 0 to transparency, so they can only use three colors.

There are lots more interesting details and restrictions, which I will think about more later.

Drawing objects is complicated, and all I want to do right now is get something. I’m pretty sure the background defaults to showing all tile 0, so I’ll try replacing tile 0 with a gradient and see what happens.

Tiles are 8×8 and two bits per pixel, which means each row takes two bytes, and the whole tile is 16 bytes. Tiles are defined in one big contiguous block starting at $8000 — or, maybe $8800, sometimes — so all I need to do is:

SECTION "main", ROM0[$0150]
    ld hl, $8000
    ld a, %00011011
    REPT 16
    ld [hl+], a
    ENDR

_halt:
    ; Do nothing, forever
    halt
    nop
    jr _halt

If you are not familiar with assembly, this series is going to be a wild ride. But here’s a very very brief primer.

Assembly language — really, an assembly language — is little more than a set of human-readable names for the primitive operations a CPU knows how to do. And those operations, by and large, consist of moving bytes around. The names tend to be very short, because you end up typing them a lot.

Most of the work is done in registers, which are a handful of spaces for storing bytes right on the CPU. At this level, RAM is relatively slow — it’s further away, outside the chip — so you want to do as much work as possible in registers. Indeed, most operations can only be done on registers, so there’s a lot of fetching stuff from RAM and operating on it and then putting it back in RAM.

The Game Boy CPU, a modified Z80, has eight byte-sized registers. They’re often referred to in pairs, because they can be paired up to make 16-bit values (giving you access to a full 64KB address space). And they are: af, bc, de, hl.

The af pair is special. The f register is used for flags, such as whether the last instruction caused an overflow, so it’s not generally touched directly. The a register is called the accumulator and is most commonly used for math operations — in fact, a lot of math operations can only be done on a. The hl register is most often used for addresses, and there are a couple instructions specific to hl that are convenient for memory access. (The h and l even refer to the high and low byte of an address.) The other two pairs aren’t especially noteworthy.

Also! Not every address is actually RAM; the address space ($0000 through $ffff) is carved into several distinct areas, which we will see as I go along. $8000 is the beginning of display RAM, which the screen reads from asynchronously. Also, a lot of addresses above $ff00 (also called “registers”) are special and control hardware in some way, or even perform some action when written to.

With that in mind, here’s the above code with explanatory comments:


; This is a directive for the assembler to put the following
; code at $0150 in the final ROM.
SECTION "main", ROM0[$0150]
    ; Put the hex value $8000 into registers hl.  Really, that
    ; means put $80 into h and $00 into l.
    ld hl, $8000

    ; Put this binary value into registers a.
    ; It's just 0 1 2 3, a color gradient.
    ld a, %00011011

    ; This is actually a macro this particular assembler
    ; understands, which will repeat the following code 16
    ; times, exactly as if I'd copy-pasted it.
    REPT 16

    ; The brackets (sometimes written as parens) mean to use hl
    ; as a position in RAM, rather than operating on hl itself.
    ; So this copies a into the position in RAM given by
    ; hl (initially $8000), and the + adds 1 to hl afterwards.
    ; This is one reason hl is nice for storing addresses: the +
    ; variant is handy for writing a sequence of bytes to RAM,
    ; and it only exists for hl.
    ld [hl+], a

    ; End the REPT block
    ENDR

; This is a label, used to refer to some position in the code.
; It only exists in the source file.
_halt:
    ; Stop all CPU activity until there's an interrupt.  I
    ; haven't turned any interrupts on, so this stops forever.
    halt

    ; The Game Boy hardware has a bug where, under rare and
    ; unspecified conditions, the instruction after a halt will
    ; be skipped.  So every halt should be followed by a nop,
    ; "no operation", which does nothing.
    nop

    ; This jumps back up to the label.  It's short for "jump
    ; relative", and will end up as an instruction saying
    ; something like "jump backwards five bytes", or however far
    ; back _halt is.  (Different instructions can be different
    ; lengths.)
    jr _halt

Okay! Glad you’re all caught up. The rgbds documentation includes a list of all the available operations (as well as assembler syntax), and once you get used to the short names, I also like this very compact chart of all the instructions and how they compile to machine code. (Note that that chart spells [hl+] as (HLI), for “increment” — the human-readable names are somewhat arbitrary and can sometimes vary between assemblers.)

Now, let’s see what this does!


A completely blank screen, still

Wow! It’s… still nothing. Hang on.

If I open the debugger and hit Break, I find out that the CPU is at address $0120 — before my code — and is on an instruction DD. What’s DD? Well, according to this convenient chart, it’s… nothing. That’s not an instruction.

Hmm.

Problem solving

Maybe it’s time to look at one of those skeleton projects after all. I crack open the smallest one, gb-template, and it seems to be doing the same thing: its code starts at $0150.

It takes me a bit to realize my mistake here. Practically every Game Boy game starts its code at $0150, but that’s not what the actual hardware specifies. The real start point is $0100, which is immediately before the header! There are only four bytes before the header, just enough for… a jump instruction.

Okay! No problem.

SECTION "entry point", ROM0[$0100]
    nop
    jp $0150

Why the nop? I have no idea, but all of these boilerplate projects do it.

Black screen with repeating columns of white

Uhh.

Well, that’s weird. Not only is the result black and white when I definitely used all four shades, but the whites aren’t even next to each other. (I also had a strange effect where the screen reverted to all white after a few seconds, but can’t reproduce it now; it was fixed by the same steps, though, so it may have been a quirk of a particular mGBA build.)

I’ll save you my head-scratching. I made two mistakes here. Arguably, three!

First: believe it or not, I have to specify the palette. Even in original uncolored Game Boy mode! I can see how that’s nice for doing simple fade effects or flashing colors, but I didn’t suspect it would be necessary. The monochrome palette lives at $ff47 (one of those special high addresses), so I do this before anything else:

    ld a, %11100100         ; 3 2 1 0
    ld [$ff47], a

I should really give names to some of these special addresses, but for now I’m more interested in something that works than something that’s nice to read.

Second: I specified the colors wrong. I assumed that eight pixels would fit into two bytes as AaBbCcDd EeFfGgHh, perhaps with some rearrangement, but a closer look at Nintendo’s manual reveals that they need to be ABCDEFGH abcdefgh, with the two bits for each pixel split across each byte! Wild.

Handily, rgbds has syntax for writing out pixel values directly: a backtick followed by eight of 0, 1, 2, and 3. I just have to change my code a bit to write two bytes, eight times each. By putting a 16-bit value in a register pair like bc, I can read its high and low bytes out individually via the b and c registers.

    ld hl, $8000
    ld bc, `00112233
    REPT 8
    ld a, b
    ld [hl+], a
    ld a, c
    ld [hl+], a
    ENDR

Third: strictly speaking, I don’t think I should be writing to $8000 while the screen is on, because the screen may be trying to read from it at the same time. It does happen to work in this emulator, but I have no idea whether it would work on actual hardware. I’m not going to worry too much about this test code; most likely, tile loading will happen all in one place in the real game, and I can figure out any issues then.

This is one of those places where the manual is oddly vague. It dedicates two whole pages to diagrams of how sprites are drawn when they overlap, yet when I can write to display RAM is left implicit.

Well, whatever. It works on my machine.

Stripes of varying shades of gray

Success! I made a thing for the Game Boy.

Ah, but what I wanted was a thing for the Game Boy Color. That shouldn’t be too much harder.

Now in Technicolor

First I update my Makefile to pass the -C flag to rgbfix. That tells it to set a flag in the ROM header to indicate that this game is only intended for the Game Boy Color, and won’t work on the original Game Boy. (In order to pass Nintendo certification, I’ll need an error screen when the game is run on a non-Color Game Boy, but that can come later. Also, I don’t actually know how to do that.)

Oh, and I’ll change the file extension from .gb to .gbc. And while I’m in here, I might as well repeat myself slightly less in this bad, bad Makefile.

TARGET := gamegirl.gbc

all: $(TARGET)

$(TARGET):
        rgbasm -o main.o main.rgbasm
        rgblink -o $(TARGET) main.o
        rgbfix -C -v -p 0 $(TARGET)

I think := is the one I want, right? Christ, who can remember how this syntax works.

Next I need to define a palette. Again, everything defaults to palette zero, so I’ll update that and not have to worry about specifying a palette for every tile.

This part is a bit weird. Unlike tiles, there’s not a block of addresses somewhere that contains all the palettes. Instead, I have to write the palette to a single address one byte at a time, and the CPU will put it… um… somewhere.

(I think this is because the entire address space was already carved up for the original Game Boy, and they just didn’t have room to expose palettes, but they still had a few spare high addresses they could use for new registers.)

Two registers are involved here. The first, $ff68, specifies which palette I’m writing to. It has a bunch of parts, but since I’m writing to the first color of palette zero, I can leave it all zeroes. The one exception is the high bit, which I’ll explain in just a moment.

    ld a, %10000000
    ld [$ff68], a

The other, $ff69, does the actual writing. Each color in a palette is two bytes, and a palette contains four colors, so I need to write eight bytes to this same address. The high bit in $ff68 is helpful here: it means that every time I write to $ff69, it should increment its internal position by one. This is kind of like the [hl+] I used above: after every write, the address increases, so I can just write all the data in sequence.

But first I need some colors! Game Boy Color colors are RGB555, which means each color channel is five bits (0–31) and a full color fits in two bytes: 0bbbbbgg gggrrrrr.

(I got this backwards initially and thought the left bits were red and the right bits were blue.)

Thus, I present, palette loading by hand. Like before, I put the 16-bit color in bc and then write out the contents of b and c. (Before, the backtick syntax put the bytes in the right order; colors are little-endian, hence why I write c before b.)

    ld bc, %0111110000000000  ; blue
    ld a, c
    ld [$ff69], a
    ld a, b
    ld [$ff69], a
    ld bc, %0000001111100000  ; green
    ld a, c
    ld [$ff69], a
    ld a, b
    ld [$ff69], a
    ld bc, %0000000000011111  ; red
    ld a, c
    ld [$ff69], a
    ld a, b
    ld [$ff69], a
    ld bc, %0111111111111111  ; white
    ld a, c
    ld [$ff69], a
    ld a, b
    ld [$ff69], a

Rebuild, and:

Same as before, but now the stripes are colored

What a glorious eyesore!

To be continued

That brings us up to commit 212344 and works as a good stopping point.

Next time: sprites! Maybe even some real art?

Running a Power Plant with Grafana

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/06/20/running-a-power-plant-with-grafana/

GrafanaCon Recap: Running a Power Plant with Grafana A water and energy innovation company founded in 2005, Natel Energy builds hydropower turbines and designs resilient and distributed hydropower systems. In his talk at GrafanaCon EU, Natel Developer Ryan McKinley gave us a fascinating look at how the company is using Grafana to help run these next-generation power plants.
“It’s a different model for turbine than you’re used to seeing,” McKinley said.

Create Dynamic Contact Forms for S3 Static Websites Using AWS Lambda, Amazon API Gateway, and Amazon SES

Post Syndicated from Saurabh Shrivastava original https://aws.amazon.com/blogs/architecture/create-dynamic-contact-forms-for-s3-static-websites-using-aws-lambda-amazon-api-gateway-and-amazon-ses/

In the era of the cloud, hosting a static website is cheaper, faster, and simpler than traditional on-premises hosting, where you always have to maintain a running server. Yet no static website is truly static: I can promise you will find at least a “contact us” page in most static websites, and such a page is, by its very nature, dynamic. All businesses need a “contact us” page to help customers connect with business owners for services, inquiries, or feedback. In its simplest form, a “contact us” page should collect a user’s basic information (name, e-mail address, phone number, and a short message) and share it with the business via e-mail when submitted.

AWS provides a simplified way to host your static website in an Amazon S3 bucket using your own custom domain. You can either choose to register a new domain with AWS Route 53 or transfer your domain to Route 53 for hosting in five simple steps.

Obviously, you don’t want to spin up a server to handle a simple “contact us” form, but it’s a critical element of your website. Luckily, in this post-cloud world, AWS delivers a serverless option. You can use AWS Lambda with Amazon API Gateway to create a serverless backend and use Amazon Simple Email Service to send an e-mail to the business owner whenever a customer submits an inquiry or feedback. Let’s learn how to do it.

Architecture Flow

Here, we are assuming a common website-to-cloud migration scenario: you registered your domain name with a third-party domain registrar, migrated your website to Amazon S3, and then switched to Amazon Route 53 as your DNS provider. You contacted your DNS provider and updated the name server (NS) record to use the name servers in the delegation that you set in Amazon Route 53 (find step-by-step details in the AWS S3 development guide). Your email server still belongs to your DNS provider, as that came in the package when you registered your domain with a multi-year contract.

Following is the architecture flow with detailed guidance.

[Architecture diagram: the “contact us” form hosted on Amazon S3 posts to Amazon API Gateway, which invokes AWS Lambda, which sends mail through Amazon SES]

In the above diagram, the customer is submitting their inquiry through a “contact us” form, which is hosted in an Amazon S3 bucket as a static website. Information will flow in three simple steps:

  • Your “contact us” form will collect all user information and post it to an Amazon API Gateway RESTful service.
  • Amazon API Gateway will pass the collected user information to an AWS Lambda function.
  • The AWS Lambda function will auto-generate an e-mail and forward it to your mail server using Amazon SES.

Your “Contact Us” Form

Let’s start with a simple “contact us” form html code snippet:

<form id="contact-form" method="post">
      <h4>Name:</h4>
      <input type="text" style="height:35px;" id="name-input" placeholder="Enter name here…" class="form-control" style="width:100%;" /><br/>
      <h4>Phone:</h4>
      <input type="phone" style="height:35px;" id="phone-input" placeholder="Enter phone number" class="form-control" style="width:100%;"/><br/>
      <h4>Email:</h4>
      <input type="email" style="height:35px;" id="email-input" placeholder="Enter email here…" class="form-control" style="width:100%;"/><br/>
      <h4>How can we help you?</h4>
      <textarea id="description-input" rows="3" placeholder="Enter your message…" class="form-control" style="width:100%;"></textarea><br/>
      <div class="g-recaptcha" data-sitekey="6Lc7cVMUAAAAAM1yxf64wrmO8gvi8A1oQ_ead1ys" class="form-control" style="width:100%;"></div>
      <button type="button" onClick="submitToAPI(event)" class="btn btn-lg" style="margin-top:20px;">Submit</button>
</form>

The above form will ask the user to enter their name, phone, e-mail, and provide a free-form text box to write inquiry/feedback details and includes a submit button.

Later in the post, I’ll share the jQuery code for field validation and the variables that collect the values.

Defining AWS Lambda Function

The next step is to create the Lambda function that will receive all user information through the API Gateway.

The AWS Lambda function mailfwd is triggered from the API Gateway POST method, which we will create in the next section, and sends the collected information to Amazon SES for mail forwarding.

If you are new to AWS Lambda then follow these simple steps to Create a Simple Lambda Function and get yourself familiar.

  1. Go to the console, click on “Create Function,” and select the hello-world blueprint (nodejs6.10 version), then click the configure button at the bottom.
  2. To create your AWS Lambda function, select the “edit code inline” setting, which will show an editor box with the code in it, and replace that code (making sure to change [email protected] to your real e-mail address and to update your actual domain in the response variable):

    var AWS = require('aws-sdk');
    var ses = new AWS.SES();
     
    var RECEIVER = '[email protected]';
    var SENDER = '[email protected]';
    
    var response = {
     "isBase64Encoded": false,
     "headers": { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': 'example.com'},
     "statusCode": 200,
     "body": "{\"result\": \"Success.\"}"
     };
    
    exports.handler = function (event, context) {
        console.log('Received event:', event);
        sendEmail(event, function (err, data) {
            context.done(err, null);
        });
    };
     
    function sendEmail (event, done) {
        var params = {
            Destination: {
                ToAddresses: [
                    RECEIVER
                ]
            },
            Message: {
                Body: {
                    Text: {
                        Data: 'name: ' + event.name + '\nphone: ' + event.phone + '\nemail: ' + event.email + '\ndesc: ' + event.desc,
                        Charset: 'UTF-8'
                    }
                },
                Subject: {
                    Data: 'Website Referral Form: ' + event.name,
                    Charset: 'UTF-8'
                }
            },
            Source: SENDER
        };
        ses.sendEmail(params, done);
    }
    

Now you can execute and test your AWS Lambda function as directed in the AWS developer guide. Make sure to update the Lambda execution role and follow the steps provided in the Lambda developer guide to create a basic execution role.

Add the following code under the policy to allow the AWS Lambda function to call Amazon SES:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ses:SendEmail",
            "Resource": "*"
        }
    ]
}

Creating the API Gateway

Now, let’s create the API Gateway that will provide a RESTful API endpoint for the AWS Lambda function we created above. We will use this API endpoint to post user-submitted information from the “Contact Us” form, which will then be passed on to the AWS Lambda function.

If you are new to API Gateway, follow these simple steps to create and test an API from the example in the API Gateway Console to familiarize yourself.

  1. Log in to the AWS console and select API Gateway. Click on create new API and fill in your API name.
  2. Now go to your API name — listed in the left-hand navigation — click on the “actions” drop down, and select “create resource.”
  3. Select your newly created resource and choose “create method.” Choose POST. Here, you will select the AWS Lambda function we created earlier: choose “mailfwd” from the drop down.
  4. After saving the form above, click on the “action” menu and choose “deploy API.” You will see the final resources and methods.
  5. Now get your RESTful API URL from the “stages” tab. We will use this URL on our “contact us” HTML page to send the request with all user information.
  6. Make sure to Enable CORS in the API Gateway or you’ll get an error:”Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://abc1234.execute-api.us-east-1.amazonaws.com/02/mailme. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).”

Setup Amazon SES

Amazon SES requires that you verify your identities (the domains or email addresses that you send email from) to confirm that you own them, and to prevent unauthorized use. Follow the steps outlined in the Amazon SES user guide to verify your sender e-mail.
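
If you prefer to script this step, verification can also be kicked off programmatically. Here is a small boto3 sketch; the address and Region are placeholders for your actual sender and SES Region.

import boto3

ses = boto3.client("ses", region_name="us-east-1")

# SES sends a confirmation link to this address; the sender is
# verified once the link is clicked.
ses.verify_email_identity(EmailAddress="owner@example.com")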

Connecting it all Together

Since we created our AWS Lambda function and provided API endpoint access using API Gateway, it’s time to connect all the pieces together and test them. Put the following jQuery code in the <head> section of your “contact us” HTML page. Replace the URL variable with your API Gateway URL. You can change the field validation as needed.

function submitToAPI(e) {
       e.preventDefault();
       var URL = "https://abc1234.execute-api.us-east-1.amazonaws.com/01/contact";

            var Namere = /[A-Za-z]{1}[A-Za-z]/;
            if (!Namere.test($("#name-input").val())) {
                         alert ("Name can not less than 2 char");
                return;
            }
            var mobilere = /[0-9]{10}/;
            if (!mobilere.test($("#phone-input").val())) {
                alert ("Please enter valid mobile number");
                return;
            }
            if ($("#email-input").val()=="") {
                alert ("Please enter your email id");
                return;
            }

            var reeamil = /^([\w-\.][email protected]([\w-]+\.)+[\w-]{2,6})?$/;
            if (!reeamil.test($("#email-input").val())) {
                alert ("Please enter valid email address");
                return;
            }

       var name = $("#name-input").val();
       var phone = $("#phone-input").val();
       var email = $("#email-input").val();
       var desc = $("#description-input").val();
       var data = {
          name : name,
          phone : phone,
          email : email,
          desc : desc
        };

       $.ajax({
         type: "POST",
         url : "https://abc1234.execute-api.us-east-1.amazonaws.com/01/contact",
         dataType: "json",
         crossDomain: "true",
         contentType: "application/json; charset=utf-8",
         data: JSON.stringify(data),

         
         success: function () {
           // clear form and show a success message
           alert("Successfull");
           document.getElementById("contact-form").reset();
       location.reload();
         },
         error: function () {
           // show an error message
           alert("UnSuccessfull");
         }});
     }
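
For reference, a minimal form that matches the element IDs the script expects might look like the following. This markup is an assumption reconstructed from the IDs used in the script above, not taken from the original page:

<!-- Hypothetical "Contact Us" form; the IDs must match those used
     in submitToAPI() above. -->
<form id="contact-form" onsubmit="submitToAPI(event)">
    <input type="text" id="name-input" placeholder="Your name" />
    <input type="tel" id="phone-input" placeholder="Phone number" />
    <input type="email" id="email-input" placeholder="Email address" />
    <textarea id="description-input" placeholder="How can we help?"></textarea>
    <button type="submit">Send</button>
</form>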

Now you should be able to submit your contact form and start receiving email notifications whenever the form is completed and submitted.

Conclusion

Here we addressed a common use case, a simple contact form, which is important for any small business hosting its website on Amazon S3. This post should help you make your static website more dynamic without spinning up a single server.

Have you had challenges adding a “contact us” form to your small business website?

About the author

Saurabh Shrivastava is a Solutions Architect working with global systems integrators. He works with our partners and customers to provide them with architectural guidance for building scalable architectures in hybrid and AWS environments. In his spare time, he enjoys spending time with his family, hiking, and biking.

ECtHR: Surveillance of the communications of a human rights organization

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/06/19/echr_8-3/

Today the ECtHR's judgment in Centrum För Rättvisa v. Sweden became known.

The Centre for Justice is a Swedish non-profit organization. It believes there is a risk that its communication via mobile telephones and mobile broadband is being monitored and its electronic messages intercepted.

In its judgment in Roman Zakharov v. Russia, the ECtHR established minimum safeguards for the surveillance of communications in order to minimize the risk of abuse.

In the present case, the ECtHR accordingly finds:

  • All statutory provisions on the subject have been officially published and are accessible to the public.
  • The purposes for which surveillance may be carried out are indicated in the law with sufficient clarity.
  • The law clearly indicates the period after which an authorization expires and the conditions under which it can be renewed, though not the circumstances in which surveillance must be discontinued. Nevertheless, each authorization is valid for a maximum of six months, and renewal requires a review of whether the conditions are still met. The existing safeguards adequately regulate the duration, renewal, and cancellation of interception measures.
  • Surveillance is subject to a system of prior authorization. That task is entrusted to a body whose presidents are or have been judges.
  • The procedures to be followed for storing, accessing, examining, using, and destroying the intercepted data are adequate. All reasonable efforts must be made to correct, block, and destroy personal data that are incorrect or incomplete with respect to the purpose. The legislation provides adequate safeguards against abuse in the handling of personal data and thus serves to protect personal privacy.
  • The conditions for communicating intercepted data to other parties give grounds for concern: the law states that data may be communicated to “other states or international organizations,” and there is no provision requiring the recipient to protect the data with safeguards similar to those applicable under Swedish law.
  • Oversight of the implementation of the measures is effective and open to public scrutiny.
  • Notification of surveillance measures and available remedies: although the notification requirement applies to the surveillance of natural persons, and was thus not applicable to the Centre, certain remedies exist through which a person can initiate a review of the lawfulness of the measures taken during the operation of the signals intelligence system.

In short, the analysis does not reveal significant shortcomings in the structure and functioning of the system; the measures are proportionate to the aim pursued and provide adequate and sufficient guarantees against arbitrariness and the risk of abuse.
There is no violation of Article 8 of the ECHR.


How Security Mindfulness Can Help Prevent Data Disasters

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/what-is-cyber-security/

A locked computer screen

A few years ago, I was surprised by a request to consult with the Pentagon on cybersecurity. It surprised me because I have no military background, and it was the Pentagon, which I suspected already knew a thing or two about security.

I learned that the consulting project was to raise the awareness of cybersecurity among the people who work at the Pentagon and on military bases. The problem they were having was that some did not sufficiently consider the issue of cybersecurity when they dealt with email, file attachments, and passwords, and in their daily interactions with fellow workers and outside vendors and consultants. If these sound like the same vulnerabilities that the rest of us have, you’re right. It turned out that the military was no different than we are in tackling the problem of cybersecurity in their day-to-day tasks.

That’s a problem. These are the people whose primary job requirement is to be vigilant against threats, and yet some were less than vigilant with their computer and communications systems.

But, more than highlighting a problem with just the military, it made me realize that this problem likely extended beyond the military. If the people responsible for defending the United States can’t take cybersecurity seriously, then how can the rest of us be expected to do so?

And, perhaps even more challenging: how do those of us in the business of protecting data and computer assets fix this problem?

I believe that the campaign I created to address this problem for the Pentagon also has value for other organizations and businesses. We all need to understand how to maintain and encourage security mindfulness as we interact with computer systems and other people.

Technology is Not Enough

We continually focus on what we can do with software and hardware to fight against cyber attacks. “Fighting fire with fire” is a natural and easy way of thinking.

The problem is that the technology used to attack us will continually evolve, which means that our technological responses must similarly evolve. The attackers have the natural advantage. They can innovate and we, the defenders, can only respond. It will continue like that, with attacks and defenses leapfrogging each other over and over while we, the defenders, try to keep up. It’s a game where we can never get ahead because the attackers have a multitude of weaknesses to exploit while the defenders have to guess which vulnerability will be exploited next. It’s enough to want to put the challenge out of your mind completely.

So, what’s the answer?

Let’s go back to the Pentagon’s request. It struck me that what the Pentagon was asking me to do was a classic marketing branding campaign. They wanted to make people more aware of something and to think in a certain manner about it. In this case, instead of making people think that using a certain product would make them happier and more successful, the task was to take a vague threat that wasn’t high on people’s list of things to worry about and turn into something that engaged them sufficiently that they changed their behavior.

I didn’t want to try to make cyber attacks more scary — an idea that I rejected outright — but I did want to try to make people understand the real threat of cyber attacks to themselves, their families, and their livelihoods.

Managers and sysadmins face this challenge daily. They make systems as secure as possible, they install security updates, they create policies for passwords, email, and file handling, yet breaches still happen. It’s not that workers are oblivious to the problem, or don’t care about it. It’s just that they have plenty of other things to worry about, and it’s easy to forget about what they should be doing to thwart cyber attacks. They aren’t being mindful of the possibility of intrusions.

Raising Cybersecurity Awareness

People respond most effectively to challenges that are immediate and present. Abstract threats and unlikely occurrences don’t rise sufficiently above the noise level to register in our consciousness. When a flood is at your door, the threat is immediate and we respond. Our long-term health is important enough that we take action to protect it through insurance, check-ups, and taking care of ourselves because we have been educated or seen what happens if we neglect those preparations.

Both of the examples above — one immediate and one long-term — have gained enough mindfulness that we do something about them.

The problem is that there are so many possible threats to us that to maintain our sanity we ignore all but the most immediate and known threats. A threat becomes real once we’ve experienced it as a real danger. If someone has experienced a cyber attack, the experience likely resulted in a change in behavior. A shift in mindfulness made it less likely that the event would occur again due to a new level of awareness of the threat.

Making Mindfulness Work

One way to make an abstract threat seem more real and more possible is to put it into a context that the person is already familiar with. It then becomes more real and more of a possibility.

That’s what I did for the Pentagon. I put together a campaign to raise the level of mindfulness of the threat of cyber attack by associating it with something they were already familiar with and considered serious.

I chose the physical battlefield. I branded the threat of cyber attack as the “Silent Battlefield.” This took something that was not a visible, physical threat and turned it into something that was already perceived as a place where actual threats exist: the battlefield. Cyber warfare is silent compared to physical combat, of course, so the branding associated it with the field of combat. At the same time it perhaps also made the threat more insidious; cyber warfare is silent. You don’t hear a shell whistling through the air to warn you of the coming damage. When the enemy is silent, your only choice is be mindful of the threat and therefore, prepared.

Can this approach work in other contexts, say, a business office, an IT department, a school, or a hospital? I believe it can if the right cultural context is found to increase mindfulness of the problem and how to combat it.

First, find a correlative for the threat that makes it real in that particular environment. For the military, it was the battlefield. For a hospital, the correlative might be a disease attempting to invade a body.

Second, use a combination of messages using words, pictures, audio, and video to get the concept across. This is a branding campaign, so just like a branding campaign for a product or service, multiple exposure and multiple delivery mechanisms will increase the effectiveness of the campaign.

Third, frame security measures as positive rather than negative. Focus on the achievement of a positive outcome rather than the avoidance of a negative result. Examples of positive framing of security measures include:

  • backing up regularly enabled the restoration of an important document that was lost or an earlier draft of a plan containing important information
  • recognizing suspicious emails and attachments avoided malware and downtime
  • showing awareness of various types of phishing campaigns enabled the productive continuation of business
  • creating and using unique and strong passwords and multi-factor verification for accounts avoided having to recreate accounts, credentials, and data
  • showing insight into attempts at social engineering and manipulation was evidence of intelligence and value to the organization

Fourth, demonstrate successful outcomes by highlighting thwarted cyber incursions. Give credit to those who are modeling a proactive attitude. Everyone in the organization should reinforce the messages and give positive reinforcement to effective measures when they are employed.

Other things to do to increase mindfulness are:

  • Reduce stress: A stressful workplace reduces anyone’s ability to be mindful. Remove other threats so there are fewer things to worry about.
  • Encourage a “do one thing now” attitude: Be very clear about what’s important. Make sure that security mindfulness is considered important enough to devote time to.
  • Show positive results and emphasize victories: Highlight behaviors and actions that defeated attempts to breach security and resulted in good outcomes. Make it personal by giving credit to individuals who have done something specific that worked.

You don’t have to study at a zendō to develop the prerequisite mindfulness to improve computer security. If you’re the person whose job it is to instill mindfulness, you need to understand how to make the threats of malware, ransomware, and other security vectors real to the people who must be vigilant against them every day, and find the cultural and psychological context that works in their environment.

If you can find a way to encourage that security mindfulness, you’ll create an environment where a concern for security is part of the culture and thereby greatly increase the resistance of your organization against cyber attacks.

The post How Security Mindfulness Can Help Prevent Data Disasters appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Amazon EC2 Update – Additional Instance Types, Nitro System, and CPU Options

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-ec2-update-additional-instance-types-nitro-system-and-cpu-options/

I have a backlog of EC2 updates to share with you. We’ve been releasing new features and instance types at a rapid clip and it is time to catch up. Here’s a quick peek at where we are and where we are going…

Additional Instance Types
Here’s a quick recap of the most recent EC2 instance type announcements:

Compute-Intensive – The compute-intensive C5d instances provide a 25% to 50% performance improvement over the C4 instances. They are available in 5 regions and offer up to 72 vCPUs, 144 GiB of memory, and 1.8 TB of local NVMe storage.

General Purpose – The general purpose M5d instances are also available in 5 regions. They offer up to 96 vCPUs, 384 GiB of memory, and 3.6 TB of local NVMe storage.

Bare Metal – The i3.metal instances became generally available in 5 regions a couple of weeks ago. You can run performance analysis tools that are hardware-dependent, workloads that require direct access to bare-metal infrastructure, applications that need to run in non-virtualized environments for licensing or support reasons, and container environments such as Clear Containers, while you take advantage of AWS features such as Elastic Block Store (EBS), Elastic Load Balancing, and Virtual Private Clouds. Bare metal instances with 6 TB, 9 TB, 12 TB, and more memory are in the works, all designed specifically for SAP HANA and other in-memory workloads.

Innovation and the Nitro System
The Nitro system is a rich collection of building blocks that can be assembled in many different ways, giving us the flexibility to design and rapidly deliver EC2 instance types with an ever-broadening selection of compute, storage, memory, and networking options. We will deliver new instance types more quickly than ever in the months to come, with the goal of helping you to build, migrate, and run even more types of workloads.

Local NVMe Storage – The new C5d, M5d, and bare metal EC2 instances feature our Nitro local NVMe storage building block, which is also used in the Xen-virtualized I3 and F1 instances. This building block provides direct access to high-speed local storage over a PCI interface and transparently encrypts all data using dedicated hardware. It also provides hardware-level isolation between storage devices and EC2 instances so that bare metal instances can benefit from local NVMe storage.
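
On a running C5d or M5d instance, the local NVMe volumes show up as ordinary block devices. As a quick sketch (assuming a Linux AMI), you can confirm they are present like this:

# List block devices with their size and model; local NVMe volumes
# typically report an "Amazon EC2 NVMe Instance Storage" model
# string. Device names can vary by AMI and kernel.
$ lsblk -d -o NAME,SIZE,MODEL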

Nitro Security Chip – A component that is part of our AWS server designs that continuously monitors and protects hardware resources and independently verifies firmware each time a system boots.

Nitro Hypervisor – A thin, quiescent hypervisor that manages memory and CPU allocation, and delivers performance that is indistinguishable from bare metal for most workloads (Brendan Gregg of Netflix benchmarked it at less than 1%).

Networking – Hardware support for the software defined network inside of each Virtual Private Cloud (VPC), Enhanced Networking, and Elastic Network Adapter.

Elastic Block Storage – Hardware EBS processing including CPU-intensive cryptographic operations.

Moving storage, networking, and security functions to hardware has important consequences for both bare metal and virtualized instance types:

Virtualized instances can make just about all of the host’s CPU power and memory available to the guest operating systems since the hypervisor plays a greatly diminished role.

Bare metal instances have full access to the hardware, but also have the same flexibility and feature set as virtualized EC2 instances, including CloudWatch metrics, EBS, and VPC.

To learn more about the hardware and software that make up the Nitro system, watch Amazon EC2 Bare Metal Instances or C5 Instances and the Evolution of Amazon EC2 Virtualization and take a look at The Nitro Project: Next-Generation EC2 Infrastructure.

CPU Options
This feature provides you with additional control over your EC2 instances and lets you optimize your instance for a particular workload.

First, you can specify the desired number of vCPUs at launch time. This allows you to control the vCPU to memory ratio for Oracle and SQL Server workloads that need high memory, storage, and I/O but perform well with a low vCPU count. As a result, you can optimize your vCPU-based licensing costs when you Bring Your Own License (BYOL).

Second, you can disable Intel® Hyper-Threading Technology (Intel® HT Technology) on instances that run compute-intensive workloads. These workloads sometimes exhibit diminished performance when Intel HT is enabled.

Both of these options are available when you launch an instance using the AWS Command Line Interface (CLI) or one of the AWS SDKs. You simply specify the total number of cores and the number of threads per core using values chosen from the CPU Cores and Threads per CPU Core Per Instance Type table. Here’s how you would launch an instance with 6 CPU cores and Intel® HT Technology disabled:

$ aws ec2 run-instances --image-id ami-1a2b3c4d --instance-type r4.4xlarge --cpu-options "CoreCount=6,ThreadsPerCore=1"
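
Once the instance is running, you can confirm that the settings took effect. This is a sketch; the instance ID below is a placeholder:

# Show the CpuOptions (core count and threads per core) recorded
# for the instance.
$ aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
    --query "Reservations[].Instances[].CpuOptions"

With the options above, this returns a CoreCount of 6 and ThreadsPerCore of 1, for 6 vCPUs in place of the r4.4xlarge default of 16.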

To learn more, read about Optimizing CPU Options.

Help Wanted
The EC2 team is always hiring! Here are a few of their open positions:

Jeff;
