Tag Archives: Singapore

Cyber hygiene and MAS Notice 655

Post Syndicated from Darran Boyd original https://aws.amazon.com/blogs/security/cyber-hygiene-and-mas-notice-655/

In this post, I will provide guidance and resources that will help you align to the expectations of the Monetary Authority of Singapore (MAS) Notice 655 – Notice on Cyber Hygiene.

The Monetary Authority of Singapore (MAS) issued Notice 655 – Notice on Cyber Hygiene on 6 Aug 2019. This notice is applicable to all banks in Singapore and takes effect from 6 Aug 2020. The notice sets out the requirements on cyber hygiene for banks across the following six categories: administrative accounts, security patches, security standards, network perimeter defense, malware protection, and multi-factor authentication.

Whilst Notice 655 applies specifically to banks in Singapore, the AWS security guidance I provide in this post is based on consistent best practices. As always, it’s important to note that security and compliance are a shared responsibility between AWS and you as our customer. AWS is responsible for the security of the cloud, and you are responsible for your security in the cloud.

To aid your alignment with Notice 655, AWS has developed a MAS Notice 655 – Cyber Hygiene – Workbook, which is available in AWS Artifact. The workbook covers each of the six categories of cyber hygiene in Notice 655 and maps them to AWS guidance and controls.

The downloadable workbook contains two embedded formats:

  • Microsoft Excel – coverage includes AWS responsibility control statements and Well-Architected Framework best practices.
  • Dynamic HTML – same as Microsoft Excel, with the added feature that the Well-Architected Framework best practices are mapped to AWS Config managed rules and Amazon GuardDuty findings, where available or applicable.

Administrative accounts

“4.1. A relevant entity must ensure that every administrative account in respect of any operating system, database, application, security appliance or network device, is secured to prevent any unauthorised access to or use of such account.”

For administrative accounts, it is important to follow best practices for privileged accounts, keeping in mind both human and programmatic access.

The most privileged user in an AWS account is the root user. When you first create an AWS account (unless you create it with AWS Organizations), this is the initial identity created, and it is associated with the email address and password used to create the account. The root user has access to every resource in the account—including the ability to close it. To align with the principle of least privilege, the root user should not be used for everyday tasks. Instead, create AWS Identity and Access Management (IAM) roles scoped to particular functions within your organization. Furthermore, AWS strongly recommends that you integrate with a centralized identity provider, or a directory service, to authenticate all users in one place. This reduces the number of credentials you need to manage and reduces management complexity.

There are some additional key steps that you should take to further protect your root user account; the sketch after the list below shows how to audit two of these settings programmatically.

  • Ensure that you have a very long and complex password, and change the root user password if necessary to meet this recommendation.
  • Put the root user password in a secure location, and consider a physical or virtual password vault with a strong multi-party access protocol.
  • Delete any access keys from the root user account to remove programmatic access.
  • Enable multi-factor authentication (MFA), and consider a hardware-based token that is stored in a physical vault or safe with a strong multi-party access protocol. Consider using a separate secure vault store for the password and the MFA token, with separate permissions for access.
  • A simple but hugely important step is to ensure your account information is correct, which includes the assigned root email address, so that AWS Support can contact you.
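
If you want to verify some of these settings programmatically, the IAM account summary exposes flags for root user MFA and root user access keys. The following is a minimal sketch using boto3, the AWS SDK for Python; it assumes configured credentials for an IAM principal (not the root user) with permission to call iam:GetAccountSummary.

```python
import boto3

iam = boto3.client("iam")

# The account summary includes account-wide flags for the root user.
summary = iam.get_account_summary()["SummaryMap"]

# 1 means the root user has MFA enabled; 0 means it does not.
if summary.get("AccountMFAEnabled", 0) != 1:
    print("WARNING: root user MFA is not enabled")

# 1 means root user access keys exist; they should be deleted.
if summary.get("AccountAccessKeysPresent", 0) != 0:
    print("WARNING: root user access keys are present; delete them")
```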

Do keep in mind that there are a few AWS tasks that require the root user.

You should use IAM roles for programmatic or system-to-system access to AWS resources that are integrated with IAM. For example, use roles for applications that run on Amazon Elastic Compute Cloud (Amazon EC2) instances, and ensure that the principle of least privilege is applied to the IAM policies attached to those roles.
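
As an illustrative sketch, the following boto3 code creates a role that only the EC2 service can assume and attaches an inline policy scoped to reading a single bucket. The role, policy, and bucket names are hypothetical.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy allowing only the EC2 service to assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="app-server-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Inline policy scoped to one action on one bucket (least privilege).
least_privilege = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-app-bucket/*",
    }],
}
iam.put_role_policy(
    RoleName="app-server-role",
    PolicyName="read-app-bucket",
    PolicyDocument=json.dumps(least_privilege),
)
```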

For cases where your application needs credentials that are not from AWS IAM, such as database credentials, you should not hard-code these credentials in the application source code or store them unencrypted. Instead, use a secrets management solution. For example, AWS Secrets Manager helps you protect the secrets needed to access your applications, services, and IT resources. Secrets Manager enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications can retrieve secrets with a call to the Secrets Manager APIs, which eliminates the need to hardcode sensitive information in plain text.
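
Here is a minimal sketch of retrieving a database credential at runtime with boto3 instead of hard-coding it. The secret name is hypothetical, and the code assumes the caller’s IAM role allows secretsmanager:GetSecretValue.

```python
import boto3

secrets = boto3.client("secretsmanager")

# Fetch the secret at runtime rather than embedding it in source code.
response = secrets.get_secret_value(SecretId="prod/app/db")

# SecretString is often a JSON document of key/value pairs.
db_credentials = response["SecretString"]
```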

For more information, see 4.1 Administrative Accounts in the MAS Notice 655 workbook on AWS Artifact.

Security Patches

“4.2 (a) A relevant entity must ensure that security patches are applied to address vulnerabilities to every system, and apply such security patches within a timeframe that is commensurate with the risks posed by each vulnerability.”

Consider the various categories of security patches you need to manage, based on the AWS Shared Responsibility Model and the AWS services you are using.

Here are some common examples, though this is not an exhaustive list:

Operating system

When using services from AWS where you have control over the operating system, it is your responsibility to perform patching on these services. For example, if you use Amazon EC2 with Linux, applying security patches for Linux is your responsibility. To help you with this, AWS publishes security patches for Amazon Linux at the Amazon Linux Security Center.

Amazon Inspector allows you to run scheduled vulnerability assessments on your Amazon EC2 instances, and provides a report of findings against rules packages that include operating system configuration benchmarks for common vulnerabilities and exposures (CVEs) and Center for Internet Security (CIS) guidelines. To see if Amazon Inspector is available in your AWS Region, see the list of supported Regions for Amazon Inspector.

For managing patching activity at scale, consider AWS Systems Manager Patch Manager to automate the process of patching managed instances with both security-related patches and other types of updates.
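
As a sketch of what this looks like programmatically, the following boto3 call runs the AWS-RunPatchBaseline document against managed instances targeted by a hypothetical tag. It assumes the instances are already registered with Systems Manager.

```python
import boto3

ssm = boto3.client("ssm")

# Trigger a patch scan on instances in a hypothetical patch group.
ssm.send_command(
    Targets=[{"Key": "tag:PatchGroup", "Values": ["production"]}],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Scan"]},  # use "Install" to apply patches
)
```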

Container orchestration and containers

If you are running and managing your own container orchestration capability, it is your responsibility to perform patching for both the master and worker nodes. If you are using Amazon Elastic Kubernetes Service (Amazon EKS), then AWS manages the patching of the control plane, and publishes EKS-optimized Amazon Machine Images (AMIs) that include the necessary worker node binaries (Docker and kubelet). These AMIs are updated regularly and include the most up-to-date versions of those components. You can update your EKS managed nodes to the latest versions of the EKS-optimized AMIs with a single command in the EKS console, API, or CLI. If you are building your own custom AMIs to use for EKS worker nodes, AWS also publishes Packer scripts that document the AWS build steps, to allow you to identify the binaries included in each version of the AMI.
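
For illustration, here is roughly what that single-command node group update looks like with boto3; the cluster and node group names are hypothetical, and the call assumes an EKS managed node group.

```python
import boto3

eks = boto3.client("eks")

# Roll the managed node group to the latest EKS-optimized AMI.
eks.update_nodegroup_version(
    clusterName="prod-cluster",
    nodegroupName="default-workers",
    # Omitting 'version' and 'releaseVersion' updates to the latest
    # available AMI for the cluster's current Kubernetes version.
)
```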

AWS Fargate provides the option of serverless compute for containers, so you can avoid the operational overhead of scaling, patching, securing, and managing servers.

For container images, you do need to ensure that these are scanned for vulnerabilities and patched. The Amazon Elastic Container Registry (Amazon ECR) service offers the ability to scan container images for common vulnerabilities and exposures (CVEs).
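
A minimal sketch with boto3, assuming a hypothetical repository named my-app: enable scan-on-push, then read back the CVE findings for a pushed image.

```python
import boto3

ecr = boto3.client("ecr")

# Scan every image automatically as it is pushed to the repository.
ecr.put_image_scanning_configuration(
    repositoryName="my-app",
    imageScanningConfiguration={"scanOnPush": True},
)

# Retrieve the scan results for a specific image tag.
findings = ecr.describe_image_scan_findings(
    repositoryName="my-app",
    imageId={"imageTag": "latest"},
)
for f in findings["imageScanFindings"]["findings"]:
    print(f["name"], f["severity"])
```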

Database engines

If you are running and managing your own databases on top of an AWS service such as Amazon EC2, it is your responsibility to perform patching of the database engine.

If you are using Amazon Relational Database Service (Amazon RDS), then AWS will automatically perform the patching of the database engine. This is done within the configurable Amazon RDS maintenance window, which is your opportunity to control when DB instance modifications, database engine version upgrades, and software patching occur.
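
For example, the following boto3 sketch sets the weekly maintenance window on a hypothetical DB instance; the window is specified in UTC in the ddd:hh24:mi-ddd:hh24:mi format.

```python
import boto3

rds = boto3.client("rds")

# Control when RDS applies engine patches to this instance.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-db",
    PreferredMaintenanceWindow="Sun:18:00-Sun:20:00",
    AutoMinorVersionUpgrade=True,  # allow minor engine version patching
)
```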

In cases where you are using fully-managed AWS database services such as Amazon DynamoDB, AWS takes care of the underlying patching requirements.

Application

For application code and dependencies that you run on AWS services, you own and manage the patching. This applies both to applications that your organization has built and to applications from third-party software vendors. You should make sure that you have a mechanism for ensuring that vulnerabilities in the application code you run are regularly identified and patched.

For more information, see 4.2 Security Patches in the MAS Notice 655 workbook on AWS Artifact.

Security Standards

“4.3. (b)… a relevant entity must ensure that every system conforms to the set of security standards.”

After you have defined your organizational security standards, the next step is to verify your conformance to them. In my consultation with customers, I advise that it is a best practice to enforce these security standards as early in the development lifecycle as possible. For example, you may have a standard requiring that data of a specific classification must be encrypted at rest with an AWS Key Management Service (AWS KMS) customer-managed customer master key (CMK). The way this is typically achieved is by defining your infrastructure as code (IaC), for example using AWS CloudFormation. As your projects move through the various stages of development in your pipeline, you can automatically and programmatically check your IaC templates against codified security standards that you have defined. AWS has a number of tools that assist you with defining your rules and evaluating your IaC templates.

In the case of AWS CloudFormation, you may want to consider the tools AWS CloudFormation Guard, cfn-lint, or cfn_nag. Enforcing your security standards as early in the development lifecycle as possible has some key benefits. It instills a culture and practice of creating deployments that are aligned to your standards from the outset, and it allows developers to move fast by using the tools and workflows that work best for their team, while providing feedback early enough that they have time to resolve any issues and meet security standards during the development process.
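
To make the idea concrete, here is a deliberately simple, hypothetical pipeline check in Python (not one of the tools named above): it fails the build if any S3 bucket in a JSON-format CloudFormation template does not declare KMS-based default encryption.

```python
import json
import sys

def check_bucket_encryption(template_path: str) -> bool:
    """Return True only if every S3 bucket declares aws:kms encryption."""
    with open(template_path) as f:
        template = json.load(f)

    ok = True
    for name, res in template.get("Resources", {}).items():
        if res.get("Type") != "AWS::S3::Bucket":
            continue
        rules = (res.get("Properties", {})
                    .get("BucketEncryption", {})
                    .get("ServerSideEncryptionConfiguration", []))
        algos = [r.get("ServerSideEncryptionByDefault", {}).get("SSEAlgorithm")
                 for r in rules]
        if "aws:kms" not in algos:
            print(f"FAIL: bucket {name} is not encrypted with a KMS key")
            ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if check_bucket_encryption(sys.argv[1]) else 1)
```

A real pipeline would typically delegate checks like this to CloudFormation Guard or cfn-lint rules rather than hand-rolled scripts, but the principle is the same: codify the standard, then gate the pipeline on it.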

It’s important to complement this IaC pipeline approach with additional controls to ensure that security standards remain in place after your infrastructure is deployed. You should make sure to look at both preventative and detective controls.

For preventative controls, the focus is on IAM permissions. You can use these fine-grained permissions to control, at the level of the IAM principal (such as a user or role), which actions can or cannot be taken on AWS resources. You can make use of AWS Organizations service control policies (SCPs) to enforce permission guardrails globally across the entire organization, across an organizational unit, or across individual AWS accounts. Example SCPs that may align with your security standards include preventing any virtual private cloud (VPC) that doesn’t already have internet access from getting it, and preventing users from disabling Amazon GuardDuty or modifying its configuration. Additionally, you can use the SCPs described in the AWS Control Tower Guardrail Reference, which you can implement with or without using AWS Control Tower.
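
As an illustration of the second example, the following sketch expresses a deny-GuardDuty-changes SCP as a Python dict and creates it with boto3. The policy name and action list are assumptions for illustration; attaching the policy to a root, OU, or account is a separate attach_policy call.

```python
import json
import boto3

orgs = boto3.client("organizations")

# Deny actions that would disable or weaken GuardDuty in member accounts.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "guardduty:DeleteDetector",
            "guardduty:UpdateDetector",
            "guardduty:DisassociateFromMasterAccount",
        ],
        "Resource": "*",
    }],
}

orgs.create_policy(
    Name="deny-guardduty-changes",
    Description="Prevent member accounts from disabling GuardDuty",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
```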

For detective controls, after your infrastructure is deployed, you can make use of AWS Security Hub and AWS Config rules to help you meet your compliance needs. You should ensure that the findings from these services are integrated with your technology operations processes so that you can take action, or you can use automated remediation.
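
For instance, this boto3 sketch deploys the AWS Config managed rule ENCRYPTED_VOLUMES, which flags attached EBS volumes that are not encrypted; the rule name used here is hypothetical.

```python
import boto3

config = boto3.client("config")

# Deploy a managed rule that continuously evaluates EBS volume encryption.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-encrypted",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "ENCRYPTED_VOLUMES",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)
```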

For more information, see 4.3 Security Standards in the MAS Notice 655 workbook on AWS Artifact.

Network Perimeter Defense

“4.4. A relevant entity must implement controls at its network perimeter to restrict all unauthorised network traffic.”

Having a layered security strategy is a best practice, and this applies equally to your network. AWS provides a number of complementary network configuration options that you can implement to add network protection to your resources. You should consider using all of the options I describe here for your AWS workload, implementing multiple strategies together where possible to provide network defense in depth.

For network-layer protection, you can use security groups for your VPC. Security groups act as a virtual firewall for members of the group to control inbound and outbound traffic. For each security group, you add one set of rules that controls the inbound traffic to instances, and a separate set of rules that controls the outbound traffic. You can attach security groups to EC2 instances and other AWS services that use elastic network interfaces, including RDS instances, VPC endpoints, AWS Lambda functions, and Amazon SageMaker notebooks.
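
A minimal sketch with boto3: create a security group that allows inbound HTTPS only, and only from a known CIDR range. The VPC ID and CIDR are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group in a hypothetical VPC.
sg = ec2.create_security_group(
    GroupName="web-tier",
    Description="HTTPS from corporate network only",
    VpcId="vpc-0123456789abcdef0",
)

# Allow only TCP 443 from a single (documentation) CIDR range.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",
                      "Description": "corporate egress range"}],
    }],
)
```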

You can also use network access control lists (ACLs) as an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets, and supports allow rules and deny rules. Network ACLs are a good option for controlling traffic at the subnet level.

For application-layer protection against common web exploits, you can use AWS WAF. You can use AWS WAF on Application Load Balancers that front your web servers or origin servers running on Amazon EC2, on Amazon API Gateway for your APIs, or together with Amazon CloudFront. This allows you to strengthen security at the edge, filtering more of the unwanted traffic out before it reaches your critical content, data, code, and infrastructure.
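
For illustration, associating an existing web ACL (using the AWS WAFv2 API) with an Application Load Balancer is a single call in boto3; both ARNs below are hypothetical placeholders.

```python
import boto3

wafv2 = boto3.client("wafv2")

# Attach an existing regional web ACL to an Application Load Balancer.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:ap-southeast-1:111122223333:regional/webacl/example/abc123",
    ResourceArn="arn:aws:elasticloadbalancing:ap-southeast-1:111122223333:loadbalancer/app/example/abc123",
)
```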

For distributed denial of service (DDoS) protection, you can use AWS Shield, which is a managed service to provide protection against DDoS attacks for applications running on AWS. AWS Shield is available in two tiers: AWS Shield Standard and AWS Shield Advanced. All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your web site or applications. When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks. AWS Shield Advanced provides advanced attack mitigation, visibility and attack notification, DDoS cost protection, and specialist support.

AWS Firewall Manager allows you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations, including centrally managing and deploying security groups, AWS WAF rules, and AWS Shield Advanced protections.

There are many AWS Partner Network (APN) solutions that provide additional capabilities and protection that work alongside the AWS solutions discussed, in categories such as intrusion detection systems (IDS). For more information, find an APN Partner.

For more information, see 4.4 Network Perimeter Defence in the MAS Notice 655 workbook on AWS Artifact.

Malware protection

“4.5. A relevant entity must ensure that one or more malware protection measures are implemented on every system, to mitigate the risk of malware infection, where such malware protection measures are available and can be implemented.”

Malware protection requires a multi-faceted approach, including all of the following:

  • Training your employees in security awareness
  • Finding and patching vulnerabilities within your AWS workloads
  • Segmenting your networks
  • Limiting access to your critical systems and data
  • Having a comprehensive backup and restore strategy
  • Detection of malware
  • Creating incident response plans

In the previous sections of this post, I covered security patching, network segmentation, and limiting access. Now I’ll review the remaining elements.

Employee security awareness is crucial, because the primary vector by which malware is installed within an organization is generally accepted to be phishing (or spear phishing), where an employee is misled into installing malware, or opens an attachment that uses a software vulnerability to install malware.

For backup and restore, a comprehensive and tested strategy is crucial, especially when the motivation of the malware is deletion, modification, or mass encryption (ransomware). You can review the AWS backup and restore solutions and leverage the various high-durability storage classes provided by Amazon Simple Storage Service (Amazon S3).

For malware protection, as with other security domains, it is important to have detective controls that complement the preventative ones: systems for early detection of malware, or of activity indicative of malware presence. When you understand what typical activity looks like across your AWS account, you have a baseline of network and user activity that you can continuously monitor for anomalies.

Amazon GuardDuty is a threat detection service that continuously monitors activity within your AWS environment for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. GuardDuty uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats.
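
A short boto3 sketch, assuming GuardDuty is not yet enabled in the account: create a detector, then list and print any current findings.

```python
import boto3

guardduty = boto3.client("guardduty")

# Enable GuardDuty in this account and Region.
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# Fetch and print any findings the detector has produced so far.
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
if finding_ids:
    findings = guardduty.get_findings(DetectorId=detector_id,
                                      FindingIds=finding_ids)
    for f in findings["Findings"]:
        print(f["Severity"], f["Type"])
```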

Continuing on the topic of malware detection, you should consider other approaches as well, including endpoint detection and response (EDR) solutions. AWS has a number of partners that specialize in this space. For more information, find an APN Partner.

Finally, you should make sure that you have a security incident response plan to help you respond to an incident, communicate during an incident, and recover from it. AWS recommends that you create these response plans in the form of playbooks. A good way to create a playbook is to start off simple and iterate to improve your plan. Before you need to respond to an actual event, you should consider the tasks that you can do ahead of time to improve your recovery timeframes. Some of the issues to consider include pre-provisioning access to your responders, and pre-deploying the tools that the responders or forensic teams will need. Importantly, do not wait for an actual incident to test your response and recovery plans. You should run game days to practice, improve and iterate.

For more information, see 4.5 Malware protection in the MAS Notice 655 workbook on AWS Artifact.

Multi-factor authentication

“4.6. … a relevant entity must ensure that multi-factor authentication is implemented for the following:
(a) all administrative accounts in respect of any operating system, database, application, security appliance or network device that is a critical system; and
(b) all accounts on any system used by the relevant entity to access customer information through the internet.”

When using multi-factor authentication (MFA), it’s important for you to think about the various layers at which you need to implement it.

For access to the AWS API, AWS Management Console, and AWS resources that use AWS Identity and Access Management (IAM), you can configure MFA with a number of different form factors, and apply it to users within your AWS accounts. As I mentioned in the Administrative accounts section, AWS recommends that you apply MFA to the root account user. Where possible, you should not use IAM users, but instead use identity federation with IAM roles. By using identity federation with IAM roles, you can apply and enforce MFA at the level of your identity provider, for example Active Directory Federation Services (AD FS) or AWS Single Sign-On. For highly privileged actions, you may want to configure MFA-protected API access to only allow the action if MFA authentication has been performed.
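
As a sketch of MFA-protected API access, the following identity-based policy statement (expressed here as a Python dict for illustration) allows a privileged action only when the caller authenticated with MFA; the chosen action is just an example.

```python
import json

# Allow a privileged action only if the request was MFA-authenticated.
mfa_protected_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "iam:DeleteRole",  # example of a highly privileged action
        "Resource": "*",
        "Condition": {
            "Bool": {"aws:MultiFactorAuthPresent": "true"}
        },
    }],
}
print(json.dumps(mfa_protected_policy, indent=2))
```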

With regard to third-party applications, including software as a service (SaaS), you should consider integration with AWS services or third-party services to provide MFA protection. For example, AWS Single Sign-On (SSO) includes built-in integrations to many business applications, including Salesforce, Office 365, and others.

For your own in-house applications, you may want to consider solutions such as Amazon Cognito. Amazon Cognito goes beyond standard MFA (which uses SMS or TOTP codes) and includes the option of adaptive authentication when you use the advanced security features. With these features enabled, when Amazon Cognito detects unusual sign-in activity, such as attempts from new locations and unknown devices, it can challenge the user with additional verification checks.

For more information, see 4.6 Multi-Factor authentication in the MAS Notice 655 workbook on AWS Artifact.

Conclusion

AWS products and services have security features designed to help you improve the security of your workloads, and meet your compliance requirements. Many AWS services are reviewed by independent third-party auditors, and these audit reports are available on AWS Artifact. You can use AWS services, tools, and guidance to address your side of the shared responsibility model to align with the requirements stated in Notice 655 – Notice on Cyber Hygiene.

Review the MAS Notice 655 – Cyber Hygiene – Workbook on AWS Artifact to understand both the AWS control environment (the AWS side of the shared responsibility model) and the guidance AWS provides to help you with your side of the shared responsibility model. You will find AWS guidance in the AWS Well-Architected Framework best practices, and where available or applicable to detective controls, in AWS Config rules and Amazon GuardDuty findings.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Darran Boyd

Darran is a Principal Security Solutions Architect at AWS, responsible for helping remove security blockers for our customers and accelerating their journey to the AWS Cloud. Darran’s focus and passion is to deliver strategic security initiatives that unlock and enable our customers at scale across the financial services industry and beyond.

Reflecting on my first year at Cloudflare as a Field Marketer in APAC

Post Syndicated from Els Shek original https://blog.cloudflare.com/reflecting-on-my-first-year-at-cloudflare/


Hey there! I am Els (short for Elspeth) and I am the Field Marketing and Events Manager for APAC. I am responsible for building brand awareness and supporting our lovely sales team in acquiring new logos across APAC.

I was inspired to write about my first year at Cloudflare because John, our CTO, encouraged more women to write for the Cloudflare blog after reviewing our blogging statistics and finding that more men than women blog for Cloudflare. I jumped at the chance because I thought this would be a great way to share side stories that people might not know about what it feels like to work at Cloudflare.

Why Cloudflare?

Before I continue, I must mention that I really wanted to join Cloudflare after reading our co-founder Michelle’s reply on Quora to “What is it like to work at Cloudflare?” Michelle’s answer was as follows:

“my answer is ‘adult-like.’ While we haven’t adopted this as our official company-wide mantra, I like the simplicity of that answer. People work hard, but go home at the end of the day. People care about their work and want to do a great job. When someone does a good job, their teammate tells them. When someone falls short, their colleague will let them know. I like that we communicate directly, no matter what seniority level you are.”

The main themes centered around high curiosity, the ability to get things done, and empathy.

The answer took me by surprise. I have read so many replies by top leaders of leading companies around the world, and I have never seen such a down-to-earth reply!

I was eager to join the company and test it out.

Day 1 – Onboarding in our San Francisco Headquarters

Every new hire at Cloudflare attends a two-week orientation in San Francisco (well, they used to until COVID-19 hit and orientation went virtual), with a comprehensive program that exposes them to all the different functions of the company. My most memorable session was the one conducted by Matthew Prince, who delivered a very engaging and theatrical crash course on the origins of Cloudflare and the competitive landscape surrounding cloud computing. Even though the session took 1.5 hours, I enjoyed every second of it, and I was very impressed with Matthew’s passion and conviction behind Cloudflare’s mission to help build a better Internet.

There was also a very impressive session conducted by Joe Sullivan, our Chief Security Officer. Joe introduced us to the importance of cybersecurity through several real-life examples and guided us through some key steps to protect ourselves. Joe left a very deep impression on me because he spoke in a very simple manner. This mattered for someone like me who didn’t come from a security background, as I needed to understand why I was joining this company and why my contribution matters.

I also had the chance to meet the broader members of my marketing team. I had about twenty meetings arranged in the span of one week, and I am thankful to everyone who took time out of their busy schedules to help me understand how the global team works together. Needless to say, everyone was really smart, nice, and down to earth. I left the San Francisco office feeling really good about my start at Cloudflare, but little did I know that was just the tip of the iceberg.

Back to Singapore, where the fun happens!


After I returned to Singapore, Krishna, my manager, quickly put me to work building a pipeline for the APAC region. In a short span of six months, I had to quickly bring myself up to speed on the systems and processes in place, in addition to executing events across the region to ensure a continuous pipeline for our ever-growing sales team. I am going to be completely transparent here: it was overwhelming and stressful, and I was expected to deliver results in a short period of time. However, it has also been the most exciting period of personal and professional growth for me, and I am so grateful for the opportunity to join an amazing team in one of the most exciting companies of the century.

As a new team member, I had to quickly understand the needs of the sales leaders from the ASEAN countries, ANZ, the Greater China Region, India, Japan, and Korea. There were so many things to learn, and everyone was very supportive and helpful. More importantly, although there were many challenges and mistakes made along the way, I felt supported by the entire team throughout.

In my first six months, I had to immediately plan and execute an average of 28 events per quarter, ranging from flagship events like the Gartner Security Risk Management conferences in Sydney and Mumbai, the largest gaming conference ChinaJoy in Shanghai, and the AWS series across the ASEAN countries, to leading security conferences in Korea and Japan. When Cloudflare IPO’d on September 13, 2019, I was tasked with organising an IPO party for over 150 people in our Singapore office in a short span of three weeks. What an adventure!

At our largest event in Singapore, over 30 Cloudflarians from the Singapore team took time to help out.

Just when I thought 28 events per quarter was an achievement (for myself), my team and I were given a once-in-a-lifetime opportunity to lead a series of projects related to our Japan office opening.

“As the third largest economy, and one of the most Internet-connected countries in the world, Japan was a clear choice when considering expansion locations for our next APAC office,” said Matthew Prince, co-founder and CEO of Cloudflare. “Our new facility and team in Tokyo present a unique opportunity to be closer to our customers, and help even more businesses and users experience a better Internet across Japan and throughout the world.”

Japan was a new market for me, and I had to start everything from scratch. I started off by launching our very first Japan brand campaign, where the team worked closely with leading Japanese media companies to launch digital advertisements, advertorials, and video campaigns to spread awareness across Japan in just under three months. While it was a completely unknown path for us, the team was really good at experimenting with new ideas, analysing results, and iterating and improving on our campaigns week by week.

Check out our amazing Japan city cloud designed by our very talented team!

I also had the opportunity to be part of our very first hybrid (physical and virtual) press conference, held across Singapore and Tokyo, where we had 35 journalists participate (6 top-tier media in attendance and 29 journalists online). News of the office opening was covered in Japan’s most influential business newspaper, Nikkei, in an article titled “US IT giant Cloudflare establishes Japanese corporation.” I cannot wait to tell you more about what’s coming down the line!

Career Planning – Take charge of your career!

With so many things going on, it is easy to lose sight of the long-term goal. Jake, our CMO, is very focused on ensuring the team remains engaged and motivated throughout their time at Cloudflare. He launched a mandatory career conversations program where everyone has at least one discussion with their manager on how they envision their future within the company. This was a very useful exercise for me, as I was able to have an open discussion with my manager on the various options I could consider, since Cloudflare is a company that supports cross-departmental and cross-border transitions. It is beneficial to know that I am able to explore different opportunities going forward and to lock down some next steps on how I will get there. Exciting times!

Inclusivity – Women for Women and Diversity

As a young woman, I am very fortunate to be part of the APAC team led by Aliza Knox. Aliza is extremely passionate about encouraging women to pursue opportunities in business and tech. I have never felt more comfortable than under her leadership; gender discrimination is real, and most companies are predominantly led by men. With Aliza, all opinions and ideas are strongly welcomed, and I have never felt bound by my age, seniority, or experience in reaching for the skies. It is OK to be ambitious, to do more, to ask questions, or to do something as simple as getting 15 minutes of her time to ask if I should pursue an online course at MIT (and I did!).


Did I also mention Cloudflare’s Employee Resource Groups (ERGs)? I am the APAC lead for Womenflare, where our mission is to cultivate an inclusive, inspiring, and safe environment that supports, elevates, and ensures equal opportunities for success for all who identify as women at Cloudflare. As part of our global Womenflare initiative, I organised an International Women’s Day luncheon in March this year, where members of our APAC leadership team shared their experiences of managing career and family commitments. Other ERGs at Cloudflare include Proudflare, which supports and provides resources for the LGBTQIA+ community, Afroflare, which aims to build a better global Afro-community at Cloudflare and beyond, and many more!

COVID-19

I am writing this blog post as we all embrace the challenges and opportunities presented by COVID-19. When COVID-19 first hit APAC, I was very impressed with how the global team exhibited flexibility in adapting to everyday challenges, from showing great empathy for how challenging it can be to work from home, to accepting that it is OK to try new things and make mistakes as long as we can learn from them.


Our Business Continuity Team provided regular employee communication on local guidelines and Work From Home next steps. Our office support team immediately supplied computer equipment and office chairs that employees could bring home for their remote working needs. Our Site Leads came up with different initiatives to ensure the team remains connected, through a series of virtual yoga sessions, Friday wine-downs, and lunch and games. The latest activity we ran was Activeflare, where a group of us from the Singapore and Australia offices exercised together on a Saturday and drew a map of our activities using tracking technology. That was fun!

The global team has also launched a series of fireside chats where we hear from leaders of leading companies, a really nice touch that gives us exposure to the minds of great leaders we otherwise would not have the opportunity to learn from. My favourites so far are from Doug, our Chief Legal Officer, and Katrin Suder, one of our Board Members.

My very first experience as a TV host, on Cloudflare TV.

Matthew, Cloudflare co-founder and CEO, recently launched Cloudflare TV for the team to experiment and connect with the Cloudflare community, even while we’re locked down. That community shares common interests in topics like web performance, Internet security, edge computing, and network reliability. Aliza and I are hosting a series of Zoomelier sessions in APAC soon to connect with winemakers and sommeliers across the region and share some interesting wine recommendations that one can enjoy with technology. So I hope you’ll tune in, geek out, feel part of our community, and learn more about Cloudflare and the people who are building it. Check out the Cloudflare TV Guide: cloudflare.tv/schedule

Going forward, into my second year at Cloudflare, what’s next?

I am at the point where I feel I have a good amount of experience to do a good job, but not enough to be where I want to be. At Cloudflare, I strongly feel that “the more I learn, the less I realise I know” (Socrates). I aim to continuously learn and build up my capabilities to strategise and deliver results for the present and the future, and I must end this blog post with my learnings from John: “Overnight success takes at least 10 years. I read a lot to stay up to date on what’s happening internally and externally. The gym (exercise) is really important to me. It’s challenging and takes my mind off everything. Many people seem to view the gym as dead time to fill with TED videos, podcasts or other ‘useless’ activities. I love the fact that it’s the one time I stop thinking.” I have applied this learning to both my personal and professional life, and it has made a huge difference. Thank you John.

If you want to join an impressive team and work for a very dynamic company to help create a better Internet, we’re looking for many different profiles in our offices all over the planet. Have a look!

OSPAR 2020 report now available with 105 services in scope

Post Syndicated from Niyaz Noor original https://aws.amazon.com/blogs/security/ospar-2020-report-now-available-with-105-services-in-scope/

We are excited to announce the addition of 41 new services in the scope of our latest Outsourced Service Provider Audit Report (OSPAR) audit cycle, for a total of 105 services in the Asia Pacific (Singapore) Region. The newly added services include:

  • AWS Security Hub, which gives you a comprehensive view of high-priority security alerts and security posture across your AWS accounts.
  • AWS Organizations, which helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts.
  • Amazon CloudFront, which provides a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
  • AWS Backup, which is a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services.

You can download our latest OSPAR report available in AWS Artifact.

An independent third-party auditor performs the OSPAR assessment. The assessment demonstrates that AWS has a system of controls in place that meets the Association of Banks in Singapore’s (ABS) Guidelines on Control Objectives and Procedures for Outsourced Service Providers. Our alignment with the ABS guidelines demonstrates to customers our commitment to meeting the security expectations for cloud service providers set by the financial services industry in Singapore. You can leverage the OSPAR assessment report to conduct due diligence, minimizing the effort and costs required for compliance.

As always, we are committed to bringing new services into the scope of our OSPAR program based on your architectural and regulatory needs. Please reach out to your AWS account team if you have questions about the OSPAR report.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Niyaz Noor

Niyaz is the Security Audit Program Manager for the Asia Pacific and Japan Regions, leading multiple security certification programs across these Regions. Throughout his career, he has helped numerous cloud service providers obtain global and regional security certifications. He is passionate about delivering programs that build customers’ trust and provide them assurance on cloud security.

Virtual Interning Offers Unique Challenges and Opportunities

Post Syndicated from Cate Danielson original https://blog.cloudflare.com/virtual-interning-offers-unique-challenges-and-opportunities/


I am in my third year at Northeastern University, pursuing an undergraduate degree in Marketing and Psychology. Five months ago I joined Cloudflare as an intern on the APAC Marketing team in the beautiful Singapore office. When searching for internships, Cloudflare stood out as a place where I could gain skills in marketing, learn from amazing mentors, and have space to take ownership of projects. As a young but well-established company, Cloudflare provides the resources for its interns to work cross-functionally and creatively, and to truly be a part of the exponential growth of the company.

My experience at Cloudflare

Earlier this week, I hopped on a virtual meeting with a few coworkers, thinking everything was set to record a webinar. As I shared my screen to explain how to navigate the platform, I realised the setup was incorrect and we couldn’t start on time. Due to the virtual nature of the meeting, my coworkers didn’t see the panic on my face and had no idea what was going on. I corrected the issue and set up an additional trial-run session, issuing apologies to both coworkers. They took it in stride and said that it happens to the best of us. At Cloudflare, everyone is understanding of hiccups and encourages me to find a solution. This understanding attitude has allowed me to reach out of my comfort zone and work on new skills. Still, there is no doubt that working remotely can lead to additional stressors for employees. For interns, who are prone to making mistakes since this is often our first exposure to the workplace, having limited access to coworkers increases the challenge.

Though there have been some challenges, virtual interning still provides many opportunities. Over my time here, I have worked with my team to develop the trust and autonomy to lead projects and learn new systems and software. I had the opportunity to create and run campaigns, including setup, execution, and promotion. I took charge of our recent APAC-wide webinars, promoted them on social platforms, and worked with vendors. Through this process, I learned to analyse the quality of leads from different sources, which gave me the ability to develop post-quarter analyses looking at webinar performance and discerning lessons we can take into future quarters.

I also conducted various data analysis projects, beginning with data extraction and leading to analysis of the holistic business impact. For instance, I led a detailed data analysis project looking into the performance of events and how they might be improved. I learned new software, such as Salesforce, and learned how to tell a story with data. Through analysis of the sales cycle and conversion rates, we were able to pinpoint key areas for improvement in the execution of events.

Among these many exciting projects, I have also learned from my experienced teammates how to work smart, and I have been lucky to be part of a great company. As I come up on my final month as an intern at Cloudflare, I am excited to take the lessons I have learned over the past five months into my final years in school and into whatever I end up doing after.

A guide for those beginning their virtual intern experience

Cloudflare has provided a seamless transition to remote work for full-time employees, interns, and new hires. They have provided resources, such as virtual fitness classes and fireside chats, for us to stay healthy mentally, physically, and professionally. Even so, during these tumultuous times, it can be stressful to start an internship (possibly your first) in a remote setting.

With one month left, and seeing many of my fellow college students begin their own summer internships, I’m reflecting on the multitude of lessons I have learned at Cloudflare. While I was lucky to have three months working with the team in the office, I know many interns are worried about starting internships that are now fully remote. Having worked from home for the past two months, I hope to provide incoming interns with some guidance on how to excel during a remote internship.

Set up a LOT of meetings and expand your network

Recently, I was curious to learn more about what the different teams were doing without being able to make in-person sales calls. I asked my manager if I could listen in to a few more meetings and he quickly agreed. I have since created a better picture of the different teams’ activities and initiated conversations with my manager that led to a deeper understanding of the sales cycle. Being engaged, interested, and forward with my request to attend more meetings provided me with additional learning experiences.

Don’t wait around for people to set up meetings with you or give you tasks. Your co-workers still have a full-time job to do, so finding time to train you might slip their minds, especially since they can’t see you. When I first started my internship, my manager encouraged me to reach out to my team (and other teams) and come prepared with lots of questions. I started filling my calendar with short 15-30 minute meetings to get to know the different teams in the office.

This is even more crucial for those working remotely. You may not have the opportunity to speak with co-workers in the elevator or the All Hands room. Make up for this by setting up introductory meetings in your first few weeks and don’t be afraid to ask to be part of meetings. You will be able to learn more about your organisation and what interests you.

Speak up and don’t stay on mute

As an intern, I am usually the most inexperienced person in the meeting, which can make it nerve-wracking to unmute myself and speak up. With all meetings now in a video conference format, it can be easy to say “hi,” mute yourself, and spend the rest of the time listening to everyone else speak. I have learned that I won’t get the most out of my experience unless I offer my opinion and ask questions. Often I am wrong, but my teammates explain why. For example, I came to a meeting with my manager prepared with a draft of an email. He helped me edit it and make it even more effective, and then provided me with extra reading materials and templates to help me improve in the future. Because of the questions and opinions I share during these meetings, I now have a greater understanding of branding and how to position a company in the market.

As an intern starting out in a virtual environment, be fully engaged in meetings so your team can learn from your opinions and vice versa. Work to overcome the intimidation you may be feeling and take initiative to show your team what you have to offer. Making sure your video is on during every meeting can help you stay present and focused.

Everyone is dealing with unique circumstances; use this to get to know your coworkers

In many companies, almost all employees are working from home, which provides a unique commonality. It is an easy talking point to start with in any meeting and helps you get to know your coworkers. Use this as an opportunity to get to know them on a deeper level and share something about yourself. You can discuss interesting books you have read or TV shows you love. It is also a great opportunity to set up fun virtual activities. My manager recently set up a “Fancy Dress Happy Hour” where we all dressed up as our favourite fictional characters and chatted about life stuck at home. Don’t be afraid to set up activities like this. Chances are, the rest of your team is just as tired of being stuck at home as you are.

Recognising this could be the new working reality (for a while more)

The events of 2020 have led to drastic changes in the business world. Everyone is learning a new way to work and adapting to change. It may be too soon to know what a fully remote internship will look like, but it is a great opportunity to find new and innovative ways to intern. Being an intern is a unique experience where you are not only allowed, but encouraged to try new things, even those not included in your job description. Virtual interning offers many unique challenges, but also provides the opportunity to learn how to quickly adapt and find new opportunities.

Cloudflare is a company that has urged me to gain a better grasp of my goals and provided me with opportunities to act towards fulfilling them. It is a great place to understand what a post-university job will look like and exemplifies how much fun it can be. This summer, they have doubled their intern class and work to amplify interns’ voices so they are a meaningful part of the company. If you are interested in being part of an innovative, collaborative environment, consider applying for an internship experience at Cloudflare here.

AWS achieves OSPAR outsourcing standard for Singapore financial industry

Post Syndicated from Brandon Lim original https://aws.amazon.com/blogs/security/aws-achieves-ospar-outsourcing-standard-for-singapore-financial-industry/

AWS has achieved the Outsourced Service Provider Audit Report (OSPAR) attestation for 66 services in the Asia Pacific (Singapore) Region. The OSPAR assessment is performed by an independent third-party auditor. AWS’s OSPAR demonstrates that AWS has a system of controls in place that meets the Association of Banks in Singapore’s Guidelines on Control Objectives and Procedures for Outsourced Service Providers (ABS Guidelines).

The ABS Guidelines are intended to assist financial institutions in understanding approaches to due diligence, vendor management, and key technical and organizational controls that should be implemented in cloud outsourcing arrangements, particularly for material workloads. The ABS Guidelines are closely aligned with the Monetary Authority of Singapore’s Outsourcing Guidelines, and they’re one of the standards that the financial services industry in Singapore uses to assess the capability of their outsourced service providers (including cloud service providers).

AWS’s alignment with the ABS Guidelines demonstrates to customers AWS’s commitment to meeting the high expectations for cloud service providers set by the financial services industry in Singapore. Customers can leverage OSPAR to conduct their due diligence, minimizing the effort and costs required for compliance. AWS’s OSPAR report is now available in AWS Artifact.

You can find additional resources about regulatory requirements in the Singapore financial industry at the AWS Compliance Center. If you have questions about AWS’s OSPAR, or if you’d like to inquire about how to use AWS for your material workloads, please contact your AWS account team.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Brandon Lim

Brandon is the Head of Security Assurance for Financial Services, Asia-Pacific. Brandon leads AWS’s regulatory and security engagement efforts for the Financial Services industry across the Asia Pacific region. He is passionate about working with Financial Services Regulators in the region to drive innovation and cloud adoption for the financial industry.

Singapore financial services: new resources for customer side of the shared responsibility model

Post Syndicated from Darran Boyd original https://aws.amazon.com/blogs/security/singapore-financial-services-new-resources-for-customer-side-of-shared-responsibility-model/

Based on customer feedback, we’ve updated our AWS User Guide to Financial Services Regulations and Guidelines in Singapore whitepaper, as well as our AWS Monetary Authority of Singapore Technology Risk Management Guidelines (MAS TRM Guidelines) Workbook, which is available for download via AWS Artifact. Both resources now include considerations and best practices for the customer portion of the AWS Shared Responsibility Model.

The whitepaper provides considerations for financial institutions as they assess their responsibilities when using AWS services with regard to the MAS Outsourcing Guidelines, MAS TRM Guidelines, and Association of Banks in Singapore (ABS) Cloud Computing Implementation Guide.

The MAS TRM Workbook provides best practices for the customer portion of the AWS Shared Responsibility Model—that is, guidance on how you can manage security in the AWS Cloud. The guidance and best practices are sourced from the AWS Well-Architected Framework.

The Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using the Framework, you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. The process for reviewing an architecture is a constructive conversation about architectural decisions, and is not an audit mechanism. We believe that having well-architected systems greatly increases the likelihood of business success. For more information, see the AWS Well-Architected homepage.

The compliance controls provided by the workbook also continue to address the AWS side of the Shared Responsibility Model (security of the AWS Cloud).

View the updated whitepaper here, or download the updated AWS MAS TRM Guidelines Workbook via AWS Artifact.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Darran Boyd

Darran is a Principal Security Solutions Architect at AWS, responsible for helping remove security blockers for our customers and accelerating their journey to the AWS Cloud. Darran’s focus and passion is to deliver strategic security initiatives that unlock and enable our customers at scale across the financial services industry and beyond.

HackSpace magazine 7: Internet of Everything

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-magazine-7-internet-of-everything/

We’re usually averse to buzzwords at HackSpace magazine, but not this month: in issue 7, we’re taking a deep dive into the Internet of Things.

Internet of Things (IoT)

To many people, IoT is a shady term used by companies to sell you something you already own, but this time with WiFi; to us, it’s a way to make our builds smarter, more useful, and more connected. In HackSpace magazine #7, you can join us on a tour of the boards that power IoT projects, marvel at the ways in which other makers are using IoT, and get started with your first IoT project!

Awesome projects

DIY retro computing: this issue, we’re taking our collective hat off to Spencer Owen. He stuck his home-brew computer on Tindie thinking he might make a bit of beer money — now he’s paying the mortgage with his making skills and inviting others to build modules for his machine. And if that tickles your fancy, why not take a crack at our Z80 tutorial? Get out your breadboard, assemble your jumper wires, and prepare to build a real-life computer!


Shameless patriotism: combine Lego, Arduino, and the car of choice for 1960s gold bullion thieves, and you’ve got yourself a groovy weekend project. We proudly present to you one man’s epic quest to add LED lights (controllable via a smartphone!) to his daughter’s LEGO Mini Cooper.

Makerspaces

Patriotism intensifies: for the last 200-odd years, the Black Country has been a hotbed of making. Urban Hax, based in Walsall, is the latest makerspace to show off its riches in the coveted Space of the Month pages. Every space has its own way of doing things, but not every space has a portrait of Rob Halford on the wall. All hail!


Diversity: advice on diversity often boils down to ‘Be nice to people’, which might feel more vague than actionable. This is where we come in to help: it is truly worth making the effort to give people of all backgrounds access to your makerspace, so we take a look at why it’s nice to be nice, and at the ways in which one makerspace has put niceness into practice — with great results.

And there’s more!

We also show you how to easily calculate the size and radius of laser-cut gears, use a bank of LEDs to etch PCBs in your own mini factory, and use chemistry to mess with your lunch menu.


All this plus much, much more waits for you in HackSpace magazine issue 7!

Get your copy of HackSpace magazine

If you like the sound of that, you can find HackSpace magazine in WHSmith, Tesco, Sainsbury’s, and independent newsagents in the UK. If you live in the US, check out your local Barnes & Noble, Fry’s, or Micro Center next week. We’re also shipping to stores in Australia, Hong Kong, Canada, Singapore, Belgium, and Brazil, so be sure to ask your local newsagent whether they’ll be getting HackSpace magazine.

And if you can’t get to the shops, fear not: you can subscribe from £4 an issue from our online shop. And if you’d rather try before you buy, you can always download the free PDF. Happy reading, and happy making!

The post HackSpace magazine 7: Internet of Everything appeared first on Raspberry Pi.

Hackspace magazine 6: Paper Engineering

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-magazine-6/

HackSpace magazine is back with our brand-new issue 6, available for you on shop shelves, in your inbox, and on our website right now.


Paper is probably the first thing you ever used for making, and for good reason: in no other medium can you iterate through 20 designs at the cost of only a few pennies. We’ve roped in Rob Ives to show us how to make a barking paper dog with moveable parts and a cam mechanism. Even better, the magazine includes this free paper automaton for you to make yourself. That’s right: free!

At the other end of the scale, there’s the forge, where heat, light, and noise combine to create immutable steel. We speak to Alec Steele, YouTuber, blacksmith, and philosopher, about his amazingly beautiful Damascus steel creations, and about why there’s no difference between grinding a knife and blowing holes in a mountain to build a road through it.

HackSpace magazine 6 Alec Steele

Do it yourself

You’ve heard of reading glasses — how about glasses that read for you? Using a camera, optical character recognition software, and a text-to-speech engine (and of course a Raspberry Pi to hold it all together), reader Andrew Lewis has hacked together his own system to help deal with age-related macular degeneration.

It’s the definition of hacking: here’s a problem, there’s no solution in the shops, so you go and build it yourself!

Radio

60 years ago, the cutting edge of home hacking was the transistor radio. Before the internet was dreamt of, the transistor radio made the world smaller and brought people together. Nowadays, the components you need to build a radio are cheap and easily available, so if you’re in any way electronically inclined, building a radio is an ideal excuse to dust off your soldering iron.

Tutorials

If you’re a 12-month subscriber (if you’re not, you really should be), you’ve no doubt been thinking of all sorts of things to do with the Adafruit Circuit Playground Express we gave you for free. How about a sewable circuit for a canvas bag? Use the accelerometer to detect patterns of movement — walking, for example — and flash a series of lights in response. It’s clever, fun, and an easy way to add some programmable fun to your shopping trips.


We’re also making gin, hacking a children’s toy car to unlock more features, and getting started with robot sumo to fill the void left by the cancellation of Robot Wars.

HackSpace magazine 6

All this, plus an 11-metre tall mechanical miner, in HackSpace magazine issue 6 — subscribe here from just £4 an issue or get the PDF version for free. You can also find HackSpace magazine in WHSmith, Tesco, Sainsbury’s, and independent newsagents in the UK. If you live in the US, check out your local Barnes & Noble, Fry’s, or Micro Center next week. We’re also shipping to stores in Australia, Hong Kong, Canada, Singapore, Belgium, and Brazil, so be sure to ask your local newsagent whether they’ll be getting HackSpace magazine.

The post Hackspace magazine 6: Paper Engineering appeared first on Raspberry Pi.

AWS Certificate Manager Launches Private Certificate Authority

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-certificate-manager-launches-private-certificate-authority/

Today we’re launching a new feature for AWS Certificate Manager (ACM), Private Certificate Authority (CA). This new service allows ACM to act as a private subordinate CA. Previously, if a customer wanted to use private certificates, they needed specialized infrastructure and security expertise that could be expensive to maintain and operate. ACM Private CA builds on ACM’s existing certificate capabilities to help you easily and securely manage the lifecycle of your private certificates with pay-as-you-go pricing. This enables developers to provision certificates in just a few simple API calls, while administrators have a central CA management console and fine-grained access control through granular IAM policies. ACM Private CA keys are stored securely in AWS-managed hardware security modules (HSMs) that adhere to FIPS 140-2 Level 3 security standards. ACM Private CA automatically maintains certificate revocation lists (CRLs) in Amazon Simple Storage Service (S3) and lets administrators generate audit reports of certificate creation with the API or console. This service is packed full of features, so let’s jump in and provision a CA.

Provisioning a Private Certificate Authority (CA)

First, I’ll navigate to the ACM console in my region and select the new Private CAs section in the sidebar. From there I’ll click Get Started to start the CA wizard. For now, I only have the option to provision a subordinate CA so we’ll select that and use my super secure desktop as the root CA and click Next. This isn’t what I would do in a production setting but it will work for testing out our private CA.

Now, I’ll configure the CA with some common details. The most important thing here is the Common Name which I’ll set as secure.internal to represent my internal domain.

Now I need to choose my key algorithm. You should choose the best algorithm for your needs, but know that ACM has a limitation today: it can only manage certificates that chain up to RSA CAs. For now, I’ll go with RSA 2048 bit and click Next.

In this next screen, I’m able to configure my certificate revocation list (CRL). CRLs are essential for notifying clients in the case that a certificate has been compromised before certificate expiration. ACM will maintain the revocation list for me, and I have the option of routing my S3 bucket to a custom domain. In this case I’ll create a new S3 bucket to store my CRL in and click Next.

Finally, I’ll review all the details to make sure I didn’t make any typos and click Confirm and create.

A few seconds later and I’m greeted with a fancy screen saying I successfully provisioned a certificate authority. Hooray! I’m not done yet though. I still need to activate my CA by creating a certificate signing request (CSR) and signing that with my root CA. I’ll click Get started to begin that process.

Now I’ll copy the CSR or download it to a server or desktop that has access to my root CA (or potentially another subordinate – so long as it chains to a trusted root for my clients).
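
If you’d rather script this step, the CSR can also be fetched programmatically. Here’s a minimal sketch using boto3; the CA ARN below is a placeholder:

import boto3

# Hypothetical CA ARN, for illustration only
ca_arn = 'arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/example'

acm_pca = boto3.client('acm-pca')

# Retrieve the PEM-encoded CSR for the subordinate CA
csr = acm_pca.get_certificate_authority_csr(CertificateAuthorityArn=ca_arn)['Csr']
print(csr)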

Now I can use a tool like openssl to sign my cert and generate the certificate chain.


$ openssl ca -config openssl_root.cnf -extensions v3_intermediate_ca -days 3650 -notext -md sha256 -in csr/CSR.pem -out certs/subordinate_cert.pem
Using configuration from openssl_root.cnf
Enter pass phrase for /Users/randhunt/dev/amzn/ca/private/root_private_key.pem:
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
stateOrProvinceName   :ASN.1 12:'Washington'
localityName          :ASN.1 12:'Seattle'
organizationName      :ASN.1 12:'Amazon'
organizationalUnitName:ASN.1 12:'Engineering'
commonName            :ASN.1 12:'secure.internal'
Certificate is to be certified until Mar 31 06:05:30 2028 GMT (3650 days)
Sign the certificate? [y/n]:y


1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated

After that I’ll copy my subordinate_cert.pem and certificate chain back into the console and click Next.

Finally, I’ll review all the information and click Confirm and import. I should see a screen like the one below that shows my CA has been activated successfully.

Now that I have a private CA we can provision private certificates by hopping back to the ACM console and creating a new certificate. After clicking create a new certificate I’ll select the radio button Request a private certificate, then I’ll click Request a certificate.

From there, the process is similar to provisioning a normal certificate in ACM.

Now I have a private certificate that I can bind to my ELBs, CloudFront Distributions, API Gateways, and more. I can also export the certificate for use on embedded devices or outside of ACM managed environments.
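
As a rough sketch of what that looks like outside the console, here’s how I’d request a private certificate with boto3 once the CA is active; the domain and ARN are placeholders:

import boto3

acm = boto3.client('acm')

# Hypothetical values, for illustration only
response = acm.request_certificate(
    DomainName='app.secure.internal',
    CertificateAuthorityArn='arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/example',
)
print(response['CertificateArn'])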

Available Now
ACM Private CA is a service in and of itself and it is packed full of features that won’t fit into a blog post. I strongly encourage interested readers to go through the developer guide and familiarize themselves with certificate-based security. ACM Private CA is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt) and EU (Ireland). Private CAs cost $400 per month (prorated) for each private CA. You are not charged for certificates created and maintained in ACM but you are charged for certificates where you have access to the private key (exported or created outside of ACM). The pricing per certificate is tiered, starting at $0.75 per certificate for the first 1,000 certificates and going down to $0.001 per certificate after 10,000 certificates.

I’m excited to see administrators and developers take advantage of this new service. As always please let us know what you think of this service on Twitter or in the comments below.

Randall

AWS Secrets Manager: Store, Distribute, and Rotate Credentials Securely

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-secrets-manager-store-distribute-and-rotate-credentials-securely/

Today we’re launching AWS Secrets Manager which makes it easy to store and retrieve your secrets via API or the AWS Command Line Interface (CLI) and rotate your credentials with built-in or custom AWS Lambda functions. Managing application secrets like database credentials, passwords, or API Keys is easy when you’re working locally with one machine and one application. As you grow and scale to many distributed microservices, it becomes a daunting task to securely store, distribute, rotate, and consume secrets. Previously, customers needed to provision and maintain additional infrastructure solely for secrets management which could incur costs and introduce unneeded complexity into systems.

AWS Secrets Manager

Imagine that I have an application that takes incoming tweets from Twitter and stores them in an Amazon Aurora database. Previously, I would have had to request a username and password from my database administrator and embed those credentials in environment variables or, in my race to production, even in the application itself. I would also need to have our social media manager create the Twitter API credentials and figure out how to store those. This is a fairly manual process, involving multiple people, that I have to restart every time I want to rotate these credentials. With Secrets Manager my database administrator can provide the credentials in Secrets Manager once and subsequently rely on a Secrets Manager-provided Lambda function to automatically update and rotate those credentials. My social media manager can put the Twitter API keys in Secrets Manager, which I can then access with a simple API call, and I can even rotate these programmatically with a custom Lambda function calling out to the Twitter API. My secrets are encrypted with the KMS key of my choice, and each of these administrators can explicitly grant access to these secrets with granular IAM policies for individual roles or users.

Let’s take a look at how I would store a secret using the AWS Secrets Manager console. First, I’ll click Store a new secret to get to the new secrets wizard. For my RDS Aurora instance it’s straightforward to simply select the instance and provide the initial username and password to connect to the database.

Next, I’ll fill in a quick description and a name to access my secret by. You can use whatever naming scheme you want here.

Next, we’ll configure rotation to use the Secrets Manager-provided Lambda function to rotate our password every 10 days.

Finally, we’ll review all the details and check out our sample code for storing and retrieving our secret!

Finally I can review the secrets in the console.

Now, if I needed to access these secrets I’d simply call the API.

import json
import boto3
secrets = boto3.client("secretsmanager")
rds = json.loads(secrets.get_secret_value(SecretId="prod/TwitterApp/Database")['SecretString'])
print(rds)

Which would give me the following values:


{'engine': 'mysql',
 'host': 'twitterapp2.abcdefg.us-east-1.rds.amazonaws.com',
 'password': '-)Kw>THISISAFAKEPASSWORD:lg{&sad+Canr',
 'port': 3306,
 'username': 'ranman'}

More than passwords

AWS Secrets Manager works for more than just passwords. I can store OAuth credentials, binary data, and more. Let’s look at storing my Twitter OAuth application keys.

Now, I can define the rotation for these third-party OAuth credentials with a custom AWS Lambda function that can call out to Twitter whenever we need to rotate our credentials.

Custom Rotation

One of the niftiest features of AWS Secrets Manager is custom AWS Lambda functions for credential rotation. This allows you to define completely custom workflows for credentials. Secrets Manager will call your Lambda function with a payload that includes a Step which specifies which step of the rotation you’re in, a SecretId which specifies which secret the rotation is for, and importantly a ClientRequestToken which is used to ensure idempotency in any changes to the underlying secret.

When you’re rotating secrets you go through a few different steps:

  1. createSecret
  2. setSecret
  3. testSecret
  4. finishSecret

The advantage of these steps is that you can add any kind of approval steps you want for each phase of the rotation. For more details on custom rotation check out the documentation.
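
To make those steps concrete, here’s a minimal skeleton of what a custom rotation function might look like. The handler shape follows the payload described above; the body of each step is left as a sketch:

import boto3

secrets_client = boto3.client('secretsmanager')

def lambda_handler(event, context):
    # Fields provided by Secrets Manager on every rotation invocation
    step = event['Step']                 # createSecret, setSecret, testSecret, or finishSecret
    secret_id = event['SecretId']        # which secret is being rotated
    token = event['ClientRequestToken']  # used for idempotency across retries

    if step == 'createSecret':
        # Generate a new candidate secret and stage it (e.g. under the AWSPENDING label)
        pass
    elif step == 'setSecret':
        # Apply the staged secret to the target service (e.g. the database)
        pass
    elif step == 'testSecret':
        # Verify that the staged secret actually works before cutting over
        pass
    elif step == 'finishSecret':
        # Promote the staged secret to become the current version
        pass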

Available Now
AWS Secrets Manager is available today in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), and South America (São Paulo). Secrets are priced at $0.40 per month per secret and $0.05 per 10,000 API calls. I’m looking forward to seeing more users adopt rotating credentials to secure their applications!

Randall

HackSpace magazine 5: Inside Adafruit

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-5/

There’s a new issue of HackSpace magazine on the shelves today, and as usual it’s full of things to make and do!

HackSpace magazine issue 5 Adafruit

Adafruit

We love making hardware, and we’d also love to turn this hobby into a way to make a living. So in the hope of picking up a few tips, we spoke to the woman behind Adafruit: Limor Fried, aka Ladyada.

HackSpace magazine issue 5 Adafruit

Adafruit has played a massive part in bringing the maker movement into homes and schools, so we’re chuffed to have Limor’s words of wisdom in the magazine.

Raspberry Pi 3B+

As you may have heard, there’s a new Pi in town, and that can only mean one thing for HackSpace magazine: let’s test it to its limits!

HackSpace magazine issue 5 Adafruit

The Raspberry Pi 3 Model B+ is faster, better, and stronger, but what does that mean in practical terms for your projects?

Toys

Kids are amazing! Their curious minds, untouched by mundane adulthood, come up with crazy stuff that no sensible grown-up would think to build. No sensible grown-up, that is, apart from the engineers behind Kids Invent Stuff, the brilliant YouTube channel that takes children’s inventions and makes them real.

So what is Kids Invent Stuff?!

Kids Invent Stuff is the YouTube channel where kids’ invention ideas get made into real working inventions. Learn more about Kids Invent Stuff at www.kidsinventstuff.com.

We spoke to Ruth Amos, entrepreneur, engineer, and one half of the Kids Invent Stuff team.

Buggy!

It shouldn’t just be kids who get to play with fun stuff! This month, in the name of research, we’ve bought a Stirling engine–powered buggy from Shenzhen.

HackSpace magazine issue 5 Adafruit

This ingenious mechanical engine is the closest you’ll get to owning a home-brew steam engine without running the risk of having a boiler explode in your face.

Tutorials

In this issue, turn a Dremel multitool into a workbench saw with some wood, perspex, and a bit of laser cutting; make a Starfleet com-badge and pretend you’re Captain Jean-Luc Picard (shaving your hair off not compulsory); add intelligence to builds the easy way with Node-RED; and get stuck into Cheerlights, one of the world’s biggest IoT projects.


All this, plus your ultimate guide to blinkenlights, and the only knot you’ll ever need, in HackSpace magazine issue 5.

Subscribe, save, and get free stuff

Save up to 35% on the retail price by signing up to HackSpace magazine today. When you take out a 12-month subscription, you’ll also get a free Adafruit Circuit Playground Express!

HackSpace magazine issue 5 Adafruit

Individual copies of HackSpace magazine are available in selected stockists across the UK, including Tesco, WHSmith, and Sainsbury’s. They’ll also be making their way across the globe to the USA, Canada, Australia, Brazil, Hong Kong, Singapore, and Belgium in the coming weeks, so ask your local retailer whether they’re getting a delivery.

You can also purchase your copy on the Raspberry Pi Press website, and browse our complete collection of other Raspberry Pi publications, such as The MagPi, Hello World, and Raspberry Pi Projects Books.

The post HackSpace magazine 5: Inside Adafruit appeared first on Raspberry Pi.

Message Filtering Operators for Numeric Matching, Prefix Matching, and Blacklisting in Amazon SNS

Post Syndicated from Christie Gifrin original https://aws.amazon.com/blogs/compute/message-filtering-operators-for-numeric-matching-prefix-matching-and-blacklisting-in-amazon-sns/

This blog was contributed by Otavio Ferreira, Software Development Manager for Amazon SNS

Message filtering simplifies the overall pub/sub messaging architecture by offloading message filtering logic from subscribers, as well as message routing logic from publishers. The initial launch of message filtering provided a basic operator that was based on exact string comparison. For more information, see Simplify Your Pub/Sub Messaging with Amazon SNS Message Filtering.

Today, AWS is announcing an additional set of filtering operators that bring even more power and flexibility to your pub/sub messaging use cases.

Message filtering operators

Amazon SNS now supports both numeric and string matching. Specifically, string matching operators allow for exact, prefix, and “anything-but” comparisons, while numeric matching operators allow for exact and range comparisons, as outlined below. Numeric matching operators work for values between -10e9 and +10e9 inclusive, with five digits of accuracy right of the decimal point.

  • Exact matching on string values (Whitelisting): Subscription filter policy   {"sport": ["rugby"]} matches message attribute {"sport": "rugby"} only.
  • Anything-but matching on string values (Blacklisting): Subscription filter policy {"sport": [{"anything-but": "rugby"}]} matches message attributes such as {"sport": "baseball"} and {"sport": "basketball"} and {"sport": "football"} but not {"sport": "rugby"}
  • Prefix matching on string values: Subscription filter policy {"sport": [{"prefix": "bas"}]} matches message attributes such as {"sport": "baseball"} and {"sport": "basketball"}
  • Exact matching on numeric values: Subscription filter policy {"balance": [{"numeric": ["=", 301.5]}]} matches message attributes {"balance": 301.500} and {"balance": 3.015e2}
  • Range matching on numeric values: Subscription filter policy {"balance": [{"numeric": ["<", 0]}]} matches negative numbers only, and {"balance": [{"numeric": [">", 0, "<=", 150]}]} matches any positive number up to 150.

As usual, you may apply the “AND” logic by appending multiple keys in the subscription filter policy, and the “OR” logic by appending multiple values for the same key, as follows:

  • AND logic: Subscription filter policy {"sport": ["rugby"], "language": ["English"]} matches only messages that carry both attributes {"sport": "rugby"} and {"language": "English"}
  • OR logic: Subscription filter policy {"sport": ["rugby", "football"]} matches messages that carry either the attribute {"sport": "rugby"} or {"sport": "football"}
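
Filter policies are set as a subscription attribute, so applying one is a single API call. Here’s a quick sketch with boto3 using the policies above; the subscription ARN is a placeholder:

import json
import boto3

sns = boto3.client('sns')

# Hypothetical subscription ARN, for illustration only
sns.set_subscription_attributes(
    SubscriptionArn='arn:aws:sns:us-east-1:123456789012:Orders:1a2b3c4d-example',
    AttributeName='FilterPolicy',
    AttributeValue=json.dumps({'sport': ['rugby'], 'language': ['English']}),
)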

Message filtering operators in action

Here’s how this new set of filtering operators works. The following example is based on a pharmaceutical company that develops, produces, and markets a variety of prescription drugs, with research labs located in Asia Pacific and Europe. The company built an internal procurement system to manage the purchasing of lab supplies (for example, chemicals and utensils), office supplies (for example, paper, folders, and markers) and tech supplies (for example, laptops, monitors, and printers) from global suppliers.

This distributed system is composed of the four following subsystems:

  • A requisition system that presents the catalog of products from suppliers, and takes orders from buyers
  • An approval system for orders targeted to Asia Pacific labs
  • Another approval system for orders targeted to European labs
  • A fulfillment system that integrates with shipping partners

As shown in the following diagram, the company leverages AWS messaging services to integrate these distributed systems.

  • Firstly, an SNS topic named “Orders” was created to take all orders placed by buyers on the requisition system.
  • Secondly, two Amazon SQS queues, named “Lab-Orders-AP” and “Lab-Orders-EU” (for Asia Pacific and Europe respectively), were created to backlog orders that are up for review on the approval systems.
  • Lastly, an SQS queue named “Common-Orders” was created to backlog orders that aren’t related to lab supplies, which can already be picked up by shipping partners on the fulfillment system.

The company also uses AWS Lambda functions to automatically process lab supply orders that don’t require approval or which are invalid.

In this example, because different types of orders have been published to the SNS topic, the subscribing endpoints have had to set advanced filter policies on their SNS subscriptions, to have SNS automatically filter out orders they can’t deal with.

As depicted in the above diagram, the following five filter policies have been created:

  • The SNS subscription that points to the SQS queue “Lab-Orders-AP” sets a filter policy that matches lab supply orders, with a total value greater than $1,000, and that target Asia Pacific labs only. These more expensive transactions require an approver to review orders placed by buyers.
  • The SNS subscription that points to the SQS queue “Lab-Orders-EU” sets a filter policy that matches lab supply orders, also with a total value greater than $1,000, but that target European labs instead.
  • The SNS subscription that points to the Lambda function “Lab-Preapproved” sets a filter policy that only matches lab supply orders that aren’t as expensive, up to $1,000, regardless of their target lab location. These orders simply don’t require approval and can be automatically processed.
  • The SNS subscription that points to the Lambda function “Lab-Cancelled” sets a filter policy that only matches lab supply orders with total value of $0 (zero), regardless of their target lab location. These orders carry no actual items, obviously need neither approval nor fulfillment, and as such can be automatically canceled.
  • The SNS subscription that points to the SQS queue “Common-Orders” sets a filter policy that blacklists lab supply orders. Hence, this policy matches only office and tech supply orders, which have a more streamlined fulfillment process, and require no approval, regardless of price or target location.
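
To make the first of these concrete, the filter policy on the “Lab-Orders-AP” subscription might look something like the JSON below. The location prefix comes from the example messages later in this post; the other attribute names are illustrative guesses rather than values taken from the diagram:

{
  "category": ["lab-supplies"],
  "total": [{"numeric": [">", 1000]}],
  "location": [{"prefix": "Asia-Pacific-"}]
}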

After the company finished building this advanced pub/sub architecture, they were then able to launch their internal procurement system and allow buyers to begin placing orders. The diagram above shows six example orders published to the SNS topic. Each order contains message attributes that describe the order, and cause them to be filtered in a different manner, as follows:

  • Message #1 is a lab supply order, with a total value of $15,700 and targeting a research lab in Singapore. Because the value is greater than $1,000, and the location “Asia-Pacific-Southeast” matches the prefix “Asia-Pacific-“, this message matches the first SNS subscription and is delivered to SQS queue “Lab-Orders-AP”.
  • Message #2 is a lab supply order, with a total value of $1,833 and targeting a research lab in Ireland. Because the value is greater than $1,000, and the location “Europe-West” matches the prefix “Europe-“, this message matches the second SNS subscription and is delivered to SQS queue “Lab-Orders-EU”.
  • Message #3 is a lab supply order, with a total value of $415. Because the value is greater than $0 and less than $1,000, this message matches the third SNS subscription and is delivered to Lambda function “Lab-Preapproved”.
  • Message #4 is a lab supply order, but with a total value of $0. Therefore, it only matches the fourth SNS subscription, and is delivered to Lambda function “Lab-Cancelled”.
  • Messages #5 and #6 actually aren’t lab supply orders; one is an office supply order, and the other is a tech supply order. Therefore, they only match the fifth SNS subscription, and are both delivered to SQS queue “Common-Orders”.

Although each message only matched a single subscription, each was tested against the filter policy of every subscription in the topic. Hence, depending on which attributes are set on the incoming message, the message might actually match multiple subscriptions, and multiple deliveries will take place. Also, it is important to bear in mind that subscriptions with no filter policies catch every single message published to the topic, as a blank filter policy equates to a catch-all behavior.

Summary

Amazon SNS allows for both string and numeric filtering operators. As explained in this post, string operators allow for exact, prefix, and “anything-but” comparisons, while numeric operators allow for exact and range comparisons. These advanced filtering operators bring even more power and flexibility to your pub/sub messaging functionality and also allow you to simplify your architecture further by removing even more logic from your subscribers.

Message filtering can be implemented easily with existing AWS SDKs by applying message and subscription attributes across all SNS supported protocols (Amazon SQS, AWS Lambda, HTTP, SMS, email, and mobile push). SNS filtering operators for numeric matching, prefix matching, and blacklisting are available now in all AWS Regions, for no extra charge.

To experiment with these new filtering operators yourself, and continue learning, try the 10-minute Tutorial Filter Messages Published to Topics. For more information, see Filtering Messages with Amazon SNS in the SNS documentation.

Now Available – AWS Serverless Application Repository

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-aws-serverless-application-repository/

Last year I suggested that you Get Ready for the AWS Serverless Application Repository and gave you a sneak peek. The Repository is designed to make it as easy as possible for you to discover, configure, and deploy serverless applications and components on AWS. It is also an ideal venue for AWS partners, enterprise customers, and independent developers to share their serverless creations.

Now Available
After a well-received public preview, the AWS Serverless Application Repository is now generally available and you can start using it today!

As a consumer, you will be able to tap into a thriving ecosystem of serverless applications and components that will be a perfect complement to your machine learning, image processing, IoT, and general-purpose work. You can configure and consume them as-is, or you can take them apart, add features, and submit pull requests to the author.

As a publisher, you can publish your contribution in the Serverless Application Repository with ease. You simply enter a name and a description, choose some labels to increase discoverability, select an appropriate open source license from a menu, and supply a README to help users get started. Then you enter a link to your existing source code repo, choose a SAM template, and designate a semantic version.
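
If you’d rather script the publishing step, the same operation should be possible through the SDK. Here’s a rough sketch assuming boto3’s serverlessrepo client and its create_application call; treat the parameter names as my best understanding rather than gospel, and check the SDK documentation before relying on them:

import boto3

serverlessrepo = boto3.client('serverlessrepo')

# Hypothetical application details, for illustration only
response = serverlessrepo.create_application(
    Name='my-todo-app',
    Author='jane-doe',
    Description='A simple todo application backed by Lambda',
    SpdxLicenseId='MIT',
    SemanticVersion='1.0.0',
    SourceCodeUrl='https://github.com/example/my-todo-app',
    TemplateBody=open('template.yaml').read(),  # the SAM template
)
print(response['ApplicationId'])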

Let’s take a look at both operations…

Consuming a Serverless Application
The Serverless Application Repository is accessible from the Lambda Console. I can page through the existing applications or I can initiate a search:

A search for “todo” returns some interesting results:

I simply click on an application to learn more:

I can configure the application and deploy it right away if I am already familiar with the application:

I can expand each of the sections to learn more. The Permissions section tells me which IAM policies will be used:

And the Template section displays the SAM template that will be used to deploy the application:

I can inspect the template to learn more about the AWS resources that will be created when the template is deployed. I can also use the templates as a learning resource in preparation for creating and publishing my own application.

The License section displays the application’s license:

To deploy todo, I name the application and click Deploy:

Deployment starts immediately and is done within a minute (application deployment time will vary, depending on the number and type of resources to be created):

I can see all of my deployed applications in the Lambda Console:

There’s currently no way for a SAM template to indicate that an API Gateway function returns binary media types, so I set this up by hand and then re-deploy the API:

Following the directions in the Readme, I open the API Gateway Console and find the URL for the app in the API Gateway Dashboard:

I visit the URL and enter some items into my list:

Publishing a Serverless Application
Publishing applications is a breeze! I visit the Serverless App Repository page and click on Publish application to get started:

Then I assign a name to my application, enter my own name, and so forth:

I can choose from a long list of open-source friendly SPDX licenses:

I can create an initial version of my application at this point, or I can do it later. Either way, I simply provide a version number, a URL to a public repository containing my code, and a SAM template:

Available Now
The AWS Serverless Application Repository is available now and you can start using it today, paying only for the AWS resources consumed by the serverless applications that you deploy.

You can deploy applications in the US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), and South America (São Paulo) Regions. You can publish from the US East (N. Virginia) or US East (Ohio) Regions for global availability.

Jeff;

 

New AWS Auto Scaling – Unified Scaling For Your Cloud Applications

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-auto-scaling-unified-scaling-for-your-cloud-applications/

I’ve been talking about scalability for servers and other cloud resources for a very long time! Back in 2006, I wrote “This is the new world of scalable, on-demand web services. Pay for what you need and use, and not a byte more.” Shortly after we launched Amazon Elastic Compute Cloud (EC2), we made it easy for you to do this with the simultaneous launch of Elastic Load Balancing, EC2 Auto Scaling, and Amazon CloudWatch. Since then we have added Auto Scaling to other AWS services including ECS, Spot Fleets, DynamoDB, Aurora, AppStream 2.0, and EMR. We have also added features such as target tracking to make it easier for you to scale based on the metric that is most appropriate for your application.

Introducing AWS Auto Scaling
Today we are making it easier for you to use the Auto Scaling features of multiple AWS services from a single user interface with the introduction of AWS Auto Scaling. This new service unifies and builds on our existing, service-specific, scaling features. It operates on any desired EC2 Auto Scaling groups, EC2 Spot Fleets, ECS tasks, DynamoDB tables, DynamoDB Global Secondary Indexes, and Aurora Replicas that are part of your application, as described by an AWS CloudFormation stack or in AWS Elastic Beanstalk (we’re also exploring some other ways to flag a set of resources as an application for use with AWS Auto Scaling).

You no longer need to set up alarms and scaling actions for each resource and each service. Instead, you simply point AWS Auto Scaling at your application and select the services and resources of interest. Then you select the desired scaling option for each one, and AWS Auto Scaling will do the rest, helping you to discover the scalable resources and then creating a scaling plan that addresses the resources of interest.

If you have tried to use any of our Auto Scaling options in the past, you undoubtedly understand the trade-offs involved in choosing scaling thresholds. AWS Auto Scaling gives you a variety of scaling options: You can optimize for availability, keeping plenty of resources in reserve in order to meet sudden spikes in demand. You can optimize for costs, running close to the line and accepting the possibility that you will tax your resources if that spike arrives. Alternatively, you can aim for the middle, with a generous but not excessive level of spare capacity. In addition to optimizing for availability, cost, or a blend of both, you can also set a custom scaling threshold. In each case, AWS Auto Scaling will create scaling policies on your behalf, including appropriate upper and lower bounds for each resource.
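
Everything I do in the console below can also be expressed through the Auto Scaling Plans API. Here’s a rough boto3 sketch of a scaling plan for a CloudFormation-based application; the parameter names reflect my understanding of the API, and the ARN and resource IDs are placeholders:

import boto3

autoscaling_plans = boto3.client('autoscaling-plans')

# Hypothetical stack ARN and Auto Scaling group name, for illustration only
autoscaling_plans.create_scaling_plan(
    ScalingPlanName='my-app-plan',
    ApplicationSource={
        'CloudFormationStackARN': 'arn:aws:cloudformation:us-east-1:123456789012:stack/my-app/example'
    },
    ScalingInstructions=[{
        'ServiceNamespace': 'autoscaling',
        'ResourceId': 'autoScalingGroup/my-asg',
        'ScalableDimension': 'autoscaling:autoScalingGroup:DesiredCapacity',
        'MinCapacity': 1,
        'MaxCapacity': 10,
        'TargetTrackingConfigurations': [{
            'PredefinedScalingMetricSpecification': {
                'PredefinedScalingMetricType': 'ASGAverageCPUUtilization'
            },
            'TargetValue': 50.0,
        }],
    }],
)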

AWS Auto Scaling in Action
I will use AWS Auto Scaling on a simple CloudFormation stack consisting of an Auto Scaling group of EC2 instances and a pair of DynamoDB tables. I start by removing the existing Scaling Policies from my Auto Scaling group:

Then I open up the new Auto Scaling Console and select the stack:

Behind the scenes, Elastic Beanstalk applications are always launched via a CloudFormation stack. In the screen shot above, awseb-e-sdwttqizbp-stack is an Elastic Beanstalk application that I launched.

I can click on any stack to learn more about it before proceeding:

I select the desired stack and click on Next to proceed. Then I enter a name for my scaling plan and choose the resources that I’d like it to include:

I choose the scaling strategy for each type of resource:

After I have selected the desired strategies, I click Next to proceed. Then I review the proposed scaling plan, and click Create scaling plan to move ahead:

The scaling plan is created and in effect within a few minutes:

I can click on the plan to learn more:

I can also inspect each scaling policy:

I tested my new policy by applying a load to the initial EC2 instance, and watched the scale out activity take place:

I also took a look at the CloudWatch metrics for the EC2 Auto Scaling group:

Available Now
We are launching AWS Auto Scaling in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Singapore) Regions today, with more to follow. There’s no charge for AWS Auto Scaling; you pay only for the CloudWatch Alarms that it creates and any AWS resources that you consume.

As is often the case with our new services, this is just the first step on what we hope to be a long and interesting journey! We have a long roadmap, and we’ll be adding new features and options throughout 2018 in response to your feedback.

Jeff;

A New Guide to Banking Regulations and Guidelines in India

Post Syndicated from Oliver Bell original https://aws.amazon.com/blogs/security/a-new-guide-to-banking-regulations-and-guidelines-in-india/

Indian flag

The AWS User Guide to Banking Regulations and Guidelines in India was published in December 2017 and includes information that can help banks regulated by the Reserve Bank of India (RBI) assess how to implement an appropriate information security, risk management, and governance program in the AWS Cloud.

The guide focuses on the following key considerations:

  • Outsourcing guidelines – Guidance for banks entering an outsourcing arrangement, including risk-management practices such as conducting due diligence and maintaining effective oversight. Learn how to conduct an assessment of AWS services and align your governance requirements with the AWS Shared Responsibility Model.
  • Information security – Detailed requirements to help banks identify and manage information security in the cloud.

This guide joins the existing Financial Services guides for other jurisdictions, such as Singapore, Australia, and Hong Kong. AWS will publish additional guides in 2018 to help you understand regulatory requirements in other markets around the world.

– Oliver

HackSpace magazine 2: 3D printing and cheese making

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-magazine-issue-2/

After an incredible response to our first issue of HackSpace magazine last month, we’re excited to announce today’s release of issue 2, complete with cheese making, digital braille, and…a crochet Cthulhu?
HackSpace magazine issue 2 cover

Your spaces

This issue, we visit Swansea Hackspace to learn how to crochet, we hear about the superb things that Birmingham’s fizzPOP maker space is doing, and we’re extremely impressed by the advances in braille reader technology that are coming out of Bristol Hackspace. People are amazing.

Your projects

We’ve also collected page upon page of projects for you to try your hand at. Fancy an introduction to laser cutting? A homemade sine wave stylophone? Or how about our first foray into Adafruit’s NeoPixels, adding blinkenlights to a pair of snowboarding goggles?

And (much) older technology gets a look in too, including a tutorial showing you how to make a knife in your own cheap and cheerful backyard forge.



As always, issue 2 of HackSpace magazine is available as a free PDF download, but we’ll also be publishing online versions of selected articles for easier browsing, so be sure to follow us on Facebook and Twitter. And, of course, we want to hear your thoughts – contact us to let us know what you like and what else you’d like to see, or just to demand that we feature your project, interest or current curiosity in the next issue.

Get your copy

You can grab issue 2 of HackSpace magazine right now from WHSmith, Tesco, Sainsbury’s, and independent newsagents. If you live in the US, check out your local Barnes & Noble, Fry’s, or Micro Center next week. We’re also shipping to stores in Australia, Hong Kong, Canada, Singapore, Belgium, and Brazil, so be sure to ask your local newsagent whether they’ll be getting HackSpace magazine.

Alternatively, you can get the new issue online from our store, or digitally via our Android or iOS apps. And don’t forget, as with all our publications, a free PDF of HackSpace magazine is available from release day.

That’s it from us for this year; see you in 2018 for a ton of new things to make and do!

The post HackSpace magazine 2: 3D printing and cheese making appeared first on Raspberry Pi.

AWS Cloud9 – Cloud Developer Environments

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-cloud9-cloud-developer-environments/

One of the first things you learn when you start programming is that, just like any craftsperson, your tools matter. Notepad.exe isn’t going to cut it. A powerful editor and testing pipeline supercharge your productivity. I still remember learning to use Vim for the first time and being able to zip around systems and complex programs. Do you remember how hard it was to set up all your compilers and dependencies on a new machine? How many cycles have you wasted matching versions, tinkering with configs, and then writing documentation to onboard a new developer to a project?

Today we’re launching AWS Cloud9, an Integrated Development Environment (IDE) for writing, running, and debugging code, all from your web browser. Cloud9 comes prepackaged with essential tools for many popular programming languages (JavaScript, Python, PHP, etc.) so you don’t have to tinker with installing various compilers and toolchains. Cloud9 also provides a seamless experience for working with serverless applications, allowing you to quickly switch between local and remote testing or debugging. Based on the popular open source Ace Editor and c9.io IDE (which we acquired last year), AWS Cloud9 is designed to make collaborative cloud development easy with extremely powerful pair programming features. There are more features than I could ever cover in this post, but to give a quick breakdown I’ll break the IDE into three components: the editor, the AWS integrations, and the collaboration features.

Editing


The Ace Editor at the core of Cloud9 is what lets you write code quickly, easily, and beautifully. It follows a UNIX philosophy of doing one thing and doing it well: writing code.

It has all the typical IDE features you would expect: live syntax checking, auto-indent, auto-completion, code folding, split panes, version control integration, multiple cursors and selections, and it also has a few unique features I want to highlight. First of all, it’s fast, even for large (100,000+ line) files. There’s no lag or other issues while typing. It has over two dozen themes built-in (solarized!) and you can bring all of your favorite themes from Sublime Text or TextMate as well. It has built-in support for 40+ language modes and customizable run configurations for your projects. Most importantly though, it has Vim mode (or emacs if your fingers work that way). It also has a keybinding editor that allows you to bend the editor to your will.

The editor supports powerful keyboard navigation and commands (similar to Sublime Text or vim plugins like ctrlp). On a Mac, with ⌘+P you can open any file in your environment with fuzzy search. With ⌘+. you can open up the command pane which allows you to do invoke any of the editor commands by typing the name. It also helpfully displays the keybindings for a command in the pane, for instance to open to a terminal you can press ⌥+T. Oh, did I mention there’s a terminal? It ships with the AWS CLI preconfigured for access to your resources.

The environment also comes with pre-installed debugging tools for many popular languages – but you’re not limited to what’s already installed. It’s easy to add in new programs and define new run configurations.

The editor is just one, admittedly important, component in an IDE though. I want to show you some other compelling features.

AWS Integrations

The AWS Cloud9 IDE is the first IDE I’ve used that is truly “cloud native”. The service is provided at no additional charge, and you are only charged for the underlying compute and storage resources. When you create an environment, you’re prompted for either an instance type and an auto-hibernate time, or SSH access to a machine of your choice.

If you’re running in AWS, the auto-hibernate feature will stop your instance shortly after you stop using your IDE. This can be a huge cost savings over running a more permanent developer desktop. You can also launch it within a VPC to give it secure access to your development resources. If you want to run Cloud9 outside of AWS, or on an existing instance, you can provide SSH access to the service, which it will use to create an environment on the external machine. Your environment is provisioned with automatic and secure access to your AWS account so you don’t have to worry about copying credentials around. Let me say that again: you can run this anywhere.

Serverless Development with AWS Cloud9

I spend a lot of time on Twitch developing serverless applications. I have hundreds of lambda functions and APIs deployed. Cloud9 makes working with every single one of these functions delightful. Let me show you how it works.


If you look at the top right side of the editor, you’ll see an AWS Resources tab. Opening this, you can see all of the Lambda functions in your region (you can see functions in other regions by adjusting your region preferences in the AWS preference pane).

You can import these remote functions to your local workspace just by double-clicking them. This allows you to edit, test, and debug your serverless applications all locally. You can create new applications and functions easily as well. If you click the Lambda icon in the top right of the pane you’ll be prompted to create a new Lambda function, and Cloud9 will automatically create a Serverless Application Model template for you as well. The IDE ships with support for the popular SAM local tool pre-installed. This is what I use in most of my local testing and serverless development. Since you have a terminal, it’s easy to install additional tools and use other serverless frameworks.

 

Launching an Environment from AWS CodeStar

With AWS CodeStar you can easily provision an end-to-end continuous delivery toolchain for development on AWS. CodeStar provides a unified experience for building, testing, deploying, and managing applications using the AWS CodeCommit, CodeBuild, CodePipeline, and CodeDeploy suite of services. Now, with a few simple clicks you can provision a Cloud9 environment to develop your application. Your environment will be pre-configured with the code for your CodeStar application already checked out and git credentials already configured.

You can easily share this environment with your coworkers which leads me to another extremely useful set of features.

Collaboration

One of the many things that sets AWS Cloud9 apart from other editors are the rich collaboration tools. You can invite an IAM user to your environment with a few clicks.

You can see what files they’re working on, where their cursors are, and even share a terminal. The chat feature is useful as well.

Things to Know

  • There are no additional charges for this service beyond the underlying compute and storage.
  • c9.io continues to run for existing users. You can continue to use all the features of c9.io and add new team members if you have a team account. In the future, we will provide tools for easy migration of your c9.io workspaces to AWS Cloud9.
  • AWS Cloud9 is available in the US West (Oregon), US East (Ohio), US East (N.Virginia), EU (Ireland), and Asia Pacific (Singapore) regions.

I can’t wait to see what you build with AWS Cloud9!

Randall

T2 Unlimited – Going Beyond the Burst with High Performance

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-t2-unlimited-going-beyond-the-burst-with-high-performance/

I first wrote about the T2 instances in the summer of 2014, and talked about how many workloads have a modest demand for continuous compute power and an occasional need for a lot more. This model resonated with our customers; the T2 instances are very popular and are now used to host microservices, low-latency interactive applications, virtual desktops, build & staging environments, prototypes, and the like.

New T2 Unlimited
Today we are extending the burst model that we pioneered with the T2, giving you the ability to sustain high CPU performance over any desired time frame while still keeping your costs as low as possible. You simply enable this feature when you launch your instance; you can also enable it for an instance that is already running. The hourly T2 instance price covers all interim spikes in usage if the average CPU utilization is lower than the baseline over a 24-hour window. There’s a small hourly charge if the instance runs at higher CPU utilization for a prolonged period of time. For example, if you run a t2.micro instance at an average of 15% utilization (5% above the baseline) for 24 hours you will be charged an additional 6 cents (5 cents per vCPU-hour * 1 vCPU * 5% * 24 hours).
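
The arithmetic is simple enough to sanity-check yourself. Here’s the example above in a few lines of Python; the 10% baseline for a t2.micro is implied by the 15% and 5% figures in the example:

# Surplus charge for a t2.micro averaging 15% CPU over 24 hours
rate_per_vcpu_hour = 0.05   # USD per vCPU-hour for Linux surplus credits
vcpus = 1                   # a t2.micro has a single vCPU
overage = 0.15 - 0.10       # average utilization minus the implied 10% baseline
hours = 24

charge = rate_per_vcpu_hour * vcpus * overage * hours
print(charge)  # 0.06 -> about 6 cents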

To launch a T2 Unlimited instance from the EC2 Console, select any T2 instance and then click on Enable next to T2 Unlimited:

And here’s how to switch a running instance from T2 Standard to T2 Unlimited:

Behind the Scenes
As I described in my original post, each T2 instance accumulates CPU Credits as it runs and consumes them while it is running at full-core speed, decelerating to a baseline level when the supply of Credits is exhausted. T2 Unlimited instances have the ability to borrow an entire day’s worth of future credits, allowing them to perform additional bursting. This borrowing is tracked by the new CPUSurplusCreditBalance CloudWatch metric. When this balance rises to the level where it represents an entire day’s worth of future credits, the instance continues to deliver full-core performance, charged at the rate of $0.05 per vCPU per hour for Linux and $0.096 for Windows. These charged surplus credits are tracked by the new CPUSurplusCreditsCharged metric. You will be charged on a per-millisecond basis for partial hours of bursting (further reducing your costs) if you exhaust your surplus late in a given hour.

The charge for any remaining CPUSurplusCreditBalance is processed when the instance is terminated or configured as a T2 Standard. Any accumulated CPUCreditBalance carries over during the transition to T2 Standard.

The T2 Unlimited model is designed to spare you the trouble of watching the CloudWatch metrics, but (if you are like me) you will do it anyway. Let’s take a quick look at a t2.nano and watch the credits over time. First, CPU utilization grows to 100% and the instance begins to consume 5 credits every 5 minutes (one credit is equivalent to a vCPU-minute):

The CPU credit balance remains at 0 because the credits are being produced and consumed at the same rate. The surplus credit balance (tracked by the CPUSurplusCreditBalance metric) ramps up to 72, representing the credits that are being borrowed from the future:

Once the surplus credit balance hits 72, there’s nothing more to borrow from the future, and any further CPU usage is charged at the end of the hour, tracked with the CPUSurplusCreditsCharged metric. The instance consumes 5 credits every 5 minutes and earns 0.25, resulting in a net charge of 4.75 vCPU-minutes for each 5 minutes of bursting:

You can switch each of your instances back and forth between T2 Standard and T2 Unlimited at any time; all credit balances except CPUSurplusCreditsCharged remain and are carried over. Because T2 Unlimited instances have the ability to burst at any time, they do not receive the 30 minutes of credits given to newly launched T2 Standard instances. Also, since each AWS account can launch a limited number of T2 Standard instances with initial CPU credits each day, T2 Unlimited instances can be a better fit for use in Auto Scaling Groups and other scenarios where large numbers of instances come and go each day.

Available Now
You can launch T2 Unlimited instances today in the US East (Northern Virginia), US East (Ohio), US West (Northern California), US West (Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Asia Pacific (Seoul), EU (Frankfurt), EU (Ireland), and EU (London) Regions today.

Jeff;