
The US Has a Shortage of Bomb-Sniffing Dogs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/the-us-has-a-shortage-of-bomb-sniffing-dogs.html

Nothing beats a dog’s nose for detecting explosives. Unfortunately, there aren’t enough dogs:

Last month, the US Government Accountability Office (GAO) released a nearly 100-page report about working dogs and the need for federal agencies to better safeguard their health and wellness. The GAO says that as of February the US federal government had approximately 5,100 working dogs, including detection dogs, across three federal agencies. Another 420 dogs “served the federal government in 24 contractor-managed programs within eight departments and two independent agencies,” the GAO report says.

The report also underscores the demands placed on detection dogs and the potential for overwork if there aren’t enough dogs available. “Working dogs might need the strength to suddenly run fast, or to leap over a tall barrier, as well as the physical stamina to stand or walk all day,” the report says. “They might need to search over rubble or in difficult environmental conditions, such as extreme heat or cold, often wearing heavy body armor. They also might spend the day detecting specific scents among thousands of others, requiring intense mental concentration. Each function requires dogs to undergo specialized training.”

A decade and a half ago I was optimistic about bomb-sniffing bees and wasps, but nothing seems to have come of that.

Our guide to AWS Compute at re:Invent 2022

Post Syndicated from Sheila Busser original https://aws.amazon.com/blogs/compute/our-guide-to-aws-compute-at-reinvent-2022/

This blog post is written by Shruti Koparkar, Senior Product Marketing Manager, Amazon EC2.

AWS re:Invent is the most transformative event in cloud computing, and it starts on November 28, 2022. The AWS Compute team has many exciting sessions planned for you, covering everything from foundational content to technology deep dives, customer stories, and even hands-on workshops. To help you build out your calendar for this year’s re:Invent, let’s look at some highlights from the AWS Compute track in this blog. Please visit the session catalog for a full list of AWS Compute sessions.

Learn what powers AWS Compute

AWS offers the broadest and deepest functionality for compute. Amazon Elastic Compute Cloud (Amazon EC2) offers granular control for managing your infrastructure with the choice of processors, storage, and networking.

The AWS Nitro System is the underlying platform for all our modern EC2 instances. It enables AWS to innovate faster, further reduce cost for our customers, and deliver added benefits like increased security and new instance types.

Discover the benefits of AWS Silicon

AWS has invested years designing custom silicon optimized for the cloud. This investment helps us deliver high performance at lower costs for a wide range of applications and workloads using AWS services.

  • Explore the AWS journey into silicon innovation with our “CMP201: Silicon Innovation at AWS” session. We will cover some of the thought processes, learnings, and results from our experience building silicon for AWS Graviton, AWS Nitro System, and AWS Inferentia.
  • To learn about customer-proven strategies to help you make the move to AWS Graviton quickly and confidently while minimizing uncertainty and risk, attend “CMP410: Framework for adopting AWS Graviton-based instances”.

Explore different use cases

Amazon EC2 provides secure and resizable compute capacity for several different use cases, including general-purpose computing for cloud-native and enterprise applications, and accelerated computing for machine learning and high performance computing (HPC) applications.

High performance computing

  • HPC on AWS can help you design your products faster with simulations, predict the weather, detect seismic activity with greater precision, and more. To learn how to solve the world’s toughest problems with extreme-scale compute, join us for “CMP205: HPC on AWS: Solve complex problems with pay-as-you-go infrastructure”.
  • Single on-premises general-purpose supercomputers can fall short when solving increasingly complex problems. Attend “CMP222: Redefining supercomputing on AWS” to learn how AWS is reimagining supercomputing to provide scientists and engineers with more access to world-class facilities and technology.
  • AWS offers many solutions to design, simulate, and verify the advanced semiconductor devices that are the foundation of modern technology. Attend “CMP320: Accelerating semiconductor design, simulation, and verification” to hear from Arm and Marvell about how they are using AWS to accelerate EDA workloads.

Machine Learning

Cost Optimization

Hear from our customers

We have several sessions this year where AWS customers are taking the stage to share their stories and details of exciting innovations made possible by AWS.

Get started with hands-on sessions

Nothing like a hands-on session where you can learn by doing and get started easily with AWS compute. Our speakers and workshop assistants will help you every step of the way. Just bring your laptop to get started!

You’ll get to meet the global cloud community at AWS re:Invent and get an opportunity to learn, get inspired, and rethink what’s possible. So build your schedule in the re:Invent portal and get ready to hit the ground running. We invite you to stop by the AWS Compute booth and chat with our experts. We look forward to seeing you in Las Vegas!

Apple’s Device Analytics Can Identify iCloud Users

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/apples-device-analytics-can-identify-icloud-users.html

Researchers claim that supposedly anonymous device analytics information can identify users:

On Twitter, security researchers Tommy Mysk and Talal Haj Bakry have found that Apple’s device analytics data includes an iCloud account and can be linked directly to a specific user, including their name, date of birth, email, and associated information stored on iCloud.

Apple has long claimed otherwise:

On Apple’s device analytics and privacy legal page, the company says no information collected from a device for analytics purposes is traceable back to a specific user. “iPhone Analytics may include details about hardware and operating system specifications, performance statistics, and data about how you use your devices and applications. None of the collected information identifies you personally,” the company claims.

Apple was just sued for tracking iOS users without their consent, even when they explicitly opt out of tracking.

Govern and manage permissions of Amazon QuickSight assets with the new centralized asset management console

Post Syndicated from Srikanth Baheti original https://aws.amazon.com/blogs/big-data/govern-and-manage-permissions-of-amazon-quicksight-assets-with-the-new-centralized-asset-management-console/

Amazon QuickSight is a fully-managed, cloud-native business intelligence (BI) service that makes it easy to connect to your data, create interactive dashboards, and share these with tens of thousands of users, either within the QuickSight interface or embedded in software as a service (SaaS) applications or web portals. With QuickSight providing insights to power daily decisions across the organization, it becomes more important than ever for administrators to ensure they can easily govern and manage permissions of all the assets in their account.

We recently announced the launch of a new admin asset management console in QuickSight, which enables administrators at enterprises and independent software vendors (ISVs) to govern their QuickSight account at scale and have self-service support capabilities by providing easy visibility and access to all the assets across the entire account, including in a multi-tenant setup. In addition, admins can perform actions that were previously possible only via API, such as bulk transfer of assets from one user or group to another, share multiple assets with someone at once, or revoke a user’s access to an asset.

This launch also includes APIs for searching assets, which allow administrators to automate and govern at scale. Administrators and developers can programmatically search for assets a user or group has access to and search for assets by name. Additionally, they can describe and manage asset permissions.

In this post, we show how to access this console and some of the administration and governance use cases that you can achieve.

Feature overview

The QuickSight admin asset management console is available for admins with AWS Identity and Access Management (IAM) permissions who have access to the QuickSight admin console pages. The following IAM policy allows an IAM user to access all the features in the asset management console:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [          
                "quicksight:SearchGroups",
                "quicksight:SearchUsers",            
                "quicksight:ListNamespaces",            
                "quicksight:DescribeAnalysisPermissions",
                "quicksight:DescribeDashboardPermissions",
                "quicksight:DescribeDataSetPermissions",
                "quicksight:DescribeDataSourcePermissions",
                "quicksight:DescribeFolderPermissions",
                "quicksight:ListAnalyses",
                "quicksight:ListDashboards",
                "quicksight:ListDataSets",
                "quicksight:ListDataSources",
                "quicksight:ListFolders",
                "quicksight:SearchAnalyses",
                "quicksight:SearchDashboards",
                "quicksight:SearchFolders",
                "quicksight:SearchDataSets",
                "quicksight:SearchDataSources",
                "quicksight:UpdateAnalysisPermissions",
                "quicksight:UpdateDashboardPermissions",
                "quicksight:UpdateDataSetPermissions",
                "quicksight:UpdateDataSourcePermissions",
                "quicksight:UpdateFolderPermissions"
            ],
            "Resource": "*"
        }
    ]
}
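If you prefer to provision this access programmatically, a minimal Python (Boto3) sketch might look like the following. The policy file name, policy name, and IAM user name are illustrative placeholders, not values from this post.

import boto3

iam = boto3.client('iam')

# Load the asset management policy document shown above from a local file
# (the file name is an assumption for this sketch).
with open('quicksight-asset-admin-policy.json') as f:
    policy_document = f.read()

# Create a customer managed policy from the document.
response = iam.create_policy(
    PolicyName='QuickSightAssetManagementConsoleAccess',  # illustrative name
    PolicyDocument=policy_document,
)

# Attach the policy to the administrator's IAM user (the user name is an assumption).
iam.attach_user_policy(
    UserName='qs-admin',
    PolicyArn=response['Policy']['Arn'],
)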

APIs

Assets can be searched by using public APIs such as SearchAnalyses, SearchDashboards, SearchDataSets, SearchDataSources, and SearchFolders.

Permissions of the assets can be described and managed by using the corresponding Describe and Update permissions APIs, such as DescribeDashboardPermissions and UpdateDashboardPermissions.
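As a rough illustration, the following Python (Boto3) sketch uses two of these APIs to find the dashboards a user has access to and then inspect the permissions on one of them. The account ID and user ARN are placeholders.

import boto3

quicksight = boto3.client('quicksight')
account_id = '111122223333'  # placeholder AWS account ID

# Find dashboards that a given user has access to (the user ARN is a placeholder).
dashboards = quicksight.search_dashboards(
    AwsAccountId=account_id,
    Filters=[{
        'Operator': 'StringEquals',
        'Name': 'QUICKSIGHT_USER',
        'Value': 'arn:aws:quicksight:us-east-1:111122223333:user/default/analyst1',
    }],
)

# Inspect the permissions of the first matching dashboard.
summaries = dashboards.get('DashboardSummaryList', [])
if summaries:
    dashboard_id = summaries[0]['DashboardId']
    permissions = quicksight.describe_dashboard_permissions(
        AwsAccountId=account_id,
        DashboardId=dashboard_id,
    )
    print(permissions['Permissions'])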

Access the QuickSight asset management console

To access the new QuickSight asset management console, complete the following steps:

  1. On the QuickSight console, navigate to the user menu and choose Manage QuickSight.
  2. In the navigation pane, choose Manage assets.

The landing page presents three ways to list assets:

  • Search for assets owned by a user or a group in a namespace
  • Search for assets by name
  • Browse all assets or filter by asset type in the account

If you have only one namespace, you won’t see the namespace drop-down.

Use case overview

Let’s consider a fictional company, AnyCompany, which is an ISV that provides services to thousands of customers across the globe. QuickSight is one of the services used by AnyCompany for providing multi-tenant BI and analytics solutions. They have already implemented multi-tenancy in QuickSight using namespaces to isolate users and groups. Within each tenant, assets are organized using folders.

Previously, there was no single pane of glass view in the QuickSight user interface that could show them all the assets by tenant users or groups and associated permissions. To get a holistic view, they were dependent on IT administrators to run tenant-specific API calls and export that information on a regular basis to validate the asset permissions.

With this feature, AnyCompany is no longer dependent on IT administrators for the asset information, and doesn’t have to go through the tedious task of reconciliation and access validation. This not only removes a dependency on IT administrators’ availability, but also provides a centralized solution for asset governance.

AnyCompany has the following key administration and governance needs that they deem critical:

  • Transfer assets – They want to be able to quickly transfer assets from one user or group to another in case the original owner is leaving the company or is on an extended leave
  • Onboard new employees – They want to be able to speed up onboarding of new employees by giving them access to assets their teammates have
  • Support authors – They want their in-house BI engineers to be able to easily and quickly support authors in other tenants by getting access to their dashboards
  • Revoke access – They want the capability to quickly audit and revoke permissions when changes occur

In the following sections, we discuss how AnyCompany meets their asset management needs in more detail.

Transfer assets

One of the business analysts, who was responsible for authoring some of the key dashboards used by the management team in headquarters as well as common dashboards shared with all the tenants, recently switched organizations within AnyCompany. The central administrator wants to transfer all the assets to another team member to maintain continuity.

To transfer assets, complete the following steps:

  1. Log in to QuickSight and navigate to Manage assets.
  2. Choose the namespace of the business analyst who left.
  3. Enter at least the first three characters of the username or the email of the analyst who left and choose the user from the search results.

A list of all the assets that the analyst is owner or viewer of is displayed.

  4. Use the filters to list assets of which the analyst is the sole owner.
  5. You can also choose to list only a single type of asset, such as dashboards.
  6. Select all the assets on the first page.
  7. On the Actions menu, choose Transfer.
  8. Choose the namespace the new user belongs to.
  9. Search for the analyst to whom all the assets will be transferred by entering at least the first three characters of the username or the email.
  10. Choose the appropriate user from the search results.
  11. For Permissions, you can choose to replicate permissions that the analyst had to the new user, or make the new user owner or viewer of all assets being transferred.
  12. Choose Transfer.
  13. When the transfer is complete, choose Done.
  14. Repeat these steps if there is more than one page of assets listed.
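The same kind of transfer can also be scripted with the permissions APIs. The Python (Boto3) sketch below re-grants ownership of a single dashboard to the new analyst and removes the departing analyst; the account ID, dashboard ID, and user ARNs are placeholders, and the action list shown is the commonly used dashboard owner set, so check the QuickSight documentation for the action sets of other asset types.

import boto3

quicksight = boto3.client('quicksight')
account_id = '111122223333'           # placeholder AWS account ID
dashboard_id = 'sales-dashboard'      # placeholder dashboard ID
departing_user_arn = 'arn:aws:quicksight:us-east-1:111122223333:user/default/analyst1'
new_owner_arn = 'arn:aws:quicksight:us-east-1:111122223333:user/default/analyst2'

# Actions that make a principal a co-owner of a dashboard (assumed owner action set).
owner_actions = [
    'quicksight:DescribeDashboard',
    'quicksight:ListDashboardVersions',
    'quicksight:QueryDashboard',
    'quicksight:DescribeDashboardPermissions',
    'quicksight:UpdateDashboard',
    'quicksight:UpdateDashboardPermissions',
    'quicksight:UpdateDashboardPublishedVersion',
    'quicksight:DeleteDashboard',
]

# Grant ownership to the new analyst and revoke it from the departing analyst in one call.
quicksight.update_dashboard_permissions(
    AwsAccountId=account_id,
    DashboardId=dashboard_id,
    GrantPermissions=[{'Principal': new_owner_arn, 'Actions': owner_actions}],
    RevokePermissions=[{'Principal': departing_user_arn, 'Actions': owner_actions}],
)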

Onboard new employees

A new analyst has joined AnyCompany, and the manager wants this analyst to have the same access to QuickSight assets as one of the existing analysts.

To share assets, the administrator takes the following steps:

  1. Log in to QuickSight and navigate to Manage assets.
  2. Choose the namespace the existing business analyst belongs to.
  3. Search for the existing analyst by entering at least the first three characters of the username or the email and choose the user from the search results.

A list of all the assets that the analyst is owner or viewer of is displayed.

  4. Select all the assets on the first page.
  5. On the Actions menu, choose Share.
  6. Choose the namespace the new user belongs to.
  7. Search for the analyst who just joined the team by entering at least the first three characters of the username or the email and choose the appropriate user from the search results.
  8. You can choose to replicate permissions that the analyst had to the new user, or make the new user the owner or viewer of all assets being shared.
  9. Choose Share.
  10. When the share is complete, choose Done.

Support authors

AnyCompany often receives support requests from their tenant authors who are creating and sharing dashboards within the boundary of their tenant, which is achieved by namespaces in QuickSight. AnyCompany’s support team wants to get easy access to other tenant authors’ assets and provide the necessary support quickly.

To get access to an author’s assets, complete the following steps:

  1. Log in to QuickSight and navigate to Manage assets.
  2. For Search by asset name, enter the name of the asset that the support team wants to get access to.

A list of assets that contain the search text is displayed.

  3. Select the assets you want to give the support team access to.
  4. Choose Share.
  5. Choose the namespace the support team belongs to.
  6. Choose the group the support team belongs to.
  7. Choose the Owner permission in order for the support team to have complete access to the asset.
  8. Choose Share.
  9. When the share is complete, choose Done.

Revoke access

In case of policy changes or if the central administrator discovers that a QuickSight user shouldn’t have access to certain assets, you can revoke asset access.

To revoke a user’s access to an asset, complete the following steps:

  1. Log in to QuickSight and navigate to Manage assets.
  2. Choose the namespace the user belongs to.
  3. Search for the user whose access you want to revoke by entering at least the first three characters of the username or the email and choose the appropriate user from the search results.

A list of all the assets that the user is an owner or viewer of is displayed.

  4. Choose the menu icon (three vertical dots) in the Actions column of the assets you want to revoke access to and choose Revoke access.
  5. Choose Revoke.
  6. After access has been revoked, choose Done.
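Revoking access is also available through the same permissions APIs. A minimal Python (Boto3) sketch for revoking a user’s viewer access to one dashboard follows; the account ID, dashboard ID, user ARN, and viewer action list are illustrative assumptions.

import boto3

quicksight = boto3.client('quicksight')

# Revoke a user's viewer access to a dashboard (all identifiers are placeholders).
quicksight.update_dashboard_permissions(
    AwsAccountId='111122223333',
    DashboardId='sales-dashboard',
    RevokePermissions=[{
        'Principal': 'arn:aws:quicksight:us-east-1:111122223333:user/default/analyst1',
        'Actions': [
            'quicksight:DescribeDashboard',
            'quicksight:ListDashboardVersions',
            'quicksight:QueryDashboard',
        ],
    }],
)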

Conclusion

With the asset management console, admins now have easy visibility into all the assets in an account and can centrally govern and manage their permissions. Try out the asset management console for centralized governance in QuickSight and share your feedback and questions in the comments. For more information, refer to the Asset Management Console user guide.

Stay tuned for more new admin capabilities, and follow What’s New with Analytics for the latest on QuickSight.


About the Authors

Srikanth Baheti is a Specialized World Wide Sr. Solution Architect for Amazon QuickSight. He started his career as a consultant and worked for multiple private and government organizations. Later he worked for PerkinElmer Health and Sciences and eResearch Technology Inc., where he was responsible for designing and developing high-traffic web applications and highly scalable, maintainable data pipelines for reporting platforms using AWS services and serverless computing.

Raji Sivasubramaniam is a Sr. Solutions Architect at AWS, focusing on Analytics. Raji specializes in architecting end-to-end Enterprise Data Management, Business Intelligence, and Analytics solutions for Fortune 500 and Fortune 100 companies across the globe. She has in-depth experience in integrated healthcare data and analytics with a wide variety of healthcare datasets, including managed market, physician targeting, and patient analytics.

Mayank Agarwal is a product manager for Amazon QuickSight, AWS’ cloud-native, fully managed BI service. He focuses on account administration, governance and developer experience. He started his career as an embedded software engineer developing handheld devices. Prior to QuickSight he was leading engineering teams at Credence ID, developing custom mobile embedded device and web solutions using AWS services that make biometric enrollment and identification fast, intuitive, and cost-effective for Government sector, healthcare and transaction security applications.

Breaking the Zeppelin Ransomware Encryption Scheme

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/breaking-the-zeppelin-ransomware-encryption-scheme.html

Brian Krebs writes about how the Zeppelin ransomware encryption scheme was broken:

The researchers said their break came when they understood that while Zeppelin used three different types of encryption keys to encrypt files, they could undo the whole scheme by factoring or computing just one of them: An ephemeral RSA-512 public key that is randomly generated on each machine it infects.

“If we can recover the RSA-512 Public Key from the registry, we can crack it and get the 256-bit AES Key that encrypts the files!” they wrote. “The challenge was that they delete the [public key] once the files are fully encrypted. Memory analysis gave us about a 5-minute window after files were encrypted to retrieve this public key.”

Unit 221B ultimately built a “Live CD” version of Linux that victims could run on infected systems to extract that RSA-512 key. From there, they would load the keys into a cluster of 800 CPUs donated by hosting giant Digital Ocean that would then start cracking them. The company also used that same donated infrastructure to help victims decrypt their data using the recovered keys.

A company offered recovery services based on this break, but was reluctant to advertise because it didn’t want Zeppelin’s creators to fix their encryption flaw.

Technical details.

Friday Squid Blogging: Squid Brains

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/friday-squid-blogging-squid-brains.html

Researchers have new evidence of how squid brains develop:

Researchers from the FAS Center for Systems Biology describe how they used a new live-imaging technique to watch neurons being created in the embryo in almost real-time. They were then able to track those cells through the development of the nervous system in the retina. What they saw surprised them.

The neural stem cells they tracked behaved eerily similar to the way these cells behave in vertebrates during the development of their nervous system.

It suggests that vertebrates and cephalopods, despite diverging from each other 500 million years ago, not only are using similar mechanisms to make their big brains but that this process and the way the cells act, divide, and are shaped may essentially lay out the blueprint required to develop this kind of nervous system.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

First Review of A Hacker’s Mind

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/first-review-of-a-hackers-mind.html

Kirkus reviews A Hacker’s Mind:

A cybersecurity expert examines how the powerful game whatever system is put before them, leaving it to others to cover the cost.

Schneier, a professor at Harvard Kennedy School and author of such books as Data and Goliath and Click Here To Kill Everybody, regularly challenges his students to write down the first 100 digits of pi, a nearly impossible task­—but not if they cheat, concerning which he admonishes, “Don’t get caught.” Not getting caught is the aim of the hackers who exploit the vulnerabilities of systems of all kinds. Consider right-wing venture capitalist Peter Thiel, who located a hack in the tax code: “Because he was one of the founders of PayPal, he was able to use a $2,000 investment to buy 1.7 million shares of the company at $0.001 per share, turning it into $5 billion—all forever tax free.” It was perfectly legal—and even if it weren’t, the wealthy usually go unpunished. The author, a fluid writer and tech communicator, reveals how the tax code lends itself to hacking, as when tech companies like Apple and Google avoid paying billions of dollars by transferring profits out of the U.S. to corporate-friendly nations such as Ireland, then offshoring the “disappeared” dollars to Bermuda, the Caymans, and other havens. Every system contains trap doors that can be breached to advantage. For example, Schneier cites “the Pudding Guy,” who hacked an airline miles program by buying low-cost pudding cups in a promotion that, for $3,150, netted him 1.2 million miles and “lifetime Gold frequent flier status.” Since it was all within the letter if not the spirit of the offer, “the company paid up.” The companies often do, because they’re gaming systems themselves. “Any rule can be hacked,” notes the author, be it a religious dietary restriction or a legislative procedure. With technology, “we can hack more, faster, better,” requiring diligent monitoring and a demand that everyone play by rules that have been hardened against tampering.

An eye-opening, maddening book that offers hope for leveling a badly tilted playing field.

I got a starred review. Libraries make decisions on what to buy based on starred reviews. Publications make decisions about what to review based on starred reviews. This is a big deal.

Book’s webpage.

Successful Hack of Time-Triggered Ethernet

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/successful-hack-of-time-triggered-ethernet.html

Time-triggered Ethernet (TTE) is used in spacecraft, basically to use the same hardware to process traffic with different timing and criticality. Researchers have defeated it:

On Tuesday, researchers published findings that, for the first time, break TTE’s isolation guarantees. The result is PCspooF, an attack that allows a single non-critical device connected to a single plane to disrupt synchronization and communication between TTE devices on all planes. The attack works by exploiting a vulnerability in the TTE protocol. The work was completed by researchers at the University of Michigan, the University of Pennsylvania, and NASA’s Johnson Space Center.

“Our evaluation shows that successful attacks are possible in seconds and that each successful attack can cause TTE devices to lose synchronization for up to a second and drop tens of TT messages—both of which can result in the failure of critical systems like aircraft or automobiles,” the researchers wrote. “We also show that, in a simulated spaceflight mission, PCspooF causes uncontrolled maneuvers that threaten safety and mission success.”

Much more detail in the article—and the research paper.

Failures in Twitter’s Two-Factor Authentication System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/failures-in-twitters-two-factor-authentication-system.html

Twitter is having intermittent problems with its two-factor authentication system:

Not all users are having problems receiving SMS authentication codes, and those who rely on an authenticator app or physical authentication token to secure their Twitter account may not have reason to test the mechanism. But users have been self-reporting issues on Twitter since the weekend, and WIRED confirmed that on at least some accounts, authentication texts are hours delayed or not coming at all. The meltdown comes less than two weeks after Twitter laid off about half of its workers, roughly 3,700 people. Since then, engineers, operations specialists, IT staff, and security teams have been stretched thin attempting to adapt Twitter’s offerings and build new features per new owner Elon Musk’s agenda.

On top of that, it seems that the system has a new vulnerability:

A researcher contacted Information Security Media Group on condition of anonymity to reveal that texting “STOP” to the Twitter verification service results in the service turning off SMS two-factor authentication.

“Your phone has been removed and SMS 2FA has been disabled from all accounts,” is the automated response.

The vulnerability, which ISMG verified, allows a hacker to spoof the registered phone number to disable two-factor authentication. That potentially exposes accounts to a password reset attack or account takeover through password stuffing.

This is not a good sign.

Russian Software Company Pretending to Be American

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/russian-software-company-pretending-to-be-american.html

Computer code developed by a company called Pushwoosh is in about 8,000 Apple and Google smartphone apps. The company pretends to be American when it is actually Russian.

According to company documents publicly filed in Russia and reviewed by Reuters, Pushwoosh is headquartered in the Siberian town of Novosibirsk, where it is registered as a software company that also carries out data processing. It employs around 40 people and reported revenue of 143,270,000 rubles ($2.4 mln) last year. Pushwoosh is registered with the Russian government to pay taxes in Russia.

On social media and in US regulatory filings, however, it presents itself as a US company, based at various times in California, Maryland, and Washington, DC, Reuters found.

What does the code do? Spy on people:

Pushwoosh provides code and data processing support for software developers, enabling them to profile the online activity of smartphone app users and send tailor-made push notifications from Pushwoosh servers.

On its website, Pushwoosh says it does not collect sensitive information, and Reuters found no evidence Pushwoosh mishandled user data. Russian authorities, however, have compelled local companies to hand over user data to domestic security agencies.

I have called supply chain security “an insurmountably hard problem,” and this is just another example of that.

Introducing our final AWS Heroes of the year – November 2022

Post Syndicated from Taylor Lacy original https://aws.amazon.com/blogs/aws/introducing-our-final-aws-heroes-of-the-year-november-2022/

The AWS Heroes program celebrates and recognizes builders who are making an impact within the global AWS community. As we come to the end of 2022, the program is recognizing seven individuals who are passionate about AWS, and focused on organizing and speaking at community events, mentoring, authoring content, and even preserving wildlife. Please meet the newest AWS Heroes!

Ed Miller – San Jose, USA

Machine Learning Hero Ed Miller is a Senior Principal Engineer at Arm where he leads technical engagements with strategic partners around machine learning and IoT. He also volunteers with the BearID Project, developing open source, machine learning solutions for non-invasive wildlife monitoring. Ed is working on a human-in-the-loop machine learning application for identifying the famous fat bears on Explore.org’s Brooks Falls Brown Bears webcam. The serverless application, Bearcam Companion, is built using AWS Amplify and various AWS AI services. You can read about it and other projects on Ed’s blogs at dev.to, Hashnode, and the BearID Project.

Jones Zachariah Noel N – Karnataka, India

Serverless Hero Jones Zachariah Noel N is a Senior Developer Advocate in the Developer Relations ecospace at Freshworks, and has previously worked as a Cloud Architect – Serverless where he was focused on designing and architecting solutions built with the AWS Serverless tech stack. Jones is a tech enthusiast who loves to interact with the community, which has helped him learn and share his knowledge, as he also co-organizes AWS User Group Bengaluru. He writes regularly about AWS Serverless and talks about new features and different Serverless services, which can help you level up your Serverless applications’ architecture on dev.to. Additionally, Jones co-runs a YouTube podcast called The Zacs’ Show Talking AWS about DevOps and Serverless practices along with another Zack whom he met through the AWS Community Builder program.

Luciano Mammino – Dublin, Ireland

Serverless Hero Luciano Mammino is a full-stack web developer and a senior cloud architect at fourTheorem. He is a co-author of the book Node.js Design Patterns and co-host of the podcast AWS Bites. Luciano is one of the creators of Middy, one of the most adopted middleware-based Node.js frameworks for AWS Lambda. Through fourTheorem, he also contributes to several other open-source projects in the serverless space, such as SLIC Watch for automated observability. Finally, he is also an eager tech speaker who has evangelized the adoption of serverless from the very early days.

Madhu Kumar – Budapest, Hungary

Container Hero Madhu Kumar is a Principal Cloud Architect and Product Owner (Container Services) working for T-Systems International with over 22 years of IT experience working across multiple regions, including Asia, the Middle East, the US, Europe, and the UK. He is an AWS User Group Leader, DevSecCon Chapter Leader for Hungary, DevOps Institute Brand Ambassador and Chapter Leader, HashiCorp User Group Leader for Hungary, and formerly an AWS Community Builder. Madhu is passionate about organizing meetups, driving and assisting global and local communities to come together, and sharing knowledge. He is also a regular speaker at container conferences and AWS events.

Paweł Zubkiewicz – Wroclaw, Poland

Serverless Hero Paweł Zubkiewicz works as a Cloud Architect and Consultant who helps companies build products on AWS. In 2018, Paweł started Serverless Polska, an online community for serverless enthusiasts where he shares his technical knowledge and introduces serverless to a broader audience. Shortly after, he began publishing a newsletter about serverless and AWS cloud. He continuously shares his expertise and insights with the Polish-speaking community to this day, both online and as a conference speaker. Before becoming an AWS Hero, he was an AWS Community Builder since 2020, and shares serverless tutorials on dev.to. He lives in Wroclaw, Poland with his wife and his dog named Pixel. He’s an avid mountain biker and a traveler.

Rossana Suarez – Resistencia, Argentina

Container Hero Rossana Suarez is a DevOps consultant and trainer. She started the ‘295devops’ channel to share her expertise about various DevOps topics, and to help enthusiasts get into the field more easily and with more motivation. She consults with teams of developers and DevOps engineers to help them improve their existing processes for automations, CI/CD, containerization, and orchestration. Rossana presents at Women in Technology’s local meetups to encourage more women to pursue careers in DevOps, is a volunteer with AWS Girls Argentina, and is a frequent speaker about container technologies at AWS Community Days, ContainersDays, and more.

TaeSeong Park – Seoul, Korea

Community Hero TaeSeong Park is a front-end engineer and Unity mobile developer working at IDEASAM. He’s spoken at major AWSKRUG community events, and has led hands-on labs specific to front-end and mobile apps on AWS Amplify. For the past 5 years, TaeSeong has been an organizer of the AWSKRUG Group, and he was an AWS Community Builder for two years. Not only did he organize the AWSKRUG Gudi meetup, but he’s been a speaker and supporter of other AWSKRUG meetups.

Learn More

If you’d like to learn more about the new Heroes or connect with a Hero near you, please visit the AWS Heroes website or browse the AWS Heroes Content Library.

Taylor

Introducing the price-capacity-optimized allocation strategy for EC2 Spot Instances

Post Syndicated from Sheila Busser original https://aws.amazon.com/blogs/compute/introducing-price-capacity-optimized-allocation-strategy-for-ec2-spot-instances/

This blog post is written by Jagdeep Phoolkumar, Senior Specialist Solution Architect, Flexible Compute and Peter Manastyrny, Senior Product Manager Tech, EC2 Core.

Amazon EC2 Spot Instances are unused Amazon Elastic Compute Cloud (Amazon EC2) capacity in the AWS Cloud available at up to a 90% discount compared to On-Demand prices. One of the best practices for using EC2 Spot Instances is to be flexible across a wide range of instance types to increase the chances of getting the aggregate compute capacity. Amazon EC2 Auto Scaling and Amazon EC2 Fleet make it easy to configure a request with a flexible set of instance types, as well as use a Spot allocation strategy to determine how to fulfill Spot capacity from the Spot Instance pools that you provide in your request.

The existing allocation strategies available in Amazon EC2 Auto Scaling and Amazon EC2 Fleet are called “lowest-price” and “capacity-optimized”. The lowest-price allocation strategy allocates Spot Instance pools where the Spot price is currently the lowest. Customers told us that in some cases the lowest-price strategy picks the Spot Instance pools that are not optimized for capacity availability and results in more frequent Spot Instance interruptions. As an improvement over lowest-price allocation strategy, in August 2019 AWS launched the capacity-optimized allocation strategy for Spot Instances, which helps customers tap into the deepest Spot Instance pools by analyzing capacity metrics. Since then, customers have seen a significantly lower interruption rate with capacity-optimized strategy when compared to the lowest-price strategy. You can read more about these customer stories in the Capacity-Optimized Spot Instance Allocation in Action at Mobileye and Skyscanner blog post. The capacity-optimized allocation strategy strictly selects the deepest pools. Therefore, sometimes it can pick high-priced pools even when there are low-priced pools available with marginally less capacity. Customers have been telling us that, for an optimal experience, they would like an allocation strategy that balances the best trade-offs between lowest-price and capacity-optimized.

Today, we’re excited to share the new price-capacity-optimized allocation strategy that makes Spot Instance allocation decisions based on both the price and the capacity availability of Spot Instances. The price-capacity-optimized allocation strategy should be the first preference and the default allocation strategy for most Spot workloads.

This post illustrates how the price-capacity-optimized allocation strategy selects Spot Instances in comparison with lowest-price and capacity-optimized. Furthermore, it discusses some common use cases of the price-capacity-optimized allocation strategy.

Overview

The price-capacity-optimized allocation strategy makes Spot allocation decisions based on both capacity availability and Spot prices. In comparison to the lowest-price allocation strategy, the price-capacity-optimized strategy doesn’t always attempt to launch in the absolute lowest priced Spot Instance pool. Instead, price-capacity-optimized attempts to diversify as much as possible across the multiple low-priced pools with high capacity availability. As a result, the price-capacity-optimized strategy in most cases has a higher chance of getting Spot capacity and delivers lower interruption rates when compared to the lowest-price strategy. If you factor in the cost associated with retrying the interrupted requests, then the price-capacity-optimized strategy becomes even more attractive from a savings perspective over the lowest-price strategy.

We recommend the price-capacity-optimized allocation strategy for workloads that require optimization of cost savings, Spot capacity availability, and interruption rates. For existing workloads using lowest-price strategy, we recommend price-capacity-optimized strategy as a replacement. The capacity-optimized allocation strategy is still suitable for workloads that either use similarly priced instances, or ones where the cost of interruption is so significant that any cost saving is inadequate in comparison to a marginal increase in interruptions.
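Because the strategy is available in both Amazon EC2 Auto Scaling and Amazon EC2 Fleet, a minimal Python (Boto3) sketch of an instant EC2 Fleet request using price-capacity-optimized might look like the following; the launch template ID and target capacity are placeholders.

import boto3

ec2 = boto3.client('ec2')

# Request Spot capacity through EC2 Fleet using the price-capacity-optimized strategy.
ec2.create_fleet(
    Type='instant',
    SpotOptions={'AllocationStrategy': 'price-capacity-optimized'},
    LaunchTemplateConfigs=[{
        'LaunchTemplateSpecification': {
            'LaunchTemplateId': 'lt-abcde12345',  # placeholder launch template
            'Version': '$Latest',
        },
    }],
    TargetCapacitySpecification={
        'TotalTargetCapacity': 60,
        'DefaultTargetCapacityType': 'spot',
    },
)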

Walkthrough

In this section, we illustrate how the price-capacity-optimized allocation strategy deploys Spot capacity when compared to the other two allocation strategies. The following example configuration shows how Spot capacity could be allocated in an Auto Scaling group using the different allocation strategies:

{
    "AutoScalingGroupName": "myasg ",
    "MixedInstancesPolicy": {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": "lt-abcde12345"
            },
            "Overrides": [
                {
                    "InstanceRequirements": {
                        "VCpuCount": {
                            "Min": 4,
                            "Max": 4
                        },
                        "MemoryMiB": {
                            "Min": 0,
                            "Max": 16384
                        },
                        "InstanceGenerations": [
                            "current"
                        ],
                        "BurstablePerformance": "excluded",
                        "AcceleratorCount": {
                            "Max": 0
                        }
                    }
                }
            ]
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 0,
            "SpotAllocationStrategy": "spot-allocation-strategy"
        }
    },
    "MinSize": 10,
    "MaxSize": 100,
    "DesiredCapacity": 60,
    "VPCZoneIdentifier": "subnet-a12345a,subnet-b12345b,subnet-c12345c"
}

First, Amazon EC2 Auto Scaling attempts to balance capacity evenly across Availability Zones (AZs). Next, Amazon EC2 Auto Scaling applies the Spot allocation strategy using the 30+ instance types selected by attribute-based instance type selection in each Availability Zone. The results after testing different allocation strategies are as follows:

  • The price-capacity-optimized strategy diversifies over multiple low-priced Spot Instance pools that are optimized for capacity availability.
  • The capacity-optimized strategy identifies Spot Instance pools that are optimized only for capacity availability.
  • The lowest-price strategy by default allocates the two lowest priced Spot Instance pools that aren’t optimized for capacity availability.

To find out how each allocation strategy fares regarding Spot savings and capacity, we compare ‘Cost of Auto Scaling group’ (number of instances x Spot price/hour for each type of instance) and ‘Spot interruptions rate’ (number of instances interrupted/number of instances launched) for each allocation strategy. We use fictional numbers for the purpose of this post. However, you can use the Cloud Intelligence Dashboards to find your actual Spot savings, and the Amazon EC2 Spot interruption dashboard to log Spot Instance interruptions. The example results after a 30-day period are as follows:

Allocation strategy          Instance allocation             Cost of Auto Scaling group   Spot interruptions rate
price-capacity-optimized     40 c6i.xlarge, 20 c5.xlarge     $4.80/hour                   3%
capacity-optimized           60 c5.xlarge                    $5.00/hour                   2%
lowest-price                 30 c5a.xlarge, 30 m5n.xlarge    $4.75/hour                   20%

As per the above table, with the price-capacity-optimized strategy, the cost of the Auto Scaling group is only 5 cents (1%) higher, whereas the rate of Spot interruptions is six times lower (3% vs 20%) than the lowest-price strategy. In summary, from this exercise you learn that the price-capacity-optimized strategy provides the optimal Spot experience that is the best of both the lowest-price and capacity-optimized allocation strategies.

Common use-cases of price-capacity-optimized allocation strategy

Earlier we mentioned that the price-capacity-optimized allocation strategy is recommended for most Spot workloads. To elaborate further, in this section we explore some of these common workloads.

Stateless and fault-tolerant workloads

Stateless workloads that can complete ongoing requests within two minutes of a Spot interruption notice, and the fault-tolerant workloads that have a low cost of retries, are the best fit for the price-capacity-optimized allocation strategy. This category has workloads such as stateless containerized applications, microservices, web applications, data and analytics jobs, and batch processing.

Workloads with a high cost of interruption

Workloads that have a high cost of interruption associated with an expensive cost of retries should implement checkpointing to lower the cost of interruptions. By using checkpointing, you make the price-capacity-optimized allocation strategy a good fit for these workloads, as it allocates capacity from the low-priced Spot Instance pools that offer a low Spot interruptions rate. This category has workloads such as long Continuous Integration (CI), image and media rendering, Deep Learning, and High Performance Compute (HPC) workloads.

Conclusion

We recommend that customers use the price-capacity-optimized allocation strategy as the default option. The price-capacity-optimized strategy helps Amazon EC2 Auto Scaling groups and Amazon EC2 Fleet provision target capacity with an optimal experience. Updating to the price-capacity-optimized allocation strategy is as simple as updating a single parameter in an Amazon EC2 Auto Scaling group or Amazon EC2 Fleet.
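For an existing Auto Scaling group, that single-parameter change could look like the following Python (Boto3) sketch; the group name is a placeholder, and depending on how your group was created you may need to resupply other parts of the mixed instances policy.

import boto3

autoscaling = boto3.client('autoscaling')

# Switch an existing group's Spot allocation strategy to price-capacity-optimized.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName='myasg',
    MixedInstancesPolicy={
        'InstancesDistribution': {
            'SpotAllocationStrategy': 'price-capacity-optimized',
        },
    },
)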

To learn more about allocation strategies for Spot Instances, visit the Spot allocation strategies documentation page.

Reducing Your Organization’s Carbon Footprint with Amazon CodeGuru Profiler

Post Syndicated from Isha Dua original https://aws.amazon.com/blogs/devops/reducing-your-organizations-carbon-footprint-with-codeguru-profiler/

It is crucial to examine every functional area when firms reorient their operations toward sustainable practices. Making informed decisions is necessary to reduce the environmental effect of an IT stack when creating, deploying, and maintaining it. To build a sustainable business for our customers and for the world we all share, we have deployed data centers that provide the efficient, resilient service our customers expect while minimizing our environmental footprint—and theirs. While we work to improve the energy efficiency of our datacenters, we also work to help our customers improve their operations on the AWS cloud. This two-pronged approach is based on the concept of the shared responsibility between AWS and AWS’ customers. As shown in the diagram below, AWS focuses on optimizing the sustainability of the cloud, while customers are responsible for sustainability in the cloud, meaning that AWS customers must optimize the workloads they have on the AWS cloud.

Figure 1. Shared responsibility model for sustainability

Just by migrating to the cloud, AWS customers become significantly more sustainable in their technology operations. On average, AWS customers use 77% fewer servers, 84% less power, and a 28% cleaner power mix, ultimately reducing their carbon emissions by 88% compared to when they ran workloads in their own data centers. These improvements are attributable to the technological advancements and economies of scale that AWS datacenters bring. However, there are still significant opportunities for AWS customers to make their cloud operations more sustainable. To uncover this, we must first understand how emissions are categorized.

The Greenhouse Gas Protocol organizes carbon emissions into the following scopes, along with relevant emission examples within each scope for a cloud provider such as AWS:

  • Scope 1: All direct emissions from the activities of an organization or under its control. For example, fuel combustion by data center backup generators.
  • Scope 2: Indirect emissions from electricity purchased and used to power data centers and other facilities. For example, emissions from commercial power generation.
  • Scope 3: All other indirect emissions from activities of an organization from sources it doesn’t control. AWS examples include emissions related to data center construction, and the manufacture and transportation of IT hardware deployed in data centers.

From an AWS customer perspective, emissions from customer workloads running on AWS are accounted for as indirect emissions, and part of the customer’s Scope 3 emissions. Each workload deployed generates a fraction of the total AWS emissions from each of the previous scopes. The actual amount varies per workload and depends on several factors including the AWS services used, the energy consumed by those services, the carbon intensity of the electric grids serving the AWS data centers where they run, and the AWS procurement of renewable energy.

At a high level, AWS customers approach optimization initiatives at three levels:

  • Application (Architecture and Design): Using efficient software designs and architectures to minimize the average resources required per unit of work.
  • Resource (Provisioning and Utilization): Monitoring workload activity and modifying the capacity of individual resources to prevent idling due to over-provisioning or under-utilization.
  • Code (Code Optimization): Using code profilers and other tools to identify the areas of code that use up the most time or resources as targets for optimization.

In this blogpost, we will concentrate on code-level sustainability improvements and how they can be realized using Amazon CodeGuru Profiler.

How CodeGuru Profiler improves code sustainability

Amazon CodeGuru Profiler collects runtime performance data from your live applications and provides recommendations that can help you fine-tune your application performance. Using machine learning algorithms, CodeGuru Profiler can help you find your most CPU-intensive lines of code, which contribute the most to your Scope 3 emissions. CodeGuru Profiler then suggests ways to improve the code to make it less CPU demanding. CodeGuru Profiler provides different visualizations of profiling data to help you identify what code is running on the CPU, see how much time is consumed, and suggest ways to reduce CPU utilization. Optimizing your code with CodeGuru Profiler leads to the following:

  • Improvements in application performance
  • Reduction in cloud cost, and
  • Reduction in the carbon emissions attributable to your cloud workload.

When your code performs the same task with less CPU, your applications run faster, customer experience improves, and your costs decrease alongside your cloud emissions. CodeGuru Profiler generates the recommendations that help you make your code faster by using an agent that continuously samples stack traces from your application. The stack traces indicate how much time the CPU spends on each function or method in your code—information that is then transformed into CPU and latency data that is used to detect anomalies. When anomalies are detected, CodeGuru Profiler generates recommendations that clearly outline what you should do to remediate the situation. Although CodeGuru Profiler has several visualizations that help you visualize your code, in many cases, customers can implement these recommendations without reviewing the visualizations. Let’s demonstrate this with a simple example.

Demonstration: Using CodeGuru Profiler to optimize a Lambda function

In this demonstration, the inefficiencies in an AWS Lambda function will be identified by CodeGuru Profiler.

Building our Lambda Function (10 mins)

To keep this demonstration quick and simple, let’s create a simple Lambda function that displays ‘Hello World’. Before writing the code for this function, let’s review two important concepts. First, when writing Python code that runs on AWS and calls AWS services, two critical steps are required:

  • Import the AWS SDK for Python (Boto3)
  • Create an AWS SDK service client for the service that will be called

The Python code lines (that will be part of our function) that execute these steps listed above are shown below:

import boto3 #this will import the AWS SDK library for Python
VariableName = boto3.client('dynamodb') #this will create the AWS SDK service client

Second, AWS Lambda functions functionally comprise two sections:

  • Initialization code
  • Handler code

The first time a function is invoked (i.e., a cold start), Lambda downloads the function code, creates the required runtime environment, runs the initialization code, and then runs the handler code. During subsequent invocations (warm starts), to keep execution time low, Lambda bypasses the initialization code and goes straight to the handler code. AWS Lambda is designed such that the SDK service client created during initialization persists into the handler code execution. For this reason, AWS SDK service clients should be created in the initialization code. If the code lines for creating the AWS SDK service client are placed in the handler code, the AWS SDK service client will be recreated every time the Lambda function is invoked, needlessly increasing the duration of the Lambda function during cold and warm starts. This inadvertently increases CPU demand (and cost), which in turn increases the carbon emissions attributable to the customer’s code. Below, you can see the green and brown versions of the same Lambda function.
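Since the original post illustrates these two versions with images, here is an equivalent sketch in Python; the handler logic is deliberately trivial.

import boto3

# "Green" version: the SDK service client is created once in the initialization code,
# outside the handler, and is reused across warm invocations.
dynamodb_client = boto3.client('dynamodb')

def lambda_handler(event, context):
    return 'Hello World'


# "Brown" version: the SDK service client is recreated inside the handler on every
# invocation, adding CPU time (and emissions) to each cold and warm start.
def lambda_handler_inefficient(event, context):
    client = boto3.client('dynamodb')
    return 'Hello World'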

Now that we understand the importance of structuring our Lambda function code for efficient execution, let’s create a Lambda function that recreates the SDK service client. We will then watch CodeGuru Profiler flag this issue and generate a recommendation.

  1. Open AWS Lambda from the AWS Console and click on Create function.
  2. Select Author from scratch, name the function ‘demo-function’, select Python 3.9 under runtime, select x86_64 under Architecture.
  3. Expand Permissions, then choose whether to create a new execution role or use an existing one.
  4. Expand Advanced settings, and then select Function URL.
  5. For Auth type, choose AWS_IAM or NONE.
  6. Select Configure cross-origin resource sharing (CORS). By selecting this option during function creation, your function URL allows requests from all origins by default. You can edit the CORS settings for your function URL after creating the function.
  7. Choose Create function.
  8. In the code editor tab of the code source window, copy and paste the code below:
#initialization code
import json
import boto3

#handler code
def lambda_handler(event, context):
  client = boto3.client('dynamodb') #create AWS SDK service client
  #simple code block for demonstration purposes
  output = 'Hello World'
  print(output)
  #handler function return

  return output

Ensure that the handler code is properly indented.

  9. Save the code, Deploy, and then Test.
  10. For the first execution of this Lambda function, a test event configuration dialog will appear. On the Configure test event dialog window, leave the selection as the default (Create new event), enter ‘demo-event’ as the Event name, and leave the hello-world template as the Event template.
  11. When you run the code by clicking on Test, the console should return ‘Hello World’.
  12. To simulate actual traffic, let’s run a curl script that will invoke the Lambda function every 0.06 seconds. On a bash terminal, run the following command:
while true; do curl "{Lambda function URL}"; sleep 0.06; done

If you do not have Git Bash installed, you can use AWS Cloud9, which supports curl commands.

Enabling CodeGuru Profiler for our Lambda function

We will now set up CodeGuru Profiler to monitor our Lambda function. For Lambda functions running on Java 8 (Amazon Corretto), Java 11, and Python 3.8 or 3.9 runtimes, CodeGuru Profiler can be enabled through a single click in the configuration tab in the AWS Lambda console. Other runtimes can be enabled by following a series of steps that can be found in the CodeGuru Profiler documentation for Java and Python.

Our demo code is written in Python 3.9, so we will enable Profiler from the configuration tab in the AWS Lambda console.

  1. On the AWS Lambda console, select the demo-function that we created.
  2. Navigate to Configuration > Monitoring and operations tools, and click Edit on the right side of the page.

  3. Scroll down to Amazon CodeGuru Profiler and click the button next to Code profiling to turn it on. After enabling Code profiling, click Save.

Note: CodeGuru Profiler requires 5 minutes of Lambda runtime data to generate results. After your Lambda function provides this runtime data, which may need multiple runs if your Lambda function has a short runtime, it will display within the Profiling group page in the CodeGuru Profiler console. The profiling group will be given a default name (i.e., aws-lambda-<lambda-function-name>), and it will take approximately 15 minutes after CodeGuru Profiler receives the runtime data for this profiling group to appear. Be patient. Although our function duration is ~33ms, our curl script invokes the application once every 0.06 seconds. This should give Profiler sufficient information to profile our function in a couple of hours. After 5 minutes, our profiling group should appear in the list of active profiling groups as shown below.

Depending on how frequently your Lambda function is invoked, it can take up to 15 minutes to aggregate profiles, after which you can see your first visualization in the CodeGuru Profiler console. The granularity of the first visualization depends on how active your function was during those first 5 minutes of profiling—an application that is idle most of the time doesn’t have many data points to plot in the default visualization. However, you can remedy this by looking at a wider time period of profiled data, for example, a day or even up to a week, if your application has very low CPU utilization. For our demo function, a recommendation should appear after about an hour. By this time, the profiling groups list should show that our profiling group now has one recommendation.
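You can also check this programmatically. The following Python (Boto3) sketch lists the profiling groups in the account and pulls recommendations for the demo function over the last hour; the profiling group name follows the default naming convention mentioned above and is an assumption.

import datetime
import boto3

codeguru = boto3.client('codeguruprofiler')

# List profiling groups in the account.
groups = codeguru.list_profiling_groups(includeDescription=True)
print([group['name'] for group in groups.get('profilingGroups', [])])

# Fetch recommendations for the demo function over the last hour.
end_time = datetime.datetime.utcnow()
start_time = end_time - datetime.timedelta(hours=1)
report = codeguru.get_recommendations(
    profilingGroupName='aws-lambda-demo-function',  # assumed default group name
    startTime=start_time,
    endTime=end_time,
)
for recommendation in report.get('recommendations', []):
    print(recommendation['pattern']['name'], recommendation['allMatchesCount'])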

Profiler has now flagged the repeated creation of the SDK service client with every invocation.

From the information provided, we can see that our CPU is spending 5x more computing time than expected on the recreation of the SDK service client. The estimated cost impact of this inefficiency is also provided. In production environments, the cost impact of seemingly minor inefficiencies can scale very quickly to several kilograms of CO2 and hundreds of dollars as invocation frequency and the number of Lambda functions increase.
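
As a reference, here is a minimal sketch (not the exact code from the recommendation) of the kind of change Profiler's finding points to: creating the SDK client once at module scope so that warm invocations reuse it instead of rebuilding it on every call.

import boto3

# Created once per execution environment, outside the handler,
# so every warm invocation reuses the same client.
dynamodb_client = boto3.client('dynamodb')

def lambda_handler(event, context):
    # simple code block for demonstration purposes
    output = 'Hello World'
    print(output)
    return output

With this change, the client initialization cost is paid only on cold starts, which removes the repeated CPU time that Profiler flagged above.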

CodeGuru Profiler integrates with Amazon DevOps Guru, a fully managed service that makes it easy for developers and operators to improve the performance and availability of their applications. Amazon DevOps Guru analyzes operational data and application metrics to identify behaviors that deviate from normal operating patterns. Once these operational anomalies are detected, DevOps Guru presents intelligent recommendations that address current and predicted future operational issues. By integrating with CodeGuru Profiler, customers can now view operational anomalies and code optimization recommendations on the DevOps Guru console. The integration, which is enabled by default, is only applicable to Lambda resources that are supported by CodeGuru Profiler and monitored by both DevOps Guru and CodeGuru.

We can now stop the curl loop (Ctrl+C) so that the Lambda function stops running. Next, we delete the profiling group that was created when we enabled profiling in Lambda, and then delete the Lambda function or repurpose it as needed.

Conclusion

Cloud sustainability is a shared responsibility between AWS and our customers. While we work to make our data centers more sustainable, customers also have to work to make their code, resources, and applications more sustainable. As demonstrated above, CodeGuru Profiler can help you improve code sustainability. To start profiling your code today, visit the CodeGuru Profiler documentation page. To start monitoring your applications, head over to the Amazon DevOps Guru documentation page.

About the authors:

Isha Dua

Isha Dua is a Senior Solutions Architect based in the San Francisco Bay Area. She helps AWS Enterprise customers grow by understanding their goals and challenges, and guiding them on how they can architect their applications in a cloud native manner while making sure they are resilient and scalable. She's passionate about machine learning technologies and Environmental Sustainability.

Christian Tomeldan

Christian Tomeldan is a DevOps Engineer turned Solutions Architect. Operating out of San Francisco, he is passionate about technology and conveys that passion to customers ensuring they grow with the right support and best practices. He focuses his technical depth mostly around Containers, Security, and Environmental Sustainability.

Ifeanyi Okafor

Ifeanyi Okafor is a Product Manager with AWS. He enjoys building products that solve customer problems at scale.

Education Unplugged: Google Ends Unlimited Storage for Schools

Post Syndicated from Barry Kaufman original https://www.backblaze.com/blog/education-unplugged-google-ends-unlimited-storage-for-schools/

For schools and universities, data storage is paramount. Staff, administrators, and educators, not to mention students, need a secure place to store files. Add to that the legacy accounts of alumni storing irreplaceable files from their education, and you have a massive need for storage.

For a long time, Google was happy to oblige. In 2006, the company launched Google Apps for Education (later G Suite for Education; now Google Workspace for Education), offering free unlimited storage for qualifying schools and districts. But when they'd reached market penetration—somewhere in the neighborhood of 83% of school districts according to EdWeek Research Center—they ended the unlimited storage policy many schools had come to rely on.

If you already know about Google’s policy change and are looking for a solution to save your data and your budget, getting started with Backblaze B2 is easy. Otherwise, read on to learn more about the change, what it may mean for you in the long-term, and a Backblaze partnership with Carahsoft that eases purchasing through local, state, and federal buying programs.

Office Hours Are Over—Google Ends Unlimited Storage for Educational Institutions

Google’s policy change took effect in July 2022, and many schools and universities had to find alternative storage solutions or change their internal storage policies to stay within the new limits. Under the terms of the new policy, Google offers a baseline of 100TB of pooled storage shared across all users.

The policy shift was spurred, Google says, because “as we’ve grown to serve more schools and universities each year, storage consumption has also rapidly accelerated. Storage is not being consumed equitably across—nor within—institutions, and school leaders often don’t have the tools they need to manage this.”

For some school districts, colleges, and universities, this policy shift meant having to reach out to alumni with the request that they back up all their own data. It also hit some already-strapped IT budgets particularly hard. Estimates vary, but depending on the size of the school and their data needs, they could be looking at anywhere up to an extra $70,000 a year in storage costs.

That’s a non-negligible fee for a service that has become increasingly vital for schools. We’ve written about how important cloud storage is for schools, but it’s worth reiterating here.

School is in Session

Not only will a secure cloud storage solution help protect school districts from threats of ransomware, it can also help maintain predictable operating expenses and create opportunities for collaboration through remote learning. In cases like Kansas’ Pittsburg State University, it helped keep data safe from natural disasters that abound in places like Tornado Alley. Pittsburg State implemented Backblaze B2 as their off-site backup in the event of disaster and used Object Lock functionality to safeguard data from ransomware.

Photo Credit: Pittsburg State University

The academic world is still adjusting to Google’s policy change. Stories have emerged of schools simply dropping Google and being forced to move data out of thousands of alumni accounts. A quick-fix solution to avoid Google’s new fee structure, this strategy is being undertaken without a clear answer to the question of how alumni can access their own data after the move. After all, how up to date are those alumni email lists?

A Google Alternative for Schools

School districts, colleges, and universities need to find a new, budget-friendly way forward. If you’re still struggling to find an alternative storage solution now that the bell has rung and Google has dismissed its free storage, Backblaze can help you find a new home on the cloud.

Backblaze B2 offers schools unlimited, pay-as-you-go storage at a fraction of the price of Google, enabling you to continue offering students and alumni the storage space they’ve come to expect. For colleges, universities, and school districts not buying through government purchasing programs, you can sign up for Backblaze B2 directly. We offer 10TB of storage free so that you can see if it works for you, but if you want to do a larger or customized proof of concept, reach out to our Sales team.

Accessing Backblaze Through Your Local, State, or Federal Buying Program

As we revealed during this year’s Educause conference, Backblaze has recently rolled out a partnership with Carahsoft aimed squarely at budget-conscious educational institutions. The partnership brings Backblaze services to educational institutions with a capacity-based pricing model that’s a fraction of the price of traditional cloud providers like Google. And it can be purchased through local, state, or federal buying programs. If you buy IT services for your district through a distributor, this solution could work for you. Visit the partnership announcement to learn more.

The post Education Unplugged: Google Ends Unlimited Storage for Schools appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Better together: AWS SAM CLI and HashiCorp Terraform

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/better-together-aws-sam-cli-and-hashicorp-terraform/

This post is written by Suresh Poopandi, Senior Solutions Architect and Seb Kasprzak, Senior Solutions Architect.

Today, AWS is announcing the public preview of AWS Serverless Application Model CLI (AWS SAM CLI) support for local development, testing, and debugging of serverless applications defined using HashiCorp Terraform configuration.

AWS SAM and Terraform are open-source frameworks for building applications using infrastructure as code (IaC). Both frameworks allow building, changing, and managing cloud infrastructure in a repeatable way by defining resource configurations.

Previously, you could use the AWS SAM CLI to build, test, and debug applications defined by AWS SAM templates or through the AWS Cloud Development Kit (CDK). With this preview release, you can also use AWS SAM CLI to test and debug serverless applications defined using Terraform configurations.

Walkthrough of Terraform support

This blog post contains a sample Terraform template, which shows how developers can use AWS SAM CLI to build locally, test, and debug AWS Lambda functions defined in Terraform. This sample application has a Lambda function that stores a book review score and review text in an Amazon DynamoDB table. An Amazon API Gateway book review API uses Lambda proxy integration to invoke the book review Lambda function.

Demo application architecture

Prerequisites

Before running this example:

  • Install the AWS CLI.
    • Configure with valid AWS credentials.
    • Note that the AWS CLI now requires a Python runtime.
  • Install HashiCorp Terraform.
  • Install the AWS SAM CLI.
  • Install Docker (required to run AWS Lambda function locally).

Since Terraform support is currently in public preview, you must provide a --beta-features flag while executing AWS SAM commands. Alternatively, set this flag in the samconfig.toml file by adding beta_features="true".
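
For reference, one possible samconfig.toml layout is sketched below. The default environment name and the global parameters table are assumptions based on the AWS SAM CLI configuration-file conventions, not something this post prescribes, so adjust them to match your project's environment and command sections.

# samconfig.toml
version = 0.1

# Table name is an assumption; per-command tables such as
# [default.build.parameters] can also carry this flag.
[default.global.parameters]
beta_features = "true"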

Deploying the example application

This Lambda function interacts with DynamoDB. For the example to work, it requires an existing DynamoDB table in an AWS account. Deploying this creates all the required resources for local testing and debugging of the Lambda function.

To deploy:

  1. Clone the aws-sam-terraform-examples repository locally:
    git clone https://github.com/aws-samples/aws-sam-terraform-examples
  2. Change to the project directory:
    cd aws-sam-terraform-examples/zip_based_lambda_functions/api-lambda-dynamodb-example/

    Terraform must store the state of the infrastructure and configuration it creates. Terraform uses this state to map cloud resources to configuration and track changes. This example uses a local backend to store the state file on the local filesystem.

  3. Open the main.tf file and review its contents. Locate the provider section of the code, updating the region field with the target deployment Region of this sample solution:
    provider "aws" {
        region = "<AWS region>" # e.g. us-east-1
    }
  4. Initialize a working directory containing Terraform configuration files:
    terraform init
  5. Deploy the application using the Terraform CLI. When prompted with "Do you want to perform these actions?", enter yes.
    terraform apply

Terraform deploys the application, as shown in the terminal output.

Terminal output

After completing the deployment process, the AWS account is ready for use by the Lambda function with all the required resources.

Terraform Configuration for local testing

Lambda functions require application dependencies to be bundled together with the function code as a deployment package (typically a .zip file) in order to run. Terraform does not natively create this deployment package; a separate build process handles package creation.

This sample application uses Terraform's null_resource and local-exec provisioner to trigger a build script, which installs Python dependencies in a temporary folder and creates a .zip file containing the dependencies and function code. This logic is contained within the main.tf file of the example application.

To explain each code segment in more detail (a simplified sketch of both resources appears after this list):

Terraform example

  1. aws_lambda_function: This sample defines a Lambda function resource. It contains properties such as environment variables (in this example, the DynamoDB table_id) and the depends_on argument, which ensures that the .zip package is created before the Lambda function is deployed.

    Terraform example

  2. null_resource: When the AWS SAM CLI build command runs, AWS SAM reviews Terraform code for any null_resource starting with sam_metadata_ and uses the information contained within this resource block to gather the location of the Lambda function source code and .zip package. This information allows the AWS SAM CLI to start the local execution of the Lambda function. This special resource should contain the following attributes:
    • resource_name: The Lambda function address as defined in the current module (aws_lambda_function.publish_book_review)
    • resource_type: Packaging type of the Lambda function (ZIP_LAMBDA_FUNCTION)
    • original_source_code: Location of Lambda function code
    • built_output_path: Location of .zip deployment package
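
The two code-segment figures from the original post are not reproduced here, so the following is a simplified, illustrative sketch of how these resources fit together. The resource and function names follow the sample repository, but the build script, IAM role ARN, source paths, environment variable value, and .zip location are placeholders rather than the exact values from aws-sam-terraform-examples.

# Build step: package the function code and its dependencies into a .zip file.
# build.sh is a placeholder for the sample's build script.
resource "null_resource" "build_lambda_package" {
  triggers = {
    source_hash = filebase64sha256("${path.module}/src/index.py")
  }
  provisioner "local-exec" {
    command = "./build.sh"
  }
}

# Lambda function deployed from the built .zip package.
# The role ARN and environment variable value are placeholders.
resource "aws_lambda_function" "publish_book_review" {
  function_name = "publish-book-review"
  filename      = "${path.module}/building/function.zip"
  handler       = "index.lambda_handler"
  runtime       = "python3.9"
  role          = "arn:aws:iam::123456789012:role/lambda-execution-role"

  environment {
    variables = {
      TABLE_NAME = "book-reviews"
    }
  }

  depends_on = [null_resource.build_lambda_package]
}

# Metadata resource read by the AWS SAM CLI during sam build and sam local invoke.
# Its name must start with "sam_metadata_".
resource "null_resource" "sam_metadata_aws_lambda_function_publish_book_review" {
  triggers = {
    resource_name        = "aws_lambda_function.publish_book_review"
    resource_type        = "ZIP_LAMBDA_FUNCTION"
    original_source_code = "${path.module}/src"
    built_output_path    = "${path.module}/building/function.zip"
  }
}

With this metadata in place, sam build --hook-name terraform --beta-features can locate the function's source code and build output, which is what enables the local invoke and debugging steps in the next sections.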

Local testing

With the backend services now deployed, run local tests to see if everything is working. The locally running sample Lambda function interacts with the services deployed in the AWS account. Run sam build after each code update so that the local testing environment reflects your changes.

  1. Local Build: To create a local build of the Lambda function for testing, use the sam build command:
    sam build --hook-name terraform --beta-features
  2. Local invoke: The first test is to invoke the Lambda function with a mocked event payload from the API Gateway. These events are in the events directory. Run this command, passing in a mocked event:
    AWS_DEFAULT_REGION=<Your Region Name> sam local invoke aws_lambda_function.publish_book_review -e events/new-review.json --beta-features

    AWS SAM mounts the Lambda function runtime and code and runs it locally. The function makes a request to the DynamoDB table in the cloud to store the information provided via the API. It returns a 200 response code, signaling the successful completion of the function.

  3. Local invoke from AWS CLI
    Another test is to run a local emulation of the Lambda service using sam local start-lambda and invoke the function directly using the AWS SDK or the AWS CLI. Start the local emulator with the following command:

    sam local start-lambda
    Terminal output

    AWS SAM starts the emulator and exposes a local endpoint for the AWS CLI or a software development kit (SDK) to call. With the start-lambda command still running, run the following command to invoke this function locally with the AWS CLI:

    aws lambda invoke --function-name aws_lambda_function.publish_book_review --endpoint-url http://127.0.0.1:3001/ response.json --cli-binary-format raw-in-base64-out --payload file://events/new-review.json

    The AWS CLI invokes the local function and returns a status report of the service to the screen. The response from the function itself is in the response.json file. The window shows the following messages:

    Invocation results

  4. Debugging the Lambda function

Developers can use AWS SAM with a variety of AWS toolkits and debuggers to test and debug serverless applications locally. For example, developers can perform local step-through debugging of Lambda functions by setting breakpoints, inspecting variables, and running function code one line at a time.

The AWS Toolkit Integrated Development Environment (IDE) plugins make it easier to develop, debug, and deploy serverless applications defined using AWS SAM, providing an integrated experience for building, testing, debugging, deploying, and invoking Lambda functions from within the IDE. Refer to this link, which lists common IDE/runtime combinations that support step-through debugging of AWS SAM applications.

Visual Studio Code keeps debugging configuration information in a launch.json file in a workspace .vscode folder. Here is a sample launch configuration file to debug Lambda code locally using AWS SAM and Visual Studio Code.

{
    "version": "0.2.0",
    "configurations": [
          {
            "name": "Attach to SAM CLI",
            "type": "python",
            "request": "attach",
            "address": "localhost",
            "port": 9999,
            "localRoot": "${workspaceRoot}/sam-terraform/book-reviews",
            "remoteRoot": "/var/task",
            "protocol": "inspector",
            "stopOnEntry": false
          }
    ]
}

After adding the launch configuration, start a debug session in Visual Studio Code.

Step 1: Uncomment the following two lines in zip_based_lambda_functions/api-lambda-dynamodb-example/src/index.py

Enable debugging in the Lambda function

Step 2: Run the Lambda function in debug mode and wait for Visual Studio Code to attach to this debugging session:

sam local invoke aws_lambda_function.publish_book_review -e events/new-review.json -d 9999

Step 3: Select the Run and Debug icon in the Activity Bar on the side of VS Code. In the Run and Debug view, select “Attach to SAM CLI” and choose Run.

For this example, set a breakpoint at the first line of lambda_handler. This breakpoint allows viewing the input data coming into the Lambda function. Also, it helps in debugging code issues before deploying to the AWS Cloud.

Debugging in the IDE

Lambda Terraform module

A community-supported Terraform module for Lambda (terraform-aws-lambda) has added support for the SAM metadata null_resource. When using the latest version of this module, the AWS SAM CLI automatically supports local invocation of the Lambda function, without requiring additional resource blocks.

Conclusion

This blog post shows how to use the AWS SAM CLI together with HashiCorp Terraform to develop and test serverless applications in a local environment. With AWS SAM CLI’s support for HashiCorp Terraform, developers can now use the AWS SAM CLI to test their serverless functions locally while choosing their preferred infrastructure as code tooling.

For more information about the features supported by AWS SAM, visit AWS SAM. For more information about the Metadata resource, visit HashiCorp Terraform.

Support for the Terraform configuration is currently in preview, and the team is asking for feedback and feature request submissions. The goal is for both communities to help improve the local development process using AWS SAM CLI. Submit your feedback by creating a GitHub issue here.

For more serverless learning resources, visit Serverless Land.

Another Event-Related Spyware App

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/another-event-related-spyware-app.html

Last month, we were warned not to install Qatar’s World Cup app because it was spyware. This month, it’s Egypt’s COP27 Summit app:

The app is being promoted as a tool to help attendees navigate the event. But it risks giving the Egyptian government permission to read users’ emails and messages. Even messages shared via encrypted services like WhatsApp are vulnerable, according to POLITICO’s technical review of the application, and two of the outside experts.

The app also provides Egypt’s Ministry of Communications and Information Technology, which created it, with other so-called backdoor privileges, or the ability to scan people’s devices.

On smartphones running Google’s Android software, it has permission to potentially listen into users’ conversations via the app, even when the device is in sleep mode, according to the three experts and POLITICO’s separate analysis. It can also track people’s locations via smartphone’s built-in GPS and Wi-Fi technologies, according to two of the analysts.

A Digital Red Cross

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/a-digital-red-cross.html

The International Committee of the Red Cross wants some digital equivalent to the iconic red cross, to alert would-be hackers that they are accessing a medical network.

The emblem wouldn’t provide technical cybersecurity protection to hospitals, Red Cross infrastructure or other medical providers, but it would signal to hackers that a cyberattack on those protected networks during an armed conflict would violate international humanitarian law, experts say, Tilman Rodenhäuser, a legal adviser to the International Committee of the Red Cross, said at a panel discussion hosted by the organization on Thursday.

I can think of all sorts of problems with this idea and many reasons why it won’t work, but those also apply to the physical red cross on buildings, vehicles, and people’s clothing. So let’s try it.

EDITED TO ADD: Original reference.

New Book: A Hacker’s Mind

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/11/new-book-a-hackers-mind.html

I have a new book coming out in February. It’s about hacking.

A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend them Back isn’t about hacking computer systems; it’s about hacking more general economic, political, and social systems. It generalizes the term hack as a means of subverting a system’s rules in unintended ways.

What sorts of system? Any system of rules, really. Take the tax code, for example. It’s not computer code, but it’s a series of algorithms—supposedly deterministic—that take a bunch of inputs about your income and produce an output that’s the amount of money you owe. This code has vulnerabilities; we call them loopholes. It has exploits; those are tax avoidance strategies. And there is an entire industry of black-hat hackers who exploit vulnerabilities in the tax code: we call them accountants and tax attorneys.

In my conception, a “hack” is something a system permits, but is unanticipated and unwanted by its designers. It’s unplanned: a mistake in the system’s design or coding. It’s subversion, or an exploitation. It’s a cheat—but only sort of. Just as a computer vulnerability can be exploited over the Internet because the code permits it, a tax loophole is “allowed” by the system because it follows the rules, even though it might subvert the intent of those rules.

Once you start thinking of hacking in this way, you'll start seeing hacks everywhere. You can find hacks in professional sports, in customer reward programs, in financial systems, in politics; in lots of economic, political, and social systems; against our cognitive functions. A curved hockey stick is a hack, and we know the name of the hacker who invented it. Airline frequent-flier mileage runs are a hack. The filibuster was originally a hack, invented by Cato the Younger, a Roman senator, in 60 BCE. Hedge funds are full of hacks.

A system is just a set of rules. Or norms, since the “rules” aren’t always formal. And even the best-thought-out sets of rules will be incomplete or inconsistent. It’ll have ambiguities, and things the designers haven’t thought of. As long as there are people who want to subvert the goals of a system, there will be hacks.

I use this framework in A Hacker’s Mind to tease out a lot of why today’s economic, political, and social systems are failing us so badly, and apply what we have learned about hacking defenses in the computer world to those more general hacks. And I end by looking at artificial intelligence, and what will happen when AIs start hacking. Not the problems of hacking AI, which are both ubiquitous and super weird, but what happens when an AI is able to discover new hacks against these more general systems. What happens when AIs find tax loopholes, or loopholes in financial regulations. We have systems in place to deal with these sorts of hacks, but they were invented when hackers were human and reflect the human pace of hack discovery. They won’t be able to withstand an AI finding dozens, or hundreds, of loopholes in financial regulations. We’re simply not ready for the speed, scale, scope, and sophistication of AI hackers.

A Hacker’s Mind is my pandemic book, written in 2020 and 2021. It represents another step in my continuing journey of increasing generalizations. And I really like the cover. It will be published on February 7. It makes an excellent belated holiday gift. Order yours today and avoid the rush.