Now Available – AMD EPYC-Powered Amazon EC2 T3a Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-amd-epyc-powered-amazon-ec2-t3a-instances/

The AMD EPYC-powered T3a instances that I promised you last year are available now and you can start using them today! Like the recently announced M5ad and R5ad instances, the T3a instances are built on the AWS Nitro System and give you an opportunity to balance your instance mix based on cost and performance.

T3a Instances
These instances deliver burstable, cost-effective performance and are a great fit for workloads that do not need high sustained compute power but experience temporary spikes in usage. You get a generous and assured baseline amount of processing power and the ability to transparently scale up to full core performance when you need more processing power, for as long as necessary. To learn more about the burstable compute model common to the T3 and the T3a, read New T3 Instances – Burstable, Cost-Effective Performance.
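To make the burstable model concrete, here is a rough credit sketch. The 12-credits-per-hour earn rate below is an assumption borrowed from the published t3.micro figure, not an official T3a number; check the T3/T3a documentation for the rate that applies to your size:

```shell
# Assumption: the instance earns 12 CPU credits per hour, and one
# credit buys one vCPU at 100% utilization for one minute.
# After 5 idle hours, how long can the instance burst a full core?
credits=$((12 * 5))
echo "Credits banked: ${credits} (~${credits} minutes of full-core burst)"
```

Once banked credits run out, the instance drops back to its baseline rate rather than stopping, which is what makes the model safe for spiky workloads.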

You can launch T3a instances today in seven sizes in the US East (N. Virginia), US West (Oregon), Europe (Ireland), US East (Ohio), and Asia Pacific (Singapore) Regions in On-Demand, Spot, and Reserved Instance form. Here are the specs:

Instance Name | vCPUs | RAM     | EBS-Optimized Bandwidth | Network Bandwidth
t3a.nano      | 2     | 0.5 GiB | Up to 1.5 Gbps          | Up to 5 Gbps
t3a.micro     | 2     | 1 GiB   | Up to 1.5 Gbps          | Up to 5 Gbps
t3a.small     | 2     | 2 GiB   | Up to 1.5 Gbps          | Up to 5 Gbps
t3a.medium    | 2     | 4 GiB   | Up to 1.5 Gbps          | Up to 5 Gbps
t3a.large     | 2     | 8 GiB   | Up to 2.1 Gbps          | Up to 5 Gbps
t3a.xlarge    | 4     | 16 GiB  | Up to 2.1 Gbps          | Up to 5 Gbps
t3a.2xlarge   | 8     | 32 GiB  | Up to 2.1 Gbps          | Up to 5 Gbps

The T3 and the T3a instances are available in the same sizes and can use the same AMIs, making it easy for you to try both and find the one that is the best match for your application.

Pricing is 10% lower than the equivalent existing T3 instances; see the On-Demand, Spot, and Reserved Instance pricing pages for more info.
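The discount is easy to sanity-check. The $0.0104/hour figure below is a hypothetical T3 On-Demand rate used purely for illustration; the pricing pages have the current numbers:

```shell
# Apply the 10% T3a discount to a hypothetical T3 hourly rate.
awk 'BEGIN { t3 = 0.0104; printf "t3a equivalent: $%.4f/hr\n", t3 * 0.90 }'
# prints: t3a equivalent: $0.0094/hr
```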

Jeff;

[$] Devuan, April Fools, and self-destruction

Post Syndicated from jake original https://lwn.net/Articles/786593/rss

An April Fools joke that went sour seems to be at least the proximate cause of a rather large upheaval in the Devuan community. For much of April 1 (or March 31, depending on time zone), the Devuan web site looked like it had been taken over by attackers, which was worrisome to many, but it was all a prank. The joke was clever, way over the top, unprofessional, or some combination of those, depending on who is describing it, but the incident and the threads on the devuan-dev mailing list have led to rancor, resignations, calls for resignations, and more.

Amazon SageMaker Ground Truth keeps simplifying labeling workflows

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/amazon-sagemaker-ground-truth-keeps-simplifying-labeling-workflows/

Launched at AWS re:Invent 2018, Amazon SageMaker Ground Truth is a capability of Amazon SageMaker that makes it easy for customers to efficiently and accurately label the datasets required for training machine learning systems.

A quick recap on Amazon SageMaker Ground Truth

Amazon SageMaker Ground Truth helps you build highly accurate training datasets for machine learning quickly. SageMaker Ground Truth offers easy access to public and private human labelers and provides them with built-in workflows and interfaces for common labeling tasks. Additionally, SageMaker Ground Truth can lower your labeling costs by up to 70% using automatic labeling, which works by training Ground Truth from data labeled by humans so that the service learns to label data independently.

Amazon SageMaker Ground Truth helps you build datasets for:

  • Text classification.
  • Image classification, i.e., categorizing images into specific classes.
  • Object detection, i.e., locating objects in images with bounding boxes.
  • Semantic segmentation, i.e., locating objects in images with pixel-level precision.
  • Custom user-defined tasks that let customers annotate literally anything.

You can choose to use your team of labelers and route labeling requests directly to them. Alternatively, if you need to scale up, options are provided directly in the Amazon SageMaker Ground Truth console to work with labelers outside of your organization. You can access a public workforce of over 500,000 labelers via integration with Amazon Mechanical Turk. Alternatively, if your data requires confidentiality or special skills, you can use professional labeling companies pre-screened by Amazon, and listed on the AWS Marketplace.

Announcing new features

Since the service was launched, we gathered plenty of customer feedback (keep it coming!), from companies such as T-Mobile, Pinterest, Change Healthcare, GumGum, Automagi and many more. We used it to define what the next iteration of the service would look like, and just a few weeks ago, we launched two highly requested features:

  • Multi-category bounding boxes, allowing you to label multiple categories within an image simultaneously.
  • Three new UI templates for your custom workflows, for a total of fifteen different templates that help you quickly build annotation workflows for images, text, and audio datasets.

Today, we’re happy to announce another set of new features that keep simplifying the process of building and running cost-effective labeling workflows. Let’s look at each one of them.

Job chaining

Customers often want to run a subsequent labeling job using the output of a previous labeling job. Basically, they want to chain labeling jobs together using the output labeled dataset (and the output ML model, if automated data labeling was enabled). For example, they may run an initial job that identifies whether humans are present in an image, and then a subsequent job that draws bounding boxes around those humans.

If active learning was used, customers may also want to use the ML model that was produced in order to bootstrap automated data labeling in a subsequent job. Setup couldn’t be easier: you can chain labeling jobs with just one click!
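Mechanically, chaining amounts to pointing the next job's input at the previous job's output manifest. As a sketch, a chained job's input configuration would look something like this (the field names follow the CreateLabelingJob API's LabelingJobInputConfig shape; the bucket and prefix are placeholders):

```json
{
  "DataSource": {
    "S3DataSource": {
      "ManifestS3Uri": "s3://my-bucket/first-job/manifests/output/output.manifest"
    }
  }
}
```

The console's one-click chaining fills in the previous job's output location for you.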

Job tracking

Customers want to be able to see the progress of their labeling jobs. We now provide near real-time status for labeling jobs.

Long-lived jobs

Many customers use experts as labelers, and these individuals perform labeling on a periodic basis. For example, healthcare companies often use clinicians as their expert labelers, and they can only perform labeling occasionally during downtime. In these scenarios, labeling jobs need to run longer, sometimes for weeks or months. We now support extended task timeout windows where each batch of a labeling job can run for 10 days, meaning labeling jobs can extend for months.

Dynamic custom workflows

When setting up custom workflows, customers often want to present additional context alongside the source data. For example, a customer may want to display the specific weather conditions above each image in the tasks they send to labelers; this information can help labelers better perform the task at hand. Specifically, this feature allows customers to inject output from previous labeling jobs or other custom content into the custom workflow. This information is passed into a pre-processing Lambda function using the augmented manifest file that includes the source data and additional context. The customer can also use the additional context to dynamically adjust the workflow.

New service providers and new languages

We are listing two new data labeling service providers on the AWS Marketplace: Vivetic and SmartOne. With the addition of these two vendors, Amazon SageMaker Ground Truth adds support for data labeling in French, German, and Spanish.

Regional expansion

In addition to US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo), Amazon SageMaker Ground Truth is now available in Asia Pacific (Sydney).

Customer case study: ZipRecruiter

ZipRecruiter is helping people find great jobs, and helping employers build great companies. They’ve been using Amazon SageMaker since launch. Says ZipRecruiter CTO Craig Ogg: “ZipRecruiter’s AI-powered algorithm learns what each employer is looking for and provides a personalized, curated set of highly relevant candidates. On the other side of the marketplace, the company’s technology matches job seekers with the most pertinent jobs. And to do all that efficiently, we needed a Machine Learning model to extract relevant data automatically from uploaded resumes”.

Of course, building datasets is a critical part of the machine learning process, and it’s often expensive and extremely time-consuming. To solve both problems, ZipRecruiter turned to Ground Truth and one of our labeling partners, iMerit.

As Craig puts it: “Amazon SageMaker Ground Truth will significantly help us reduce the time and effort required to create datasets for training. Due to the confidential nature of the data, we initially considered using one of our teams but it would take time away from their regular tasks and it would take months to collect the data we needed. Using Amazon SageMaker Ground Truth, we engaged iMerit, a professional labeling company that has been pre-screened by Amazon, to assist with the custom annotation project. With their assistance we were able to collect thousands of annotations in a fraction of the time it would have taken using our own team.”

Getting started

I hope that this post was informative, and that the new features will let you build even faster. Please try Amazon SageMaker Ground Truth, let us know what you think, and help us build the next iteration of this cool service!

Julien

Mozilla’s 2019 Internet Health Report

Post Syndicated from ris original https://lwn.net/Articles/786642/rss

The Mozilla Blog introduces Mozilla’s 2019 Internet Health Report. “In the Report’s three spotlight articles, we unpack three big issues: One examines the need for better machine decision making — that is, asking questions like Who designs the algorithms? and What data do they feed on? and Who is being discriminated against? Another examines ways to rethink the ad economy, so surveillance and addiction are no longer design necessities. The third spotlight article examines the rise of smart cities, and how local governments can integrate tech in a way that serves the public good, not commercial interests.”

[$] On technological liberty

Post Syndicated from jake original https://lwn.net/Articles/786305/rss

In his keynote at the 2019 Legal and Licensing Workshop (LLW), longtime workshop participant Andrew Wilson looked at the past, but he went much further back than, say, the history of free software—or even computers. His talk looked at technological liberty in the context of classical liberal philosophic thinking. He mapped some of that thinking to the world of free and open-source software (FOSS) and to some other areas where our liberties are under attack.

New – Query for AWS Regions, Endpoints, and More Using AWS Systems Manager Parameter Store

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-query-for-aws-regions-endpoints-and-more-using-aws-systems-manager-parameter-store/

In response to requests from AWS customers, I have been asking our service teams to find ways to make information about our regions and services available programmatically. Today I am happy to announce that this information is available in the AWS Systems Manager Parameter Store, and that you can easily access it from your scripts and your code. You can get a full list of active regions, find out which services are available with them, and much more.

Running Queries
I’ll use the AWS Command Line Interface (CLI) for most of my examples; you can also use the AWS Tools for Windows PowerShell or any of the AWS SDKs. As is the case with all of the CLI commands, you can request output in JSON, tab-delimited text, or table format. I’ll use JSON, and will make liberal use of the jq utility to show the more relevant part of the output from each query.

Here’s how to query for the list of active regions:

$ aws ssm get-parameters-by-path \
  --path /aws/service/global-infrastructure/regions --output json | \
  jq .Parameters[].Name
"/aws/service/global-infrastructure/regions/ap-northeast-1"
"/aws/service/global-infrastructure/regions/eu-central-1"
"/aws/service/global-infrastructure/regions/eu-north-1"
"/aws/service/global-infrastructure/regions/eu-west-1"
"/aws/service/global-infrastructure/regions/eu-west-3"
"/aws/service/global-infrastructure/regions/sa-east-1"
"/aws/service/global-infrastructure/regions/us-east-2"
"/aws/service/global-infrastructure/regions/us-gov-east-1"
"/aws/service/global-infrastructure/regions/us-gov-west-1"
"/aws/service/global-infrastructure/regions/us-west-1"
"/aws/service/global-infrastructure/regions/ap-northeast-2"
"/aws/service/global-infrastructure/regions/ap-northeast-3"
"/aws/service/global-infrastructure/regions/ap-south-1"
"/aws/service/global-infrastructure/regions/ap-southeast-1"
"/aws/service/global-infrastructure/regions/ap-southeast-2"
"/aws/service/global-infrastructure/regions/ca-central-1"
"/aws/service/global-infrastructure/regions/cn-north-1"
"/aws/service/global-infrastructure/regions/cn-northwest-1"
"/aws/service/global-infrastructure/regions/eu-west-2"
"/aws/service/global-infrastructure/regions/us-west-2"
"/aws/service/global-infrastructure/regions/us-east-1"

Here’s how to display a complete list of all available AWS services, sort them into alphabetical order, and display the first 10 (out of 155, as I write this):

$ aws ssm get-parameters-by-path \
  --path /aws/service/global-infrastructure/services --output json | \
  jq .Parameters[].Name | sort | head -10
"/aws/service/global-infrastructure/services/acm"
"/aws/service/global-infrastructure/services/acm-pca"
"/aws/service/global-infrastructure/services/alexaforbusiness"
"/aws/service/global-infrastructure/services/apigateway"
"/aws/service/global-infrastructure/services/application-autoscaling"
"/aws/service/global-infrastructure/services/appmesh"
"/aws/service/global-infrastructure/services/appstream"
"/aws/service/global-infrastructure/services/appsync"
"/aws/service/global-infrastructure/services/athena"
"/aws/service/global-infrastructure/services/autoscaling"
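The full parameter paths can be trimmed to bare service names without any extra AWS calls; here is a small post-processing sketch (sample paths are inlined so it runs without credentials, but you can pipe real get-parameters-by-path output through the same awk filter):

```shell
# Extract the last path component (the service name) and sort.
printf '%s\n' \
  "/aws/service/global-infrastructure/services/athena" \
  "/aws/service/global-infrastructure/services/acm" \
  "/aws/service/global-infrastructure/services/appsync" |
  awk -F/ '{print $NF}' | sort
# prints acm, appsync, athena (one per line)
```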

Here’s how to get the list of services that are available in a given region (again, first 10, sorted):

$ aws ssm get-parameters-by-path \
  --path /aws/service/global-infrastructure/regions/us-east-1/services --output json | \
  jq .Parameters[].Name | sort | head -10
"/aws/service/global-infrastructure/regions/us-east-1/services/acm"
"/aws/service/global-infrastructure/regions/us-east-1/services/acm-pca"
"/aws/service/global-infrastructure/regions/us-east-1/services/alexaforbusiness"
"/aws/service/global-infrastructure/regions/us-east-1/services/apigateway"
"/aws/service/global-infrastructure/regions/us-east-1/services/application-autoscaling"
"/aws/service/global-infrastructure/regions/us-east-1/services/appmesh"
"/aws/service/global-infrastructure/regions/us-east-1/services/appstream"
"/aws/service/global-infrastructure/regions/us-east-1/services/appsync"
"/aws/service/global-infrastructure/regions/us-east-1/services/athena"
"/aws/service/global-infrastructure/regions/us-east-1/services/autoscaling"

Here’s how to get the list of regions where a service (Amazon Athena, in this case) is available:

$ aws ssm get-parameters-by-path \
  --path /aws/service/global-infrastructure/services/athena/regions --output json | \
  jq .Parameters[].Value
"ap-northeast-2"
"ap-south-1"
"ap-southeast-2"
"ca-central-1"
"eu-central-1"
"eu-west-1"
"eu-west-2"
"us-east-1"
"us-east-2"
"us-gov-west-1"
"ap-northeast-1"
"ap-southeast-1"
"us-west-2"

Here’s how to use the path to get the name of a service:

$ aws ssm get-parameters-by-path \
  --path /aws/service/global-infrastructure/services/athena --output json | \
  jq .Parameters[].Value
"Amazon Athena"

And here’s how you can find the regional endpoint for a given service, again using the path:

$ aws ssm get-parameter \
  --name /aws/service/global-infrastructure/regions/us-west-1/services/s3/endpoint \
  --output json | \
  jq .Parameter.Value
"s3.us-west-1.amazonaws.com"
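The endpoint parameter stores a bare hostname, so turning it into a usable URL is simple string assembly; for example:

```shell
# Value as fetched by the get-parameter call above.
endpoint="s3.us-west-1.amazonaws.com"
echo "https://${endpoint}"
# prints: https://s3.us-west-1.amazonaws.com
```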

Available Now
This data is available now and you can start using it today at no charge.

Jeff;

PS – Special thanks to my colleagues Blake Copenhaver and Phil Cali for their help with this post!

 

AWS Security Profiles: Paul Hawkins, Security Solutions Architect

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-paul-hawkins-security-solutions-architect/

Leading up to AWS Summit Sydney, we’re sharing our conversation with Paul Hawkins, who helped put together the summit’s “Secure” track, so you can learn more about him and some of the interesting work that he’s doing.


What does a day in the life of an AWS Security Solutions Architect look like?

That’s an interesting question because it varies a lot. As a Security Solutions Architect, I cover Australia and New Zealand with Phil Rodrigues—we split a rather large continent between us. Our role is to help AWS account teams help AWS customers with security, risk, and compliance in the cloud. If a customer is coming to the cloud from an on-premises environment, their security team is probably facing a lot of changes. We help those teams understand what it’s like to run security on AWS—how to think about the shared responsibility model, opportunities to be more secure, and ways to enable their businesses. My conversations with customers range from technical deep-dives to high-level questions about how to conceptualize security, governance, and risk in a cloud environment. A portion of my work also involves internal enablement. I can’t scale to talk to every customer, so teaching other customer-facing AWS teams how to have conversations about security is an important part of my role.

How do you explain your job to non-tech friends?

I say that my job has two functions. First, I explain complex things to people who are not experts in that domain so that they can understand how to do them. Second, I help people be more secure in the cloud. Sometimes I then have to explain what “the cloud” is. How I explain my job to friends reminds me of how I explain the cloud to brand new customers—people usually figure out how it works by comparing it to their existing mental models.

What’s your favorite part of your job?

Showing people that moving to the cloud doesn’t have to be scary. In fact, it’s often an opportunity to be more secure. The cloud is a chance for security folks—who have traditionally been seen as a “Department of No” or a blocker to the rest of the organization—to become an enabling function. Fundamentally, every business is about customer trust. And how do you make sure you maintain the trust of your customers? You keep their data secure. I get to help the organizations I work with to get applications and capabilities out to their customers more swiftly—and also more securely. And that’s a really great thing.

Professionally, what’s your background? What prompted you to move into the field of cloud security at AWS?

I used to work in a bank as a Security Architect. I was involved in helping the business move a bunch of workloads into AWS. In Australia, we have a regulator called APRA (the Australian Prudential Regulatory Authority). If you’re an insurance or financial services company who is running a material workload (that is, a workload that has a significant impact on the banking or financial services industry) you have to submit information to this regulator about how you’re going to run the workload, how you’re going to operationalize it, what your security posture looks like, and so on. After reviewing how you’re managing risk, APRA will hopefully give you a “no objection” response. I worked on the first one of those submissions in Australia.

Working in a bank with a very traditional IT organization and getting to bring that project over the finish line was a great experience. But I also witnessed a shift in perspective from other teams about what “security” meant. I moved from interacting with devs who didn’t want to come talk to us because we were the security folks to a point, just before I left, where I was getting emails from people in the dev and engineering teams, telling me how they built controls because we empowered them with the idea that “security is everyone’s job.” I was getting emails saying, “I built this control, but I think we can use it in other places!” Having the dev community actively working with the security team to make things better for the entire organization was an amazing cultural change. I wanted to do more work like that.

Are there any unique challenges—security or otherwise—that customers in New Zealand or Australia face when moving to the cloud?

If you look at a lot of regulators in this region, like APRA, or an Australian government standard like IRAP, or compare these regulators with programs like FedRAMP in the US, you’ll see that everything tends to roll up toward requirements in the style of (for example) the NIST Cybersecurity Framework. When it comes to security, the fundamentals don’t change much. You need to identify who has access to your environment, you need to protect your network, you need good logging visibility, you need to protect data using encryption, and you need to have a mechanism for responding instantly to changes. I do think Australia has some interesting challenges in terms of the geographical size of the country and the wide spread of population between the east and west coasts. Otherwise, the challenges are similar to what customers face globally: understanding shared responsibility, understanding how to build securely on the cloud, and understanding the differences from what they’ve traditionally been doing.

What’s the most common misperception you encounter about cloud security?

People think it’s a lot harder than it is. Some people also have the tendency to focus on esoteric edge cases, when the most important thing to get right is the foundation. And the foundation is actually straightforward: You follow best practices, and you follow the Well-Architected Framework. AWS provides a lot of guidance on these topics.

I talk to a lot of security folks, architects, incident responders, and CISOs, and I end up saying something similar to everyone: As you begin your cloud journey, you’ve probably got a hundred things you’re worried about. That’s reasonable. As a security person, your job is to worry about what can happen. But you should focus on the top five things that you need to do right now, and the top five things that are going to require a bit of thought across your organization. Get those things done. And then chip away at the rest—you can’t solve everything all at once. It’s better to get the foundations in and start building while raising your organization’s security bar, rather than spin your wheels for months because you can’t map out every edge case.

During the course of your work, what cloud security trends have you noticed that you’re excited about?

I’m really pleased to see more and more organizations genuinely embrace automation. Keeping humans away from systems is a great way to drive consistency: consistency of environments means you can have consistency of response, which is important for security.

As humans, we aren’t great at doing the same thing repeatedly. We’ll do okay for a bit, but then we get distracted. Automated systems are really good at consistently doing the same things. If you’re starting at the very beginning of an environment, and you build your AWS accounts consistently, then your monitoring can also be consistent. You don’t have to build a complicated list of exceptions to the rules. And that means you can have automation covering how you build environments, how you build applications into environments, and how to detect and respond in environments. This frees up the people in your organization to focus on the interesting stuff. If people are focused on interesting challenges, they’re going to be more engaged, and they’re going to deliver great things. No one wants to just put square pegs in square holes every day at work. We want to be able to exercise our minds. Security automation enables that.

What does cloud security mean to you, personally?

I genuinely believe that I have an incredible opportunity to help as many customers as possible be more secure in the cloud. “Being more secure in the cloud” doesn’t just mean configuring AWS services in a way that’s sensible—it also means helping drive cultural change, and moving peoples’ perceptions of security away from, “Don’t go talk to those people because they’re going to yell at you” to security as an enabling function for the business. Security boils down to “keeping the information of humans protected.” Whether that’s banking information or photos on a photo-sharing site, the fundamental approach should be the same. At the end of the day, we’re building things that humans will use. As security people, we need to make it easier for our engineers to build securely, as well as for end users to be confident their data is protected—whatever that data is.

I get to help these organizations deliver their services in a way that’s safer and enables them to move faster. They can build new features without having to worry about enduring a six-month loop of security people saying, “No, you can’t do that.” Helping people understand what’s possible with the technology and helping people understand how to empower their teams through that technology is an incredibly important thing for all parts of an organization, and it’s deeply motivating to me.

Five years from now, what changes do you think we’ll see across the cloud security and compliance landscape?

I believe the ways people think about security in the cloud will continue to evolve. AWS is releasing more higher-function services like Amazon GuardDuty and AWS Security Hub that make it easier for customers to be more secure. I believe people will become more comfortable using these tools as they start to understand that we’re devoting a huge amount of effort to making these services provide useful, actionable information for customers, rather than just being another set of logs to look at. This will allow customers to focus on the aspects of their organization that deliver business value, while worrying less about the underlying composition of services.

At the moment, people approach cloud security by applying a traditional security mindset. It’s normal to come to the cloud from a physical environment, where you could touch and see the servers, and you could see blinking lights. This background can color the ways that people think about the cloud. In a few years’ time, as people become more comfortable with this new way of thinking about security, I think customers will start to come to us right out of the gate with questions like, “What specific services do I need, and how do I configure them to make my environment better?”

You’ve been involved in putting together the Security track at the upcoming AWS summit in Sydney. What were you looking for as you selected session topics?

We have ten talks in the “Secure” track, and we’ve selected topics to address as many customer needs as possible. That means sessions for people who are just getting started and have questions like, “What foundational things can I turn on to ensure I start my cloud journey securely?” It also means sessions for people who are very adept at using cloud services and want to know things like, “How do I automate incident response and forensics?” We’ve also talked to people who run organizations that don’t even have a security team—often small startups—who want to get their heads wrapped around cloud security. So, hopefully we have sessions that appeal to a range of customers.

We’re including existing AWS customers in nine out of the ten talks. These customers fall across the spectrum, from some of our largest enterprise customers to public sector, startups, mid-market, and financial services. We’ve tried to represent both breadth and depth of customer stories, because we think these stories are incredibly important. We had a few customers in the track last year, and we got a great response from the audience, who appreciated the chance to hear from peers, or people in similar industries, about how they improved their security posture on AWS.

What do you hope people gain from attending the “Secure” track?

Regardless of the specific sessions that people attend, I want them to walk away saying, “Wow, I can do this in my organization right now.” I want people to see the art of the possible for cloud security. You can follow the prescriptive advice from various talks, go out, and do things for your organization. It shouldn’t be some distant, future goal, either. We offer prescriptive guidance for what you can do right now.

Say you’re in a session about secrets management. We might say, “This is the problem we’re talking about, this is how to approach it using AWS Identity and Access Management (IAM) roles, and if you can’t use AWS IAM roles, here’s how to use AWS Secrets Manager. Next, here’s a customer to talk about how they think of secrets management in a multi-account environment. Next, here are a bunch of different use cases. Finally, here are the places you can go for further information, and here’s how you can get started today.” My hope is that the talk inspires people to go and build and be more secure immediately. I want people to leave the Summit and immediately start building.

We’re really proud of the track. We’ve got a range of customer perspectives and a range of topics that hopefully covers as much of the amazing breadth of cloud security as we can fit into ten talks.

Sometimes you make music and post it to Soundcloud. Who are your greatest musical influences?

Argh. There are so many. I went through a big Miles Davis phase, more from listening than from being in any way capable of playing that well. I also draw inspiration from shouty English punk bands like the Buzzcocks, plus quite a lot of hip-hop. That includes both classic hip-hop like De La Soul or A Tribe Called Quest and more recent stuff like Run the Jewels. They’re an American band out of Atlanta who I listen to quite a lot at the moment. There are a lot of different groups I could point to, depending on mood. I’ve been posting my music to Soundcloud for ten years or so. Long enough that I should be better. But it’s a journey, and of course time is a limiting factor—AWS is a very busy place.

We understand that you’ve switched from playing cricket to baseball. What turned you into a baseball fan?

I moved from Sydney to Melbourne. In Sydney, the climate is okay for playing outdoor cricket in the winter. But in Melbourne, it’s not really a winter sport. I was looking for something to do, so I started playing winter baseball with a local team. The next summer, I played both cricket and baseball—they were on different days—but it became quite confusing because there are some fundamental differences. I ended up enjoying baseball more, and it took a bit less time. Cricket is definitely a full day event. Plus, one of the great things about baseball is that as a hitter you’re sort of expected to fail 60% of the time. But you get another go. If you’re out at cricket, that is it for your day. With baseball, you’re engaged for a lot longer during the game.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Paul Hawkins

Paul has more than 15 years of information security experience with a significant part of that working with financial services. He works across the range of AWS customers from start-ups to the largest enterprises to improve their security, risk, and compliance in the cloud.

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/786629/rss

Security updates have been issued by Arch Linux (dovecot, flashplugin, ghostscript, and jenkins), Fedora (glpi, hostapd, python-urllib3, and znc), openSUSE (apache2, audiofile, libqt5-qtvirtualkeyboard, php5, and SDL2), Scientific Linux (kernel), SUSE (curl and dovecot23), and Ubuntu (advancecomp and freeradius).

Vulnerability in French Government Tchap Chat App

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/04/vulnerability_i_1.html

A researcher found a vulnerability in the French government WhatsApp replacement app: Tchap. The vulnerability allows anyone to surreptitiously join any conversation.

Of course the developers will fix this vulnerability. But it is amusing to point out that this is exactly the backdoor that GCHQ is proposing.
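The post doesn’t spell out the mechanism, but access-control bypasses in invite-only services often come down to sloppy parsing of the sign-up email address. The sketch below is purely illustrative — the domain list and function names are hypothetical, not Tchap’s actual code — but it shows how a naive domain check can be defeated by smuggling an allowed domain into the address:

```python
# Hypothetical sketch of a sign-up domain check. A service restricted to
# government addresses must extract the domain correctly, or an attacker
# can hide an allowed domain inside the local part of the address.

ALLOWED_DOMAINS = {"gouv.fr", "elysee.fr"}  # illustrative list

def naive_is_government(email: str) -> bool:
    # BROKEN: splits on the *first* '@', so "attacker@gouv.fr@evil.example"
    # yields "gouv.fr@evil.example", which starts with an allowed domain.
    domain = email.split("@", 1)[1]
    return any(domain.startswith(d) for d in ALLOWED_DOMAINS)

def strict_is_government(email: str) -> bool:
    # The domain is everything after the *last* '@', compared exactly.
    domain = email.rsplit("@", 1)[1]
    return domain in ALLOWED_DOMAINS

crafted = "attacker@gouv.fr@evil.example"
print(naive_is_government(crafted))   # True  (the bypass)
print(strict_is_government(crafted))  # False
```

Whatever the precise bug was, the lesson generalizes: validate identifiers with exact parsing, not substring matching.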

Create wearable tech with Sophy Wong and our new book | HackSpace magazine issue 18

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/create-wearable-tech-projects-with-sophy-wong/

Forget Apple Watch and Fitbit — if we’re going to wear something electronic, we want to make it ourselves!

Wearable Tech Projects, from the makers of HackSpace magazine, is a 164-page book for the fashionable electronics enthusiast, packed with more than 30 projects that will blink, flash, and spark joy in your life.

Sophy Wong HackSpace Wearable Tech Projects book

Make a wearable game controller

Fans of Sophy Wong will already know about the amazing wearable tech that she develops. We wanted to make sure that more people discovered her work and the incredible world of wearable technology. You’ll start simple with sewable circuits and LEDs, and work all the way up to building your own wearable controller (complete with feathers) for an interactive, fully immersive game of Flappy Bird.


Pick up the tricks of the trade

Along the way, you’ll embed NFC data in a pair of cufflinks, laser-cut jewellery, 3D-print LED diffusers onto fabric for a cyberpunk leather jacket, and lots more.


Learn new techniques from Sophy Wong

You’ll discover new techniques for working with fabric, find out about the best microcontrollers for your projects, and learn the basics of CircuitPython, the language developed at Adafruit for physical computing. There’s no ‘Hello, World!’ or computer theory here; this is all about practical results and making unique, fascinating things to wear.

Get your copy today

Wearable Tech Projects is available to buy online for £10 with free delivery. You can also get it from WHSmith and all the usual high street retail suspects.


And that’s not all. There is also a new issue of HackSpace magazine out now, with an awesome special feature on space! You can find your copy at the same retailers as above. You can also download both Issue 18 and the Wearables book for free from the HackSpace website.

The post Create wearable tech with Sophy Wong and our new book | HackSpace magazine issue 18 appeared first on Raspberry Pi.

The Trade Desk: Lessons We Learned Migrating from Homegrown Monitoring to Prometheus

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2019/04/24/the-trade-desk-lessons-we-learned-migrating-from-homegrown-monitoring-to-prometheus/

The Trade Desk provides a self-service, cloud-based platform for buyers of online advertising. Since its founding in 2009, TTD has grown into a publicly traded company with more than 900 employees and a market cap of $8.89 billion.

The company recently moved from an old monitoring system based on Nagios, Graphite, and a number of homegrown pieces of software, to something more standard, based on Prometheus. SRE Patrick O’Brien gave a talk at GrafanaCon about the lessons they learned along the way to processing 11 million requests per second with Prometheus.

1. Think about your (hard) alerts.

When migrating alerts defined in a legacy alerting system into a new system, O’Brien said, “90% of those alerts will be insanely easy to move over. It’s the remaining 10% that will be difficult.” O’Brien’s advice: Spend time figuring out which ones will still be useful in the new system, and how you’ll actually migrate them. “Oftentimes, especially coming from Nagios, we’ll have Python scripts that do many different things in that single script to kind of figure out if there is an issue,” he said. “Those are the hard ones, and that’s where your longest tail of the project will be.”

2. Prometheus documentation is clinical.

“I’m super happy to now hear that we can contribute better documentation,” O’Brien said. “You will get a lot of PromQL questions when you start rolling out Prometheus, and it’s best to kind of become an expert in that as much as possible.”

3. Do maths.

“We immediately hit cardinality issues because we have a lot of hosts,” O’Brien explained. Users had been told to make metric names generic and not embed any metadata into them, adding labels instead. “We hit 2 million metrics in a single namespace in like 30 seconds,” he said. “It was terrible and it was very painful… so maybe embed some metadata in that metric name.”
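The arithmetic behind that blow-up is worth spelling out: in Prometheus, each unique combination of label values is a separate time series, so cardinality is the product of the per-label cardinalities. A quick sketch (the label counts below are illustrative, not The Trade Desk’s actual numbers):

```python
# Each unique combination of label values on a metric is its own time
# series, so the series count is the product of per-label cardinalities.

def series_count(label_cardinalities):
    total = 1
    for n in label_cardinalities:
        total *= n
    return total

# One generic metric (e.g. "requests_total") labelled by host, endpoint,
# status code, and datacenter:
hosts, endpoints, statuses, datacenters = 2000, 50, 10, 2
print(series_count([hosts, endpoints, statuses, datacenters]))  # 2000000
```

With a large host fleet, even a modest number of extra labels multiplies into millions of series, which is exactly the wall O’Brien describes hitting.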

4. Find a few internal evangelists.

O’Brien gave a shout-out to one TTD engineer, Nathan, who “knew many more developers than I knew, and so he was able to kind of work with them, show them in code how it works, show them the benefits, and was able to reach much further than I was able to reach. It was fantastic.”

5. Create a dedicated team.

“The more opinions on how to do something, the better,” he said.

6. Get involved in the community.

“This one kind of speaks for itself,” O’Brien said. “You learn more about the product, you learn more about the project, and you’re able to help everybody else out.”

For more from GrafanaCon 2019, check out all the talks on YouTube.

Optimizing Network Intensive Workloads on Amazon EC2 A1 Instances

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/optimizing-network-intensive-workloads-on-amazon-ec2-a1-instances/

This post courtesy of Ali Saidi, AWS, Principal Engineer

At re:Invent 2018, AWS announced the Amazon EC2 A1 instance. The A1 instances are powered by our internally developed Arm-based AWS Graviton processors and are up to 45% less expensive than other instance types with the same number of vCPUs and DRAM. These instances are based on the AWS Nitro System and offer enhanced networking of up to 10 Gbps with the Elastic Network Adapter (ENA).

One of the use cases for the A1 instance is key-value stores, and in this post we describe how to get the most performance from an A1 instance running memcached. Some simple configuration options increase the performance of memcached by 3.9X over the out-of-the-box experience, as we’ll show below. Although we focus on memcached, the configuration advice is similar for any network-intensive workload running on A1 instances. Typically, the performance of network-intensive workloads will improve by tuning some of these parameters; however, depending on the particular data rates and processing requirements, the exact values could change.

irqbalance

Most Linux distributions enable irqbalance by default, which balances interrupt load across CPUs at runtime. It does a good job, but in some cases we can do better by pinning interrupts to specific CPUs. For our optimizations we’re going to temporarily disable irqbalance; if this is a production configuration that needs to survive a reboot, irqbalance would need to be permanently disabled and the changes below added to the boot sequence.

Receive Packet Steering (RPS)

RPS controls which CPUs process packets received by the Linux networking stack (softIRQs). Depending on instance size and the amount of application processing needed per packet, sometimes the optimal configuration is to have the core receiving packets also run the networking stack; other times it’s better to spread the processing among a set of cores. For memcached on EC2 A1 instances, we found that using RPS to spread the load out is helpful on the larger instance sizes.

Networking Queues

A1 instances with medium, large, and xlarge instance sizes have a single queue to send and receive packets, while 2xlarge and 4xlarge instance sizes have two queues. On the single-queue instances, we’ll pin the IRQ to core 0; on the dual-queue instances, we’ll use either core 0 alone or core 0 and core 8.

Instance TypeIRQ settingsRPS settingsApplication settings
a1.xlargeCore 0Core 0Run on cores 1-3
a1.2xlargeBoth on core 0Core 0-3, 4-7Run on core 1-7
a1.4xlargeCore 0 and core 8Core 0-7, 8-15Run on cores 1-7 and 9-15

 

 

 

 

 

The following script sets up the Linux kernel parameters:

#!/bin/bash

# Stop irqbalance so it doesn't override our manual IRQ affinity settings.
sudo systemctl stop irqbalance.service

# Pin each eth0 IRQ to the core given by the corresponding argument.
set_irq_affinity() {
  grep eth0 /proc/interrupts | awk '{print $1}' | tr -d : | while read IRQ; do
    sudo sh -c "echo $1 > /proc/irq/$IRQ/smp_affinity_list"
    shift
  done
}

case `grep -c ^processor /proc/cpuinfo` in
  (4)  # a1.xlarge: single queue; RPS and IRQ on core 0
      sudo sh -c 'echo 1 > /sys/class/net/eth0/queues/rx-0/rps_cpus'
      set_irq_affinity 0
      ;;
  (8)  # a1.2xlarge: RPS on cores 0-3 and 4-7; both IRQs on core 0
      sudo sh -c 'echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus'
      sudo sh -c 'echo f0 > /sys/class/net/eth0/queues/rx-1/rps_cpus'
      set_irq_affinity 0 0
      ;;
  (16) # a1.4xlarge: RPS on cores 0-7 and 8-15; IRQs on cores 0 and 8
      sudo sh -c 'echo ff > /sys/class/net/eth0/queues/rx-0/rps_cpus'
      sudo sh -c 'echo ff00 > /sys/class/net/eth0/queues/rx-1/rps_cpus'
      set_irq_affinity 0 8
      ;;
  *)  echo "Script only supports 4, 8, or 16 cores on A1 instances"
      exit 1
      ;;
esac
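The values echoed into rps_cpus (1, f, f0, ff, ff00) are hexadecimal CPU bitmasks: bit N set means CPU N may run packet processing for that receive queue. A small Python sketch shows how each mask is derived from a list of cores:

```python
# rps_cpus takes a hexadecimal bitmask of CPUs: bit N set means CPU N
# is allowed to process received packets for that queue.

def rps_mask(cores):
    mask = 0
    for c in cores:
        mask |= 1 << c          # set the bit for each allowed core
    return format(mask, "x")    # hex string, as written to rps_cpus

print(rps_mask(range(0, 4)))   # 'f'    -> cores 0-3
print(rps_mask(range(4, 8)))   # 'f0'   -> cores 4-7
print(rps_mask(range(8, 16)))  # 'ff00' -> cores 8-15
```

This is handy when adapting the script to other instance sizes: pick the cores you want handling each queue and compute the mask rather than hand-editing hex constants.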

Summary

Some simple tuning parameters can significantly improve the performance of network-intensive workloads on the A1 instance. With these changes, we get 3.9X the performance on an a1.4xlarge, and the other two instance sizes see similar improvements. While the particular values listed here aren’t applicable to all network-intensive benchmarks, this article demonstrates the methodology and provides a starting point for tuning the system and balancing the load across CPUs to improve performance. If you have questions about your own workload running on A1 instances, please don’t hesitate to get in touch with us at [email protected].

How to Have Fun This Summer and Keep Your Data Safe, Too

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/protecting-your-data-when-traveling/

Man in hat taking goofy summer photos

If you’re like me, you can hardly wait for summer to be here. Summer is the time to get outdoors, go swimming, hang out with friends, and enjoy the weather. For many, it’s also a time for graduations, weddings, vacations, visiting family, and grilling in the backyard.

We’re likely to take more photos and go places we haven’t been before. And we take along all our portable gadgets, especially our cameras, phones, and digital music devices.

Unfortunately, being on the move means that the data on our digital devices is more susceptible to loss. We’re often not as careful backing up that data or even keeping track of the devices themselves. Perhaps you’ve had the sad experience of getting back home after a family reunion, company picnic, or vacation and discovering that your phone or camera didn’t make it all the way home with you.

With just a little planning and a few simple practices, you can be certain that your digital memories will last far beyond summer.

Keep All Those Summer Memories Safe

We don’t want you to miss out on all the great summer memories you’re going to create this year. Before summer is actually here, it’s good to review some tips to make sure that all those great memories you create will be with you for years to come.

Summer Data Backup Tips

Even if your devices are lost or stolen, you’ll be able to recover what was on them if you back them up during your trip. Don’t wait until you get home; do it regularly no matter where you are. It’s not hard: just take a few minutes to plan how and when you’re going to back up each device.

Have somewhere to put your backup data, either in the cloud or on a backup device that you can keep safe, give to someone else, or ship home.

If You Have Access to Wi-Fi
  • If your devices are internet-ready, you can back them up to the cloud directly whenever you’re connected.
  • If a device can’t back itself up to the cloud directly, back it up to a laptop and then back that laptop up to the cloud.

Note: See Safety Tips for Using Wi-Fi on the Go, below.

If You Don’t Have Access to Wi-Fi

If you don’t have access to Wi-Fi, you can back up your devices to a USB thumb drive and carry that with you. If you put it in luggage, put it in a different piece of luggage from the one carrying your devices, or give it to a family member to put in their bag. To be extra safe, it’s easy and inexpensive to mail a thumb drive to yourself when you’re away from home. Some hotels will even do that for you.

Make Sure Your Devices Get Home With You

You want to be careful with your devices when you travel.

  • Use covers for your phone and cameras. It helps protect them from physical damage and also discourages thieves, who are attracted to shiny things. In any case, don’t flash around your nice mobile phone or expensive digital camera. Keep them out of sight when you’re not using them.
  • Don’t leave any of your digital devices unprotected in an airport security line, at a hotel, on a cafe or restaurant table, beside the pool, or in a handbag on the floor or hanging from a chair.
  • Be aware of your surroundings. Be especially cautious of anyone getting close to you in a crowd.
  • It seems silly to say, but keep your devices away from all forms of liquid.
  • If available, you can use a hotel room or front desk safe to protect your devices when you’re not using them.

Water and Tech Don’t Mix

I love being near or in the water, but did you know that water is the most common cause of damage to digital devices? We should be more careful around it, but it’s easy for accidents to happen, and in the summer they tend to happen even more.

Mobile phone in pool

Safety Tips for Using Wi-Fi on the Go

Public Wi-Fi networks are notorious for being places where nefarious individuals snoop on other computers to steal passwords and account information. You can avoid that possibility by following some easy tips.

  • Before you travel, change the passwords on the accounts you plan to use. Change them again when you get home. Don’t use the same password on different accounts or reuse a password you’ve used previously. Password managers, such as 1Password, LastPass, or Bitwarden, make handling your passwords easy.
  • Turn off sharing on your devices to prevent anyone from gaining access to them.
  • Turn off automatic connection to open Wi-Fi networks.
  • Don’t use the web to access your bank, financial institutions, or other important sites if you’re not 100% confident in the security of your internet connection.
  • If you do access a financial, shopping, or other high-risk site, make sure your connection is protected with Secure Sockets Layer (SSL), indicated by the HTTPS prefix in the URL. When you browse over HTTPS, people on the same Wi-Fi network can’t snoop on the data that travels between you and the website’s server. Most sites that ask for payment or confidential information use SSL. If they don’t, stay away.
  • If you can, set up a virtual private network (VPN) to protect your connection. A VPN routes your traffic through a secure network even on public Wi-Fi, giving you all the protection of your private network while still having the freedom of public Wi-Fi. This is something you should look into and set up before you go on a trip. Here are some tips for choosing a VPN.
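The HTTPS rule above is mechanical enough to express in a few lines of code. This sketch (the URLs are made up) shows the check a browser effectively performs when it decides whether a page is safe for sensitive input:

```python
# A page is only safe for passwords or payment details if its URL scheme
# is https, meaning the connection is encrypted in transit.
from urllib.parse import urlparse

def is_encrypted(url: str) -> bool:
    return urlparse(url).scheme == "https"

print(is_encrypted("https://bank.example/login"))  # True
print(is_encrypted("http://bank.example/login"))   # False
```

Modern browsers surface the same check as the padlock icon and "Not secure" warnings, so in practice you just need to glance at the address bar before typing anything sensitive.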

Share the Knowledge About Keeping Data Safe

You might be savvy about all the above, but undoubtedly you have family members or friends who aren’t as knowledgeable. Why not share this post with someone you know who might benefit from these tips? To email this post to a friend, just click on the email social sharing icon to the left or at the bottom of this post. Or, you can just send an email containing this post’s URL, https://www.backblaze.com/blog/protecting-your-data-when-traveling.

And be sure to have a great summer!

The post How to Have Fun This Summer and Keep Your Data Safe, Too appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

[$] The sustainability of open source for the long term

Post Syndicated from jake original https://lwn.net/Articles/786304/rss

The problem of “sustainability” for open-source software is a common topic of
conversation in our community these days. We covered a talk by Bradley Kuhn on
sustainability a month ago. Another longtime community member, Luis Villa,
gave his take on the problem of making open-source projects sustainable at
the 2019 Legal and Licensing Workshop (LLW) in Barcelona. Villa is one of the
co-founders of Tidelift, which is a
company dedicated to helping close the gap so that the maintainers of
open-source projects get paid in order to continue their work.

[$] SGX: when 20 patch versions aren’t enough

Post Syndicated from corbet original https://lwn.net/Articles/786487/rss

Intel’s “Software Guard Extensions” (SGX) feature allows the creation of
encrypted “enclaves” that cannot be accessed from the rest of the system.
Normal code can call into an enclave, but only code running inside the
enclave itself can access the data stored there. SGX is pitched as a way
of protecting data from a hostile kernel; for example, an encryption key
stored in an
enclave should be secure even if the system as a whole is compromised.
Support for SGX has been under development for over three years; LWN
covered it in 2016. But, as can be seen from the response to the latest
revision of the SGX patch set, all that work has still not answered an
important question: what protects the kernel against a hostile enclave?

An Introduction to C & GUI Programming – the new book from Raspberry Pi Press

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/an-introduction-to-c-gui-programming-the-new-book-from-raspberry-pi-press/

The latest book from Raspberry Pi Press, An Introduction to C & GUI Programming, is now available. Author Simon Long explains how it came to be written…

An Introduction to C and GUI programming by Simon Long

Learning C

I remember my first day in a ‘proper’ job very well. I’d just left university, and was delighted to have been taken on by a world-renowned consultancy firm as a software engineer. I was told that most of my work would be in C, which I had never used, so the first order of business was to learn it.

My manager handed me a copy of Kernighan & Ritchie’s The C Programming Language, pointed to a terminal in the corner, said ‘That’s got a compiler. Off you go!’, and left me to it. So, I started reading the book, which is affectionately known to most software engineers as ‘K&R‘.

I didn’t get very far. K&R is basically the specification of the C language. Dennis Ritchie, the eponymous ‘R’, invented C, and while the book he helped write is an excellent reference guide, it is not a great introduction for a beginner. Like most people who know their subject inside out, the authors tend to assume that you know more than you do, so reading the book when you don’t know anything about the language at all is a little frustrating. I do know people who have learned C from K&R, and they have my undying respect!

I ended up learning C on the job as I went along; I looked at other people’s code, hacked stuff together, worked out why things didn’t work, asked for help from my colleagues, made a lot of mistakes, and gradually got the hang of it. I found only one book that was helpful for a beginner: it was called C For Yourself, and was actually one of the manuals for the long-extinct Microsoft QuickC compiler. That book is now impossible to find, so I’ve always had to tell people that the best book for learning C as a beginner is ‘C For Yourself, but you won’t be able to find a copy!’

Writing An Introduction to C & GUI Programming

When I embarked on this project, the editor of The MagPi and I were discussing possible series for the magazine, and we thought about creating a guide to writing GUI applications in C — that’s what I do in my day job at Raspberry Pi, so it seemed a logical place to start. We realised that the reader would need to know C to benefit from the series, and they wouldn’t be able to find a copy of C For Yourself. We decided that I ought to solve that problem first, so I wrote the original beginners’ guide to C series for The MagPi.

(At this point, I should stress that the series is aimed at absolute beginners. I freely admit that I have simplified parts of the language so that the reader does not have to absorb as much in one go. So yes, I do know about returning a success/fail code from a program, but beginners really don’t need to learn about that in the first chapter — especially when many will never need to write a program which does it. That’s why it isn’t explained until Chapter 9.)

An Introduction to C and GUI programming by Simon Long published by Raspberry Pi Press

So, the beginners’ guide to C came first, and I have now got round to writing the second part, which was what I’d planned to write all along. The section on GUIs describes how to write applications using the GTK toolkit, which is used for most of the Raspberry Pi Desktop and its associated applications. GTK is very powerful, and allows you to write rich graphical user interfaces with relatively few lines of code, but it’s not the most intuitive for beginners. (Much like C itself!) The book walks you through the basics of creating a window, putting widgets on it, and making the widgets do useful things, and gets you to the point where you know enough to be able to write an application like the ones I have written for the Raspberry Pi Desktop.


It then seemed logical to bring the two parts together in a single volume, so that someone with no experience of C has enough information to go from a standing start to writing useful desktop applications.

I hope that I’ve achieved that and if nothing else, I hope that I’ve written a book which is a bit more approachable for beginners than K&R!

Get An Introduction to C & GUI Programming today!

An Introduction to C & GUI Programming is available today from the Raspberry Pi Press online store, or as a free download here. You can also pick up a copy from the Raspberry Pi Store in Cambridge, or ask your local bookstore if they have it in stock or can order it in for you.

Alex interjects to state the obvious: Basically, what we’re saying here is that there’s no reason for you not to read Simon’s book. Oh, and it feels really nice too.

The post An Introduction to C & GUI Programming – the new book from Raspberry Pi Press appeared first on Raspberry Pi.

G7 Comes Out in Favor of Encryption Backdoors

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/04/g7_comes_out_in.html

From a G7 meeting of interior ministers in Paris this month, an “outcome document”:

Encourage Internet companies to establish lawful access solutions for their products and services, including data that is encrypted, for law enforcement and competent authorities to access digital evidence, when it is removed or hosted on IT servers located abroad or encrypted, without imposing any particular technology and while ensuring that assistance requested from internet companies is underpinned by the rule of law and due process protection. Some G7 countries highlight the importance of not prohibiting, limiting, or weakening encryption;

There is a weird belief amongst policy makers that hacking an encryption system’s key management system is fundamentally different than hacking the system’s encryption algorithm. The difference is only technical; the effect is the same. Both are ways of weakening encryption.
