The EU’s General Data Protection Regulation (GDPR) describes data processor and data controller roles, and some customers and AWS Partner Network (APN) partners are asking how this affects the long-established AWS Shared Responsibility Model. I wanted to take some time to help folks understand shared responsibilities for us and for our customers in the context of the GDPR.
How does the AWS Shared Responsibility Model change under the GDPR? The short answer – it doesn’t. AWS is responsible for securing the underlying infrastructure that supports the cloud and the services provided, while customers and APN partners, acting either as data controllers or data processors, are responsible for any personal data they put in the cloud. The shared responsibility model illustrates the various responsibilities of AWS and our customers and APN partners, and the same separation of responsibility applies under the GDPR.
AWS responsibilities as a data processor
The GDPR does introduce specific regulation and responsibilities regarding data controllers and processors. When any AWS customer uses our services to process personal data, the controller is usually the AWS customer (and sometimes it is the AWS customer’s customer). However, in all of these cases, AWS is always the data processor in relation to this activity. This is because the customer is directing the processing of data through its interaction with the AWS service controls, and AWS is only executing customer directions. As a data processor, AWS is responsible for protecting the global infrastructure that runs all of our services. Controllers using AWS maintain control over data hosted on this infrastructure, including the security configuration controls for handling end-user content and personal data. Protecting this infrastructure is our number one priority, and we invest heavily in third-party auditors to test our security controls and make any issues they find available to our customer base through AWS Artifact. Our ISO 27018 report is a good example, as it tests security controls that focus on protection of personal data in particular.
AWS has an increased responsibility for our managed services. Examples of managed services include Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon Elastic MapReduce, and Amazon WorkSpaces. These services provide the scalability and flexibility of cloud-based resources with less operational overhead because we handle basic security tasks like guest operating system (OS) and database patching, firewall configuration, and disaster recovery. For most managed services, you only configure logical access controls and protect account credentials, while maintaining control and responsibility of any personal data.
Customer and APN partner responsibilities as data controllers — and how AWS Services can help
Our customers can act as data controllers or data processors within their AWS environment. As a data controller, the services you use may determine how you configure those services to help meet your GDPR compliance needs. For example, AWS services that are classified as Infrastructure as a Service (IaaS), such as Amazon EC2, Amazon VPC, and Amazon S3, are under your control and require you to perform all routine security configuration and management that would be necessary no matter where the servers were located. With Amazon EC2 instances, you are responsible for managing the guest OS (including updates and security patches), any application software or utilities installed on the instances, and the configuration of the AWS-provided firewall (called a security group).
To help you realize data protection by design principles under the GDPR when using our infrastructure, we recommend you protect AWS account credentials and set up individual user accounts with AWS Identity and Access Management (IAM) so that each user is only given the permissions necessary to fulfill their job duties. We also recommend using multi-factor authentication (MFA) with each account, requiring the use of SSL/TLS to communicate with AWS resources, setting up API/user activity logging with AWS CloudTrail, and using AWS encryption solutions, along with all default security controls within AWS services. You can also use advanced managed security services, such as Amazon Macie, which assists in discovering and securing personal data stored in Amazon S3.
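As a minimal illustration of the least-privilege recommendation above, the following boto3 sketch creates an IAM user with an inline policy scoped to a single S3 bucket and conditioned on MFA. The user, bucket, and policy names are hypothetical placeholders; the permissions you actually grant should reflect your own job-duty analysis.

    import json
    import boto3

    iam = boto3.client("iam")

    # Hypothetical names used only for illustration.
    USER_NAME = "analytics-reader"
    BUCKET = "example-personal-data-bucket"

    # Grant read-only access to a single bucket, and only when the caller has
    # authenticated with MFA -- one way to express least privilege in code.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::" + BUCKET,
                "arn:aws:s3:::" + BUCKET + "/*",
            ],
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }],
    }

    iam.create_user(UserName=USER_NAME)
    iam.put_user_policy(
        UserName=USER_NAME,
        PolicyName="read-only-single-bucket",
        PolicyDocument=json.dumps(policy),
    )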
For more information, you can download the AWS Security Best Practices whitepaper or visit the AWS Security Resources or GDPR Center webpages. In addition to our solutions and services, AWS APN partners can provide hundreds of tools and features to help you meet your security objectives, ranging from network security and configuration management to access control and data encryption.
For Joel Wagener, Director of IT at AIBS, simplicity is an important feature he looks for in software applications to use in his organization. So maybe it’s not unexpected that Joel chose Backblaze for Business to back up AIBS’s staff computers. According to Joel, “It just works.”
AIBS (The American Institute of Biological Sciences) is a non-profit scientific association dedicated to advancing biological research and education. Founded in 1947 as part of the National Academy of Sciences, AIBS later became independent and now has over 100 member organizations. AIBS works to ensure that the public, legislators, funders, and the community of biologists have access to and use information that will guide them in making informed decisions about matters that require biological knowledge.
AIBS started using Backblaze for Business Cloud Backup several years ago to make sure that the organization’s data was backed up and protected from accidental loss or computer failure. AIBS is based in Washington, D.C., but is a virtual organization, with staff dispersed around the United States. AIBS needed a backup solution that worked anywhere a staff member was located, and was easy to use, as well. Joel has made Backblaze a default part of the configuration management for all the AIBS endpoints, which in their case are exclusively Macintosh.
“We started using Backblaze on a single computer in 2014, then not too long after that decided to deploy it to all our endpoints,” explains Joel. “We use Groups to oversee backups and for central billing, but we let each user manage their own computer and restore files on their own if they need to.”
“Backblaze stays out of the way until we need it. It’s fairly lightweight, and I appreciate that it’s simple,” says Joel. “It doesn’t throttle backups and the price point is good. I have family members who use Backblaze, as well.”
Backblaze’s Groups feature permits an organization to oversee and manage the user accounts, including restores, or let users handle that themselves. This flexibility fits a variety of organizations, where various degrees of oversight or independence are desirable. The finance and HR departments could manage their own data, for example, while the rest of the organization could be managed by IT. All groups can be billed centrally no matter how other functionality is set up.
“If we have a computer that needs repair, we can put a loaner computer in that person’s hands and they can immediately get the data they need directly from the Backblaze cloud backup, which is really helpful. When we get the original computer back from repair we can do a complete restore and return it to the user all ready to go again. When we’ve needed restores, Backblaze has been reliable.”
Joel also likes that the memory footprint of Backblaze is light — the clients for both Macintosh and Windows are native, and designed to use minimum system resources and not impact any applications used on the computer. He also likes that updates to the client software are pushed out when necessary.
Backblaze for Business also helps IT maintain archives of users’ computers after they leave the organization.
“We like that we have a ready-made archive of a computer when someone leaves,” said Joel. “The Backblaze backup is there if we need to retrieve anything that person was working on.”
There are other capabilities in Backblaze that Joel likes, but hasn’t had a chance to use yet.
“We’ve used Casper (Jamf) to deploy and manage software on endpoints without needing any interaction from the user. We haven’t used it yet for Backblaze, but we know that Backblaze supports it. It’s a handy feature to have.”
“It just works.” — Joel Wagener, AIBS Director of IT
Perhaps the best thing about Backblaze for Business isn’t a specific feature that can be found on a product data sheet.
“When files have been lost, Backblaze has provided us access to multiple prior versions, and this feature has been important and worked well several times,” says Joel.
“That provides needed peace of mind to our users, and our IT department, as well.”
We’re looking for someone who enjoys solving difficult problems, running down elusive tech gremlins, and improving our environment one server at a time. If you enjoy being stretched and learning new skills, and want to look forward to seeing your co-workers every day, then we want you!
Backblaze is a small (in headcount) cloud storage (and backup!) company with a big mission, bringing feature-rich and accessible services to the masses, even if they don’t have unlimited VC funding (because we don’t either)! We believe in a fun and positive work environment where people can learn and grow, and where a sense of community is not just a buzzword from a company handbook (though you’ll probably find it in there).
What You’ll Be Doing
Mastering your craft, becoming a subject matter expert, and acting as an escalation point for areas of expertise (this means responding to pages in your areas of ownership as well)
Leading projects across a range of IT operations disciplines
Developing a thorough understanding of the environment and the skills necessary to troubleshoot all systems and services
Collaborating closely with other teams (Engineering, Infrastructure, etc.) to build out new systems and improve existing ones
Participating in on-call rotation when necessary
Petting the office dogs when appropriate
What You Should Have
5+ years of work as a Systems Administrator (or equivalent college degree)
Expert knowledge of Linux systems administration (Debian preferred)
Ability to work under pressure in a fast-paced startup environment
A passion for building and improving all manner of systems and services
Excellent problem solving, investigative, and troubleshooting skills
Strong interpersonal communication skills
Local enough to commute to our San Mateo office
Highly Desirable Skills
Experience working at a technology/software startup
Configuration management and automation software (Ansible preferred)
Familiarity with server and storage system hardware and configurations
Understanding of Java servlet containers (Tomcat preferred)
Skill in administration of different software suites and cloud-based integrations (G Suite, PagerDuty, etc.)
Comprehension of standard web services and packages (WordPress, Apache, etc.)
Some Backblaze Perks
Generous healthcare plans
Competitive compensation and 401k
All employees receive Option grants
Unlimited vacation days
Strong coffee
Fully stocked Micro kitchens
Weekly catered breakfast and lunches
Awesome people who work on awesome projects
Childcare bonus (human children only)
Get to bring your (well behaved) pets into the office
Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our no policy vacation policy
This post courtesy of Roberto Iturralde, Sr. Application Developer- AWS Professional Services
Application architects are faced with key decisions throughout the process of designing and implementing their systems. One decision common to nearly all solutions is how to manage the storage and access rights of application configuration. Shared configuration should be stored centrally and securely with each system component having access only to the properties that it needs for functioning.
With AWS Systems Manager Parameter Store, developers have access to central, secure, durable, and highly available storage for application configuration and secrets. Parameter Store also integrates with AWS Identity and Access Management (IAM), allowing fine-grained access control to individual parameters or branches of a hierarchical tree.
This post demonstrates how to create and access shared configurations in Parameter Store from AWS Lambda. Both encrypted and plaintext parameter values are stored with only the Lambda function having permissions to decrypt the secrets. You also use AWS X-Ray to profile the function.
Solution overview
This example is made up of the following components:
An unencrypted Parameter Store parameter that the Lambda function loads
A KMS key that only the Lambda function can access. You use this key to create an encrypted parameter later.
Lambda function code in Python 3.6 that demonstrates how to load values from Parameter Store at function initialization for reuse across invocations.
Launch the AWS SAM template
To create the resources shown in this post, you can download the SAM template or choose the button to launch the stack. The template requires one parameter, an IAM user name, which is the name of the IAM user to be the admin of the KMS key that you create. In order to perform the steps listed in this post, this IAM user will need permissions to execute Lambda functions, create Parameter Store parameters, administer keys in KMS, and view the X-Ray console. If you have these privileges on your IAM user account, you can use your own account to complete the walkthrough. You cannot use the root user to administer the KMS keys.
SAM template resources
The following sections show the code for the resources defined in the template.
Lambda function
In this YAML code, you define a Lambda function named ParameterStoreBlogFunctionDev using the SAM AWS::Serverless::Function type. The environment variables for this function include the ENV (dev) and the APP_CONFIG_PATH where you find the configuration for this app in Parameter Store. X-Ray tracing is also enabled for profiling later.
The IAM role for this function extends the AWSLambdaBasicExecutionRole by adding IAM policies that grant the function permissions to write to X-Ray and get parameters from Parameter Store, limited to paths under /dev/parameterStoreBlog*.
Parameter Store parameter
SimpleParameter:
  Type: AWS::SSM::Parameter
  Properties:
    Name: '/dev/parameterStoreBlog/appConfig'
    Description: 'Sample dev config values for my app'
    Type: String
    Value: '{"key1": "value1","key2": "value2","key3": "value3"}'
This YAML code creates a plaintext string parameter in Parameter Store in a path that your Lambda function can access.
KMS encryption key
ParameterStoreBlogDevEncryptionKeyAlias:
  Type: AWS::KMS::Alias
  Properties:
    AliasName: 'alias/ParameterStoreBlogKeyDev'
    TargetKeyId: !Ref ParameterStoreBlogDevEncryptionKey

ParameterStoreBlogDevEncryptionKey:
  Type: AWS::KMS::Key
  Properties:
    Description: 'Encryption key for secret config values for the Parameter Store blog post'
    Enabled: True
    EnableKeyRotation: False
    KeyPolicy:
      Version: '2012-10-17'
      Id: 'key-default-1'
      Statement:
        -
          Sid: 'Allow administration of the key & encryption of new values'
          Effect: Allow
          Principal:
            AWS:
              - !Sub 'arn:aws:iam::${AWS::AccountId}:user/${IAMUsername}'
          Action:
            - 'kms:Create*'
            - 'kms:Encrypt'
            - 'kms:Describe*'
            - 'kms:Enable*'
            - 'kms:List*'
            - 'kms:Put*'
            - 'kms:Update*'
            - 'kms:Revoke*'
            - 'kms:Disable*'
            - 'kms:Get*'
            - 'kms:Delete*'
            - 'kms:ScheduleKeyDeletion'
            - 'kms:CancelKeyDeletion'
          Resource: '*'
        -
          Sid: 'Allow use of the key'
          Effect: Allow
          Principal:
            AWS: !GetAtt ParameterStoreBlogFunctionRoleDev.Arn
          Action:
            - 'kms:Encrypt'
            - 'kms:Decrypt'
            - 'kms:ReEncrypt*'
            - 'kms:GenerateDataKey*'
            - 'kms:DescribeKey'
          Resource: '*'
This YAML code creates an encryption key with a key policy with two statements.
The first statement allows a given user (${IAMUsername}) to administer the key. Importantly, this includes the ability to encrypt values using this key and disable or delete this key, but does not allow the administrator to decrypt values that were encrypted with this key.
The second statement grants your Lambda function permission to encrypt and decrypt values using this key. The alias for this key in KMS is ParameterStoreBlogKeyDev, which is how you reference it later.
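To make the split in the key policy concrete, here is a small boto3 sketch (not part of the template) that you could run as the key administrator: the encrypt call succeeds, while the decrypt call is denied, because kms:Decrypt is granted only to the Lambda function’s role.

    import boto3
    from botocore.exceptions import ClientError

    kms = boto3.client("kms")
    KEY_ALIAS = "alias/ParameterStoreBlogKeyDev"  # alias created by the SAM template

    # As the key administrator in the first policy statement, you can encrypt
    # new values with the key...
    ciphertext = kms.encrypt(
        KeyId=KEY_ALIAS,
        Plaintext=b'{"secretKey": "secretValue"}',
    )["CiphertextBlob"]

    # ...but decrypting them is denied, since only the Lambda function's role
    # has kms:Decrypt in the key policy.
    try:
        kms.decrypt(CiphertextBlob=ciphertext)
    except ClientError as err:
        print("Decrypt denied as expected:", err.response["Error"]["Code"])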
Lambda function
Here I walk you through the Lambda function code.
import os, traceback, json, configparser, boto3
from aws_xray_sdk.core import patch_all
patch_all()

# Initialize boto3 client at global scope for connection reuse
client = boto3.client('ssm')
env = os.environ['ENV']
app_config_path = os.environ['APP_CONFIG_PATH']
full_config_path = '/' + env + '/' + app_config_path

# Initialize app at global scope for reuse across invocations
app = None

class MyApp:
    def __init__(self, config):
        """
        Construct new MyApp with configuration
        :param config: application configuration
        """
        self.config = config

    def get_config(self):
        return self.config

def load_config(ssm_parameter_path):
    """
    Load configparser from config stored in SSM Parameter Store
    :param ssm_parameter_path: Path to app config in SSM Parameter Store
    :return: ConfigParser holding loaded config
    """
    configuration = configparser.ConfigParser()
    try:
        # Get all parameters for this app
        param_details = client.get_parameters_by_path(
            Path=ssm_parameter_path,
            Recursive=False,
            WithDecryption=True
        )
        # Loop through the returned parameters and populate the ConfigParser
        if 'Parameters' in param_details and len(param_details.get('Parameters')) > 0:
            for param in param_details.get('Parameters'):
                param_path_array = param.get('Name').split("/")
                section_position = len(param_path_array) - 1
                section_name = param_path_array[section_position]
                config_values = json.loads(param.get('Value'))
                config_dict = {section_name: config_values}
                print("Found configuration: " + str(config_dict))
                configuration.read_dict(config_dict)
    except:
        print("Encountered an error loading config from SSM.")
        traceback.print_exc()
    finally:
        return configuration

def lambda_handler(event, context):
    global app
    # Initialize app if it doesn't yet exist
    if app is None:
        print("Loading config and creating new MyApp...")
        config = load_config(full_config_path)
        app = MyApp(config)

    return "MyApp config is " + str(app.get_config()._sections)
Beneath the import statements, you import the patch_all function from the AWS X-Ray library, which you use to patch boto3 to create X-Ray segments for all your boto3 operations.
Next, you create a boto3 SSM client at the global scope for reuse across function invocations, following Lambda best practices. Using the function environment variables, you assemble the path where you expect to find your configuration in Parameter Store. The class MyApp is meant to serve as an example of an application that would need its configuration injected at construction. In this example, you create an instance of ConfigParser, a class in Python’s standard library for handling basic configurations, to give to MyApp.
The load_config function loads all the parameters from Parameter Store at the level immediately beneath the path provided in the Lambda function environment variables. Each parameter found is put into a new section in ConfigParser. The name of the section is the name of the parameter, less the base path. In this example, the full parameter name is /dev/parameterStoreBlog/appConfig, which is put in a section named appConfig.
Finally, the lambda_handler function initializes an instance of MyApp if it doesn’t already exist, constructing it with the loaded configuration from Parameter Store. Then it simply returns the currently loaded configuration in MyApp. The impact of this design is that the configuration is only loaded from Parameter Store the first time that the Lambda function execution environment is initialized. Subsequent invocations reuse the existing instance of MyApp, resulting in improved performance. You see this in the X-Ray traces later in this post. For more advanced use cases where configuration changes need to be received immediately, you could implement an expiry policy for your configuration entries or push notifications to your function.
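As a quick illustration of how downstream code would consume the loaded configuration (a sketch, assuming the sample appConfig parameter value shown earlier), the ConfigParser exposes each parameter as a section:

    # Inside lambda_handler (or any code with access to the MyApp instance),
    # the sample /dev/parameterStoreBlog/appConfig parameter shows up as an
    # 'appConfig' section in the ConfigParser.
    config = app.get_config()

    value1 = config.get('appConfig', 'key1')   # -> 'value1'
    all_keys = dict(config['appConfig'])       # -> {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'}

    # Because app is cached at global scope, none of this re-queries
    # Parameter Store on warm invocations.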
To confirm that everything was created successfully, test the function in the Lambda console.
In the Functions pane, filter to ParameterStoreBlogFunctionDev to find the function created by the SAM template earlier. Open the function name to view its details.
On the top right of the function detail page, choose Test. You may need to create a new test event. The input JSON doesn’t matter as this function ignores the input.
After running the test, you should see output similar to the following. This demonstrates that the function successfully fetched the unencrypted configuration from Parameter Store.
Create an encrypted parameter
You currently have a simple, unencrypted parameter and a Lambda function that can access it.
Next, you create an encrypted parameter that only your Lambda function has permission to use for decryption. This limits read access for this parameter to only this Lambda function.
To follow along with this section, deploy the SAM template for this post in your account and make your IAM user name the KMS key admin mentioned earlier. Then create a new parameter in the Parameter Store console with the following settings:
For Name, enter /dev/parameterStoreBlog/appSecrets.
For Type, select Secure String.
For KMS Key ID, choose alias/ParameterStoreBlogKeyDev, which is the key that your SAM template created.
For Value, enter {"secretKey": "secretValue"}.
Choose Create Parameter.
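If you prefer to script this step, the console actions above roughly correspond to the following boto3 call (a sketch; it assumes you have kms:Encrypt permission on the key, for example as the IAM user named in the SAM template):

    import boto3

    ssm = boto3.client("ssm")

    # Create the encrypted parameter under the same path the function reads.
    ssm.put_parameter(
        Name="/dev/parameterStoreBlog/appSecrets",
        Type="SecureString",
        KeyId="alias/ParameterStoreBlogKeyDev",  # KMS key created by the SAM template
        Value='{"secretKey": "secretValue"}',
    )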
If you now try to view the value of this parameter by choosing the name of the parameter in the parameters list and then choosing Show next to the Value field, you won’t see the value appear. This is because, even though you have permission to encrypt values using this KMS key, you do not have permissions to decrypt values.
In the Lambda console, run another test of your function. You now also see the secret parameter that you created and its decrypted value.
If you do not see the new parameter in the Lambda output, this may be because the Lambda execution environment is still warm from the previous test. Because the parameters are loaded at Lambda startup, you need a fresh execution environment to refresh the values.
Adjust the function timeout to a different value in the Advanced Settings at the bottom of the Lambda Configuration tab. Choose Save and test to trigger the creation of a new Lambda execution environment.
Profiling the impact of querying Parameter Store using AWS X-Ray
By using the AWS X-Ray SDK to patch boto3 in your Lambda function code, each invocation of the function creates traces in X-Ray. In this example, you can use these traces to validate the performance impact of your design decision to only load configuration from Parameter Store on the first invocation of the function in a new execution environment.
From the Lambda function details page where you tested the function earlier, under the function name, choose Monitoring. Choose View traces in X-Ray.
This opens the X-Ray console in a new window filtered to your function. Be aware of the time range field next to the search bar if you don’t see any search results. In this screenshot, I’ve invoked the Lambda function twice, one time 10.3 minutes ago with a response time of 1.1 seconds and again 9.8 minutes ago with a response time of 8 milliseconds.
Looking at the details of the longer running trace by clicking the trace ID, you can see that the Lambda function spent the first ~350 ms of the full 1.1 sec routing the request through Lambda and creating a new execution environment for this function, as this was the first invocation with this code. This is the portion of time before the initialization subsegment.
Next, it took 725 ms to initialize the function, which includes executing the code at the global scope (including creating the boto3 client). This is also a one-time cost for a fresh execution environment.
Finally, the function executed for 65 ms, of which 63.5 ms was the GetParametersByPath call to Parameter Store.
Looking at the trace for the second, much faster function invocation, you see that the majority of the 8 ms execution time was Lambda routing the request to the function and returning the response. Only 1 ms of the overall execution time was attributed to the execution of the function, which makes sense given that after the first invocation you’re simply returning the config stored in MyApp.
While the Traces screen allows you to view the details of individual traces, the X-Ray Service Map screen allows you to view aggregate performance data for all traced services over a period of time.
In the X-Ray console navigation pane, choose Service map. Selecting a service node shows the metrics for node-specific requests. Selecting an edge between two nodes shows the metrics for requests that traveled that connection. Again, be aware of the time range field next to the search bar if you don’t see any search results.
After invoking your Lambda function several more times by testing it from the Lambda console, you can view some aggregate performance metrics. Look at the following:
From the client perspective, requests to the Lambda service for the function are taking an average of 50 ms to respond. The function is generating ~1 trace per minute.
The function itself is responding in an average of 3 ms. In the following screenshot, I’ve clicked on this node, which reveals a latency histogram of the traced requests showing that over 95% of requests return in under 5 ms.
Parameter Store is responding to requests in an average of 64 ms, but note the much lower trace rate in the node. This is because you only fetch data from Parameter Store on the initialization of the Lambda execution environment.
Conclusion
Deduplication, encryption, and restricted access to shared configuration and secrets are key components of any mature architecture. Serverless architectures designed using event-driven, on-demand compute services like Lambda are no different.
In this post, I walked you through a sample application accessing unencrypted and encrypted values in Parameter Store. These values were created in a hierarchy by application environment and component name, with the permissions to decrypt secret values restricted to only the function needing access. The techniques used here can become the foundation of secure, robust configuration management in your enterprise serverless applications.
Alberto Ruiz announces that Fleet Commander is ready for production use. “Fleet Commander is an integrated solution for large Linux desktop deployments that provides a configuration management interface that is controlled centrally and that covers desktop, applications and network configuration. For people familiar with Group Policy Objects in Active Directory in Windows, it is very similar.”
As a school supply aficionado, the month of September has always held a special place in my heart. Nothing sets the tone for success like getting a killer deal on pens and a crisp college ruled notebook. Even if back to school shopping trips have secured a seat in your distant memory, this is still a perfect time of year to stock up on office supplies and set aside some time for flexing those learning muscles. A great way to get started: scan through our September Tech Talks and check out the ones that pique your interest. This month we are covering re:Invent, AI, and much more.
September 2017 – Schedule
Noted below are the upcoming scheduled live, online technical sessions being held during the month of September. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.
The AWS Online Tech Talks series covers a broad range of topics at varying technical levels. These sessions feature live demonstrations & customer examples led by AWS engineers and Solution Architects. Check out the AWS YouTube channel for more on-demand webinars on AWS technologies.
Are you an Automation Systems Administrator who is looking for a challenging and fast-paced working environment? Want to join our dynamic team and help Backblaze grow to new heights? Our Operations team is a distributed and collaborative group of individual contributors. We work closely together to build and maintain our homegrown cloud storage farm, carefully controlling costs by utilizing open source and various brands of technology, as well as designing our own cloud storage servers. Members of Operations participate in the prioritization and decision making process, and make a difference every day. The environment is challenging, but we balance the challenges with rewards, and we are looking for clever and innovative people to join us.
Responsibilities:
Develop and deploy automated provisioning & updating of systems
Lead projects across a range of IT disciplines
Understand environment thoroughly enough to administer/debug any system
Participate in the 24×7 on-call rotation and respond to alerts as needed
Requirements:
Expert knowledge of automated provisioning
Expert knowledge of Linux administration (Debian preferred)
Scripting skills
Experience in automation/configuration management
Position based in the San Mateo, California Corporate Office
Required for all Backblaze Employees
Good attitude and willingness to do whatever it takes to get the job done.
Desire to learn and adapt to rapidly changing technologies and work environment.
Relentless attention to detail.
Excellent communication and problem solving skills.
Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our no policy vacation policy.
Company Description: Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/GB/month. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.
We’ve managed to nurture a team oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.
We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.
Some Backblaze Perks:
Competitive healthcare plans
Competitive compensation and 401k
All employees receive Option grants
Unlimited vacation days
Strong coffee
Fully stocked Micro kitchen
Catered breakfast and lunches
Awesome people who work on awesome projects
Childcare bonus
Normal work hours
Get to bring your pets into the office
San Mateo Office – located near Caltrain and Highways 101 & 280.
If this sounds like you — follow these steps:
Send an email to [email protected] with the position in the subject line.
Include your resume.
Tell us a bit about your experience and why you’re excited to work with Backblaze.
Are you a Site Reliability Engineer who is looking for a challenging and fast-paced working environment? Want to join our dynamic team and help Backblaze grow to new heights? Our Operations team is a distributed and collaborative group of individual contributors. We work closely together to build and maintain our homegrown cloud storage farm, carefully controlling costs by utilizing open source and various brands of technology, as well as designing our own cloud storage servers. Members of Operations participate in the prioritization and decision making process, and make a difference every day. The environment is challenging, but we balance the challenges with rewards, and we are looking for clever and innovative people to join us.
Responsibilities:
Lead projects across a range of IT disciplines
Understand environment thoroughly enough to administer/debug any system
Collaborate on automated provisioning & updating of systems
Collaborate on network administration and security
Collaborate on database administration
Participate in the 24×7 on-call rotation and respond to alerts as needed
Requirements:
Expert knowledge of Linux administration (Debian preferred)
Scripting skills
Experience in automation/configuration management (Ansible preferred)
Position based in the San Mateo, California Corporate Office
Required for all Backblaze Employees
Good attitude and willingness to do whatever it takes to get the job done.
Desire to learn and adapt to rapidly changing technologies and work environment.
Relentless attention to detail.
Excellent communication and problem solving skills.
Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our no policy vacation policy.
Company Description: Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/GB/month. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.
We’ve managed to nurture a team oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.
We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.
Some Backblaze Perks:
Competitive healthcare plans
Competitive compensation and 401k
All employees receive Option grants
Unlimited vacation days
Strong coffee
Fully stocked Micro kitchen
Catered breakfast and lunches
Awesome people who work on awesome projects
Childcare bonus
Normal work hours
Get to bring your pets into the office
San Mateo Office – located near Caltrain and Highways 101 & 280.
If this sounds like you — follow these steps:
Send an email to [email protected] with the position in the subject line.
Include your resume.
Tell us a bit about your experience and why you’re excited to work with Backblaze.
Are you a Database Systems Administrator who is looking for a challenging and fast-paced working environment? Want to join our dynamic team and help Backblaze grow to new heights? Our Operations team is a distributed and collaborative group of individual contributors. We work closely together to build and maintain our homegrown cloud storage farm, carefully controlling costs by utilizing open source and various brands of technology, as well as designing our own cloud storage servers. Members of Operations participate in the prioritization and decision making process, and make a difference every day. The environment is challenging, but we balance the challenges with rewards, and we are looking for clever and innovative people to join us.
Responsibilities:
Own the administration of Cassandra and MySQL
Lead projects across a range of IT disciplines
Understand environment thoroughly enough to administer/debug the system
Participate in the 24×7 on-call rotation and respond to alerts as needed
Requirements:
Expert knowledge of Cassandra & MySQL
Expert knowledge of Linux administration (Debian preferred)
Scripting skills
Experience in automation/configuration management
Position is based in the San Mateo, California corporate office
Required for all Backblaze Employees
Good attitude and willingness to do whatever it takes to get the job done.
Desire to learn and adapt to rapidly changing technologies and work environment.
Relentless attention to detail.
Excellent communication and problem solving skills.
Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our no policy vacation policy.
Company Description: Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/GB/month. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.
We’ve managed to nurture a team oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.
We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.
Some Backblaze Perks:
Competitive healthcare plans
Competitive compensation and 401k
All employees receive Option grants
Unlimited vacation days
Strong coffee
Fully stocked Micro kitchen
Catered breakfast and lunches
Awesome people who work on awesome projects
Childcare bonus
Normal work hours
Get to bring your pets into the office
San Mateo Office – located near Caltrain and Highways 101 & 280.
If this sounds like you — follow these steps:
Send an email to [email protected] with the position in the subject line.
Include your resume.
Tell us a bit about your experience and why you’re excited to work with Backblaze.
This month we are hosting two joint AWS-Partner webinars about how executing DevOps practices on AWS can automate configuration management and leave time for innovation. Many organizations adopt DevOps practices to manage their cloud and on-premises environments for greater scalability, speed, and reliability, and these webinars give you a chance to hear directly from the partners and customers about how they did it.
Puppet
Puppet helped ServiceChannel automate their cloud configuration management to take advantage of the scalability of AWS, achieve greater flexibility, and improve their customers’ ability to connect and collaborate more frequently.
Webinar Topic: How ServiceChannel Automated Their AWS Environment with Puppet
Customer Presenter: Brian Engler, CIO, ServiceChannel
AWS Presenter: Kevin Cochran, Partner Solutions Architect
Partner Presenter: Chris Barker, Principal Solutions Engineer, Puppet
Time: July 20th, 2017 10am – 11am PDT | 1pm – 2pm EDT
New Relic
New Relic helped MLBAM utilize the scalability of AWS and the visibility provided by New Relic to create the “gold standard” for digital streaming video infrastructure.
Webinar Topic: MLB Advanced Media: Delivering a Digital Experience to 25 Million Fans with New Relic and AWS
Customer Presenter: Christian Villoslada, VP of Software Engineering, MLBAM & Brandon San Giovanni, Senior Operations Manager, Core Media Operations, MLBAM
AWS Presenter: Kevin Cochran, Partner Solutions Architect
Partner Presenter: Lee Atchison, Senior Director of Strategic Architecture, New Relic
Time: July 25th, 2017 10am – 11am PDT | 1pm – 2pm EDT
Hot on the heels of some other great Amazon EC2 Systems Manager (SSM) updates is another vital enhancement: the ability to use Patch Manager on Linux instances!
We launched Patch Manager with SSM at re:Invent in 2016, and Linux support was a commonly requested feature. Starting today, Patch Manager supports:
Amazon Linux 2014.03 and later (2015.03 and later for 64-bit)
Ubuntu Server 16.04 LTS, 14.04 LTS, and 12.04 LTS
RHEL 6.5 and later (7.x and later for 64-bit)
When I think about patching a big group of heterogeneous systems I get a little anxious. Years ago, I administered my school’s computer lab. This involved a modest group of machines running a small number of VMs with an immodest number of distinct Linux distros. When there was a critical security patch it was a lot of work to remember the constraints of each system. I remember having to switch back and forth between arcane invocations of various package managers – pinning and unpinning packages: sudo yum update -y, rpm -Uvh ..., apt-get, or even emerge (one of our professors loved Gentoo).
Even now, when I use configuration management systems like Chef or Puppet I still have to specify the package manager and remember a portion of the invocation – and I don’t always want to roll out a patch without some manual approval process. Based on these experiences I decided it was time for me to update my skillset and learn to use Patch Manager.
Patch Manager is a fully managed service (provided at no additional cost) that helps you simplify your operating system patching process, including defining the patches you want to approve for deployment, the method of patch deployment, the timing of patch roll-outs, and determining patch compliance status across your entire fleet of instances. It’s extremely configurable with some sensible defaults and helps you easily deal with patching heterogeneous clusters.
Since I’m not running that school computer lab anymore my fleet is a bit smaller these days:
As you can see above I only have a few instances in this region but if you look at the launch times they range from 2014 to a few minutes ago. I’d be willing to bet I’ve missed a patch or two somewhere (luckily most of these have strict security groups). To get started I installed the SSM agent on all of my machines by following the documentation here. I also made sure I had the appropriate role and IAM profile attached to the instances to talk to SSM – I just used this managed policy: AmazonEC2RoleforSSM.
Now I need to define a Patch Baseline. We’ll make security updates critical and all other updates informational and subject to my approval.
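The console makes this easy, but for reference, a roughly equivalent boto3 sketch might look like the following. The baseline name and approval settings here are illustrative, not the exact values used in this walkthrough.

    import boto3

    ssm = boto3.client("ssm")

    # Auto-approve security updates immediately; everything else stays
    # unapproved until I add it to the baseline myself.
    baseline = ssm.create_patch_baseline(
        Name="my-linux-baseline",  # illustrative name
        OperatingSystem="AMAZON_LINUX",
        Description="Security patches auto-approved, all else manual",
        ApprovalRules={
            "PatchRules": [
                {
                    "PatchFilterGroup": {
                        "PatchFilters": [
                            {"Key": "CLASSIFICATION", "Values": ["Security"]}
                        ]
                    },
                    "ApproveAfterDays": 0,
                }
            ]
        },
    )
    print(baseline["BaselineId"])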
Next, I can run the AWS-RunPatchBaseline SSM Run Command in “Scan” mode to generate my patch baseline data.
Then, we can go to the Patch Compliance page in the EC2 console and check out how I’m doing.
Yikes, looks like I need some security updates! Now, I can use Maintenance Windows, Run Command, or State Manager in SSM to actually manage this patching process. One thing to note: when patching is complete, your machine reboots – so managing that rollout with Maintenance Windows or State Manager is a best practice. If I had a larger set of instances, I could group them by creating a tag named “Patch Group”.
For now, I’ll just use the same AWS-RunPatchBaseline document from above, this time with the “Install” operation, to update these machines.
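For anyone who prefers scripting over the console, the Run Command invocation would look something like this boto3 sketch (the tag value is hypothetical; use your own instance IDs or Patch Group tag):

    import boto3

    ssm = boto3.client("ssm")

    # Run the document in "Install" mode against a tagged patch group.
    # Use Operation="Scan" first if you only want compliance data, no changes.
    response = ssm.send_command(
        DocumentName="AWS-RunPatchBaseline",
        Targets=[{"Key": "tag:Patch Group", "Values": ["web-servers"]}],  # hypothetical tag value
        Parameters={"Operation": ["Install"]},
    )
    print(response["Command"]["CommandId"])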
As always, the CLIs and APIs have been updated to support these new options. The documentation is here. I hope you’re all able to spend less time patching and more time coding!
Written by AWS Solutions Architects Jason Barto and Heitor Lessa
In Part 1 of this post, we described how AWS CodeBuild, AWS CodeCommit, and HashiCorp Packer can be used to build an Amazon Machine Image (AMI) from the latest version of Amazon Linux. In this post, we show how to use AWS CodePipeline, AWS CloudFormation, and Amazon CloudWatch Events to continuously ship new AMIs. We use Ansible by Red Hat to harden the OS on the AMIs through a well-known set of security controls outlined by the Center for Internet Security in its CIS Amazon Linux Benchmark.
You’ll find the source code for this post in our GitHub repo.
At the end of this post, we will have the following architecture:
Requirements
To follow along, you will need Git and a text editor. Make sure Git is configured to work with AWS CodeCommit, as described in Part 1.
Technologies
In addition to the services and products used in Part 1 of this post, we also use these AWS services and third-party software:
AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.
Amazon CloudWatch Events enables you to react selectively to events in the cloud and in your applications. Specifically, you can create CloudWatch Events rules that match event patterns, and take actions in response to those patterns.
AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on release process models you define.
Amazon SNS is a fast, flexible, fully managed push notification service that lets you send individual messages or fan out messages to large numbers of recipients. Amazon SNS makes it simple and cost-effective to send push notifications to mobile device users or email recipients. The service can even send messages to other distributed services.
Ansible is a simple IT automation system that handles configuration management, application deployment, cloud provisioning, ad-hoc task-execution, and multinode orchestration.
Getting Started
We use CloudFormation to bootstrap the following infrastructure:
AWS CodeCommit repository – Git repository where the AMI builder code is stored.
S3 bucket – Build artifact repository used by AWS CodePipeline and AWS CodeBuild.
AWS CodeBuild project – Executes the AWS CodeBuild instructions contained in the build specification file.
AWS CodePipeline pipeline – Orchestrates the AMI build process, triggered by new changes in the AWS CodeCommit repository.
SNS topic – Notifies subscribed email addresses when an AMI build is complete.
CloudWatch Events rule – Defines how the AMI builder should send a custom event to notify an SNS topic.
The AMI Builder launch template is available in the following regions: N. Virginia (us-east-1) and Ireland (eu-west-1).
After launching the CloudFormation template linked here, we will have a pipeline in the AWS CodePipeline console. (A Failed status at this stage simply means we don’t have any data in our newly created AWS CodeCommit Git repository yet.)
Next, we clone the newly created AWS CodeCommit repository and add the AMI builder source code from the GitHub repo.
Then, push these changes to AWS CodeCommit, and let AWS CodePipeline orchestrate the creation of the AMI:
git add .
git commit -m "My first AMI"
git push origin master
AWS CodeBuild Implementation Details
While we wait for the AMI to be created, let’s see what’s changed in our AWS CodeBuild buildspec.yml file:
...
phases:
  ...
  build:
    commands:
      ...
      - ./packer build -color=false packer_cis.json | tee build.log
  post_build:
    commands:
      - egrep "${AWS_REGION}\:\sami\-" build.log | cut -d' ' -f2 > ami_id.txt
      # Packer doesn't return non-zero status; we must do that if Packer build failed
      - test -s ami_id.txt || exit 1
      - sed -i.bak "s/<<AMI-ID>>/$(cat ami_id.txt)/g" ami_builder_event.json
      - aws events put-events --entries file://ami_builder_event.json
...
artifacts:
  files:
    - ami_builder_event.json
    - build.log
  discard-paths: yes
In the build phase, we capture Packer output into a file named build.log. In the post_build phase, we take the following actions:
Look up the AMI ID created by Packer and save it to a temporary file (ami_id.txt).
Force AWS CodeBuild to fail if the AMI ID (ami_id.txt) is not found. This is required because Packer doesn’t fail if something goes wrong during the AMI creation process. We have to tell AWS CodeBuild to stop by informing it that an error occurred.
If an AMI ID is found, we update the ami_builder_event.json file and then notify CloudWatch Events that the AMI creation process is complete.
CloudWatch Events publishes a message to an SNS topic. Anyone subscribed to the topic will be notified in email that an AMI has been created.
Lastly, the new artifacts section instructs AWS CodeBuild to upload files built during the build process (ami_builder_event.json and build.log) to the S3 bucket specified in the Outputs section of the CloudFormation template. These artifacts can then be used as an input artifact in any later stage in AWS CodePipeline.
CloudWatch Events allows you to extend the AMI builder to not only send email after the AMI has been created, but also to hook up any of the supported targets to react to the AMI builder event. This event publication means you can decouple the actions you might take after AMI completion from Packer itself and plug in other actions, as you see fit.
In this case, CloudWatch Events should receive the following event, match it with a rule we created through CloudFormation, and publish a message to SNS so that you can receive an email.
In addition to the build specification file, there are differences between the current version of the HashiCorp Packer template (packer_cis.json) and the one used in Part 1.
ami_name: Prefixes a name used by Packer to tag resources during the Builders sequence.
vpc and subnet: Environment variables defined by the CloudFormation stack parameters.
We no longer assume a default VPC is present and instead use the VPC and subnet specified in the CloudFormation parameters. CloudFormation configures the AWS CodeBuild project to use these values as environment variables. They are made available throughout the build process.
That allows for more flexibility should you need to change which VPC and subnet will be used by Packer to launch temporary resources.
We now have new properties (*_tag) and a new function (clean_ami_name), and launch temporary resources in a VPC and subnet specified in the environment variables. AMI names can only contain a certain set of ASCII characters. If the input project name deviates from the expected characters (for example, includes whitespace or slashes), Packer’s clean_ami_name function will fix it.
For more information, see functions on the HashiCorp Packer website.
We used the shell provisioner to apply OS patches in Part 1. Now, we use shell to install Ansible on the target machine and ansible-local to import, install, and execute Ansible roles to make our target machine conform to our standards.
Packer uses shell to remove temporary keys before it creates an AMI from the target and temporary EC2 instance.
Ansible Implementation Details
Ansible provides OS patching through a custom Common role that can be easily customized for other tasks.
CIS Benchmark and CloudWatch Logs are implemented through two Ansible third-party roles that are defined in ansible/requirements.yaml, as seen in the Packer template.
The Ansible provisioner uses Ansible Galaxy to download these roles onto the target machine and execute them as instructed by ansible/playbook.yaml.
The following Ansible playbook (ansible/playbook.yaml) controls the execution order and custom properties:
---
- hosts: localhost
  connection: local
  gather_facts: true  # gather OS info that is made available for tasks/roles
  become: yes         # majority of CIS tasks require root
  vars:
    # CIS Controls whitepaper: http://bit.ly/2mGAmUc
    # AWS CIS Whitepaper: http://bit.ly/2m2Ovrh
    cis_level_1_exclusions:
      # 3.4.2 and 3.4.3 effectively blocks access to all ports to the machine
      ## This can break automation; ignoring it as there are stronger mechanisms than that
      - 3.4.2
      - 3.4.3
      # CloudWatch Logs will be used instead of Rsyslog/Syslog-ng
      ## Same would be true if any other software doesn't support Rsyslog/Syslog-ng mechanisms
      - 4.2.1.4
      - 4.2.2.4
      - 4.2.2.5
      # Autofs is not installed in newer versions, let's ignore
      - 1.1.19
    # Cloudwatch Logs role configuration
    logs:
      - file: /var/log/messages
        group_name: "system_logs"
  roles:
    - common
    - anthcourtney.cis-amazon-linux
    - dharrisio.aws-cloudwatch-logs-agent
Both third-party Ansible roles can be easily configured through variables (vars). We use Ansible playbook variables to exclude CIS controls that don’t apply to our case and to instruct the CloudWatch Logs agent to stream the /var/log/messages log file to CloudWatch Logs.
If you need to add more OS or application logs, you can easily duplicate the playbook and make changes. The CloudWatch Logs agent will ship configured log messages to CloudWatch Logs.
For more information about parameters you can use to further customize the third-party roles, download the Ansible roles for the CloudWatch Logs agent and CIS Amazon Linux from the Galaxy website.
Committing Changes
Now that Ansible and CloudWatch Events are configured as a part of the build process, committing any changes to the AWS CodeCommit Git repository will trigger a new AMI build process that can be followed through the AWS CodePipeline console.
When the build is complete, an email will be sent to the email address you provided as a part of the CloudFormation stack deployment. The email serves as notification that an AMI has been built and is ready for use.
Summary
We used AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Packer, and Ansible to build a pipeline that continuously builds new, hardened CIS AMIs. We used Amazon SNS so that email addresses subscribed to an SNS topic are notified upon completion of the AMI build.
By treating our AMI creation process as code, we can iterate and track changes over time. In this way, it’s no different from a software development workflow. With that in mind, software patches, OS configuration, and logs that need to be shipped to a central location are only a git commit away.
Next Steps
Here are some ideas to extend this AMI builder:
Hook up a Lambda function in CloudWatch Events to update EC2 Auto Scaling configuration upon completion of the AMI build (see the sketch after this list).
Use AWS CodePipeline parallel steps to build multiple Packer images.
Add a commit ID as a tag for the AMI you created.
Create a scheduled Lambda function through CloudWatch Events to clean up old AMIs based on timestamp (name or additional tag).
Implement Windows support for the AMI builder.
Create a cross-account or cross-region AMI build.
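For the first idea in the list above, a Lambda function wired to the AMI builder event might look roughly like the following sketch. It assumes the event detail carries the new AMI ID under a hypothetical AmiId key and that an Auto Scaling group named web-asg already exists, so treat the names and event shape as placeholders rather than the exact payload produced by the buildspec.

    import time
    import boto3

    autoscaling = boto3.client("autoscaling")

    ASG_NAME = "web-asg"           # hypothetical Auto Scaling group
    BASE_LC = "web-launch-config"  # hypothetical launch configuration prefix

    def lambda_handler(event, context):
        # Event shape is an assumption: adapt to whatever ami_builder_event.json sends.
        ami_id = event["detail"]["AmiId"]

        # Launch configurations are immutable, so create a new one from the old
        # settings and point the Auto Scaling group at it.
        group = autoscaling.describe_auto_scaling_groups(
            AutoScalingGroupNames=[ASG_NAME]
        )["AutoScalingGroups"][0]
        old_lc = autoscaling.describe_launch_configurations(
            LaunchConfigurationNames=[group["LaunchConfigurationName"]]
        )["LaunchConfigurations"][0]

        new_lc_name = BASE_LC + "-" + str(int(time.time()))
        autoscaling.create_launch_configuration(
            LaunchConfigurationName=new_lc_name,
            ImageId=ami_id,
            InstanceType=old_lc["InstanceType"],
            SecurityGroups=old_lc["SecurityGroups"],
        )
        autoscaling.update_auto_scaling_group(
            AutoScalingGroupName=ASG_NAME,
            LaunchConfigurationName=new_lc_name,
        )
        return new_lc_name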
CloudWatch Events allows the AMI builder to decouple AMI configuration and creation so that you can easily add your own logic using targets (AWS Lambda, Amazon SQS, Amazon SNS) to add events or recycle EC2 instances with the new AMI.
If you have questions or other feedback, feel free to leave it in the comments or contribute to the AMI Builder repo on GitHub.
The guest post below, written by Ananth Vaidyanathan (Senior Product Manager for EC2 Systems Manager) and Rich Urmston (Senior Director of Cloud Architecture at Pegasystems), shows you how to use EC2 Run Command to manage a large collection of EC2 instances without having to resort to SSH.
Enterprises often have several managed environments and thousands of Amazon EC2 instances. It’s important to manage systems securely, without the headaches of Secure Shell (SSH). Run Command, part of Amazon EC2 Systems Manager, allows you to run remote commands on instances (or groups of instances using tags) in a controlled and auditable manner. It’s been a nice added productivity boost for Pega Cloud operations, which rely daily on Run Command services.
You can control Run Command access through standard IAM roles and policies, define documents to take input parameters, and control the S3 bucket used to return command output. You can also share your documents with other AWS accounts, or with the public. All in all, Run Command provides a nice set of remote management features.
Better than SSH
Here’s why Run Command is a better option than SSH and why Pegasystems has adopted it as their primary remote management tool:
Run Command Takes Less Time – Securely connecting to an instance requires a few steps (for example, jump boxes to connect to or IP addresses to whitelist). With Run Command, cloud ops engineers can invoke commands directly from their laptop, and never have to find keys or even instance IDs. Instead, system security relies on AWS authentication, IAM roles, and policies.
Run Command Operations are Fully Audited – With SSH, there is no real control over what engineers can do, nor is there an audit trail. With Run Command, every invoked operation is audited in CloudTrail, including information on the invoking user, the instances on which the command was run, parameters, and operation status. You have full control and the ability to restrict what functions engineers can perform on a system.
Run Command Has No SSH Keys to Manage – Run Command leverages standard AWS credentials, API keys, and IAM policies. Through integration with a corporate auth system, engineers can interact with systems based on their corporate credentials and identity.
Run Command Can Manage Multiple Systems at the Same Time – Simple tasks such as looking at the status of a Linux service or retrieving a log file across a fleet of managed instances are cumbersome using SSH. Run Command allows you to specify a list of instances by IDs or tags, and invokes your command, in parallel, across the specified fleet. This provides great leverage when troubleshooting or managing more than the smallest Pega clusters.
Run Command Makes Automating Complex Tasks Easier – Standardizing operational tasks requires detailed procedure documents or scripts describing the exact commands. Managing or deploying these scripts across the fleet is cumbersome. Run Command documents provide an easy way to encapsulate complex functions, and handle document management and access controls. When combined with AWS Lambda, documents provide a powerful automation platform to handle any complex task.
Example – Restarting a Docker Container
Here is an example of a simple document used to restart a Docker container. It takes one parameter: the name of the Docker container to restart. It uses the AWS-RunShellScript method to invoke the command. The output is collected automatically by the service and returned to the caller. For an example of the latest document schema, see Creating Systems Manager Documents.
{
  "schemaVersion": "1.2",
  "description": "Restart the specified docker container.",
  "parameters": {
    "param": {
      "type": "String",
      "description": "(Required) name of the container to restart.",
      "maxChars": 1024
    }
  },
  "runtimeConfig": {
    "aws:runShellScript": {
      "properties": [
        {
          "id": "0.aws:runShellScript",
          "runCommand": [
            "docker restart {{param}}"
          ]
        }
      ]
    }
  }
}
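As a rough sketch, the document above could be registered and invoked with boto3 as follows. The document name, file name, instance ID, and container name are placeholders for illustration.

# Sketch: register the custom document, then invoke it against one instance.
import boto3

ssm = boto3.client('ssm')

with open('docker-restart.json') as f:          # placeholder file holding the JSON above
    content = f.read()

ssm.create_document(Name='docker-restart', Content=content, DocumentType='Command')

result = ssm.send_command(
    InstanceIds=['i-0cf0e84'],                  # placeholder instance ID
    DocumentName='docker-restart',
    Parameters={'param': ['pega-web']}          # the container name parameter
)
print(result['Command']['CommandId'])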
Putting Run Command into practice at Pegasystems
The Pegasystems provisioning system sits on AWS CloudFormation, which is used to deploy and update Pega Cloud resources. Layered on top of it is the Pega Provisioning Engine, a serverless, Lambda-based service that manages a library of CloudFormation templates and Ansible playbooks.
A Configuration Management Database (CMDB) tracks all the configuration details and history of every deployment and update, and lays out its data using a hierarchical directory naming convention. The following diagram shows how the various systems are integrated:
For cloud system management, Pega operations uses a command line version called cuttysh and a graphical version based on the Pega 7 platform, called the Pega Operations Portal. Both tools allow you to browse the CMDB of deployed environments, view configuration settings, and interact with deployed EC2 instances through Run Command.
CLI Walkthrough
Here is a CLI walkthrough for looking into a customer deployment and interacting with instances using Run Command.
Launching the cuttysh tool brings you to the root of the CMDB and a list of the provisioned customers:
% cuttysh
d CUSTA
d CUSTB
d CUSTC
d CUSTD
You interact with the CMDB using standard Linux shell commands, such as cd, ls, cat, and grep. Items prefixed with s are services that have viewable properties. Items prefixed with d are navigable subdirectories in the CMDB hierarchy.
In this example, change directories into customer CUSTB’s portion of the CMDB hierarchy, and then further into a provisioned Pega environment called env1, under the Dev network. The tool displays the artifacts that are provisioned for that environment. These entries map to provisioned CloudFormation templates.
> cd CUSTB
/ROOT/CUSTB/us-east-1 > cd DEV/env1
The ls -l command shows the version of the provisioned resources. These version numbers map back to source control–managed artifacts for the CloudFormation, Ansible, and other components that compose a version of the Pega Cloud.
/ROOT/CUSTB/us-east-1/DEV/env1 > ls -l
s 1.2.5 RDSDatabase
s 1.2.5 PegaAppTier
s 7.2.1 Pega7
Now, use Run Command to interact with the deployed environments. To do this, use the attach command and specify the service with which to interact. In the following example, you attach to the Pega Web Tier. Using the information in the CMDB and instance tags, the CLI finds the corresponding EC2 instances and displays some basic information about them. This deployment has three instances.
/ROOT/CUSTB/us-east-1/DEV/env1 > attach PegaWebTier
# ID State Public Ip Private Ip Launch Time
0 i-0cf0e84 running 52.63.216.42 10.96.15.70 2017-01-16
1 i-0043c1d running 53.47.191.22 10.96.15.43 2017-01-16
2 i-09b879e running 55.93.118.27 10.96.15.19 2017-01-16
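Under the hood, a tool like attach can resolve instances with a tag-filtered EC2 lookup. The boto3 sketch below is only illustrative; the tag keys and values are assumptions, not Pega's actual tagging scheme.

# Sketch: list running instances for a service using tag filters (hypothetical tags).
import boto3

ec2 = boto3.client('ec2')

reservations = ec2.describe_instances(
    Filters=[
        {'Name': 'tag:Customer', 'Values': ['CUSTB']},          # assumed tag key/value
        {'Name': 'tag:Service', 'Values': ['PegaWebTier']},     # assumed tag key/value
        {'Name': 'instance-state-name', 'Values': ['running']},
    ]
)['Reservations']

for r in reservations:
    for i in r['Instances']:
        print(i['InstanceId'], i['State']['Name'],
              i.get('PublicIpAddress', '-'), i['PrivateIpAddress'], i['LaunchTime'])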
From here, you can use the run command to invoke Run Command documents. In the following example, you run the docker-ps document against instance 0 (the first one on the list). The command executes on the instance and the output is returned to the CLI, which in turn displays it.
/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 docker-ps
. .
CONTAINER ID IMAGE CREATED STATUS NAMES
2f187cc38c1 pega-7.2 10 weeks ago Up 8 weeks pega-web
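Retrieving that output programmatically is a single Systems Manager call. A minimal boto3 sketch, assuming placeholder command and instance IDs:

# Sketch: fetch the status and stdout of one command invocation.
import boto3

ssm = boto3.client('ssm')

invocation = ssm.get_command_invocation(
    CommandId='11111111-2222-3333-4444-555555555555',   # placeholder; returned by send_command
    InstanceId='i-0cf0e84'                               # placeholder instance ID
)
print(invocation['Status'])
print(invocation['StandardOutputContent'])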
Using the same command and some of the other documents that have been defined, you can restart a Docker container or even pull back the contents of a file to your local system. When you get a file, Run Command also leaves a copy in an S3 bucket in case you want to pass the link along to a colleague.
/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 docker-restart pega-web
..
pega-web
/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 get-file /var/log/cfn-init-cmd.log
. . . . .
get-file
Data has been copied locally to: /tmp/get-file/i-0563c9e/data
Data is also available in S3 at: s3://my-bucket/CUSTB/cuttysh/get-file/data
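If you do want to pass the link along, one option is a presigned URL for the S3 copy. A minimal boto3 sketch, using the bucket and key from the transcript above purely as placeholders:

# Sketch: create a time-limited, shareable link to the command output stored in S3.
import boto3

s3 = boto3.client('s3')

url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'CUSTB/cuttysh/get-file/data'},  # placeholders
    ExpiresIn=3600   # link valid for one hour
)
print(url)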
Now, leverage Run Command’s ability to do more than one thing at a time. In the following example, you attach to a deployment with three running instances and want to see the uptime for each instance. Using the par (parallel) option for run, the CLI tells Run Command to execute the uptime document on all instances in parallel.
/ROOT/CUSTB/us-east-1/DEV/env1 > run par uptime
…
Output for: i-006bdc991385c33
20:39:12 up 15 days, 3:54, 0 users, load average: 0.42, 0.32, 0.30
Output for: i-09390dbff062618
20:39:12 up 15 days, 3:54, 0 users, load average: 0.08, 0.19, 0.22
Output for: i-08367d0114c94f1
20:39:12 up 15 days, 3:54, 0 users, load average: 0.36, 0.40, 0.40
Commands are complete.
/ROOT/PEGACLOUD/CUSTB/us-east-1/PROD/prod1 >
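The same parallel behavior is available directly from the Run Command API: one send_command call across several instances, then per-instance output collected afterward. A rough boto3 sketch, with placeholder instance IDs taken from the transcript above and a crude fixed wait instead of proper polling:

# Sketch: run one command on several instances in parallel and print each instance's output.
import time
import boto3

ssm = boto3.client('ssm')

cmd = ssm.send_command(
    InstanceIds=['i-006bdc991385c33', 'i-09390dbff062618', 'i-08367d0114c94f1'],  # placeholders
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': ['uptime']}
)['Command']

time.sleep(5)   # crude wait; a real tool would poll status or use a waiter

invocations = ssm.list_command_invocations(CommandId=cmd['CommandId'], Details=True)
for inv in invocations['CommandInvocations']:
    print('Output for:', inv['InstanceId'])
    print(inv['CommandPlugins'][0]['Output'])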
Summary
Run Command improves productivity by giving you faster access to systems and the ability to run operations across a group of instances. Pega Cloud operations has integrated Run Command with other operational tools to provide a clean and secure method for managing systems. This greatly improves operational efficiency, and gives greater control over who can do what in managed deployments. The Pega continual improvement process regularly assesses why operators need access, and turns those operations into new Run Command documents to be added to the library. In fact, their long-term goal is to stop deploying cloud systems with SSH enabled.
If you have any questions or suggestions, please leave a comment for us!
A Preliminary systemd.conf 2015 Schedule is Now Online!
We are happy to announce that an initial, preliminary version of the systemd.conf 2015 schedule is now online! (Please ignore that some rows in the schedule link the same session twice on that page. That’s a bug in the website CMS that we are working to fix.)
We got an overwhelming number of high-quality submissions during the CfP! Because there were so many good talks we really wanted to accept, we decided to do two full days of talks now, leaving one more day for the hackfest and BoFs. We also shortened many of the slots, to make room for more. All in all we now have a schedule packed with fantastic presentations!
The areas covered range from containers, to system provisioning, stateless systems, distributed init systems, the kdbus IPC, control groups, systemd on the desktop, systemd in embedded devices, configuration management and systemd, and systemd in downstream distributions.
We’d like to thank everybody who submitted a presentation proposal!
Also, don’t forget to register for the conference! Only a limited number of registrations are available due to space constraints! Register here!
We are still looking for sponsors. If you’d like to join the ranks of systemd.conf 2015 sponsors, please have a look at our Becoming a Sponsor page!