Tag Archives: scripting

Amazon Sumerian – Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-sumerian-now-generally-available/

We announced Amazon Sumerian at AWS re:Invent 2017. As you can see from Tara's blog post (Presenting Amazon Sumerian: An Easy Way to Create VR, AR, and 3D Experiences), Sumerian does not require any specialized programming or 3D graphics expertise. You can build VR, AR, and 3D experiences for a wide variety of popular hardware platforms including mobile devices, head-mounted displays, digital signs, and web browsers.

I’m happy to announce that Sumerian is now generally available. You can create realistic virtual environments and scenes without having to acquire or master specialized tools for 3D modeling, animation, lighting, audio editing, or programming. Once built, you can deploy your finished creation across multiple platforms without having to write custom code or deal with specialized deployment systems and processes.

Sumerian gives you a web-based editor that you can use to quickly and easily create realistic, professional-quality scenes. There’s a visual scripting tool that lets you build logic to control how objects and characters (Sumerian Hosts) respond to user actions. Sumerian also lets you create rich, natural interactions powered by AWS services such as Amazon Lex, Polly, AWS Lambda, AWS IoT, and Amazon DynamoDB.

Sumerian was designed to work on multiple platforms. The VR and AR apps that you create in Sumerian will run in browsers that support WebGL or WebVR and on popular devices such as the Oculus Rift, HTC Vive, and those powered by iOS or Android.

During the preview period, we have been working with a broad spectrum of customers to put Sumerian to the test and to create proof of concept (PoC) projects designed to highlight an equally broad spectrum of use cases, including employee education, training simulations, field service productivity, virtual concierge, design and creative, and brand engagement. Fidelity Labs (the internal R&D unit of Fidelity Investments) was the first to use a Sumerian host to create an engaging VR experience. Cora (the host) lives within a virtual chart room. She can display stock quotes, pull up company charts, and answer questions about a company’s performance. This PoC uses Amazon Polly to implement text-to-speech and Amazon Lex for conversational chatbot functionality. Read their blog post and watch the video inside to see Cora in action.

Now that Sumerian is generally available, you have the power to create engaging AR, VR, and 3D experiences of your own. To learn more, visit the Amazon Sumerian home page and then spend some quality time with our extensive collection of Sumerian Tutorials.

Jeff;

 

Wanted: Sales Engineer

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-sales-engineer/

At inception, Backblaze was a consumer company. Thousands upon thousands of individuals came to our website and gave us $5/mo to keep their data safe. But, we didn’t sell business solutions. It took us years before we had a sales team. In the last couple of years, we’ve released products that businesses of all sizes love: Backblaze B2 Cloud Storage and Backblaze for Business Computer Backup. Those businesses want to integrate Backblaze deeply into their infrastructure, so it’s time to hire our first Sales Engineer!

Company Description:
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low-cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/GB/month. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

We’ve managed to nurture a team-oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.

Some Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive Option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked Micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • Childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office – located near Caltrain and Highways 101 & 280.

Backblaze B2 cloud storage is a building block for almost any computing service that requires storage. Customers need our help integrating B2 into everything from iOS apps to Docker containers. Some customers integrate directly with the API using the programming language of their choice; others want to solve a specific problem using ready-made software that is already integrated with B2.

At the same time, our computer backup product is deepening its integration into enterprise IT systems. We are commonly asked how to set Windows policies, integrate with Active Directory, and install the client via remote management tools.

We are looking for a sales engineer who can help our customers navigate the integration of Backblaze into their technical environments.

Are you 1/2” deep into many different technologies, and unafraid to dive deeper?

Can you confidently talk with customers about their technology, even if you have to look up all the acronyms right after the call?

Are you excited to set up complicated software in a lab and write knowledge base articles about your work?

Then Backblaze is the place for you!

Enough about Backblaze already, what’s in it for me?
In this role, you will be given the opportunity to learn about the technologies that drive innovation today: the diverse technologies that customers are using day in and day out. And more importantly, you’ll learn how to learn new technologies.

Just as an example, in the past 12 months, we’ve had the opportunity to learn and become experts in these diverse technologies:

  • How to set up VM servers for lab environments, both on-prem and using cloud services.
  • How to create an automatically “resetting” demo environment for the sales team.
  • How to set up Microsoft Domain Controllers with Active Directory and AD Federation Services.
  • The basics of OAuth and web single sign-on (SSO).
  • Video archiving workflows, from camera to media asset management systems.
  • How to upload and download files from JavaScript by enabling CORS.
  • How to install and monitor online backup installations using RMM tools, like JAMF.
  • Tape (LTO) systems. (Yes – people still use tape for storage!)

How can I know if I’ll succeed in this role?

You have:

  • Confidence. Be able to ask customers questions about their environments and convey to them your technical acumen.
  • Curiosity. Always want to learn about customers’ situations, how they got there and what problems they are trying to solve.
  • Organization. You’ll work with customers, integration partners, and Backblaze team members on projects of various lengths. You can context switch and either have a great memory or keep copious notes. Your checklists have their own checklists.

You are versed in:

  • The fundamentals of Windows, Linux and Mac OS X operating systems. You shouldn’t be afraid to use a command line.
  • Building, installing, integrating and configuring applications on any operating system.
  • Debugging failures – reading logs, monitoring usage, effective Google searching. Fixing problems excites you.
  • The basics of TCP/IP networking and the HTTP protocol.
  • Novice development skills in any programming/scripting language, and a basic understanding of data structures and program flow.

Your background contains:

  • Bachelor’s degree in computer science or the equivalent.
  • 2+ years of experience as a pre- or post-sales engineer.

The right extra credit:

There are literally hundreds of previous experiences you could have had that would make you perfect for this job. Some experiences that we know would be helpful for us are listed below, but make sure you tell us your stories!

  • Experience using or programming against Amazon S3.
  • Experience with large on-prem storage – NAS, SAN, object storage – and backing up data on such storage with tools like Veeam, Veritas, and others.
  • Experience with photo or video media. Media archiving is a key market for Backblaze B2.
  • Program Arduinos to automatically feed your dog.
  • Experience programming against web or REST APIs. (Point us towards your projects, if they are open source and available to link to.)
  • Experience with sales tools like Salesforce.
  • 3D print door stops.
  • Experience with Windows Servers, Active Directory, Group policies and the like.
What’s it like working with the Sales team?

    The Backblaze sales team collaborates. We help each other out by sharing ideas, templates, and our customers’ experiences. When we talk about our accomplishments, there is no “I did this,” only “we.” We are truly a team.

    We are honest to each other and our customers and communicate openly. We aim to have fun by embracing crazy ideas and creative solutions. We try to think not outside the box, but with no boxes at all. Customers are the driving force behind the success of the company and we care deeply about their success.

    If this all sounds like you:

    1. Send an email to [email protected] with the position in the subject line.
    2. Tell us a bit about your Sales Engineering experience.
    3. Include your resume.

    The post Wanted: Sales Engineer appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    How to Enhance the Security of Sensitive Customer Data by Using Amazon CloudFront Field-Level Encryption

    Post Syndicated from Alex Tomic original https://aws.amazon.com/blogs/security/how-to-enhance-the-security-of-sensitive-customer-data-by-using-amazon-cloudfront-field-level-encryption/

    Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content to end users through a worldwide network of edge locations. CloudFront provides a number of benefits and capabilities that can help you secure your applications and content while meeting compliance requirements. For example, you can configure CloudFront to help enforce secure, end-to-end connections using HTTPS SSL/TLS encryption. You also can take advantage of CloudFront integration with AWS Shield for DDoS protection and with AWS WAF (a web application firewall) for protection against application-layer attacks, such as SQL injection and cross-site scripting.

    Now, CloudFront field-level encryption helps secure sensitive data such as customer phone numbers by adding another security layer to CloudFront HTTPS. Using this functionality, you can help ensure that sensitive information in a POST request is encrypted at CloudFront edge locations. This information remains encrypted as it flows to and beyond your origin servers that terminate HTTPS connections with CloudFront and throughout the application environment. In this blog post, we demonstrate how you can enhance the security of sensitive data by using CloudFront field-level encryption.

    Note: This post assumes that you understand concepts and services such as content delivery networks, HTTP forms, public-key cryptography, CloudFront, AWS Lambda, and the AWS CLI. If necessary, you should familiarize yourself with these concepts and review the solution overview in the next section before proceeding with the deployment of this post’s solution.

    How field-level encryption works

    Many web applications collect and store data from users as those users interact with the applications. For example, a travel-booking website may ask for your passport number as well as less sensitive data such as your food preferences. This data is transmitted to web servers and also might travel among a number of services to perform tasks. However, your sensitive information may need to be accessed by only a small subset of these services (most other services do not need access to your data).

    User data is often stored in a database for retrieval at a later time. One approach to protecting stored sensitive data is to configure and code each service to protect that sensitive data. For example, you can develop safeguards in logging functionality to ensure sensitive data is masked or removed. However, this can add complexity to your code base and limit performance.

    Field-level encryption addresses this problem by ensuring sensitive data is encrypted at CloudFront edge locations. Sensitive data fields in HTTPS form POSTs are automatically encrypted with a user-provided public RSA key. After the data is encrypted, other systems in your architecture see only ciphertext. If this ciphertext unintentionally becomes externally available, the data is cryptographically protected and only designated systems with access to the private RSA key can decrypt the sensitive data.
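
    To get a rough feel for what this means in practice, here is a plain OpenSSL sketch of encrypting a single field value with an RSA public key. This is only an illustration of public-key encryption of one field; it is not the exact envelope scheme CloudFront uses internally, and it assumes you have a public key in public_key.pem (one is generated later in this post).

    $ echo -n "404-555-0150" | openssl rsautl -encrypt -pubin -inkey public_key.pem | base64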

    It is critical to secure private RSA key material to prevent unauthorized access to the protected data. Management of cryptographic key material is a larger topic that is out of scope for this blog post, but should be carefully considered when implementing encryption in your applications. For example, in this blog post we store private key material as a secure string in the Amazon EC2 Systems Manager Parameter Store. The Parameter Store provides a centralized location for managing your configuration data such as plaintext data (such as database strings) or secrets (such as passwords) that are encrypted using AWS Key Management Service (AWS KMS). You may have an existing key management system in place that you can use, or you can use AWS CloudHSM. CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys in the AWS Cloud.

    To illustrate field-level encryption, let’s look at a simple form submission where Name and Phone values are sent to a web server using an HTTP POST. A typical form POST would contain data such as the following.

    POST / HTTP/1.1
    Host: example.com
    Content-Type: application/x-www-form-urlencoded
    Content-Length:60
    
    Name=Jane+Doe&Phone=404-555-0150

    Instead of taking this typical approach, field-level encryption converts this data into something similar to the following.

    POST / HTTP/1.1
    Host: example.com
    Content-Type: application/x-www-form-urlencoded
    Content-Length: 1713
    
    Name=Jane+Doe&Phone=AYABeHxZ0ZqWyysqxrB5pEBSYw4AAA...

    To further demonstrate field-level encryption in action, this blog post includes a sample serverless application that you can deploy by using a CloudFormation template, which creates an application environment using CloudFront, Amazon API Gateway, and Lambda. The sample application is only intended to demonstrate field-level encryption functionality and is not intended for production use. The following diagram depicts the architecture and data flow of this sample application.

    Sample application architecture and data flow

    Diagram of the solution's architecture and data flow

    Here is how the sample solution works:

    1. An application user submits an HTML form page with sensitive data, generating an HTTPS POST to CloudFront.
    2. Field-level encryption intercepts the form POST, encrypts sensitive data with the public RSA key, and replaces those fields in the form POST with encrypted ciphertext. The form POST ciphertext is then sent to origin servers.
    3. The serverless application accepts the form post data containing ciphertext where sensitive data would normally be. If a malicious user were able to compromise your application and gain access to your data, such as the contents of a form, that user would see encrypted data.
    4. Lambda stores the data in a DynamoDB table, leaving the sensitive data safely encrypted at rest.
    5. An administrator uses the AWS Management Console and a Lambda function to view the sensitive data.
    6. During the session, the administrator retrieves ciphertext from the DynamoDB table.
    7. The administrator decrypts sensitive data by using private key material stored in the EC2 Systems Manager Parameter Store.
    8. Decrypted sensitive data is transmitted over SSL/TLS via the AWS Management Console to the administrator for review.

    Deployment walkthrough

    The high-level steps to deploy this solution are as follows:

    1. Stage the required artifacts
      When deployment packages are used with Lambda, the zipped artifacts have to be placed in an S3 bucket in the target AWS Region for deployment. This step is not required if you are deploying in the US East (N. Virginia) Region because the package has already been staged there.
    2. Generate an RSA key pair
      Create a public/private key pair that will be used to perform the encrypt/decrypt functionality.
    3. Upload the public key to CloudFront and associate it with the field-level encryption configuration
      After you create the key pair, the public key is uploaded to CloudFront so that it can be used by field-level encryption.
    4. Launch the CloudFormation stack
      Deploy the sample application for demonstrating field-level encryption by using AWS CloudFormation.
    5. Add the field-level encryption configuration to the CloudFront distribution
      After you have provisioned the application, this step associates the field-level encryption configuration with the CloudFront distribution.
    6. Store the RSA private key in the Parameter Store
      Store the private key in the Parameter Store as a SecureString data type, which uses AWS KMS to encrypt the parameter value.

    Deploy the solution

    1. Stage the required artifacts

    (If you are deploying in the US East [N. Virginia] Region, skip to Step 2, “Generate an RSA key pair.”)

    Stage the Lambda function deployment package in an Amazon S3 bucket located in the AWS Region you are using for this solution. To do this, download the zipped deployment package and upload it to your in-region bucket. For additional information about uploading objects to S3, see Uploading Objects into Amazon S3.
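
    For example, with the AWS CLI the upload might look like the following sketch. The bucket name, prefix, and file name here are placeholders; use the bucket you created and the package you actually downloaded.

    $ aws s3 cp field-level-encryption-sample.zip s3://your-artifact-bucket/artifacts/ --region <your-region>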

    2. Generate an RSA key pair

    In this section, you will generate an RSA key pair by using OpenSSL:

    1. Confirm access to OpenSSL.
      $ openssl version

      You should see version information similar to the following.

      OpenSSL <version> <date>

    2. Create a private key using the following command.
      $ openssl genrsa -out private_key.pem 2048

      The command results should look similar to the following.

      Generating RSA private key, 2048 bit long modulus
      ................................................................................+++
      ..........................+++
      e is 65537 (0x10001)
    3. Extract the public key from the private key by running the following command.
      $ openssl rsa -pubout -in private_key.pem -out public_key.pem

      You should see output similar to the following.

      writing RSA key
    4. Restrict access to the private key.
      $ chmod 600 private_key.pem

      Note: You will use the public and private key material in Steps 3 and 6 to configure the sample application.
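
      If you want to sanity-check the public key before uploading it, one quick, optional way is to print its modulus and exponent:

      $ openssl rsa -pubin -in public_key.pem -text -noout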

    3. Upload the public key to CloudFront and associate it with the field-level encryption configuration

    Now that you have created the RSA key pair, you will use the AWS Management Console to upload the public key to CloudFront for use by field-level encryption. Complete the following steps to upload and configure the public key.

    Note: Do not include spaces or special characters when providing the configuration values in this section.

    1. From the AWS Management Console, choose Services > CloudFront.
    2. In the navigation pane, choose Public Key and choose Add Public Key.
      Screenshot of adding a public key

    Complete the Add Public Key configuration boxes:

    • Key Name: Type a name such as DemoPublicKey.
    • Encoded Key: Paste the contents of the public_key.pem file you created in Step 2c. Copy and paste the encoded key value for your public key, including the -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- lines.
    • Comment: Optionally add a comment.
    3. Choose Create.
    4. After adding at least one public key to CloudFront, the next step is to create a profile to tell CloudFront which fields of input you want to be encrypted. While still on the CloudFront console, choose Field-level encryption in the navigation pane.
    5. Under Profiles, choose Create profile.
      Screenshot of creating a profile

    Complete the Create profile configuration boxes:

    • Name: Type a name such as FLEDemo.
    • Comment: Optionally add a comment.
    • Public key: Select the public key you configured in Step 4.b.
    • Provider name: Type a provider name such as FLEDemo.
      This information will be used when the form data is encrypted, and must be provided to applications that need to decrypt the data, along with the appropriate private key.
    • Pattern to match: Type phone. This configures field-level encryption to match the phone field in the form data.
    6. Choose Save profile.
    7. Configurations include options for whether to block or forward a query to your origin in scenarios where CloudFront can’t encrypt the data. Under Encryption Configurations, choose Create configuration.
      Screenshot of creating a configuration

    Complete the Create configuration boxes:

    • Comment: Optionally add a comment.
    • Content type: Enter application/x-www-form-urlencoded. This is a common media type for encoding form data.
    • Default profile ID: Select the profile you added in Step 3e.
    8. Choose Save configuration.

    4. Launch the CloudFormation stack

    Launch the sample application by using a CloudFormation template that automates the provisioning process.

    Input parameters:

    • ProviderID: Enter the Provider name you assigned in Step 3e. The ProviderID is used in the field-level encryption configuration in CloudFront (letters and numbers only, no special characters).
    • PublicKeyName: Enter the Key Name you assigned in Step 3b. This name is assigned to the public key in the field-level encryption configuration in CloudFront (letters and numbers only, no special characters).
    • PrivateKeySSMPath: Leave as the default: /cloudfront/field-encryption-sample/private-key
    • ArtifactsBucket: The S3 bucket with artifact files (staged zip file with app code). Leave as the default if deploying in us-east-1.
    • ArtifactsPrefix: The path in the S3 bucket containing artifact files. Leave as the default if deploying in us-east-1.

    To finish creating the CloudFormation stack:

    1. Choose Next on the Select Template page, enter the input parameters and choose Next.
      Note: The Artifacts configuration needs to be updated only if you are deploying outside of us-east-1 (US East [N. Virginia]). See Step 1 for artifact staging instructions.
    2. On the Options page, accept the defaults and choose Next.
    3. On the Review page, confirm the details, choose the I acknowledge that AWS CloudFormation might create IAM resources check box, and then choose Create. (The stack will be created in approximately 15 minutes.)
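
    If you prefer the AWS CLI to the console, a roughly equivalent call looks like the sketch below. The template URL is a placeholder, and the parameter values must match what you configured earlier.

    $ aws cloudformation create-stack \
        --stack-name FLE-Sample-App \
        --template-url https://s3.amazonaws.com/<artifacts-bucket>/<path>/fle-sample-template.yaml \
        --parameters ParameterKey=ProviderID,ParameterValue=FLEDemo \
                     ParameterKey=PublicKeyName,ParameterValue=DemoPublicKey \
                     ParameterKey=PrivateKeySSMPath,ParameterValue=/cloudfront/field-encryption-sample/private-key \
        --capabilities CAPABILITY_IAM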

    5. Add the field-level encryption configuration to the CloudFront distribution

    Next, associate the field-level encryption configuration that you created in Step 3 with the CloudFront distribution:

      1. In the Outputs section of the FLE-Sample-App stack, look for CloudFrontDistribution and click the URL to open the CloudFront console.
      2. Choose Behaviors, choose the Default (*) behavior, and then choose Edit.
      3. For Field-level Encryption Config, choose the configuration you created in Step 3g.
        Screenshot of editing the default cache behavior
      4. Choose Yes, Edit.
      5. While still in the CloudFront distribution configuration, choose the General tab, choose Edit, scroll down to Distribution State, and change it to Enabled.
      6. Choose Yes, Edit.

    6. Store the RSA private key in the Parameter Store

    In this step, you store the private key in the EC2 Systems Manager Parameter Store as a SecureString data type, which uses AWS KMS to encrypt the parameter value. For more information about AWS KMS, see the AWS Key Management Service Developer Guide. You will need a working installation of the AWS CLI to complete this step.

    1. Store the private key in the Parameter Store with the AWS CLI by running the following command. You will find the <KMSKeyID> value in the KMSKeyID entry of the CloudFormation stack Outputs. Substitute it for the placeholder in the following command.
      $ aws ssm put-parameter --type "SecureString" --name /cloudfront/field-encryption-sample/private-key --value file://private_key.pem --key-id "<KMSKeyID>"
      
      ------------------
      |  PutParameter  |
      +----------+-----+
      |  Version |  1  |
      +----------+-----+

    2. Verify the parameter. Your private key material should be returned by the ssm get-parameter command shown below in the Value field. The key material has been truncated in the following output.
      $ aws ssm get-parameter --name /cloudfront/field-encryption-sample/private-key --with-decryption
      
      |  Value  |  -----BEGIN RSA PRIVATE KEY-----
      |         |  MIIEowIBAAKCAQEAwGRBGuhacmw+C73kM6Z…

      Notice that we use the --with-decryption argument in this command. This returns the private key as cleartext.

      This completes the sample application deployment. Next, we show you how to see field-level encryption in action.

    3. Delete the private key from local storage. On Linux, for example, use the shred command to securely delete the private key material from your workstation, as shown below. You may also wish to store the private key material within AWS CloudHSM or another protected location suitable for your security requirements. For production implementations, you also should implement key rotation policies.
      $ shred -zvu -n  100 private*.pem
      
      shred: private_encrypted_key.pem: pass 1/101 (random)...
      shred: private_encrypted_key.pem: pass 2/101 (dddddd)...
      shred: private_encrypted_key.pem: pass 3/101 (555555)...
      ….

    Test the sample application

    Use the following steps to test the sample application with field-level encryption:

    1. Open the sample application in your web browser by clicking the ApplicationURL link in the CloudFormation stack Outputs (for example, https://d199xe5izz82ea.cloudfront.net/prod/). Note that it may take several minutes for the CloudFront distribution changes from the previous step to reach the Deployed status, during which time you may not be able to access the sample application.
    2. Fill out and submit the HTML form on the page:
      1. Complete the three form fields: Full Name, Email Address, and Phone Number.
      2. Choose Submit.
        Screenshot of completing the sample application form
        Notice that the application response includes the form values. The phone number is returned as ciphertext encrypted with your public key. This ciphertext has been stored in DynamoDB.
        Screenshot of the phone number as ciphertext
    3. Execute the Lambda decryption function to download ciphertext from DynamoDB and decrypt the phone number using the private key:
      1. In the CloudFormation stack Outputs, locate DecryptFunction and click the URL to open the Lambda console.
      2. Configure a test event using the “Hello World” template.
      3. Choose the Test button.
    4. View the encrypted and decrypted phone number data.
      Screenshot of the encrypted and decrypted phone number data

    Summary

    In this blog post, we showed you how to use CloudFront field-level encryption to encrypt sensitive data at edge locations and help prevent access from unauthorized systems. The source code for this solution is available on GitHub. For additional information about field-level encryption, see the documentation.

    If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please start a new thread on the CloudFront forum.

    – Alex and Cameron

    Presenting Amazon Sumerian: An easy way to create VR, AR, and 3D experiences

    Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/launch-presenting-amazon-sumerian/

    If you have had an opportunity to read any of my blog posts or attended any session I’ve conducted at various conferences, you are probably aware that I am definitively a geek girl. I am absolutely enamored with all of the latest advancements that have been made in technology areas like cloud, artificial intelligence, internet of things, and the maker space, as well as with virtual reality and augmented reality. In my opinion, it is a wonderful time to be a geek. All the things that we dreamed about building while we sweated through our algorithms and discrete mathematics classes or the technology we marveled at when watching Star Wars and Star Trek are now coming to fruition. So hopefully this means it will only be a matter of time before I can hyperdrive to other galaxies in space, but until then I can at least build the 3D virtual reality and augmented reality characters and images like those featured in some of my favorite shows.

    Amazon Sumerian provides tools and resources that allow anyone to create and run augmented reality (AR), virtual reality (VR), and 3D applications with ease. With Sumerian, you can build multi-platform experiences that run on hardware like the Oculus Rift, HTC Vive, and iOS devices using WebVR-compatible browsers, with support for ARCore on Android devices coming soon.

    This exciting new service, currently in preview, delivers features to allow you to design highly immersive and interactive 3D experiences from your browser. Some of these features are:

    • Editor: A web-based editor for constructing 3D scenes, importing assets, scripting interactions and special effects, with cross-platform publishing.
    • Object Library: a library of pre-built objects and templates.
    • Asset Import: Upload 3D assets to use in your scene. Sumerian supports importing FBX and OBJ files, with support for Unity projects coming soon.
    • Scripting Library: provides a JavaScript scripting library via its 3D engine for advanced scripting capabilities.
    • Hosts: animated, lifelike 3D characters that can be customized for gender, voice, and language.
    • AWS Services Integration: baked-in integration with Amazon Polly and Amazon Lex to add speech and natural language interaction to Sumerian hosts. Additionally, the scripting library can be used with AWS Lambda, allowing use of the full range of AWS services.

    Since Amazon Sumerian doesn’t require you to have 3D graphics or programming experience to build rich, interactive VR and AR scenes, let’s take a quick tour of the Sumerian Dashboard and check it out.

    From the Sumerian Dashboard, I can easily create a new scene with a push of a button.

    A default view of the new scene opens and is displayed in the Sumerian Editor. With the Tara Blog Scene opened in the editor, I can easily import assets into my scene.

    I’ll click the Import Asset button and pick an asset, View Room, to import into the scene. With the desired asset selected, I’ll click the Add button to import it.

    Excellent, my asset was successfully imported into the Sumerian Editor and is shown in the Asset panel.  Now, I have the option to add the View Room object into my scene by selecting it in the Asset panel and then dragging it onto the editor’s canvas.

    I’ll repeat the import asset process and this time I will add the Mannequin asset to the scene.

    Additionally, with Sumerian, I can add scripting to Entity assets to make my scene even more exciting by adding a ScriptComponent to an entity and creating a script.  I can use the provided built-in scripts or create my own custom scripts. If I create a new custom script, I will get a blank script with some base JavaScript code that looks similar to the code below.

    'use strict';
    /* global sumerian */
    // This is me -- trying out the custom scripts - Tara

    var setup = function (args, ctx) {
      // Called when play mode starts.
    };

    var fixedUpdate = function (args, ctx) {
      // Called on every physics update, after setup().
    };

    var update = function (args, ctx) {
      // Called on every render frame, after setup().
    };

    var lateUpdate = function (args, ctx) {
      // Called after all script "update" methods in the scene have been called.
    };

    var cleanup = function (args, ctx) {
      // Called when play mode stops.
    };

    var parameters = [];

    Very cool, I just created a 3D scene using Amazon Sumerian in a matter of minutes and I have only scratched the surface.

    Summary

    The Amazon Sumerian service enables you to create, build, and run virtual reality (VR), augmented reality (AR), and 3D applications with ease. You don’t need any 3D graphics or specialized programming knowledge to get started building scenes and immersive experiences. You can import FBX, OBJ, and Unity projects into Sumerian, as well as upload your own 3D assets for use in your scene. In addition, you can create digital characters to narrate your scene, and with these digital assets, you have choices for the character’s appearance, speech, and behavior.

    You can learn more about Amazon Sumerian and sign up for the preview to get started with the new service on the product page.  I can’t wait to see what rich experiences you all will build.

    Tara

     

    B2 Cloud Storage Roundup

    Post Syndicated from Andy Klein original https://www.backblaze.com/blog/b2-cloud-storage-roundup/

    B2 Integrations
    Over the past several months, B2 Cloud Storage has continued to grow like we planted magic beans. During that time we have added a B2 Java SDK, and certified integrations with GoodSync, Arq, Panic, UpdraftPlus, Morro Data, QNAP, Archiware, Restic, and more. In addition, B2 customers like Panna Cooking, Sermon Audio, and Fellowship Church are happy they chose B2 as their cloud storage provider. If any of that sounds interesting, read on.

    The B2 Java SDK

    While the Backblaze B2 API is well documented and straightforward to implement, we were asked by a few of our Integration Partners if we had an SDK they could use. So we developed one as an open-source project on GitHub, where we hope interested parties will not only use our Java SDK, but make it better for everyone else.

    There are different reasons one might use the Java SDK, but a couple of areas where the SDK can simplify the coding process are:

    Expiring Authorization — B2 requires that the application key for a given account be reissued once a day when using the API. If the application key expires while you are in the middle of transferring files or some other B2 activity (bucket list, etc.), the SDK can be used to detect and then update the application key on the fly. Your B2-related activities will continue without incident and without having to capture and code your own exception case.

    Error Handling — There are different types of error codes B2 will return, from expired application keys to detecting malformed requests to command time-outs. The SDK can dramatically simplify the coding needed to capture and account for the various things that can happen.
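
    Under the hood, the daily reissue is just a call to the b2_authorize_account API, which the SDK wraps for you. As a minimal sketch with curl (substitute your own account ID and application key), it looks like this; the call returns an authorization token, API URL, and download URL as JSON:

    $ curl https://api.backblazeb2.com/b2api/v1/b2_authorize_account -u "<accountId>:<applicationKey>"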

    While Backblaze has created the Java SDK, developers in the GitHub community have also created other SDKs for B2, for example for PHP (https://github.com/cwhite92/b2-sdk-php) and Go (https://github.com/kurin/blazer). Let us know in the comments about other SDKs you’d like to see, or perhaps start your own GitHub project. We will publish any updates in our next B2 roundup.

    What You Can Do with Affordable and Available Cloud Storage

    You’re probably aware that B2 is up to 75% less expensive than other similar cloud storage services like Amazon S3 and Microsoft Azure. Businesses and organizations are finding that projects that previously weren’t economically feasible with other Cloud Storage services are now not only possible, but a reality with B2. Here are a few recent examples:

    SermonAudio wanted their media files to be readily available, but didn’t want to build and manage their own internal storage farm. Until B2, cloud storage was just too expensive to use. Now they use B2 to store their audio and video files, and also as the primary source of downloads and streaming requests from their subscribers.

    Fellowship Church wanted to escape from the ever increasing amount of time they were spending saving their data to their LTO-based system. Using B2 saved countless hours of personnel time versus LTO, fit easily into their video processing workflow, and provided instant access at any time to their media library.

    Panna Cooking replaced their closet full of archive hard drives with a cost-efficient hybrid-storage solution combining 45Drives and Backblaze B2 Cloud Storage. Archived media files that used to take hours to locate are now readily available regardless of whether they reside in local storage or in the B2 Cloud.

    B2 Integrations

    Leading companies in backup, archive, and sync continue to add B2 Cloud Storage as a storage destination for their customers. These companies realize that by offering B2 as an option, they can dramatically lower the total cost of ownership for their customers — and that’s always a good thing.

    If your favorite application is not integrated with B2, you can do something about it. One integration partner told us they received over 200 customer requests for a B2 integration. The partner got the message, and the integration is currently in beta test.

    Below are some of the partner integrations completed in the past few months. You can check the B2 Partner Integrations page for a complete list.

    Archiware — Both P5 Archive and P5 Backup can now store data in the B2 Cloud, making your offsite media files readily available while keeping your offsite storage costs predictable and affordable.

    Arq — Combine Arq and B2 for amazingly affordable backup of external drives, network drives, NAS devices, Windows PCs, Windows Servers, and Macs to the cloud.

    GoodSync — Automatically synchronize and back up all your photos, music, email, and other important files between all your desktops, laptops, servers, external drives, and sync, or back up to B2 Cloud Storage for off-site storage.

    QNAP — QNAP Hybrid Backup Sync consolidates backup, restoration, and synchronization functions into a single QTS application to easily transfer your data to local, remote, and cloud storage.

    Morro Data — Their CloudNAS solution stores files in the cloud, caches them locally as needed, and syncs files globally among other CloudNAS systems in an organization.

    Restic – Restic is a fast, secure, multi-platform command line backup program. Files are uploaded to a B2 bucket as de-duplicated, encrypted chunks. Each backup is a snapshot of only the data that has changed, making restores of a specific date or time easy.

    Transmit 5 by Panic — Transmit 5, the gold standard for macOS file transfer apps, now supports B2. Upload, download, and manage files on tons of servers with an easy, familiar, and powerful UI.

    UpdraftPlus — WordPress developers and admins can now use the UpdraftPlus Premium WordPress plugin to affordably back up their data to the B2 Cloud.

    Getting Started with B2 Cloud Storage

    If you’re using B2 today, thank you. If you’d like to try B2, but don’t know where to start, here’s a guide to getting started with the B2 Web Interface — no programming or scripting is required. You get 10 gigabytes of free storage and 1 gigabyte a day in free downloads. Give it a try.

    The post B2 Cloud Storage Roundup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Backing Up Linux to Backblaze B2 with Duplicity and Restic

    Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backing-linux-backblaze-b2-duplicity-restic/

    Linux users have a variety of options for handling data backup. The choices range from free and open-source programs to paid commercial tools, and include applications that are purely command-line based (CLI) and others that have a graphical interface (GUI), or both.

    If you take a look at our Backblaze B2 Cloud Storage Integrations page, you will see a number of offerings that enable you to back up your Linux desktops and servers to Backblaze B2. These include CloudBerry, Duplicity, Duplicacy, 45 Drives, GoodSync, HashBackup, QNAP, Restic, and Rclone, plus other choices for NAS and hybrid uses.

    In this post, we’ll discuss two popular command line and open-source programs: one older, Duplicity, and a new player, Restic.

    Old School vs. New School

    We’re highlighting Duplicity and Restic today because they exemplify two different philosophical approaches to data backup: “Old School” (Duplicity) vs “New School” (Restic).

    Old School (Duplicity)

    In the old school model, data is written sequentially to the storage medium. Once a section of data is recorded, new data is written starting where that section of data ends. It’s not possible to go back and change the data that’s already been written.

    This old-school model has long been associated with the use of magnetic tape, a prime example of which is the LTO (Linear Tape-Open) standard. In this “write once” model, files are always appended to the end of the tape. If a file is modified and overwritten or removed from the volume, the associated tape blocks used are not freed up: they are simply marked as unavailable, and the used volume capacity is not recovered. Data is deleted and capacity recovered only if the whole tape is reformatted. As a Linux/Unix user, you undoubtedly are familiar with the TAR archive format, which is an acronym for Tape ARchive. TAR has been around since 1979 and was originally developed to write data to sequential I/O devices with no file system of their own.

    It is from the use of tape that we get the full backup/incremental backup approach to backups. A backup sequence begins with a full backup of data. Each incremental backup contains what’s been changed since the last full backup, until the next full backup is made and the process starts over, filling more and more tape or whatever medium is being used.

    This is the model used by Duplicity: full and incremental backups. Duplicity backs up files by producing encrypted, digitally signed, versioned, TAR-format volumes and uploading them to a remote location, including Backblaze B2 Cloud Storage. Released under the terms of the GNU General Public License (GPL), Duplicity is free software.

    With Duplicity, the first archive is a complete (full) backup, and subsequent (incremental) backups only add differences from the latest full or incremental backup. Chains consisting of a full backup and a series of incremental backups can be recovered to the point in time that any of the incremental steps were taken. If any of the incremental backups are missing, then reconstructing a complete and current backup is much more difficult and sometimes impossible.

    Duplicity is available under many Unix-like operating systems (such as Linux, BSD, and Mac OS X) and ships with many popular Linux distributions including Ubuntu, Debian, and Fedora. It also can be used with Windows under Cygwin.

    We recently published a KB article on How to configure Backblaze B2 with Duplicity on Linux that demonstrates how to set up Duplicity with B2 and back up and restore a directory from Linux.
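
    As a rough sketch of what a Duplicity-to-B2 run looks like (the key ID, application key, and bucket name below are placeholders; see the KB article for the full setup), the first run produces a full backup and later runs produce incrementals:

    $ export PASSPHRASE="a-strong-gpg-passphrase"
    $ duplicity ~/Documents "b2://<keyID>:<applicationKey>@<bucketname>/documents"
    $ duplicity restore "b2://<keyID>:<applicationKey>@<bucketname>/documents" ~/Documents-restored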

    New School (Restic)

    With the arrival of non-sequential storage media, such as disk drives, and new ideas such as deduplication comes the new-school approach, which is used by Restic. Data can be written and changed anywhere on the storage medium. This efficiency comes largely through the use of deduplication. Deduplication is a process that eliminates redundant copies of data and reduces storage overhead. Data deduplication techniques ensure that only one unique instance of data is retained on storage media, greatly increasing storage efficiency and flexibility.

    Restic is a recently available multi-platform command line backup program that is designed to be fast, efficient, and secure. Restic supports a variety of backends for storing backups, including a local server, SFTP server, HTTP REST server, and a number of cloud storage providers, including Backblaze B2.

    Files are uploaded to a B2 bucket as deduplicated, encrypted chunks. Each time a backup runs, only changed data is backed up. On each backup run, a snapshot is created enabling restores to a specific date or time.

    Restic assumes that the storage location for the repository is shared, so it always encrypts the backed-up data. This is in addition to any encryption and security from the storage provider.

    Restic is free and open-source software, licensed under the BSD 2-Clause License and actively developed on GitHub.

    There’s a lot more you can do with Restic, including adding tags, mounting a repository locally, and scripting. To learn more, you can review the documentation at https://restic.readthedocs.io.

    Coincidentally with this blog post, we published a KB article, How to configure Backblaze B2 with Restic on Linux, in which we show how to set up Restic for use with B2 and how to back up and restore a home directory from Linux to B2.
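
    As a rough sketch (the bucket name and paths are placeholders; see the KB article for details), Restic’s B2 backend is driven by two environment variables and a repository string:

    $ export B2_ACCOUNT_ID="<keyID>"
    $ export B2_ACCOUNT_KEY="<applicationKey>"
    $ restic -r b2:<bucketname>:<path> init                       # create the encrypted repository
    $ restic -r b2:<bucketname>:<path> backup $HOME/Documents     # upload only new or changed chunks
    $ restic -r b2:<bucketname>:<path> restore latest --target /tmp/restore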

    Which is Right for You?

    While Duplicity is a popular, widely available, and useful program, many users of cloud storage solutions such as B2 are moving to new-school solutions like Restic that take better advantage of the non-sequential access capabilities and speed of modern storage media used by cloud storage providers.

    Tell us how you’re backing up Linux

    Please let us know in the comments what you’re using for Linux backups, and if you have experience using Duplicity, Restic, or other backup software with Backblaze B2.

    The post Backing Up Linux to Backblaze B2 with Duplicity and Restic appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Spaghetti Download – Web Application Security Scanner

    Post Syndicated from Darknet original https://www.darknet.org.uk/2017/10/spaghetti-download-web-application-security-scanner/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

    Spaghetti is an open-source web application security scanner designed to find various default and insecure files, configurations, and misconfigurations.

    It is built on Python 2.7 and can run on any platform which has a Python environment.

    Features of Spaghetti Web Application Security Scanner

    • Fingerprints
      • Server
      • Web Frameworks (CakePHP, CherryPy,…)
      • Web Application Firewall (Waf)
      • Content Management System (CMS)
      • Operating System (Linux, Unix,..)
      • Language (PHP, Ruby,…)
      • Cookie Security
    • Bruteforce
      • Admin Interface
      • Common Backdoors
      • Common Backup Directory
      • Common Backup File
      • Common Directory
      • Common File
      • Log File
    • Disclosure
      • Emails
      • Private IP
      • Credit Cards
    • Attacks
      • HTML Injection
      • SQL Injection
      • LDAP Injection
      • XPath Injection
      • Cross Site Scripting (XSS)
      • Remote File Inclusion (RFI)
      • PHP Code Injection
    • Other
      • HTTP Allow Methods
      • HTML Object
      • Multiple Index
      • Robots Paths
      • Web Dav
      • Cross Site Tracing (XST)
      • PHPINFO
      • .Listing
    • Vulns
      • ShellShock
      • Anonymous Cipher (CVE-2007-1858)
      • Crime (SPDY) (CVE-2012-4929)
      • Struts-Shock

    Using Spaghetti Web Application Security Scanner

    [email protected]:~/Spaghetti# python spaghetti.py

    Read the rest of Spaghetti Download – Web Application Security Scanner now! Only available at Darknet.

    Browser hacking for 280 character tweets

    Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/09/browser-hacking-for-280-character-tweets.html

    Twitter has raised the limit to 280 characters for a select number of people. However, they left open a hole, allowing anybody to make large tweets with a little bit of hacking. The skills needed are basic hacking skills, which I thought I’d write up in a blog post.


    Specifically, the skills you will exercise are:

    • basic command-line shell
    • basic HTTP requests
    • basic browser DOM editing

    The short instructions

    The basic instructions were found in tweets like Dildog’s, which we walk through below.
    These instructions are clear to the average hacker, but of course, a bit difficult for those learning hacking, hence this post.

    The command-line

    The basics of most hacking start with knowledge of the command-line. This is the “Terminal” app under macOS or cmd.exe under Windows. Almost always when you see hacking dramatized in the movies, they are using the command-line.
    In the beginning, the command-line is all computers had. To do anything on a computer, you had to type a “command” telling it what to do. What we see as the modern graphical screen is a layer on top of the command-line, one that translates clicks of the mouse into the raw commands.
    On most systems, the command-line is known as “bash”. This is what you’ll find on Linux and macOS. Windows historically has had a different command-line that uses slightly different syntax, though in the last couple years, they’ve also supported “bash”. You’ll have to install it first, such as by following these instructions.
    You’ll see me use commands that may not yet be installed on your “bash” command-line, like nc and curl. You’ll need to run a command to install them, such as:
    sudo apt-get install nc curl
    The thing to remember about the command-line is that the mouse doesn’t work. You can’t click to move the cursor as you normally do in applications. That’s because the command-line predates the mouse by decades. Instead, you have to use arrow keys.
    I’m not going to spend much effort discussing the command-line, as a complete explanation is beyond the scope of this document. Instead, I’m assuming the reader either already knows it, or will learn-from-example as we go along.

    Web requests

    The basics of how the web works are really simple. A request to a web server is just a small packet of text, such as the following, which does a search on Google for the search-term “penguin” (presumably, you are interested in knowing more about penguins):
    GET /search?q=penguin HTTP/1.0
    Host: www.google.com
    User-Agent: human
    The command we are sending to the server is GET, meaning get a page. We are accessing the URL /search, which on Google’s website, is how you do a search. We are then sending the parameter q with the value penguin. We also declare that we are using version 1.0 of the HTTP (hyper-text transfer protocol).
    Following the first line there are a number of additional headers. In one header, we declare the Host name that we are accessing. Web servers can contain many different websites, with different names, so this header is usually important.
    We also add the User-Agent header. The “user-agent” means the “browser” that you use, like Edge, Chrome, Firefox, or Safari. It allows servers to send content optimized for different browsers. Since we are sending web requests without a browser here, we are joking around saying human.
    Here’s what happens when we use the nc program to send this to a Google web server:
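    One way to reproduce this from the bash command-line, assuming nc (netcat) is installed and the server still answers plain HTTP on port 80, is to connect and type the request yourself, finishing with a blank line:

    $ nc www.google.com 80
    GET /search?q=penguin HTTP/1.0
    Host: www.google.com
    User-Agent: human
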
    The first part is us typing, until we hit the [enter] key to create a blank line. After that point is the response from the Google server. We get back a result code (OK), followed by more headers from the server, and finally the contents of the webpage, which go on for many screens. (We’ll talk about what web pages look like below.)
    Note that a lot of HTTP headers are optional and really have little influence on what’s going on. They are just junk added to web requests. For example, we see Google report a P3P header, which is some relic of 2002 that nobody uses anymore, as far as I can tell. Indeed, if you follow the URL in the P3P header, Google pretty much says exactly that.
    I point this out because the request I show above is a simplified one. In practice, most requests contain a lot more headers, especially Cookie headers. We’ll see that later when making requests.

    Using cURL instead

    Sending the raw HTTP request to the server, and getting raw HTTP/HTML back, is annoying. The better way of doing this is with the tool known as cURL, or plainly, just curl. You may be familiar with the older command-line tool wget. cURL is similar, but more flexible.
    To use curl for the experiment above, we’d do something like the following. We are saving the web page to “penguin.html” instead of just spewing it on the screen.
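    A command along these lines does the job (the -A flag sets the User-Agent and -o writes the response to a file; the exact URL is an assumption on my part):

    $ curl -A human -o penguin.html "http://www.google.com/search?q=penguin"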
    Underneath, cURL builds an HTTP header just like the one we showed above, and sends it to the server, getting the response back.

    Web-pages

    Now let’s talk about web pages. When you look at the web page we got back from Google while searching for “penguin”, you’ll see that it’s intimidatingly complex. I mean, it intimidates me. But it all starts from some basic principles, so we’ll look at some simpler examples.
    The following is the text of a simple web page:
    <html>
    <body>
    <h1>Test</h1>
    <p>This is a simple web page</p>
    </body>
    </html>
    This is HTML, “hyper-text markup language.” As its name implies, we “mark up” text, such as declaring the first text as a level-1 header (H1), and the following text as a paragraph (P).
    In a web browser, this gets rendered as something that looks like the following. Notice how a header is formatted differently from a paragraph. Also notice that web browsers can use local files as well as make remote requests to web servers:
    You can right-mouse click on the page and do a “View Source”. This will show the raw source behind the web page:
    Web pages don’t just contain marked-up text. They contain two other important features, style information that dictates how things appear, and script that does all the live things that web pages do, from which we build web apps.
    So let’s add a little bit of style and scripting to our web page. First, let’s walk through the source we’ll be adding.
    In our header (H1) field, we’ve added the attribute to the markup giving this an id of mytitle. In the style section above, we give that element a color of blue, and tell it to align to the center.
    Then, in our script section, we’ve told it that when somebody clicks on the element “mytitle”, it should send an “alert” message of “hello”.
    This is what our web page now looks like, with the center blue title:
    When we click on the title, we get a popup alert:
    Thus, we see an example of the three components of a webpage: markup, style, and scripting.

    Chrome developer tools

    Now we go off the deep end. Right-mouse click on “Test” (not normal click, but right-button click, to pull up a menu). Select “Inspect”.
    You should now get a window that looks something like the following. Chrome splits the screen in half, showing the web page on the left and its debug tools on the right.
    This looks similar to what “View Source” shows, but it isn’t. Instead, it’s showing how Chrome interpreted the source HTML. For example, our style/script tags should’ve been marked up with a head (header) tag. We forgot it, but Chrome adds it in anyway.
    What Chrome is showing us is called the DOM, or document object model. It shows us all the objects that make up a web page, and how they fit together.
    For example, it shows us how the style information for #mytitle is created. It first starts with the default style information for an h1 tag, and then how we’ve changed it with our style specifications.
    We can edit the DOM manually. Just double click on things you want to change. For example, in this screen shot, I’ve changed the style spec from blue to red, and I’ve changed the header and paragraph text. The original file on disk hasn’t changed, but I’ve changed the DOM in memory.
    This is a classic hacking technique. If you don’t like things like paywalls, for example, just right-click on the element blocking your view of the text, “Inspect” it, then delete it. (This works for some paywalls).
    This edits the markup and style info, but changing the scripting stuff is a bit more complicated. To do that, click on the [Console] tab. This is the scripting console, and allows you to run code directly as part of the webpage. We are going to run code that resets what happens when we click on the title. In this case, we are simply going to change the message to “goodbye”.
    Now when we click on the title, we indeed get the message:
Again, a common way to get around paywalls is to run some code like this that changes which functions will be called.

    Putting it all together

    Now let’s put this all together in order to hack Twitter to allow us (the non-chosen) to tweet 280 characters. Review Dildog’s instructions above.
    The first step is to get to Chrome Developer Tools. Dildog suggests F12. I suggest right-clicking on the Tweet button (or Reply button, as I use in my example) and doing “Inspect”, as I describe above.
    You’ll now see your screen split in half, with the DOM toward the right, similar to how I describe above. However, Twitter’s app is really complex. Well, not really complex, it’s all basic stuff when you come right down to it. It’s just so much stuff — it’s a large web app with lots of parts. So we have to dive in without understanding everything that’s going on.
    The Tweet/Reply button we are inspecting is going to look like this in the DOM:
The Tweet/Reply button is currently greyed out because it has the "disabled" attribute. You need to double-click on it and remove that attribute. Also, in the class attribute, there is a "disabled" part. Double-click on the class, then remove just that "disabled" as well, without disturbing the text around it. This should change the button from disabled to enabled. It won't be greyed out, and it'll respond when you click on it.
    Now click on it. You’ll get an error message, as shown below:
    What we’ve done here is bypass what’s known as client-side validation. The script in the web page prevented sending Tweets longer than 140 characters. Our editing of the DOM changed that, allowing us to send a bad request to the server. Bypassing client-side validation this way is the source of a lot of hacking.
    But Twitter still does server-side validation as well. They know any client-side validation can be bypassed, and are in on the joke. They tell us hackers “You’ll have to be more clever”. So let’s be more clever.
In order to make longer 280-character tweets work for select customers, they had to change something on the server-side. What they added was a "weighted_character_count=true" parameter to the HTTP request. We just need to repeat the request we generated above, adding this parameter.
In theory, we could do this by fiddling with the scripting. Dildog does it a different way: he copies the request out of the browser, edits it, then sends it via the command-line using curl.
We've used the [Elements] and [Console] tabs in Chrome's DevTools. Now we are going to use the [Network] tab. This lists all the requests the web page has made to the server. The Twitter app is constantly making requests to refresh the content of the web page. The request we made trying to do a long tweet is called "create", and is red, because it failed.
    Google Chrome gives us a number of ways to duplicate the request. The most useful is that it copies it as a full cURL command we can just paste onto the command-line. We don’t even need to know cURL, it takes care of everything for us. On Windows, since you have two command-lines, it gives you a choice to use the older Windows cmd.exe, or the newer bash.exe. I use the bash version, since I don’t know where to get the Windows command-line version of cURL.exe.
There's a lot going on here. The first thing to notice is the long xxxxxx strings. That's actually not in the original screenshot; I edited the picture. That's because these are session-cookies. If you inserted them into your browser, you'd hijack my Twitter session, and be able to tweet as me (such as making Carlos Danger style tweets). Therefore, I have to remove them from the example.
At the top of the screen is the URL that we are accessing, which is https://twitter.com/i/tweet/create. Much of the rest of the screen uses the cURL -H option to add a header. These are all the HTTP headers that I describe above. Finally, at the bottom, is the --data section, which contains the data bits related to the tweet, especially the tweet itself.
We need to edit either the URL above to read https://twitter.com/i/tweet/create?weighted_character_count=true, or we need to add &weighted_character_count=true to the --data section at the bottom (either works). Remember: the mouse doesn't work on the command-line, so you have to use the cursor keys to navigate backwards in the line. Also, since the line is larger than the screen, it wraps onto several visual lines, even though it's all a single line as far as the command-line is concerned.
    Now just hit [return] on your keyboard, and the tweet will be sent to the server, which at the moment, works. Presto!
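If editing a long curl line in place gets tedious, the same replay can be scripted. The sketch below is purely illustrative: the cookie and header values are placeholders for whatever you copied out of your own DevTools session, and (as noted below) the trick stops mattering once Twitter finishes its rollout.

```python
# replay_tweet.py -- illustrative only; fill in values copied from your own
# DevTools "Copy as cURL" output. The endpoint and the extra parameter come
# from the request shown above; everything else here is a placeholder.
import requests

url = "https://twitter.com/i/tweet/create"

cookies = {
    "auth_token": "xxxxxx",   # placeholder: your own session cookies
    "ct0": "xxxxxx",
}

headers = {
    "user-agent": "Mozilla/5.0",
    "x-csrf-token": "xxxxxx", # placeholder: copied from the original request
}

data = {
    "status": "A tweet longer than 140 characters goes here...",
    "weighted_character_count": "true",   # the extra server-side flag
}

resp = requests.post(url, cookies=cookies, headers=headers, data=data)
print(resp.status_code, resp.text[:200])
```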
    Twitter will either enable or disable the feature for everyone in a few weeks, at which point, this post won’t work. But the reason I’m writing this is to demonstrate the basic hacking skills. We manipulate the web pages we receive from servers, and we manipulate what’s sent back from our browser back to the server.

    Easier: hack the scripting

    Instead of messing with the DOM and editing the HTTP request, the better solution would be to change the scripting that does both DOM client-side validation and HTTP request generation. The only reason Dildog above didn’t do that is that it’s a lot more work trying to find where all this happens.
Others have, though. @Zemnmez did just that, though his technique works for the alternate TweetDeck client (https://tweetdeck.twitter.com) instead of the default client. Go copy his code from here, then paste it into the DevTools scripting [Console]. It'll go in and replace some scripting functions, much like my simpler example above.
The console will show a stream of error messages because TweetDeck has bugs; ignore those.
    Now you can effortlessly do long tweets as normal, without all the messing around I’ve spent so much text in this blog post describing.
Now, as I've mentioned before, you are only editing what's going on in the current web page. If you refresh this page, or close it, everything will be lost. You'll have to re-open the DevTools scripting console and re-paste the code. The easier way of doing this is to use the [Sources] tab instead of [Console] and use the "Snippets" feature to save this bit of code in your browser, to make it easier next time.
    The even easier way is to use Chrome extensions like TamperMonkey and GreaseMonkey that’ll take care of this for you. They’ll save the script, and automatically run it when they see you open the TweetDeck webpage again.
    An even easier way is to use one of the several Chrome extensions written in the past day specifically designed to bypass the 140 character limit. Since the purpose of this blog post is to show you how to tamper with your browser yourself, rather than help you with Twitter, I won’t list them.

    Conclusion

    Tampering with the web-page the server gives you, and the data you send back, is a basic hacker skill. In truth, there is a lot to this. You have to get comfortable with the command-line, using tools like cURL. You have to learn how HTTP requests work. You have to understand how web pages are built from markup, style, and scripting. You have to be comfortable using Chrome’s DevTools for messing around with web page elements, network requests, scripting console, and scripting sources.
    So it’s rather a lot, actually.
    My hope with this page is to show you a practical application of all this, without getting too bogged down in fully explaining how every bit works.

    [$] Reducing Python’s startup time

    Post Syndicated from jake original https://lwn.net/Articles/730915/rss

The startup time for the Python interpreter has been discussed by the core developers and others numerous times over the years; optimization efforts are made periodically as well. Startup time can dominate the execution time of command-line programs written in Python, especially if they import a lot of other modules. Python startup time is worse than that of some other scripting languages, and more recent versions of the language are taking more than twice as long to start up when compared to earlier versions (e.g. 3.7 versus 2.7).

The most recent iteration of the startup time discussion has played out in the python-dev and python-ideas mailing lists since mid-July. This time, the focus has been on the collections.namedtuple() data structure that is used in multiple places throughout the standard library and in other Python modules, but the discussion has been more wide-ranging than simply that.
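As a quick way to see the effect yourself, here is a small, hedged measurement script; the absolute numbers depend entirely on your machine, OS cache state, and Python version:

```python
# startup_time.py -- rough measurement of interpreter startup cost.
# Numbers are only indicative; they vary by machine and Python version.
import subprocess
import sys
import time

def time_command(args, runs=20):
    """Return the average wall-clock seconds to run a command."""
    start = time.perf_counter()
    for _ in range(runs):
        subprocess.run(args, check=True)
    return (time.perf_counter() - start) / runs

bare = time_command([sys.executable, "-c", "pass"])
with_imports = time_command(
    [sys.executable, "-c", "import collections, argparse, json, re"]
)

print(f"bare startup:        {bare * 1000:.1f} ms")
print(f"with a few imports:  {with_imports * 1000:.1f} ms")
```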

    Top 10 Most Obvious Hacks of All Time (v0.9)

    Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/07/top-10-most-obvious-hacks-of-all-time.html

For teaching hacking/cybersecurity, I thought I'd create a list of the most obvious hacks of all time. Not the best hacks, the most sophisticated hacks, or the hacks with the biggest impact, but the most obvious hacks — ones that even the least knowledgeable among us should be able to understand. Below I propose some hacks that fit this bill, though in no particular order.

    The reason I’m writing this is that my niece wants me to teach her some hacking. I thought I’d start with the obvious stuff first.

    Shared Passwords

    If you use the same password for every website, and one of those websites gets hacked, then the hacker has your password for all your websites. The reason your Facebook account got hacked wasn’t because of anything Facebook did, but because you used the same email-address and password when creating an account on “beagleforums.com”, which got hacked last year.

I've heard people say "I'm secure, because I choose a complex password and use it everywhere". No, this is the very worst thing you can do. Sure, you can use the same password on all the sites you don't care much about, but for Facebook, your email account, and your bank, you should have a unique password, so that when other sites get hacked, your important sites are secure.

    And yes, it’s okay to write down your passwords on paper.

    Tools: HaveIBeenPwned.com

    PIN encrypted PDFs

    My accountant emails PDF statements encrypted with the last 4 digits of my Social Security Number. This is not encryption — a 4 digit number has only 10,000 combinations, and a hacker can guess all of them in seconds.
    PIN numbers for ATM cards work because ATM machines are online, and the machine can reject your card after four guesses. PIN numbers don’t work for documents, because they are offline — the hacker has a copy of the document on their own machine, disconnected from the Internet, and can continue making bad guesses with no restrictions.
Passwords protecting documents must be long enough that even trillions upon trillions of guesses are insufficient to find them.
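To put rough numbers on that, here is a small back-of-the-envelope calculation; the guess rate is an illustrative assumption, not a benchmark of any particular cracking rig:

```python
# guess_math.py -- back-of-the-envelope numbers for offline guessing.
# The guess rate below is an assumption for illustration only.
guesses_per_second = 1_000_000_000        # assumed offline cracking speed

pin_combinations = 10 ** 4                # a 4-digit PIN
print(f"4-digit PIN: {pin_combinations:,} combinations, "
      f"{pin_combinations / guesses_per_second:.6f} seconds to try them all")

# A random 12-character password drawn from ~94 printable ASCII characters
password_combinations = 94 ** 12
years = password_combinations / guesses_per_second / (3600 * 24 * 365)
print(f"random 12-char password: {password_combinations:.2e} combinations, "
      f"about {years:,.0f} years at the same rate")
```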

    Tools: Hashcat, John the Ripper

    SQL and other injection

    The lazy way of combining websites with databases is to combine user input with an SQL statement. This combines code with data, so the obvious consequence is that hackers can craft data to mess with the code.
    No, this isn’t obvious to the general public, but it should be obvious to programmers. The moment you write code that adds unfiltered user-input to an SQL statement, the consequence should be obvious. Yet, “SQL injection” has remained one of the most effective hacks for the last 15 years because somehow programmers don’t understand the consequence.
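To make the SQL case concrete, here is a small self-contained demonstration in Python using the standard-library sqlite3 module; the table and the attacker's input are invented purely for illustration:

```python
# sql_injection_demo.py -- why string-building queries is dangerous.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "alice' OR '1'='1"   # what an attacker might type into a form

# BAD: user input becomes part of the SQL code itself.
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())        # returns every row, not just alice

# GOOD: a parameterized query keeps the input as data only.
print(conn.execute("SELECT * FROM users WHERE name = ?",
                   (user_input,)).fetchall())  # returns nothing -- no such name
```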
    CGI shell injection is a similar issue. Back in early days, when “CGI scripts” were a thing, it was really important, but these days, not so much, so I just included it with SQL. The consequence of executing shell code should’ve been obvious, but weirdly, it wasn’t. The IT guy at the company I worked for back in the late 1990s came to me and asked “this guy says we have a vulnerability, is he full of shit?”, and I had to answer “no, he’s right — obviously so”.

XSS ("Cross Site Scripting") [*] is another injection issue, but this time targeting somebody's web browser rather than a server. It works because websites will echo back what is sent to them. For example, if you search for Cross Site Scripting with the URL https://www.google.com/search?q=cross+site+scripting, then you'll get a page back from the server that contains that string. If the string is JavaScript code rather than text, then some servers (though not Google) send back the code in the page in a way that it'll be executed. This is most often used to hack somebody's account: you send them an email or tweet a link, and when they click on it, the JavaScript gives control of the account to the hacker.

    Cross site injection issues like this should probably be their own category, but I’m including it here for now.

    More: Wikipedia on SQL injection, Wikipedia on cross site scripting.
    Tools: Burpsuite, SQLmap

    Buffer overflows

In the C programming language, programmers first create a buffer, then read input into it. If the input is longer than the buffer, then it overflows. The extra bytes overwrite other parts of the program, letting the hacker run code.
    Again, it’s not a thing the general public is expected to know about, but is instead something C programmers should be expected to understand. They should know that it’s up to them to check the length and stop reading input before it overflows the buffer, that there’s no language feature that takes care of this for them.
    We are three decades after the first major buffer overflow exploits, so there is no excuse for C programmers not to understand this issue.

What makes buffer overflows particularly obvious is the way they are wrapped in exploits, like in Metasploit. While the bug itself is obviously a bug, actually exploiting it can take some very non-obvious skill. However, once that exploit is written, any trained monkey can press a button and run the exploit. That's where we get the insult "script kiddie" from — referring to wannabe-hackers who never learn enough to write their own exploits, but who spend a lot of time running the exploit scripts written by better hackers than they.

    More: Wikipedia on buffer overflow, Wikipedia on script kiddie,  “Smashing The Stack For Fun And Profit” — Phrack (1996)
    Tools: bash, Metasploit

    SendMail DEBUG command (historical)

The first popular email server in the 1980s was called "SendMail". It had a feature whereby if you send a "DEBUG" command to it, it would execute any code following the command. The consequence of this was obvious — hackers could (and did) upload code to take control of the server. This was used in the Morris Worm of 1988. Most Internet machines of the day ran SendMail, so the worm spread fast, infecting most machines.
This bug was mostly ignored at the time. It was thought of as a theoretical problem, one that might only rarely be used to hack a system. Part of the motivation of the Morris Worm was to demonstrate the consequences of such problems — consequences that should've been obvious but somehow were rejected by everyone.

    More: Wikipedia on Morris Worm

    Email Attachments/Links

I'm conflicted whether I should add this or not, because here's the deal: you are supposed to click on attachments and links within emails. That's what they are there for. The difference between good and bad attachments/links is not obvious. Indeed, easy-to-use email systems make detecting the difference harder.
    On the other hand, the consequences of bad attachments/links is obvious. That worms like ILOVEYOU spread so easily is because people trusted attachments coming from their friends, and ran them.
    We have no solution to the problem of bad email attachments and links. Viruses and phishing are pervasive problems. Yet, we know why they exist.

    Default and backdoor passwords

    The Mirai botnet was caused by surveillance-cameras having default and backdoor passwords, and being exposed to the Internet without a firewall. The consequence should be obvious: people will discover the passwords and use them to take control of the bots.
    Surveillance-cameras have the problem that they are usually exposed to the public, and can’t be reached without a ladder — often a really tall ladder. Therefore, you don’t want a button consumers can press to reset to factory defaults. You want a remote way to reset them. Therefore, they put backdoor passwords to do the reset. Such passwords are easy for hackers to reverse-engineer, and hence, take control of millions of cameras across the Internet.
    The same reasoning applies to “default” passwords. Many users will not change the defaults, leaving a ton of devices hackers can hack.

    Masscan and background radiation of the Internet

I've written a tool that can easily scan the entire Internet in a short period of time. It surprises people that this is possible, but it's obvious from the numbers. Internet addresses are only 32-bits long, or roughly 4 billion combinations. A fast Internet link can easily handle 1 million packets-per-second, so the entire Internet can be scanned in roughly 4000 seconds, little more than an hour. It's basic math.
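The arithmetic is simple enough to check in a couple of lines:

```python
# scan_math.py -- how long a full IPv4 scan takes at a given packet rate.
addresses = 2 ** 32                 # the whole IPv4 address space
packets_per_second = 1_000_000      # a fast but ordinary link

seconds = addresses / packets_per_second
print(f"{addresses:,} addresses / {packets_per_second:,} pps "
      f"= {seconds:,.0f} seconds (~{seconds / 3600:.1f} hours)")
# prints about 4,295 seconds -- the "roughly 4000 seconds" above
```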
    Because it’s so easy, many people do it. If you monitor your Internet link, you’ll see a steady trickle of packets coming in from all over the Internet, especially Russia and China, from hackers scanning the Internet for things they can hack.
People's reaction to this scanning is weirdly emotional, taking it personally, such as:
    1. Why are they hacking me? What did I do to them?
    2. Great! They are hacking me! That must mean I’m important!
    3. Grrr! How dare they?! How can I hack them back for some retribution!?

    I find this odd, because obviously such scanning isn’t personal, the hackers have no idea who you are.

    Tools: masscan, firewalls

    Packet-sniffing, sidejacking

    If you connect to the Starbucks WiFi, a hacker nearby can easily eavesdrop on your network traffic, because it’s not encrypted. Windows even warns you about this, in case you weren’t sure.

At DefCon, they have a "Wall of Sheep", where they show passwords from people who logged onto stuff using the insecure "DefCon-Open" network. They call them "sheep" for not grasping the basic fact that unencrypted traffic is unencrypted.

    To be fair, it’s actually non-obvious to many people. Even if the WiFi itself is not encrypted, SSL traffic is. They expect their services to be encrypted, without them having to worry about it. And in fact, most are, especially Google, Facebook, Twitter, Apple, and other major services that won’t allow you to log in anymore without encryption.

    But many services (especially old ones) may not be encrypted. Unless users check and verify them carefully, they’ll happily expose passwords.

What's interesting is that 10 years ago, most services only used SSL to encrypt the passwords, but then used unencrypted connections after that, relying on "cookies". This allowed the cookies to be sniffed and stolen, allowing other people to share the login session. I used this on stage at BlackHat to connect to somebody's GMail session. Google, and other major websites, fixed this soon after. But it should never have been a problem — because the sidejacking of cookies should have been obvious.

    Tools: Wireshark, dsniff

    Stuxnet LNK vulnerability

    Again, this issue isn’t obvious to the public, but it should’ve been obvious to anybody who knew how Windows works.
When Windows loads a .dll, it first calls the function DllMain(). A Windows link file (.lnk) can load icons/graphics from the resources in a .dll file. It does this by loading the .dll file, thus calling DllMain. A hacker could therefore put a .lnk file pointing to a .dll file on a USB drive, causing arbitrary code execution as soon as a user inserted the drive.
    I say this is obvious because I did this, created .lnks that pointed to .dlls, but without hostile DllMain code. The consequence should’ve been obvious to me, but I totally missed the connection. We all missed the connection, for decades.

    Social Engineering and Tech Support [* * *]

    After posting this, many people have pointed out “social engineering”, especially of “tech support”. This probably should be up near #1 in terms of obviousness.

The classic example of social engineering is when you call tech support and tell them you've lost your password, and they reset it for you with a minimum of questions proving who you are. For example, you set the volume on your computer really loud and play the sound of a crying baby in the background and appear to be a bit frazzled and incoherent, which explains why you aren't answering the questions they are asking. They, understanding your predicament as a new parent, will go the extra mile in helping you, resetting "your" password.

    One of the interesting consequences is how it affects domain names (DNS). It’s quite easy in many cases to call up the registrar and convince them to transfer a domain name. This has been used in lots of hacks. It’s really hard to defend against. If a registrar charges only $9/year for a domain name, then it really can’t afford to provide very good tech support — or very secure tech support — to prevent this sort of hack.

    Social engineering is such a huge problem, and obvious problem, that it’s outside the scope of this document. Just google it to find example after example.

A related issue that perhaps deserves its own section is OSINT [*], or "open-source intelligence", where you gather public information about a target. For example, on the day the bank manager is out on vacation (which you got from their Facebook post) you show up and claim to be a bank auditor, and are shown into their office where you grab their backup tapes. (We've actually done this.)

    More: Wikipedia on Social Engineering, Wikipedia on OSINT, “How I Won the Defcon Social Engineering CTF” — blogpost (2011), “Questioning 42: Where’s the Engineering in Social Engineering of Namespace Compromises” — BSidesLV talk (2016)

    Blue-boxes (historical) [*]

    Telephones historically used what we call “in-band signaling”. That’s why when you dial on an old phone, it makes sounds — those sounds are sent no differently than the way your voice is sent. Thus, it was possible to make tone generators to do things other than simply dial calls. Early hackers (in the 1970s) would make tone-generators called “blue-boxes” and “black-boxes” to make free long distance calls, for example.

These days, "signaling" and "voice" are digitized, then sent as separate channels or "bands". This is called "out-of-band signaling". You can't trick the phone system by generating tones. When your iPhone makes sounds when you dial, it's entirely for your benefit and has nothing to do with how it signals the cell tower to make a call.

Early hackers, like the founders of Apple, are famous for having started their careers making such "boxes" for tricking the phone system. The problem was obvious back in the day, which is why, as the phone system moved from analog to digital, the problem was fixed.

    More: Wikipedia on blue box, Wikipedia article on Steve Wozniak.

    Thumb drives in parking lots [*]

    A simple trick is to put a virus on a USB flash drive, and drop it in a parking lot. Somebody is bound to notice it, stick it in their computer, and open the file.

This can be extended with tricks. For example, you can put a file labeled "third-quarter-salaries.xlsx" on the drive that requires macros to be run in order to open. It's irresistible to other employees who want to know what their peers are being paid, so they'll bypass any warning prompts in order to see the data.

    Another example is to go online and get custom USB sticks made printed with the logo of the target company, making them seem more trustworthy.

We also did a trick of taking an Adobe Flash game "Punch the Monkey" and replacing the monkey with a logo of a competitor of our target. They not only played the game (infecting themselves with our virus), but gave it to others inside the company to play, infecting others, including the CEO.

    Thumb drives like this have been used in many incidents, such as Russians hacking military headquarters in Afghanistan. It’s really hard to defend against.

    More: “Computer Virus Hits U.S. Military Base in Afghanistan” — USNews (2008), “The Return of the Worm That Ate The Pentagon” — Wired (2011), DoD Bans Flash Drives — Stripes (2008)

    Googling [*]

    Search engines like Google will index your website — your entire website. Frequently companies put things on their website without much protection because they are nearly impossible for users to find. But Google finds them, then indexes them, causing them to pop up with innocent searches.
    There are books written on “Google hacking” explaining what search terms to look for, like “not for public release”, in order to find such documents.

    More: Wikipedia entry on Google Hacking, “Google Hacking” book.

    URL editing [*]

    At the top of every browser is what’s called the “URL”. You can change it. Thus, if you see a URL that looks like this:

    http://www.example.com/documents?id=138493

    Then you can edit it to see the next document on the server:

    http://www.example.com/documents?id=138494

    The owner of the website may think they are secure, because nothing points to this document, so the Google search won’t find it. But that doesn’t stop a user from manually editing the URL.
    An example of this is a big Fortune 500 company that posts the quarterly results to the website an hour before the official announcement. Simply editing the URL from previous financial announcements allows hackers to find the document, then buy/sell the stock as appropriate in order to make a lot of money.
    Another example is the classic case of Andrew “Weev” Auernheimer who did this trick in order to download the account email addresses of early owners of the iPad, including movie stars and members of the Obama administration. It’s an interesting legal case because on one hand, techies consider this so obvious as to not be “hacking”. On the other hand, non-techies, especially judges and prosecutors, believe this to be obviously “hacking”.

    DDoS, spoofing, and amplification [*]

    For decades now, online gamers have figured out an easy way to win: just flood the opponent with Internet traffic, slowing their network connection. This is called a DoS, which stands for “Denial of Service”. DoSing game competitors is often a teenager’s first foray into hacking.
A variant of this is when you hack a bunch of other machines on the Internet, then command them to flood your target. (The hacked machines are often called a "botnet", a network of robot computers.) This is called DDoS, or "Distributed DoS". At this point, it gets quite serious, as instead of competitive gamers, hackers can take down entire businesses. Extortion scams, where hackers DDoS websites and then demand payment to stop, are a common way hackers earn money.
    Another form of DDoS is “amplification”. Sometimes when you send a packet to a machine on the Internet it’ll respond with a much larger response, either a very large packet or many packets. The hacker can then send a packet to many of these sites, “spoofing” or forging the IP address of the victim. This causes all those sites to then flood the victim with traffic. Thus, with a small amount of outbound traffic, the hacker can flood the inbound traffic of the victim.
This is one of those things that has worked for 20 years, because it's so obvious teenagers can do it, yet there is no obvious solution. President Trump's executive order on cybersecurity specifically demanded that his government come up with a report on how to address this, but it's unlikely that they'll come up with any useful strategy.

    More: Wikipedia on DDoS, Wikipedia on Spoofing

    Conclusion

    Tweet me (@ErrataRob) your obvious hacks, so I can add them to the list.

    Wanted: Automation Systems Administrator

    Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-automation-systems-administrator/

Are you an Automation Systems Administrator who is looking for a challenging and fast-paced working environment? Want to join our dynamic team and help Backblaze grow to new heights? Our Operations team is a distributed and collaborative group of individual contributors. We work closely together to build and maintain our home grown cloud storage farm, carefully controlling costs by utilizing open source and various brands of technology, as well as designing our own cloud storage servers. Members of Operations participate in the prioritization and decision making process, and make a difference every day. The environment is challenging, but we balance the challenges with rewards, and we are looking for clever and innovative people to join us.

    Responsibilities:

    • Develop and deploy automated provisioning & updating of systems
    • Lead projects across a range of IT disciplines
    • Understand environment thoroughly enough to administer/debug any system
    • Participate in the 24×7 on-call rotation and respond to alerts as needed

    Requirements:

    • Expert knowledge of automated provisioning
    • Expert knowledge of Linux administration (Debian preferred)
    • Scripting skills
    • Experience in automation/configuration management
    • Position based in the San Mateo, California Corporate Office

    Required for all Backblaze Employees

    • Good attitude and willingness to do whatever it takes to get the job done.
    • Desire to learn and adapt to rapidly changing technologies and work environment.
    • Relentless attention to detail.
    • Excellent communication and problem solving skills.
    • Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our no policy vacation policy.

    Company Description:
    Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable low cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/gb/mo. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

    We’ve managed to nurture a team oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

    We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.

    Some Backblaze Perks:

    • Competitive healthcare plans
    • Competitive compensation and 401k
    • All employees receive Option grants
    • Unlimited vacation days
    • Strong coffee
    • Fully stocked Micro kitchen
    • Catered breakfast and lunches
    • Awesome people who work on awesome projects
    • Childcare bonus
    • Normal work hours
    • Get to bring your pets into the office
    • San Mateo Office – located near Caltrain and Highways 101 & 280.

    If this sounds like you — follow these steps:

    1. Send an email to [email protected] with the position in the subject line.
    2. Include your resume.
    3. Tell us a bit about your experience and why you’re excited to work with Backblaze.

    The post Wanted: Automation Systems Administrator appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Wanted: Site Reliability Engineer

    Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-site-reliability-engineer/

Are you a Site Reliability Engineer who is looking for a challenging and fast-paced working environment? Want to join our dynamic team and help Backblaze grow to new heights? Our Operations team is a distributed and collaborative group of individual contributors. We work closely together to build and maintain our home grown cloud storage farm, carefully controlling costs by utilizing open source and various brands of technology, as well as designing our own cloud storage servers. Members of Operations participate in the prioritization and decision making process, and make a difference every day. The environment is challenging, but we balance the challenges with rewards, and we are looking for clever and innovative people to join us.

    Responsibilities:

    • Lead projects across a range of IT disciplines
    • Understand environment thoroughly enough to administer/debug any system
    • Collaborate on automated provisioning & updating of systems
    • Collaborate on network administration and security
    • Collaborate on database administration
    • Participate in the 24×7 on-call rotation and respond to alerts
      as needed

    Requirements:

    • Expert knowledge of Linux administration (Debian preferred)
    • Scripting skills
    • Experience in automation/configuration management (Ansible preferred)
    • Position based in the San Mateo, California Corporate Office

    Required for all Backblaze Employees

    • Good attitude and willingness to do whatever it takes to get the job done.
    • Desire to learn and adapt to rapidly changing technologies and work environment.
    • Relentless attention to detail.
    • Excellent communication and problem solving skills.
    • Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our no policy vacation policy.

    Company Description:
    Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable low cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/gb/mo. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

    We’ve managed to nurture a team oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

    We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.

    Some Backblaze Perks:

    • Competitive healthcare plans
    • Competitive compensation and 401k
    • All employees receive Option grants
    • Unlimited vacation days
    • Strong coffee
    • Fully stocked Micro kitchen
    • Catered breakfast and lunches
    • Awesome people who work on awesome projects
    • Childcare bonus
    • Normal work hours
    • Get to bring your pets into the office
    • San Mateo Office – located near Caltrain and Highways 101 & 280.

    If this sounds like you — follow these steps:

    1. Send an email to [email protected] with the position in the subject line.
    2. Include your resume.
    3. Tell us a bit about your experience and why you’re excited to work with Backblaze.

    The post Wanted: Site Reliability Engineer appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Wanted: Network Systems Administrator

    Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-network-systems-administrator/

Are you a Network Systems Administrator who is looking for a challenging and fast-paced working environment? Want to join our dynamic team and help Backblaze grow to new heights? Our Operations team is a distributed and collaborative group of individual contributors. We work closely together to build and maintain our home grown cloud storage farm, carefully controlling costs by utilizing open source and various brands of technology, as well as designing our own cloud storage servers. Members of Operations participate in the prioritization and decision making process, and make a difference every day. The environment is challenging, but we balance the challenges with rewards, and we are looking for clever and innovative people to join us.

    Responsibilities:

    • Own the network administration and security
    • Lead projects across a range of IT disciplines
    • Understand environment thoroughly enough to administer/debug any system
    • Participate in the 24×7 on-call rotation and respond to alerts as needed

    Requirements:

    • Expert knowledge of network administration and security
    • Expert knowledge of Linux administration (Debian preferred)
    • Scripting skills
    • Position based in the San Mateo, California Corporate Office

    Required for all Backblaze Employees

    • Good attitude and willingness to do whatever it takes to get the job done.
    • Desire to learn and adapt to rapidly changing technologies and work environment.
    • Relentless attention to detail.
    • Excellent communication and problem solving skills.
    • Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our no policy vacation policy.

    Company Description:
    Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable low cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/gb/mo. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

    We’ve managed to nurture a team oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

    We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.

    Some Backblaze Perks:

    • Competitive healthcare plans
    • Competitive compensation and 401k
    • All employees receive Option grants
    • Unlimited vacation days
    • Strong coffee
    • Fully stocked Micro kitchen
    • Catered breakfast and lunches
    • Awesome people who work on awesome projects
    • Childcare bonus
    • Normal work hours
    • Get to bring your pets into the office
    • San Mateo Office – located near Caltrain and Highways 101 & 280.

    If this sounds like you — follow these steps:

    1. Send an email to [email protected] with the position in the subject line.
    2. Include your resume.
    3. Tell us a bit about your experience and why you’re excited to work with Backblaze.

    The post Wanted: Network Systems Administrator appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Wanted: Database Systems Administrator

    Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-database-systems-administrator/

Are you a Database Systems Administrator who is looking for a challenging and fast-paced working environment? Want to join our dynamic team and help Backblaze grow to new heights? Our Operations team is a distributed and collaborative group of individual contributors. We work closely together to build and maintain our home grown cloud storage farm, carefully controlling costs by utilizing open source and various brands of technology, as well as designing our own cloud storage servers. Members of Operations participate in the prioritization and decision making process, and make a difference every day. The environment is challenging, but we balance the challenges with rewards, and we are looking for clever and innovative people to join us.

    Responsibilities:

    • Own the administration of Cassandra and MySQL
    • Lead projects across a range of IT disciplines
    • Understand environment thoroughly enough to administer/debug the system
    • Participate in the 24×7 on-call rotation and respond to alerts as needed

    Requirements:

    • Expert knowledge of Cassandra & MySQL
    • Expert knowledge of Linux administration (Debian preferred)
    • Scripting skills
    • Experience in automation/configuration management
    • Position is based in the San Mateo, California corporate office

    Required for all Backblaze Employees

    • Good attitude and willingness to do whatever it takes to get the job done.
    • Desire to learn and adapt to rapidly changing technologies and work environment.
    • Relentless attention to detail.
    • Excellent communication and problem solving skills.
    • Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our no policy vacation policy.

    Company Description:
    Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable low cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/gb/mo. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

    We’ve managed to nurture a team oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

    We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.

    Some Backblaze Perks:

    • Competitive healthcare plans
    • Competitive compensation and 401k
    • All employees receive Option grants
    • Unlimited vacation days
    • Strong coffee
    • Fully stocked Micro kitchen
    • Catered breakfast and lunches
    • Awesome people who work on awesome projects
    • Childcare bonus
    • Normal work hours
    • Get to bring your pets into the office
    • San Mateo Office – located near Caltrain and Highways 101 & 280.

    If this sounds like you — follow these steps:

    1. Send an email to [email protected] with the position in the subject line.
    2. Include your resume.
    3. Tell us a bit about your experience and why you’re excited to work with Backblaze.

    The post Wanted: Database Systems Administrator appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

    Prepare for the OWASP Top 10 Web Application Vulnerabilities Using AWS WAF and Our New White Paper

    Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/prepare-for-the-owasp-top-10-web-application-vulnerabilities-using-aws-waf-and-our-new-white-paper/

    Are you aware of the Open Web Application Security Project (OWASP) and the work that they do to improve the security of web applications? Among many other things, they publish a list of the 10 most critical application security flaws, known as the OWASP Top 10. The release candidate for the 2017 version contains a consensus view of common vulnerabilities often found in web sites and web applications.

    AWS WAF, as I described in my blog post, New – AWS WAF, helps to protect your application from application-layer attacks such as SQL injection and cross-site scripting. You can create custom rules to define the types of traffic that are accepted or rejected.

    Our new white paper, Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities, shows you how to put AWS WAF to use. Going far beyond a simple recommendation to “use WAF,” it includes detailed, concrete mitigation strategies and implementation details for the most important items in the OWASP Top 10 (formally known as A1 through A10):

    Download Today
The white paper provides background and context for each vulnerability, and then shows you how to create WAF rules to identify and block them. It also provides some defense-in-depth recommendations, including a very cool suggestion to use Lambda@Edge to prevalidate the parameters supplied to HTTP requests.

    The white paper links to a companion AWS CloudFormation template that creates a Web ACL, along with the recommended condition types and rules. You can use this template as a starting point for your own work, adding more condition types and rules as desired.

    AWSTemplateFormatVersion: '2010-09-09'
    Description: AWS WAF Basic OWASP Example Rule Set
    
    ## ::PARAMETERS::
    ## Template parameters to be configured by user
    Parameters:
      stackPrefix:
        Type: String
    Description: The prefix to use when naming resources in this stack. Normally we would use the stack name, but since this template can be used as a resource in other stacks we want to keep the naming consistent. No symbols allowed.
        ConstraintDescription: Alphanumeric characters only, maximum 10 characters
        AllowedPattern: ^[a-zA-z0-9]+$
        MaxLength: 10
        Default: generic
      stackScope:
        Type: String
    Description: You can deploy this stack at a regional level, for regional WAF targets like Application Load Balancers, or for global targets, such as Amazon CloudFront distributions.
        AllowedValues:
          - Global
          - Regional
        Default: Regional
    ...
    

    Attend our Webinar
    If you would like to learn more about the topics discussed in this new white paper, please plan to attend our upcoming webinar, Secure Your Applications with AWS Web Application Firewall (WAF) and AWS Shield. On July 12, 2017, my colleagues Jeffrey Lyon and Sundar Jayashekar will show you how to secure your web applications and how to defend against the most common Layer 7 attacks.

    Jeff;

    Synchronizing Amazon S3 Buckets Using AWS Step Functions

    Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/synchronizing-amazon-s3-buckets-using-aws-step-functions/

    Constantin Gonzalez is a Principal Solutions Architect at AWS

    In my free time, I run a small blog that uses Amazon S3 to host static content and Amazon CloudFront to distribute it world-wide. I use a home-grown, static website generator to create and upload my blog content onto S3.

    My blog uses two S3 buckets: one for staging and testing, and one for production. As a website owner, I want to update the production bucket with all changes from the staging bucket in a reliable and efficient way, without having to create and populate a new bucket from scratch. Therefore, to synchronize files between these two buckets, I use AWS Lambda and AWS Step Functions.

    In this post, I show how you can use Step Functions to build a scalable synchronization engine for S3 buckets and learn some common patterns for designing Step Functions state machines while you do so.

    Step Functions overview

    Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly.

    While this particular example focuses on synchronizing objects between two S3 buckets, it can be generalized to any other use case that involves coordinated processing of any number of objects in S3 buckets, or other, similar data processing patterns.

    Bucket replication options

    Before I dive into the details on how this particular example works, take a look at some alternatives for copying or replicating data between two Amazon S3 buckets:

    • The AWS CLI provides customers with a powerful aws s3 sync command that can synchronize the contents of one bucket with another.
    • S3DistCP is a powerful tool for users of Amazon EMR that can efficiently load, save, or copy large amounts of data between S3 buckets and HDFS.
    • The S3 cross-region replication functionality enables automatic, asynchronous copying of objects across buckets in different AWS regions.

    In this use case, you are looking for a slightly different bucket synchronization solution that:

    • Works within the same region
    • Is more scalable than a CLI approach running on a single machine
    • Doesn’t require managing any servers
    • Uses a more finely grained cost model than the hourly based Amazon EMR approach

    You need a scalable, serverless, and customizable bucket synchronization utility.

    Solution architecture

    Your solution needs to do three things:

    1. Copy all objects from a source bucket into a destination bucket, but leave out objects that are already present, for efficiency.
    2. Delete all "orphaned" objects from the destination bucket that aren’t present on the source bucket, because you don’t want obsolete objects lying around.
    3. Keep track of all objects for #1 and #2, regardless of how many objects there are.

    In the beginning, you read in the source and destination buckets as parameters and perform basic parameter validation. Then, you operate two separate, independent loops, one for copying missing objects and one for deleting obsolete objects. Each loop is a sequence of Step Functions states that read in chunks of S3 object lists and use the continuation token to decide in a choice state whether to continue the loop or not.

    This solution is based on the following architecture that uses Step Functions, Lambda, and two S3 buckets:

    As you can see, this setup involves no servers, just two main building blocks:

    • Step Functions manages the overall flow of synchronizing the objects from the source bucket with the destination bucket.
    • A set of Lambda functions carry out the individual steps necessary to perform the work, such as validating input, getting lists of objects from source and destination buckets, copying or deleting objects in batches, and so on.

    To understand the synchronization flow in more detail, look at the Step Functions state machine diagram for this example.

    Walkthrough

    Here’s a detailed discussion of how this works.

    To follow along, use the code in the sync-buckets-state-machine GitHub repo. The code comes with a ready-to-run deployment script in Python that takes care of all the IAM roles, policies, Lambda functions, and of course the Step Functions state machine deployment using AWS CloudFormation, as well as instructions on how to use it.

    Fine print: Use at your own risk

    Before I start, here are some disclaimers:

    • Educational purposes only.

      The following example and code are intended for educational purposes only. Make sure that you customize, test, and review it on your own before using any of this in production.

    • S3 object deletion.

      In particular, using the code included below may delete objects on S3 in order to perform synchronization. Make sure that you have backups of your data. In particular, consider using the Amazon S3 Versioning feature to protect yourself against unintended data modification or deletion.

    Step Functions execution starts with an initial set of parameters that contain the source and destination bucket names in JSON:

    {
        "source":       "my-source-bucket-name",
        "destination":  "my-destination-bucket-name"
    }

    Armed with this data, Step Functions execution proceeds as follows.

    Step 1: Detect the bucket region

    First, you need to know the regions where your buckets reside. In this case, take advantage of the Step Functions Parallel state. This allows you to use a Lambda function get_bucket_location.py inside two different, parallel branches of task states:

    • FindRegionForSourceBucket
    • FindRegionForDestinationBucket

    Each task state receives one bucket name as an input parameter, then detects the region corresponding to "their" bucket. The output of these functions is collected in a result array containing one element per parallel function.
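As a rough idea of what a function like get_bucket_location.py does, here is a minimal sketch; the event field names are assumptions for illustration, and the actual handler in the repository may differ:

```python
# get_bucket_location.py -- sketch of a Lambda handler that looks up a bucket's region.
# The event shape ({"bucket": "..."}) is an assumption for illustration.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    bucket = event["bucket"]
    # get_bucket_location returns None for us-east-1, so normalize it.
    location = s3.get_bucket_location(Bucket=bucket)["LocationConstraint"]
    return {"bucket": bucket, "region": location or "us-east-1"}
```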

    Step 2: Combine the parallel states

    The output of a parallel state is a list with all the individual branches’ outputs. To combine them into a single structure, use a Lambda function called combine_dicts.py in its own CombineRegionOutputs task state. The function combines the two outputs from step 1 into a single JSON dict that provides you with the necessary region information for each bucket.
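A function like combine_dicts.py can be as small as this sketch (again, the exact event shape is an assumption; the Parallel state hands the handler one output per branch):

```python
# combine_dicts.py -- sketch: merge the outputs of the parallel branches into one dict.
def handler(event, context):
    combined = {}
    for branch_output in event:   # one element per parallel branch
        combined.update(branch_output)
    return combined
```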

    Step 3: Validate the input

    In this walkthrough, you only support buckets that reside in the same region, so you need to decide if the input is valid or if the user has given you two buckets in different regions. To find out, use a Lambda function called validate_input.py in the ValidateInput task state that tests if the two regions from the previous step are equal. The output is a Boolean.

    Step 4: Branch the workflow

    Use another type of Step Functions state, a Choice state, which branches into a Failure state if the comparison in step 3 yields false, or proceeds with the remaining steps if the comparison was successful.

    Step 5: Execute in parallel

    The actual work is happening in another Parallel state. Both branches of this state are very similar to each other and they re-use some of the Lambda function code.

    Each parallel branch implements a looping pattern across the following steps:

    1. Use a Pass state to inject either the string value "source" (InjectSourceBucket) or "destination" (InjectDestinationBucket) into the listBucket attribute of the state document.

      The next step uses either the source or the destination bucket, depending on the branch, while executing the same, generic Lambda function. You don’t need two Lambda functions that differ only slightly. This step illustrates how to use Pass states as a way of injecting constant parameters into your state machine and as a way of controlling step behavior while re-using common step execution code.

    2. The next step UpdateSourceKeyList/UpdateDestinationKeyList lists objects in the given bucket.

      Remember that the previous step injected either "source" or "destination" into the state document’s listBucket attribute. This step uses the same list_bucket.py Lambda function to list objects in an S3 bucket. The listBucket attribute of its input decides which bucket to list. In the left branch of the main parallel state, use the list of source objects to work through copying missing objects. The right branch uses the list of destination objects, to check if they have a corresponding object in the source bucket and eliminate any orphaned objects. Orphans don’t have a source object of the same S3 key.

    3. This step performs the actual work. In the left branch, the CopySourceKeys step uses the copy_keys.py Lambda function to go through the list of source objects provided by the previous step, then copies any missing object into the destination bucket. Its sister step in the other branch, DeleteOrphanedKeys, uses its destination bucket key list to test whether each object from the destination bucket has a corresponding source object, then deletes any orphaned objects.

    4. The S3 ListObjects API action is designed to be scalable across many objects in a bucket. Therefore, it returns object lists in chunks of configurable size, along with a continuation token. If the API result has a continuation token, it means that there are more objects in this list. You can work from token to token to continue getting object list chunks, until you get no more continuation tokens.

    By breaking down large amounts of work into chunks, you can make sure each chunk is completed within the timeframe allocated for the Lambda function, and within the maximum input/output data size for a Step Functions state.

    This approach comes with a slight tradeoff: the more objects you process at one time in a given chunk, the faster you are done. There’s less overhead for managing individual chunks. On the other hand, if you process too many objects within the same chunk, you risk going over time and space limits of the processing Lambda function or the Step Functions state so the work cannot be completed.

    In this particular case, use a Lambda function that lists as many objects from the S3 bucket as can be stored in the input/output state data, which is currently limited to 32,768 bytes, assuming (based on some experimentation) that the execution of the COPY/DELETE requests in the processing states can always complete in time.

    A more sophisticated approach would use the Step Functions retry/catch state attributes to detect when a time limit is hit and adjust the list size accordingly.

    Step 6: Test for completion

    Because the presence of a continuation token in the S3 ListObjects output signals that you are not done processing all objects yet, use a Choice state to test for its presence. If a continuation token exists, the workflow branches back into the UpdateSourceKeyList (or UpdateDestinationKeyList) step, which uses the token to fetch the next chunk of objects. If there is no token, you’re done, and the state machine branches into the FinishCopyBranch/FinishDeleteBranch state.
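    The loop-test Choice state could look roughly like this (again shown as a Python dictionary; the hasMoreKeys flag is a hypothetical attribute written by the list step, not necessarily what the repo uses):

        # Rough sketch of the loop test in the copy branch.
        copied_all_objects_choice = {
            "Type": "Choice",
            "Choices": [
                # A continuation token remains: loop back and fetch the next chunk.
                {"Variable": "$.hasMoreKeys", "BooleanEquals": True, "Next": "UpdateSourceKeyList"}
            ],
            # No continuation token left: this branch is done.
            "Default": "FinishCopyBranch"
        }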

    By using Choice states like this, you can create loops just like in the old days, when there were no for statements and loops were built from branches in assembly code!

    Step 7: Success!

    Finally, you’re done, and can step into your final Success state.

    Lessons learned

    When implementing this use case with Step Functions and Lambda, I learned the following things:

    • Sometimes, it is necessary to manipulate the JSON state of a Step Functions state machine with just a few lines of code that hardly seem to warrant their own Lambda function. This is OK, and the cost is actually pretty low given Lambda’s 100-millisecond billing granularity. The upside is that small functions like these make the data more palatable for the following steps or facilitate Choice states. An example here is the combine_dicts.py function (a sketch appears after this list).
    • Pass states are useful beyond debugging and tracing: they can inject arbitrary values into your state JSON and guide generic Lambda functions into doing specific things.
    • Choice states are your friend because you can build while-loops with them. This allows you to reliably grind through large amounts of data with the patience of an engine that currently supports execution times of up to 1 year.

      Currently, there is an execution history limit of 25,000 events. Each Lambda task state execution takes up 5 events, while each choice state takes 2 events for a total of 7 events per loop. This means you can loop about 3500 times with this state machine. For even more scalability, you can split up work across multiple Step Functions executions through object key sharding or similar approaches.

    • It’s not necessary to spend a lot of time coding exception handling within your Lambda functions. You can delegate all exception handling to Step Functions and instead simplify your functions as much as possible.

    • Step Functions are great replacements for shell scripts. This could have been a shell script, but then I would have had to worry about where to execute it reliably, how to scale it if it went beyond a few thousand objects, etc. Think of Step Functions and Lambda as tools for scripting at a cloud level, beyond the boundaries of servers or containers. "Serverless" here also means "boundary-less".
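    As a rough idea of how small such a helper can be, here is a hypothetical sketch of a combine_dicts.py-style function that merges the list of outputs produced by a Parallel state into a single state document (the real function in the repo may differ):

        # Hypothetical sketch of a tiny state-manipulation Lambda such as combine_dicts.py.
        def combine_dicts(event, context):
            """Merge a Parallel state's list of branch outputs into one dict."""
            combined = {}
            for branch_output in event:  # a Parallel state emits one dict per branch
                combined.update(branch_output)
            return combined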

    Summary

    This approach breaks down any number of S3 objects into chunks and uses Step Functions to control the logic that works through them in a scalable, serverless, and fully managed way.

    To take a look at the code or tweak it for your own needs, use the code in the sync-buckets-state-machine GitHub repo.

    To see more examples, please visit the Step Functions Getting Started page.

    Enjoy!

    Protect Web Sites & Services Using Rate-Based Rules for AWS WAF

    Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/protect-web-sites-services-using-rate-based-rules-for-aws-waf/

    AWS WAF (Web Application Firewall) helps to protect your application from many different types of application-layer attacks that involve requests that are malicious or malformed. As I showed you when I first wrote about this service (New – AWS WAF), you can define rules that match cross-site scripting, IP address, SQL injection, size, or content constraints:

    When incoming requests match rules, actions are invoked. Actions can either allow, block, or simply count matches.

    The existing rule model is powerful and gives you the ability to detect and respond to many different types of attacks. It does not, however, allow you to respond to attacks that simply consist of a large number of otherwise valid requests from a particular IP address. These requests might be part of a web-layer DDoS attack, a brute-force login attempt, or even a partner integration gone awry.

    New Rate-Based Rules
    Today we are adding Rate-based Rules to WAF, giving you control of when IP addresses are added to and removed from a blacklist, along with the flexibility to handle exceptions and special cases:

    Blacklisting IP Addresses – You can blacklist IP addresses that make requests at a rate that exceeds a configured threshold rate.

    IP Address Tracking – You can see which IP addresses are currently blacklisted.

    IP Address Removal – IP addresses that have been blacklisted are automatically removed when they no longer make requests at a rate above the configured threshold.

    IP Address Exemption – You can exempt certain IP addresses from blacklisting by using an IP address whitelist inside of a rate-based rule. For example, you might want to allow trusted partners to access your site at a higher rate.

    Monitoring & Alarming – You can watch and alarm on CloudWatch metrics that are published for each rule.

    You can combine new Rate-based Rules with WAF Conditions to implement sophisticated rate-limiting strategies. For example, you could use a Rate-based Rule and a WAF Condition that matches your login pages. This would allow you to impose a modest threshold on your login pages (to avoid brute-force password attacks) and allow a more generous one on your marketing or system status pages.

    Thresholds are defined in terms of the number of incoming requests from a single IP address within a 5 minute period. Once this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold.

    Using Rate-Based Rules
    Here’s how you would define a Rate-based Rule that protects the /login portion of your site. Start by defining a WAF condition that matches the desired string in the URI of the page:

    Then use this condition to define a Rate-based Rule (the rate limit is expressed in terms of requests within a 5 minute interval, but the blacklisting goes into effect as soon as the limit is breached):

    With the condition and the rule in place, create a Web ACL (ProtectLoginACL) to bring it all together and to attach it to the AWS resource (a CloudFront distribution in this case):

    Then attach the rule (ProtectLogin) to the Web ACL:

    The resource is now protected in accordance with the rule and the Web ACL. You can monitor the associated CloudWatch metrics (ProtectLogin and ProtectLoginACL in this case). You could even create CloudWatch Alarms and use them to fire Lambda functions when a protection threshold is breached. The code could examine the offending IP address and make a complex, business-driven decision, perhaps adding a whitelisting rule that gives an extra-generous allowance to a trusted partner or to a user with a special payment plan.
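    If you prefer to script this setup instead of using the console, the same pieces can be approximated with the AWS WAF API. The boto3 sketch below assumes that a ByteMatchSet matching /login and the Web ACL already exist; the names, IDs, and rate limit are placeholders rather than values from this walkthrough:

        # Rough sketch: create a rate-based rule and activate it in an existing Web ACL.
        import boto3

        waf = boto3.client('waf')  # global WAF, as used with CloudFront distributions

        def change_token():
            return waf.get_change_token()['ChangeToken']

        # Create the rate-based rule; RateLimit counts requests per 5-minute period per IP.
        rule = waf.create_rate_based_rule(
            Name='ProtectLogin',
            MetricName='ProtectLogin',
            RateKey='IP',
            RateLimit=2000,
            ChangeToken=change_token(),
        )['Rule']

        # Scope the rule to the login pages via the pre-existing ByteMatchSet.
        waf.update_rate_based_rule(
            RuleId=rule['RuleId'],
            ChangeToken=change_token(),
            Updates=[{
                'Action': 'INSERT',
                'Predicate': {
                    'Negated': False,
                    'Type': 'ByteMatch',
                    'DataId': 'LOGIN_BYTE_MATCH_SET_ID',  # placeholder
                },
            }],
            RateLimit=2000,
        )

        # Activate the rule in the Web ACL, blocking requests that exceed the limit.
        waf.update_web_acl(
            WebACLId='PROTECT_LOGIN_ACL_ID',  # placeholder
            ChangeToken=change_token(),
            Updates=[{
                'Action': 'INSERT',
                'ActivatedRule': {
                    'Priority': 1,
                    'RuleId': rule['RuleId'],
                    'Action': {'Type': 'BLOCK'},
                    'Type': 'RATE_BASED',
                },
            }],
        )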

    Available Now
    The new Rate-based Rules are available now and you can start using them today! Rate-based rules are priced the same as Regular rules; see the WAF Pricing page for more info.

    Jeff;