Tag Archives: Jobs

Wanted: Senior Director of Product Management

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/wanted-senior-director-of-product-management/

We're Hiring -- Senior Director of Product Management

“You know what’s worth every. single. penny? @backblaze #lifesaver”
Andrei P. (Backblaze customer)

Do you want to lead product management for a set of products that have helped hundreds of thousands of customers in over 150 countries with their storage and backup needs; products people love and are often grateful for? Let’s chat.

Backblaze started over a decade ago and built a cloud storage platform that is one-fourth the price of Amazon, Google, or Microsoft by developing custom Storage Pod hardware and purpose-built Vault software. On top of it, we’ve built out three business lines:

  • Personal Backup — consumer cloud backup for Mac/PCs
  • Business Backup — business cloud backup for Macs/PCs
  • B2 Cloud Storage — high-performance cloud storage for backups, hosting, development

All three generate millions of dollars in revenue and are loved for their ease-of-use and affordability.

And that’s where you come in. We’re hiring a Senior Director of Product Management, reporting to me (the CEO), to lead the product management function for Backblaze. You would be the first product manager and get to establish the approach as well as drive the roadmap at a fast-growing, cash-flow-positive company that has built a cloud storage platform on the scale of Dropbox and Facebook with a mere $3M of funding.

So if you:

Care deeply about product experience: You learn about customer needs, empathize with them, and aim to build a product experience that they’ll love. You believe products should be easy-to-use, even if they’re being used by people who are extremely technical. You always start with “what’s the right customer experience” and then work backwards from there.

Use gut and data: You research the market and dig into data from surveys, sales, and support; but you also have good judgment to interpret the data and make decisions with imperfect information.

Work with everyone: You know that this role is one of the most cross-functional in the company, needing input from across the organization on product needs, planning with engineering on what’s possible when, and planning and communication to the organization around roadmaps and launches. You care about culture and values and like to be a steward for values such as transparency, collaboration, inclusiveness, accountability and experimentation.

Think big picture: You look at where the market is headed and come up with strategy and roadmaps to deliver product lines to support the future.

And sweat the details: You ensure the individual features are clearly defined and work with design and engineering to get them built.

Build enough process: You build clear processes/guidelines/documents that are necessary to communicate and scale an organization, but know that a 100-page requirements document will never get read.

Are technically proficient: You don’t have to have an engineering background, but among your products you will be managing an API-enabled storage platform used by developers, and that can’t scare you.

Embrace life at a growth-stage company: You are comfortable with an environment where product management hasn’t been a defined role. You embrace the challenge of defining and establishing the role within an existing company.

Have launched multiple successful products: preferably on multiple platforms (Windows, macOS, Web, iOS, Android)

Are familiar with storage/backup: You don’t have to know about storage or backup. (Most of us didn’t when we started.) But it wouldn’t hurt.

If this sounds like you, I’d love to chat with you about taking ownership of our product management function to create the next evolution of our products.

The post Wanted: Senior Director of Product Management appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Wanted: Senior Director of Product Marketing

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/wanted-senior-director-of-product-marketing/

We're hiring

That tweet, along with hundreds of other positive comments, came in the day we announced a price increase. We’re proud (and humbled) to have such strong relationships with our customers that they root for our success.

We’re looking for a Product Marketing leader who understands and loves focusing on the customer.

About Backblaze

Backblaze provides cloud storage that’s astonishingly easy to use and low cost. Our customers use our services so they can pursue dreams like curing cancer (genome mapping is data intensive), archiving the work of some of the greatest artists on the planet (learn more about how Austin City Limits uses B2), or simply sleeping well at night (anyone who’s spilled a cup of coffee on a laptop knows the relief that comes with complete, secure backups). We are entrusted with over 750 petabytes of data from customers in more than 150 countries. From a storage standpoint, our platform is on the scale of Dropbox and Facebook. We exited 2018 with a strong growth rate while remaining cash flow positive (read: our customer acquisition efforts are profitable). We’ve done all this with just $3M of funding.

How? Our team is maniacally focused on understanding customer needs and then providing solutions. More than half of our business comes from self-service customers — they get started without needing to talk to us.

How Marketing at Backblaze Works

  • We think of Product Marketing as verticalized business owners. Our product marketers are expected to be athletes who can define and marshal our resources toward clear objectives. We are looking for someone who will lead the team — both as a manager and a contributor.
  • Acquisition via content marketing. We’ll have ~3M visitors to the blog this year that come to read compelling, if a little wonky, content on how things actually work. Our secret is writing about customer problems and solutions.
  • Conversion of website traffic. That means working cross-functionally to continually remove friction from the user experience — website flows and content that directly helps customers solve problems.
  • Sales enablement for a growing team. Increasingly, we have customers who want or need to engage with Sales. That is great, and Marketing needs to provide our Sales team with the tools to succeed.
  • Collaboration with technology partners. B2 is integrated into leading hardware and software solutions. Because of our brand reputation and marketing reach, many partners integrate B2 because they want to run joint campaigns to promote our solutions. Once an integration is validated, Product Marketing owns the relationship with the partner.
  • Provide insights and feedback. We are a collaborative organization — our product marketers are key voices in representing our customers.

The Role: Senior Director of Product Marketing

Reporting directly to the VP of Marketing, you will lead a growing team of product marketers.

The Right Fit for our Sr. Director of Product Marketing

  • Loves being a marketer. Backblaze spans a variety of customer segments including Consumer, SMBs, and Developers. We’re looking for someone that enjoys serving multiple segments.
    • Capable of leading all of Product Marketing, not just one vertical.
  • Possesses the right amount of experience. 10+ years of product/solutions marketing within technical infrastructure, including at least 3 years in storage or cloud. Experience with eCommerce/self-service SaaS preferred.
    • Foundation in place to succeed from day one.
  • Demonstrates passion for talent development. You’ll be leading a talented team and you must ensure that Backblaze remains a place that people enjoy working.
    • Has led and grown successful teams.
  • Obsesses over the story. Whether creating a new webpage or writing up a case study, we strive to create stories that engage. We are looking for someone that is a polished writer and talented editor.
    • Superior communication skills combined with the necessary technical proficiency to tell cloud storage stories.
  • Builds enough process. You build clear processes/guidelines/documents that are necessary to communicate and scale an organization, but know that if there’s a 100-page manual, we’re probably making things too hard.
    • Articulates a considered approach to trying something new and, when successful, scaling to a repeatable process for others.
  • Blends analysis with instinct. Marketers should own the data wherever possible. However, particularly in hypergrowth settings, there are times we just need to take a bet and be willing to fail.
    • A track record of driving quantified results.

Some of Our More Popular Perks

Backblaze offers an unlimited vacation policy, fully stocked kitchens, twice-weekly catered breakfast and lunch, superior coffee, and a generous skills training policy to continue your professional development. Our dog-friendly office in San Mateo is easily accessible from Caltrain, 280, and 101.

If this all sounds like you:

  • Send an email to jobscontact@backblaze.com with the position in the subject line.
  • Tell us a bit about your work history.
  • Include your resume.
  • Share an example of talent development, either from your past or something you found notable. Why did it stick out to you? How have you applied it to your work? (less than 500 words)

Backblaze is an Equal Opportunity Employer.

The post Wanted: Senior Director of Product Marketing appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Wanted: Senior Android Developer

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-senior-android-developer/

We're Hiring

You will lead the development of Backblaze’s Android application. Your responsibilities will include designing and implementing features as well as fixing bugs in a timely manner within project constraints. There are significant enhancements in the works, and you’ll be a part of making them happen.

Qualifications

  • 10+ years of overall experience in software programming, with a minimum of 6 years specializing in Android application development
  • At least 3 published applications on Google Play available for immediate download and review by our team
  • Experienced with Agile processes
  • Extensive experience delivering highly modular, model-driven applications
  • Extensive experience building applications using Java I/O, Collections, Algorithms, and other well-known application frameworks
  • Experienced developing rich GUIs for Android
  • Experienced with source code management techniques using Git
  • Solid understanding of algorithms, memory management, object oriented programming, MVC programming, and concurrent programming
  • Extensive experience detecting and correcting memory usage issues, as well as optimizing code for application performance
  • A solid understanding of operating system fundamentals such as processes, inter-process communication, multi-threading primitives, race conditions and deadlocks
  • Experienced with JSON, XML, and interfacing Android applications to server side APIs
  • Ability to function effectively and communicate with cross-functional teams (UX Design, Project/Product Management, Quality Assurance, and Marketing), and to manage continuously changing business needs
  • Desire to work in a startup environment that includes rapid release, quick and reliable decision making, and a high degree of autonomy

Must have a strong background in:

  • Computer Science
  • Multi-threaded programming
  • Building reliable, testable software

Benefits

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • New parent childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office — located near Caltrain and Highways 101 & 280

If this all sounds like you:

  • Send an email to jobscontact@backblaze.com with the position in the subject line.
  • Tell us a bit about your work history.
  • Include your resume.

Backblaze is an Equal Opportunity Employer.

The post Wanted: Senior Android Developer appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Wanted: Inside Sales Account Executive

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-inside-sales-account-executive/

Job Description
Backblaze is looking for an Inside Sales Account Executive to join our Backblaze for Business sales team. This is an important role that will help determine the growth path of the company. Finding exceptional individuals who love working in a dynamic and challenging environment is our highest priority. We are searching for people who have successfully met and exceeded sales goals and would prefer to hire someone who has worked at a B2B Software as a Service or Data Protection company.

The History of Backblaze from our CEO
In 2007, after a friend’s computer crash caused her some suffering, we realized that with every photo, video, song, and document going digital, everyone would eventually lose all of their information. Five of us quit our jobs to start a company with the goal of making it easy for people to back up their data.

Like many startups, for a while we worked out of a co-founder’s one-bedroom apartment. Unlike most startups, we made an explicit agreement not to raise funding during the first year. We would then touch base every six months and decide whether to raise or not. We wanted to focus on building the company and the product, not on pitching and slide decks. And critically, we wanted to build a culture that understood money comes from customers, not the magical VC giving tree. Over the course of 5 years we built a profitable, multi-million dollar revenue business — and only then did we raise a VC round.

10 Years Later We Are a Growing & Thriving Company with

  • A brand millions recognize for openness, ease-of-use, and affordability.
  • A computer backup service that stores over 700 petabytes of data and has recovered over 35 billion files for hundreds of thousands of paying customers — most of whom self-identify as the people who find and recommend technology products to their friends.
  • A growing, profitable and cash-flow positive company.
  • And last, but most definitely not least: a great sales team.

The Ideal Account Executive Candidate will

  • Have extensive B2B sales experience, with a primary focus on companies with <500 employees, supported by strong brand recognition and demand generation activities.
  • Give excellent product demonstrations that use the customer’s pain points and needs to create a uniquely customized value proposition.
  • Have a track record of managing inbound interest while generating new pipeline through prospecting and referrals.
  • Know how to sell on value instead of relying on discounting.

Responsibilities

  • Manage the entire SaaS sales cycle from qualification to successful trial completion.
  • Focus primarily on qualifying and closing inbound leads while also self-generating sales pipeline with accounts that match our Ideal Customer Profile.
  • Deliver a compelling and customer-centric product demonstration.
  • Keep opportunities updated with accurate next steps to improve monthly forecast accuracy.
  • Provide input to product and marketing teams from conversations with customers and prospects.
  • Manage contract terms & conditions negotiations during the sales process.
  • Regularly meet and exceed monthly quota targets after onboarding and ramp period.
  • Approach all deals with a customer-first “ready to help” mentality.
  • Work cross-functionally with different organizations such as Sales, Marketing, Customer Success, Support, Product Development, and Engineering.

Requirements

  • Articulate and execute a detailed plan for exceeding monthly sales targets.
  • 1-3 years of sales experience (in a closing role), preferably in SaaS platform software.
  • Ability to speak to executive-level decision-makers and ask intelligent questions about their business challenges and professional goals.
  • Track record of success by exceeding quota on a regular basis.
  • History of building net-new business through prospecting and referrals.
  • Great communicator with the ability to convey authenticity through active listening.
  • Give persuasive and engaging product demonstrations.
  • Highly organized and able to balance dozens of accounts.
  • Adaptability to thrive in times of ambiguity, with a willingness to raise your hand and ask for help when necessary.
  • Strong personal ethics with a competitive mindset. You want to earn a spot at the top of the leaderboard through hard work and continuous effort.
  • You have the desire and willingness to learn new job skills through independent study.
  • Knowledge of modern sales software including (but not limited to) Salesforce, Outreach, DiscoverOrg, Cirrus Insights, and UberConference.

This position is an experienced sales role with little to no travel required. You must be able to report to our San Mateo office daily.

Desirables

  • Understanding of modern sales process for SaaS.
  • Cloud backup solution landscape familiarity.
  • File backup terminology, use cases, and common pain points.
  • Creative and/or highly interpersonal background.
  • Achieving 90% of sales targets within 90 days of hire.

Benefits

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • New parent childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office — located near Caltrain and Highways 101 & 280

If this all sounds like you:

  1. Send an email to jobscontact@backblaze.com with the position in the subject line.
  2. Tell us a bit about your work history.
  3. Include your resume.

The post Wanted: Inside Sales Account Executive appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Wanted: Paralegal and Compliance Analyst

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-paralegal-and-compliance-analyst/

Want to work at a company that helps customers in 156 countries around the world protect the memories they hold dear — a company that stores over 500 petabytes of customers’ photos, music, documents, and work files in a purpose-built cloud storage system?

Here’s your chance. Backblaze is looking for a Paralegal and Compliance Analyst!

Company Description:
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low-cost cloud backup. Recently, we launched B2 — robust and reliable object storage at just $0.005/GB/month. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

We’ve managed to nurture a team oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple: grow sustainably and profitably.

Some Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • New parent childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office — located near Caltrain and Highways 101 & 280

Paralegal and Compliance Analyst Responsibilities:

  • Collaborate and develop relationships with internal business teams and third parties to draft a variety of corporate, contract and operational matters, obtaining approvals and signatures, and maintaining electronic and physical files
  • Serve as the primary representative for all privacy inquiries, audits and on-going efforts
  • Lead the continuous improvement of the privacy compliance program and risk assessment process, including maintenance of written policies and procedures
  • Work across organizational teams to ensure that privacy processes are fully understood and implemented, including remediation of instances of noncompliance
  • Draft legal documents for corporate governance including Board meeting minutes, corporate resolutions and consents, corporate entity management, and equity matters
  • Assist in gathering information and documents in response to legal cases, subpoenas, and administrative agency requests
  • Manage tools for documenting and tracking compliance with data privacy obligations
  • Coordinate annual review of privacy-related certifications
  • Assist with the management of external counsel, track invoices for compliance with policies and budget

Qualifications:

  • Minimum of 3–5 years of paralegal experience, in law firm and/or in-house technology company environments
  • Bachelor’s degree from a 4-year university or equivalent
  • Familiarity with regulations and certifications such as GDPR, SOC 2, HIPAA, PCI, etc.
  • Experience with data analytics software and tools such as SQL, Tableau, Excel, Google Sheets
  • Must be organized and have excellent written and oral communication skills
  • Must be a proactive, flexible self-starter and a good team player

If this all sounds like you:

  1. Send an email to jobscontact@backblaze.com with the position in the subject line.
  2. Tell us a bit about your work history.
  3. Include your resume.

The post Wanted: Paralegal and Compliance Analyst appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Help Wanted: Data Center Technician

Post Syndicated from Yev original https://www.backblaze.com/blog/help-wanted-data-center-technician/

Wanted

Want to work at a company that helps customers in 156 countries around the world protect the memories they hold dear — a company that stores over 500 petabytes of customers’ photos, music, documents, and work files in a purpose-built cloud storage system?

Here’s your chance. Backblaze is looking for a Data Center Technician!

Company Description:
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low-cost cloud backup. Recently, we launched B2 — robust and reliable object storage at just $0.005/GB/month. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

We’ve managed to nurture a team-oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple: grow sustainably and profitably.

Some Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked micro kitchen
  • Catered lunches
  • Awesome people who work on awesome projects
  • New parent childcare bonus
  • Normal work hours

Want to know what you’ll be doing?

  • Work as Backblaze’s physical presence in our Sacramento-area data center(s)
  • Help maintain physical infrastructure including racking equipment, replacing hard drives and other system components
  • Repair and troubleshoot defective equipment with minimal supervision
  • Support the data center’s 24×7 staff with installing new equipment, handling after-hours emergencies, and other tasks
  • Help manage onsite inventory of hard drives, cables, rails, and other spare parts
  • RMA defective components
  • Set up, test and activate new equipment via the Linux command line
  • Help train new datacenter technicians as needed
  • Help with projects to install new systems and services as time allows
  • Follow and improve data center best practices and documentation
  • Maintain a clean and well organized work environment
  • On-call responsibilities require being within an hour of SunGard’s Rancho Cordova/Roseville facility and making occasional 24×7 trips onsite to resolve issues that can’t be handled remotely
  • Work days may include Saturday and/or Sunday (e.g. working Tuesday – Saturday)

Requirements

  • Excellent communication, time management, problem solving, and organizational skills
  • Ability to learn quickly
  • Ability to lift/move 50-75 lbs and work down near the floor on a daily basis
  • Position based near Sacramento, California and may require periodic visits to the corporate office in San Mateo
  • May require travel to other data centers to provide coverage and/or to assist with new site set-up.

If this all sounds like you:

  1. Send an email to jobscontact@backblaze.com with the position in the subject line.
  2. Tell us a bit about your work history.
  3. Include your resume.

The post Help Wanted: Data Center Technician appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Wanted: DevOps Business Analyst

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-devops-business-analyst/

As a DevOps Business Analyst, you will apply your expertise in data analytics to inform decision making, help optimize business growth, and improve the efficiency of Backblaze’s data center operations. A successful candidate must be able to develop tools and reports that provide insight and highlight potential issues. The tools developed should be easy to use, and reports should be clear, visual, on demand, interactive, and with a high level of detail.

Responsibilities:

  • Produce forecasts and analysis to provide insight into capacity planning, resource utilization, operational efficiency, SLA compliance, and risk management.
  • Analyze operational costs normalized to business metrics like storage capacity ($/GB), power provided ($/kW), etc.
  • Present and explain statistical/forecasting analyses visually to technical and non-technical audiences.
  • Bring together Backblaze internal information and market data to help leaders understand operational related growth issues.
  • Demonstrate judgment and discretion when dealing with sensitive information.

Sample projects:

  • Forecast capital equipment needs for servers and storage based on sales forecast.
  • Model total cost of ownership (TCO) for data centers including direct costs like power and rent, as well as indirect costs such as taxes.
  • Forecast future storage requirements, data center power and network bandwidth by data center and region.
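The cost-normalization and TCO work described above can be illustrated with a short sketch. All figures and names below are invented placeholders for illustration, not Backblaze numbers:

```python
# Hypothetical sketch: normalize total monthly data center operating cost
# to a per-GB figure, the kind of $/GB metric mentioned above.
def cost_per_gb_month(monthly_costs, stored_gb):
    """Return total monthly cost (USD) divided by GB stored."""
    total = sum(monthly_costs.values())
    return total / stored_gb

# Invented direct and indirect monthly costs (USD).
datacenter_costs = {
    "power": 40_000,
    "rent": 25_000,
    "bandwidth": 15_000,
    "taxes_and_other": 10_000,
}
stored_gb = 50_000_000  # 50 PB expressed in (decimal) GB

print(f"${cost_per_gb_month(datacenter_costs, stored_gb):.5f}/GB/month")
# → $0.00180/GB/month
```

Trend-fitting the `stored_gb` input against a sales forecast is what turns this snapshot into the capacity-planning forecasts the role calls for.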

Minimum Qualifications:

  • BS or BA in a quantitative field (e.g., Finance, Economics, Math, Operations Research, Statistics, Data Science, Management Science, Social Sciences, etc.).
  • Experience with analytic tools (spreadsheets, etc.).
  • 4+ years experience in an analytical role, ideally in an environment requiring supply chain management (e.g. manufacturing, high growth data centers, etc.).
  • Experience presenting to leadership and working collaboratively with other cross-functional partners.

Preferred Qualifications:

  • Experience with SQL and data visualization software, such as Tableau.
  • Experience in forecasting methods and techniques such as trend analysis, longitudinal analysis, clustering, categorical analysis, business simulations, etc.
  • MBA or Masters degree in a quantitative field.

Interested in Joining Our Team?
If this sounds like you, follow these steps:

  1. Send an email to jobscontact@backblaze.com with the position in the subject line.
  2. Include your resume and cover letter.
  3. Tell us a bit about your experience.

Backblaze is an Equal Opportunity Employer.

The post Wanted: DevOps Business Analyst appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Help Wanted: Senior Staff Accountant

Post Syndicated from Yev original https://www.backblaze.com/blog/help-wanted-senior-staff-accountant/

Want to work at a company that helps customers in 156 countries around the world protect the memories they hold dear? A company that stores over 500 petabytes of customers’ photos, music, documents and work files in a purpose-built cloud storage system?

Well here’s your chance. Backblaze is looking for a Senior Staff Accountant!

Company Description:
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low-cost cloud backup. Recently, we launched B2 — robust and reliable object storage at just $0.005/GB/month. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

We’ve managed to nurture a team oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple — grow sustainably and profitably.

Some Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • New parent childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office — located near Caltrain and Highways 101 & 280

Senior Staff Accountant Responsibilities:

  • Accurately prepare complex accruals, journal entries and balance sheet reconciliations as part of the monthly, quarterly and annual close process.
  • Research and apply fundamental accounting theories and concepts under US GAAP.
  • Maintain fixed asset ledger, which includes interacting with equipment leasing companies, facilitating leasing documentation and interactions with various departments.
  • Conduct periodic physical inventory counts working with Operations.
  • Prepare schedules and documentation for external audits, internal control audits and various other regulatory audits.
  • Identify and implement process improvements to help reduce time to close, improve upon accuracy of underlying accounting records and enhance internal controls.
  • Perform other duties and special projects as assigned.

Qualifications:

  • Bachelor’s degree in Accounting or Finance; CPA highly preferred.
  • 5+ years relevant accounting experience.
  • Big 4 experience is a plus.
  • Knowledge of inventory and cycle counting preferred.
  • Prior experience with QuickBooks, Excel, and Word desired.
  • Positive work ethic, strong analytical and organizational skills with a high level of attention to detail.
  • Excellent interpersonal skills and ability to work effectively across functional areas in a collaborative environment.
  • Demonstrated ability to thrive in a dynamic and fast-paced environment.

If this all sounds like you:

  1. Send an email to jobscontact@backblaze.com with the position in the subject line.
  2. Tell us a bit about your work history.
  3. Include your resume.

The post Help Wanted: Senior Staff Accountant appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Amazon SageMaker Updates – Tokyo Region, CloudFormation, Chainer, and Greengrass ML

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/sagemaker-tokyo-summit-2018/

Today, at the AWS Summit in Tokyo we announced a number of updates and new features for Amazon SageMaker. Starting today, SageMaker is available in Asia Pacific (Tokyo)! SageMaker also now supports CloudFormation. A new machine learning framework, Chainer, is now available in the SageMaker Python SDK, in addition to MXNet and TensorFlow. Finally, support for running Chainer models on several devices was added to AWS Greengrass Machine Learning.

Amazon SageMaker Chainer Estimator


Chainer is a popular, flexible, and intuitive deep learning framework. Chainer networks work on a “Define-by-Run” scheme, where the network topology is defined dynamically via forward computation. This is in contrast to many other frameworks which work on a “Define-and-Run” scheme where the topology of the network is defined separately from the data. A lot of developers enjoy the Chainer scheme since it allows them to write their networks with native Python constructs and tools.
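
As a toy illustration of the contrast (this is plain Python, not Chainer itself, and the “layer” is a hypothetical stand-in for a real layer’s computation), a Define-by-Run topology can be decided by ordinary control flow during the forward pass:

```python
# A toy sketch (not Chainer itself) of the "Define-by-Run" idea: the
# network's topology emerges from ordinary Python control flow during
# the forward computation, rather than from a separate static graph.

def forward(x, depth):
    trace = []
    for i in range(depth):      # depth is chosen at run time
        x = x * 2 + 1           # hypothetical stand-in for a layer
        trace.append(f"layer_{i}")
    return x, trace

y, layers = forward(1, 3)
print(y, layers)   # -> 15 ['layer_0', 'layer_1', 'layer_2']
```

Change depth and the topology changes with it; a Define-and-Run framework would instead require the graph to be declared before any data flows through it.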

Luckily, using Chainer with SageMaker is just as easy as using a TensorFlow or MXNet estimator. In fact, it might even be a bit easier, since it’s likely you can take your existing scripts and use them to train on SageMaker with very few modifications. With TensorFlow or MXNet, users have to implement a train function with a particular signature. With Chainer, your scripts can be a little more portable, as you can simply read from a few environment variables like SM_MODEL_DIR, SM_NUM_GPUS, and others. We can wrap our existing script in an if __name__ == '__main__': guard and invoke it locally or on SageMaker.


import argparse
import os

if __name__ == '__main__':

    parser = argparse.ArgumentParser()

    # hyperparameters sent by the client are passed as command-line arguments to the script.
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--batch-size', type=int, default=64)
    parser.add_argument('--learning-rate', type=float, default=0.05)

    # Data, model, and output directories
    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
    parser.add_argument('--test', type=str, default=os.environ['SM_CHANNEL_TEST'])

    args, _ = parser.parse_known_args()

    # ... load from args.train and args.test, train a model, write model to args.model_dir.
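
To see the argument-and-environment plumbing in isolation, here is a minimal stand-alone sketch; the directory values are hypothetical stand-ins for what SageMaker would inject into the training container at run time:

```python
import argparse
import os

# Hypothetical values standing in for what SageMaker injects into the
# training container's environment.
os.environ.setdefault('SM_MODEL_DIR', '/opt/ml/model')
os.environ.setdefault('SM_CHANNEL_TRAIN', '/opt/ml/input/data/train')

parser = argparse.ArgumentParser()
# Hyperparameters arrive as ordinary command-line arguments...
parser.add_argument('--epochs', type=int, default=10)
# ...while data and model locations default to the injected variables.
parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])

# Explicit argv here so the sketch is deterministic when run directly.
args, _ = parser.parse_known_args(['--epochs', '2'])
print(args.epochs, args.model_dir)
```

Because everything flows through plain argparse defaults and environment variables, the same script runs unchanged on a laptop and inside a SageMaker training job.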

Then, we can run that script locally or use the SageMaker Python SDK to launch it on some GPU instances in SageMaker. The hyperparameters will get passed to the script as command-line arguments and the environment variables above will be auto-populated. When we call fit, the input channels we pass will be populated in the SM_CHANNEL_* environment variables.


from sagemaker.chainer.estimator import Chainer
# Create my estimator
chainer_estimator = Chainer(
    entry_point='example.py',
    train_instance_count=1,
    train_instance_type='ml.p3.2xlarge',
    hyperparameters={'epochs': 10, 'batch-size': 64}
)
# Train my estimator
chainer_estimator.fit({'train': train_input, 'test': test_input})

# Deploy my estimator to a SageMaker Endpoint and get a Predictor
predictor = chainer_estimator.deploy(
    instance_type="ml.m4.xlarge",
    initial_instance_count=1
)

Now, instead of bringing your own Docker container for training and hosting with Chainer, you can just maintain your script. You can see the full sagemaker-chainer-containers on GitHub. One of my favorite features of the new container is built-in ChainerMN for easy multi-node distribution of your Chainer training jobs.
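
As a configuration sketch only (the use_mpi and num_processes parameters are as documented for the Chainer estimator in the SageMaker Python SDK of this era; check the SDK docs for your version before relying on them), multi-node ChainerMN training looks like a small change to the single-instance estimator:

```python
from sagemaker.chainer.estimator import Chainer

# Sketch of a distributed ChainerMN training job: two GPU instances,
# with MPI enabled and two processes total. Not runnable without AWS
# credentials and the sagemaker SDK installed.
distributed_estimator = Chainer(
    entry_point='example.py',
    train_instance_count=2,
    train_instance_type='ml.p3.2xlarge',
    use_mpi=True,
    num_processes=2,
    hyperparameters={'epochs': 10, 'batch-size': 64}
)
# distributed_estimator.fit({'train': train_input, 'test': test_input})
```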

There’s a lot more documentation and information available in both the README and the example notebooks.

AWS Greengrass ML with Chainer

AWS Greengrass ML now includes a pre-built Chainer package for all devices powered by Intel Atom, NVIDIA Jetson TX2, and Raspberry Pi. So, now Greengrass ML provides pre-built packages for TensorFlow, Apache MXNet, and Chainer! You can train your models on SageMaker and then easily deploy them to any Greengrass-enabled device using Greengrass ML.

JAWS UG

I want to give a quick shout out to all of our wonderful and inspirational friends in the JAWS UG who attended the AWS Summit in Tokyo today. I’ve very much enjoyed seeing your pictures of the summit. Thanks for making Japan an amazing place for AWS developers! I can’t wait to visit again and meet with all of you.

Randall

Hiring a Director of Sales

Post Syndicated from Yev original https://www.backblaze.com/blog/hiring-a-director-of-sales/

Backblaze is hiring a Director of Sales. This is a critical role for Backblaze as we continue to grow the team. We need a strong leader who has experience scaling a sales team and an excellent track record of exceeding goals by selling Software as a Service (SaaS) solutions. In addition, this leader will need to be highly motivated, as well as able to create and develop a motivated, success-oriented sales team that has fun and enjoys what they do.

The History of Backblaze from our CEO
In 2007, after a friend’s computer crash caused her some suffering, we realized that with every photo, video, song, and document going digital, everyone would eventually lose all of their information. Five of us quit our jobs to start a company with the goal of making it easy for people to back up their data.

Like many startups, for a while we worked out of a co-founder’s one-bedroom apartment. Unlike most startups, we made an explicit agreement not to raise funding during the first year. We would then touch base every six months and decide whether to raise or not. We wanted to focus on building the company and the product, not on pitching and slide decks. And critically, we wanted to build a culture that understood money comes from customers, not the magical VC giving tree. Over the course of 5 years we built a profitable, multi-million dollar revenue business — and only then did we raise a VC round.

Fast forward 10 years later and our world looks quite different. You’ll have some fantastic assets to work with:

  • A brand millions recognize for openness, ease-of-use, and affordability.
  • A computer backup service that stores over 500 petabytes of data, has recovered over 30 billion files for hundreds of thousands of paying customers — most of whom self-identify as being the people that find and recommend technology products to their friends.
  • Our B2 service that provides the lowest cost cloud storage on the planet at 1/4th the price Amazon, Google or Microsoft charges. While a newer product on the market, it already has over 100,000 IT professionals and developers signed up, as well as an ecosystem building up around it.
  • A growing, profitable and cash-flow positive company.
  • And last, but most definitely not least: a great sales team.

You might be saying, “sounds like you’ve got this under control — why do you need me?” Don’t be misled. We need you. Here’s why:

  • We have a great team, but we are in the process of expanding, and we need to develop a structure that scales easily and sets the team up to drive revenue.
  • We just launched our outbound sales efforts and we need someone to help develop that into a fully successful program that’s building a strong pipeline and closing business.
  • We need someone to work with the marketing department and figure out how to generate more inbound opportunities that the sales team can follow up on and close.
  • We need someone who will work closely in developing the skills of our current sales team and build a path for career growth and advancement.
  • We want someone to manage our Customer Success program.

So that’s a bit about us. What are we looking for in you?

Experience: As a sales leader, you will strategically build and drive the territory’s sales pipeline by assembling and leading a skilled team of sales professionals. This leader should be familiar with generating, developing and closing software subscription (SaaS) opportunities. We are looking for a self-starter who can manage a team and make an immediate impact selling our Backup and Cloud Storage solutions. In this role, the sales leader will work closely with the VP of Sales, marketing staff, and service staff to develop and implement specific strategic plans to achieve and exceed revenue targets, including new business acquisition as well as building out our customer success program.

Leadership: We have an experienced team who’s brought us to where we are today. You need to have the people and management skills to get them excited about working with you. You need to be a strong leader, passionate about developing and supporting your team.

Data driven and creative: The data has to show something makes sense before we scale it up. However, without creativity, it’s easy to say “the data shows it’s impossible” or to find a local maximum. Whether it’s deciding how to scale the team, figuring out what our outbound sales efforts should look like or putting a plan in place to develop the team for career growth, we’ve seen a bit of creativity get us places a few extra dollars couldn’t.

Jive with our culture: Strong leaders affect culture and the person we hire for this role may well shape, not only fit into, ours. But to shape the culture you have to be accepted by the organism, which means a certain set of shared values. We default to openness with our team, our customers, and everyone if possible. We love initiative — without arrogance or dictatorship. We work to create a place people enjoy showing up to work. That doesn’t mean ping pong tables and foosball (though we do try to have perks & fun), but it means people are friendly, non-political, working to build a good service but also a good place to work.

Do the work: Ideas and strategy are critical, but good execution makes them happen. We’re looking for someone who can help the team execute both from the perspective of being capable of guiding and organizing, but also someone who is hands-on themselves.

Additional Responsibilities needed for this role:

  • Recruit, coach, mentor, manage and lead a team of sales professionals to achieve yearly sales targets. This includes closing new business and expanding upon existing clientele.
  • Expand the customer success program to provide the best customer experience possible resulting in upsell opportunities and a high retention rate.
  • Develop effective sales strategies and deliver compelling product demonstrations and sales pitches.
  • Acquire and develop the appropriate sales tools to make the team efficient in their daily work flow.
  • Apply a thorough understanding of the marketplace, industry trends, funding developments, and products to all management activities and strategic sales decisions.
  • Ensure that sales department operations function smoothly, with the goal of facilitating sales and/or closings; operational responsibilities include accurate pipeline reporting and sales forecasts.
  • This position will report directly to the VP of Sales and will be staffed in our headquarters in San Mateo, CA.

Requirements:

  • 7 – 10+ years of successful sales leadership experience as measured by sales performance against goals.
  • Experience in developing skill sets and providing career growth and opportunities through advancement of team members.
  • Background in selling SaaS technologies with a strong track record of success.
  • Strong presentation and communication skills.
  • Must be able to travel occasionally nationwide.
  • BA/BS degree required.

Think you want to join us on this adventure?
Send an email to jobscontact@backblaze.com with the subject “Director of Sales.” (Recruiters and agencies, please don’t email us.) Include a resume and answer these two questions:

  1. How would you approach evaluating the current sales team and what is your process for developing a growth strategy to scale the team?
  2. What are the goals you would set for yourself in the 3 month and 1-year timeframes?

Thank you for taking the time to read this and I hope that this sounds like the opportunity for which you’ve been waiting.

Backblaze is an Equal Opportunity Employer.

The post Hiring a Director of Sales appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Wanted: Product Marketing Manager

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-product-marketing-manager/

We’re thrilled to announce that we’re looking for a Product Marketing Manager for our Backblaze for Business line. We’ve made this post to give you a better idea about the role, what we’re looking for, and why we think it’s a phenomenal position. If you are somebody, or know somebody, who fits the role, please send your (or their) cover letter and resume. Instructions on how to apply are found below.

Company Description:
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low cost cloud backup. Our computer backup product is the industry-leading solution — for $50 / year / computer, our customers receive unlimited data backup of their computer. Our second product, B2, is an object storage cloud competing with Amazon’s S3; the biggest difference is that, at $5 / terabyte / month, B2 is 1/4th the price of S3.

Backblaze serves a wide variety of customers, from individual consumers to SMBs to massive enterprises. If you’re looking for robust, reliable, affordable cloud storage, Backblaze is your answer.

We are a cash flow positive business and growing rapidly. Over the last 11 years, we have taken in only $3M of outside capital. We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple — grow sustainably and profitably. Throughout our journey, we’ve managed to nurture a team oriented culture with amazingly low turnover. We value our people and their families.

A Sample of Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • New parent childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office — located near Caltrain and Highways 101 & 280.

More About The Role:
Backblaze’s Product Marketing Manager for Business Backup is an essential member of our Marketing team, reporting to the VP of Marketing.

The best PMM for Backblaze is a customer-focused storyteller. The role requires an understanding of both the Backblaze product offerings and the unique dynamics businesses face in backing up their data. We do not expect our PMM to be a storage expert. We do expect this person to possess a deep understanding of the dynamics of marketing SaaS solutions to businesses.

Our PMM partners directly with our Business Backup sales team to shape our go-to-market strategy, deliver the appropriate content and collateral, and ultimately owns hitting the forecast. One unique aspect of our Business Backup line is that over 50% of the revenue comes from “self-service” — inbound customers who get started on their own. As such, being a PMM at Backblaze is an opportunity to straddle “traditional” product marketing through supporting sales while also owning a direct-to-business “eCommerce” offering.

A Backblaze PMM:

  • Defines, creates, and delivers all content for the vertical. This person is the subject matter expert for that vertical for Backblaze and is capable of producing collateral for multiple mediums (email, web pages, blog posts, one-pagers)
  • Works collaboratively with Sales to design and execute go-to-market strategy
  • Delivers our revenue goals through sales enablement and direct response marketing

The Perfect PMM excels at:

  • Communication. Data storage can be complicated, but customers and co-workers want simple solutions.
  • Prioritization & Relentless Execution. Our business is growing fast. We need someone that can help set our strategic course, be process oriented, and then execute diligently and efficiently.
  • Collateral Creation. Case studies, emails, web pages, one-pagers, presentations, blog posts (to an audience of over 3 million readers).
  • Learning. You’ll need to become an expert on our competitors. You’ll also have the opportunity to participate in ways you probably never had to do before. We value an “athlete” that’s willing and able to learn.
  • Being Evidence Driven. Numbers win. But when we don’t have numbers, informed guesses — customer profiles, feedback from Sales, market dynamics — take the day.
  • Working Cross Functionally. You will be the vertical expert for our organization. In that capacity, you will help inform the work of all of our departments.

The Ideal PMM background:

  • 3+ years of product marketing with a preference for SaaS experience.
  • Excellent time management and project prioritization skills
  • Demonstrated creative problem solving abilities
  • Ability to learn new markets, diagnose customer segments, and translate all that into actionable insights
  • Fluency with metrics: SaaS sales funnel (MQL, SQL, etc.) and eCommerce (CTR, visits, conversion)

Interested in Joining Our Team?
If this sounds like you, follow these steps:

  1. Send an email to jobscontact@backblaze.com with the position in the subject line.
  2. Include your resume and cover letter.
  3. Tell us a bit about your experience.

Backblaze is an Equal Opportunity Employer.

The post Wanted: Product Marketing Manager appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Replacing macOS Server with Synology NAS

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/replacing-macos-server-with-synology-nas/

Synology NAS boxes backed up to the cloud

Businesses and organizations that rely on macOS server for essential office and data services are facing some decisions about the future of their IT services.

Apple recently announced that it is deprecating a significant portion of essential network services in macOS Server, as they described in a support statement posted on April 24, 2018, “Prepare for changes to macOS Server.” Apple’s note includes:

macOS Server is changing to focus more on management of computers, devices, and storage on your network. As a result, some changes are coming in how Server works. A number of services will be deprecated, and will be hidden on new installations of an update to macOS Server coming in spring 2018.

The note lists the services that will be removed in a future release of macOS Server, including calendar and contact support, Dynamic Host Configuration Protocol (DHCP), Domain Name Services (DNS), mail, instant messages, virtual private networking (VPN), NetInstall, Web server, and the Wiki.

Apple assures users who have already configured any of the listed services that they will be able to use them in the spring 2018 macOS Server update, but the statement ends with links to a number of alternative services, including hosted services, that macOS Server users should consider as viable replacements for the features it is removing.

As difficult as this could be for organizations that use macOS Server, this is not unexpected. Apple left the server hardware space back in 2010, when Steve Jobs announced the company was ending its line of Xserve rackmount servers, which were introduced in May 2002. Since then, macOS Server has hardly been a prominent part of Apple’s product lineup. It’s not just the product itself that has lost some luster, but the entire category of SMB office and business servers, which has been undergoing a gradual change in recent years.

Some might wonder how important the news about macOS Server is, given that macOS Server represents a pretty small share of the server market. macOS Server has been important to design shops, agencies, education users, and small businesses that likely have been on Macs for ages, but it’s not a significant part of the IT infrastructure of larger organizations and businesses.

What Comes After macOS Server?

Lovers of macOS Server don’t have to fear having their Mac minis pried from their cold, dead hands quite yet. Installed services will continue to be available. In the fall of 2018, new installations and upgrades of macOS Server will require users to migrate most services to other software. Since many of the services of macOS Server were already open-source, this means that a change in software might not be required. It does mean more configuration and management will be required of those who continue with macOS Server, however.

Users can continue with macOS Server if they wish, but many will see the writing on the wall and look for a suitable substitute.

The Times They Are A-Changin’

For many people working in organizations, what is significant about this announcement is how it reflects the move away from the once ubiquitous server-based IT infrastructure. Services that used to be centrally managed and office-based, such as storage, file sharing, communications, and computing, have moved to the cloud.

In selecting the next office IT platforms, there’s an opportunity to move to solutions that reflect and support how people are working and the applications they are using both in the office and remotely. For many, this means including cloud-based services in office automation, backup, and business continuity/disaster recovery planning. This includes Software as a Service, Platform as a Service, and Infrastructure as a Service (SaaS, PaaS, IaaS) options.

IT solutions that integrate well with the cloud are worth strong consideration for what comes after a macOS Server-based environment.

Synology NAS as a macOS Server Alternative

One solution that is becoming popular is to replace macOS Server with a device that has the ability to provide important office services, but also bridges the office and cloud environments. Using Network-Attached Storage (NAS) to take up the server slack makes a lot of sense. Many customers are already using NAS for file sharing, local data backup, automatic cloud backup, and other uses. In the case of Synology, their operating system, Synology DiskStation Manager (DSM), is Linux based, and integrates the basic functions of file sharing, centralized backup, RAID storage, multimedia streaming, virtual storage, and other common functions.

Synology NAS box

Synology NAS

Since DSM is based on Linux, there are numerous server applications available, including many of the same ones that are available for macOS Server, which shares conceptual roots with Linux as it comes from BSD Unix.

Synology DiskStation Manager Package Center screenshot

Synology DiskStation Manager Package Center

According to Ed Lukacs, COO at 2FIFTEEN Systems Management in Salt Lake City, their customers have found the move from macOS Server to Synology NAS not only painless, but positive. DSM works seamlessly with macOS and has been faster for their customers, as well. Many of their customers are running Adobe Creative Suite and Google G Suite applications, so a workflow that combines local storage, remote access, and the cloud, is already well known to them. Remote users are supported by Synology’s QuickConnect or VPN.

Business continuity and backup are simplified by the flexible storage capacity of the NAS. Synology has built-in backup to Backblaze B2 Cloud Storage with Synology’s Cloud Sync, as well as a choice of a number of other B2-compatible applications, such as Cloudberry, Comet, and Arq.

Customers have been able to get up and running quickly, with only initial data transfers requiring some time to complete. After that, management of the NAS can be handled in-house or with the support of a Managed Service Provider (MSP).

Are You Sticking with macOS Server or Moving to Another Platform?

If you’re affected by this change in macOS Server, please let us know in the comments how you’re planning to cope. Are you using Synology NAS for server services? Please tell us how that’s working for you.

The post Replacing macOS Server with Synology NAS appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

BPI Wants Piracy Dealt With Under New UK Internet ‘Clean-Up’ Laws

Post Syndicated from Andy original https://torrentfreak.com/bpi-wants-music-piracy-dealt-with-under-uk-internet-clean-up-laws-180523/

For the past several years, the UK Government has expressed a strong desire to “clean up” the Internet.

Strong emphasis has been placed on making the Internet safer for children but that’s just the tip of a much larger iceberg.

This week, the Government published its response to the Internet Safety Strategy green paper, stating unequivocally that more needs to be done to tackle “online harm”.

Noting that six out of ten people report seeing inappropriate or harmful content online, the Government said that work already underway with social media companies to protect users had borne fruit but overall industry response has been less satisfactory.

As a result, the Government will now carry through with its threat to introduce new legislation, albeit with the assistance of technology companies, children’s charities and other stakeholders.

“Digital technology is overwhelmingly a force for good across the world and we must always champion innovation and change for the better,” said Matt Hancock, Secretary of State for Digital, Culture, Media and Sport.

“At the same time I have been clear that we have to address the Wild West elements of the Internet through legislation, in a way that supports innovation. We strongly support technology companies to start up and grow, and we want to work with them to keep our citizens safe.”

While emphasis is being placed on hot-button topics such as cyberbullying and online child exploitation, the Government is clear that it wishes to tackle “the full range” of online harms. That has been greeted by UK music group BPI with a request that the Government introduces new measures to tackle Internet piracy.

In a statement issued this week, BPI chief executive Geoff Taylor welcomed the move towards legislative change and urged the Government to encompass the music industry and beyond.

“This is a vital opportunity to protect consumers and boost the UK’s music and creative industries. The BPI has long pressed for internet intermediaries and online platforms to take responsibility for the content that they promote to users,” Taylor said.

“Government should now take the power in legislation to require online giants to take effective, proactive measures to clean illegal content from their sites and services. This will keep fans away from dodgy sites full of harmful content and prevent criminals from undermining creative businesses that create UK jobs.”

The BPI has published four initial requests, each of which provides food for thought.

The demand to “establish a new fast-track process for blocking illegal sites” is not entirely unexpected, particularly given the expense of launching applications for blocking injunctions at the High Court.

“The BPI has taken a large number of actions against individual websites – 63 injunctions are in place against sites that are wholly or mainly infringing and whose business is simply to profit from criminal activity,” the BPI says.

Those injunctions can be expanded fairly easily to include new sites operating under similar banners or facilitating access to those already covered, but it’s clear the BPI would like something more streamlined. Voluntary schemes, such as the one in place in Portugal, could be an option but it’s unclear how troublesome that could be for ISPs. New legislation could solve that dilemma, however.

Another big thorn in the side for groups like the BPI are people and entities that post infringing content. The BPI is very good at taking these listings down from sites and search engines in particular (more than 600 million requests to date) but it’s a game of whac-a-mole the group would rather not engage in.

With that in mind, the BPI would like the Government to impose new rules that would compel online platforms to stop content from being re-posted after it’s been taken down while removing the accounts of repeat infringers.

Thirdly, the BPI would like the Government to introduce penalties for “online operators” who do not provide “transparent contact and ownership information.” The music group isn’t any more specific than that, but the suggestion is that operators of some sites have a tendency to hide in the shadows, something which frustrates enforcement activity.

Finally, and perhaps most interestingly, the BPI is calling on the Government to legislate for a new “duty of care” for online intermediaries and platforms. Specifically, the BPI wants “effective action” taken against businesses that use the Internet to “encourage” consumers to access content illegally.

While this could easily encompass pirate sites and services themselves, this proposal has the breadth to include a wide range of offenders, from people posting piracy-focused tutorials on monetized YouTube channels to those selling fully-loaded Kodi devices on eBay or social media.

Overall, the BPI clearly wants to place pressure on intermediaries to take action against piracy when they’re in a position to do so, and particularly those who may not have shown much enthusiasm towards industry collaboration in the past.

“Legislation in this Bill, to take powers to intervene with respect to operators that do not co-operate, would bring focus to the roundtable process and ensure that intermediaries take their responsibilities seriously,” the BPI says.

The Department for Digital, Culture, Media & Sport and the Home Office will now work on a White Paper, to be published later this year, to set out legislation to tackle “online harms”. The BPI and similar entities will hope that the Government takes their concerns on board.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Singapore ISPs Block 53 Pirate Sites Following MPAA Legal Action

Post Syndicated from Andy original https://torrentfreak.com/singapore-isps-block-53-pirate-sites-following-mpaa-legal-action-180521/

Under increasing pressure from copyright holders, in 2014 Singapore passed amendments to copyright law that allow ISPs to block ‘pirate’ sites.

“The prevalence of online piracy in Singapore turns customers away from legitimate content and adversely affects Singapore’s creative sector,” said then Senior Minister of State for Law Indranee Rajah.

“It can also undermine our reputation as a society that respects the protection of intellectual property.”

After the amendments took effect in December 2014, there was a considerable pause before any websites were targeted. However, in September 2016, at the request of the MPA(A), Solarmovie.ph became the first website ordered to be blocked under Singapore’s amended Copyright Act. The High Court subsequently ordered several major ISPs to disable access to the site.

A new wave of blocks announced this morning are the country’s most significant so far, with dozens of ‘pirate’ sites targeted following a successful application by the MPAA earlier this year.

In total, 53 sites across 154 domains – including those operated by The Pirate Bay plus KickassTorrents and Solarmovie variants – have been rendered inaccessible by ISPs including Singtel, StarHub, M1, MyRepublic and ViewQwest.

“In Singapore, these sites are responsible for a major portion of copyright infringement of films and television shows,” an MPAA spokesman told The Straits Times (paywall).

“This action by rights owners is necessary to protect the creative industry, enabling creators to create and keep their jobs, protect their works, and ensure the continued provision of high-quality content to audiences.”

Before granting a blocking injunction, the High Court must satisfy itself that the proposed online locations meet the threshold of being “flagrantly infringing”. This means that a site like YouTube, which carries a lot of infringing content but is not dedicated to infringement, would not ordinarily get caught up in the dragnet.

Sites considered for blocking must have a primary purpose to infringe, a threshold that is tipped in copyright holders’ favor when the sites’ operators display a lack of respect for copyright law and have already had their domains blocked in other jurisdictions.

The Court also weighs a number of additional factors including whether blocking would place an unacceptable burden on the shoulders of ISPs, whether the blocking demand is technically possible, and whether it will be effective.

In common with other jurisdictions such as the UK and Australia, sites targeted for blocking must be informed of the applications made against them, to ensure they’re given a chance to defend themselves in court. No fully-fledged ‘pirate’ site has ever defended a blocking application in Singapore or indeed any jurisdiction in the world.

Finally, should any measures be taken by ‘pirate’ sites to evade an ISP blockade, copyright holders can apply to the Singapore High Court to amend the blocking order. This is similar to the Australian model where each application must be heard on its merits, rather than the UK model where a more streamlined approach is taken.

According to a recent report by Motion Picture Association Canada, at least 42 countries are now obligated to block infringing sites. In Europe alone, 1,800 sites and 5,300 domains have been rendered inaccessible, with Portugal, Italy, the UK, and Denmark leading the way.

In Canada, where copyright holders are lobbying hard for a site-blocking regime of their own, there’s pressure to avoid the “uncertain, slow and expensive” route of going through the courts.

AWS Online Tech Talks – May and Early June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-may-and-early-june-2018/

Join us this month to learn about some of the exciting new services and solution best practices at AWS. We also have our first re:Invent 2018 webinar series, “How to re:Invent”. Sign up now to learn more; we look forward to seeing you.

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

Analytics & Big Data

May 21, 2018 | 11:00 AM – 11:45 AM PT – Integrating Amazon Elasticsearch with your DevOps Tooling – Learn how you can easily integrate Amazon Elasticsearch Service into your DevOps tooling and gain valuable insight from your log data.

May 23, 2018 | 11:00 AM – 11:45 AM PT – Data Warehousing and Data Lake Analytics, Together – Learn how to query data across your data warehouse and data lake without moving data.

May 24, 2018 | 11:00 AM – 11:45 AM PT – Data Transformation Patterns in AWS – Discover how to perform common data transformations on the AWS Data Lake.

Compute

May 29, 2018 | 01:00 PM – 01:45 PM PT – Creating and Managing a WordPress Website with Amazon Lightsail – Learn about Amazon Lightsail and how you can create, run and manage your WordPress websites with Amazon’s simple compute platform.

May 30, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Life Sciences with HPC on AWS – Learn how you can accelerate your Life Sciences research workloads by harnessing the power of high performance computing on AWS.

Containers

May 24, 2018 | 01:00 PM – 01:45 PM PT – Building Microservices with the 12 Factor App Pattern on AWS – Learn best practices for building containerized microservices on AWS, and how traditional software design patterns evolve in the context of containers.

Databases

May 21, 2018 | 01:00 PM – 01:45 PM PT – How to Migrate from Cassandra to Amazon DynamoDB – Get the benefits, best practices and guides on how to migrate your Cassandra databases to Amazon DynamoDB.

May 23, 2018 | 01:00 PM – 01:45 PM PT – 5 Hacks for Optimizing MySQL in the Cloud – Learn how to optimize your MySQL databases for high availability, performance, and disaster resilience using RDS.

DevOps

May 23, 2018 | 09:00 AM – 09:45 AM PT – .NET Serverless Development on AWS – Learn how to build a modern serverless application in .NET Core 2.0.

Enterprise & Hybrid

May 22, 2018 | 11:00 AM – 11:45 AM PT – Hybrid Cloud Customer Use Cases on AWS – Learn how customers are leveraging AWS hybrid cloud capabilities to easily extend their datacenter capacity, deliver new services and applications, and ensure business continuity and disaster recovery.

IoT

May 31, 2018 | 11:00 AM – 11:45 AM PT – Using AWS IoT for Industrial Applications – Discover how you can quickly onboard your fleet of connected devices, keep them secure, and build predictive analytics with AWS IoT.

Machine Learning

May 22, 2018 | 09:00 AM – 09:45 AM PT – Using Apache Spark with Amazon SageMaker – Discover how to use Apache Spark with Amazon SageMaker for training jobs and application integration.

May 24, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS DeepLens – Learn how AWS DeepLens provides a new way for developers to learn machine learning by pairing the physical device with a broad set of tutorials, examples, source code, and integration with familiar AWS services.

Management Tools

May 21, 2018 | 09:00 AM – 09:45 AM PT – Gaining Better Observability of Your VMs with Amazon CloudWatch – Learn how CloudWatch Agent makes it easy for customers like Rackspace to monitor their VMs.

Mobile

May 29, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive on Amazon Pinpoint Segmentation and Endpoint Management – See how segmentation and endpoint management with Amazon Pinpoint can help you target the right audience.

Networking

May 31, 2018 | 09:00 AM – 09:45 AM PT – Making Private Connectivity the New Norm via AWS PrivateLink – See how PrivateLink enables service owners to offer private endpoints to customers outside their company.

Security, Identity, & Compliance

May 30, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Certificate Manager Private Certificate Authority (CA) – Learn how AWS Certificate Manager (ACM) Private Certificate Authority (CA), a managed private CA service, helps you easily and securely manage the lifecycle of your private certificates.

June 1, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Firewall Manager – Centrally configure and manage AWS WAF rules across your accounts and applications.

Serverless

May 22, 2018 | 01:00 PM – 01:45 PM PT – Building API-Driven Microservices with Amazon API Gateway – Learn how to build a secure, scalable API for your application in our tech talk about API-driven microservices.

Storage

May 30, 2018 | 11:00 AM – 11:45 AM PT – Accelerate Productivity by Computing at the Edge – Learn how AWS Snowball Edge support for compute instances helps accelerate data transfers, execute custom applications, and reduce overall storage costs.

June 1, 2018 | 11:00 AM – 11:45 AM PT – Learn to Build a Cloud-Scale Website Powered by Amazon EFS – Technical deep dive where you’ll learn tips and tricks for integrating WordPress, Drupal and Magento with Amazon EFS.


Analyze data in Amazon DynamoDB using Amazon SageMaker for real-time prediction

Post Syndicated from YongSeong Lee original https://aws.amazon.com/blogs/big-data/analyze-data-in-amazon-dynamodb-using-amazon-sagemaker-for-real-time-prediction/

Many companies across the globe use Amazon DynamoDB to store and query historical user-interaction data. DynamoDB is a fast NoSQL database used by applications that need consistent, single-digit millisecond latency.

Often, customers want to turn their valuable data in DynamoDB into insights by analyzing a copy of their table stored in Amazon S3. Doing this separates their analytical queries from their low-latency critical paths. This data can be the primary source for understanding customers’ past behavior, predicting future behavior, and generating downstream business value. Customers often turn to DynamoDB because of its great scalability and high availability. After a successful launch, many customers want to use the data in DynamoDB to predict future behaviors or provide personalized recommendations.

DynamoDB is a good fit for low-latency reads and writes, but it’s not practical to scan all the data in a DynamoDB table to train a model. In this post, I demonstrate how you can use DynamoDB table data copied to Amazon S3 by AWS Data Pipeline to predict customer behavior. I also demonstrate how you can use this data to provide personalized recommendations for customers using Amazon SageMaker. You can also run ad hoc queries against the data using Amazon Athena. DynamoDB recently released on-demand backups to create full table backups with no performance impact. However, that feature isn’t suitable for our purposes in this post, so I chose AWS Data Pipeline instead to create managed backups that are accessible from other services.

To do this, I describe how to read the DynamoDB backup file format in Data Pipeline. I also describe how to convert the objects in S3 to a CSV format that Amazon SageMaker can read. In addition, I show how to schedule regular exports and transformations using Data Pipeline. The sample data used in this post is from the Bank Marketing Data Set of the UCI Machine Learning Repository.

The solution that I describe provides the following benefits:

  • Separates analytical queries from production traffic on your DynamoDB table, preserving your DynamoDB read capacity units (RCUs) for important production requests
  • Automatically updates your model to get real-time predictions
  • Optimizes for performance (so it doesn’t compete with DynamoDB RCUs after the export) and for cost (using data you already have)
  • Makes it easier for developers of all skill levels to use Amazon SageMaker

All of the code and the data set used in this post are available in this .zip file.

Solution architecture

The following diagram shows the overall architecture of the solution.

The steps that data follows through the architecture are as follows:

  1. Data Pipeline regularly copies the full contents of a DynamoDB table as JSON into an S3 bucket.
  2. Exported JSON files are converted to comma-separated value (CSV) format to use as a data source for Amazon SageMaker.
  3. Amazon SageMaker renews the model artifact and updates the endpoint.
  4. The converted CSV is available for ad hoc queries with Amazon Athena.
  5. Data Pipeline controls this flow and repeats the cycle based on the schedule defined by customer requirements.
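The conversion in step 2 is performed with Hive later in the post, but the shape of the transformation is easy to see in plain Python: each exported line maps attribute names to single-entry type/value maps such as {"age": {"n": "34"}}. A minimal sketch, using an illustrative subset of the banking fields rather than the full schema:

```python
import json

# Output column order; an illustrative subset of the banking schema.
FIELDS = [("customer_id", "s"), ("age", "n"), ("job", "s"), ("y", "n")]

def export_line_to_csv(line):
    """Flatten one JSON line of a DynamoDB export ({"age": {"n": "34"}, ...})
    into a comma-separated row, the same shape the Hive script emits."""
    item = json.loads(line)
    return ",".join(item[name][type_key] for name, type_key in FIELDS)

print(export_line_to_csv(
    '{"customer_id": {"s": "1001"}, "age": {"n": "34"}, '
    '"job": {"s": "services"}, "y": {"n": "0"}}'
))  # 1001,34,services,0
```

The 's' and 'n' keys mirror the DynamoDB string and number attribute types that the Hive script also indexes into.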

Building the auto-updating model

This section discusses details about how to read the DynamoDB exported data in Data Pipeline and build automated workflows for real-time prediction with a regularly updated model.

Download sample scripts and data

Before you begin, take the following steps:

  1. Download sample scripts in this .zip file.
  2. Unzip the src.zip file.
  3. Find the automation_script.sh file and edit it for your environment. For example, replace 's3://<your bucket>/<datasource path>/' with your own S3 path to the data source for Amazon SageMaker. In the script, the text enclosed in angle brackets (< and >) should be replaced with your own values.
  4. Upload the json-serde-1.3.6-SNAPSHOT-jar-with-dependencies.jar file to your S3 path so that the ADD jar command in Apache Hive can refer to it.

For this solution, the banking.csv file should be imported into a DynamoDB table.

Export a DynamoDB table

To export the DynamoDB table to S3, open the Data Pipeline console and choose the Export DynamoDB table to S3 template. In this template, Data Pipeline creates an Amazon EMR cluster and performs an export in the EMRActivity activity. Set proper intervals for backups according to your business requirements.

One core node (m3.xlarge) provides the default capacity for the EMR cluster and should be suitable for the solution in this post. Leave the Resize the cluster before running option enabled in the TableBackupActivity activity to let Data Pipeline scale the cluster to match the table size. The process of converting to CSV format and renewing models happens in this EMR cluster.

For a more in-depth look at how to export data from DynamoDB, see Export Data from DynamoDB in the Data Pipeline documentation.

Add the script to an existing pipeline

After you export your DynamoDB table, you add an additional EMR step to EMRActivity by following these steps:

  1. Open the Data Pipeline console and choose the ID for the pipeline that you want to add the script to.
  2. For Actions, choose Edit.
  3. In the editing console, choose the Activities category and add an EMR step using the custom script downloaded in the previous section, as shown below.

Paste the following command into the new step after the data upload step:

s3://#{myDDBRegion}.elasticmapreduce/libs/script-runner/script-runner.jar,s3://<your bucket name>/automation_script.sh,#{output.directoryPath},#{myDDBRegion}

The element #{output.directoryPath} references the S3 path where the data pipeline exports DynamoDB data as JSON. The path should be passed to the script as an argument.

The bash script has two goals, converting data formats and renewing the Amazon SageMaker model. Subsequent sections discuss the contents of the automation script.

Automation script: Convert JSON data to CSV with Hive

We use Apache Hive to transform the data into a new format. The Hive QL script to create an external table and transform the data is included in the custom script that you added to the Data Pipeline definition.

When you run the Hive scripts, do so with the -e option. Also, define the Hive table with the 'org.openx.data.jsonserde.JsonSerDe' row format to parse and read JSON format. The SQL creates a Hive EXTERNAL table, and it reads the DynamoDB backup data on the S3 path passed to it by Data Pipeline.

Note: You should create the table with the “EXTERNAL” keyword to avoid the backup data being accidentally deleted from S3 if you drop the table.

The full automation script for converting follows. Add your own bucket name and data source path in the highlighted areas.

#!/bin/bash
hive -e "
ADD jar s3://<your bucket name>/json-serde-1.3.6-SNAPSHOT-jar-with-dependencies.jar ; 
DROP TABLE IF EXISTS blog_backup_data ;
CREATE EXTERNAL TABLE blog_backup_data (
 customer_id map<string,string>,
 age map<string,string>, job map<string,string>, 
 marital map<string,string>,education map<string,string>, 
 default map<string,string>, housing map<string,string>,
 loan map<string,string>, contact map<string,string>, 
 month map<string,string>, day_of_week map<string,string>, 
 duration map<string,string>, campaign map<string,string>,
 pdays map<string,string>, previous map<string,string>, 
 poutcome map<string,string>, emp_var_rate map<string,string>, 
 cons_price_idx map<string,string>, cons_conf_idx map<string,string>,
 euribor3m map<string,string>, nr_employed map<string,string>, 
 y map<string,string> ) 
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe' 
LOCATION '$1/';

INSERT OVERWRITE DIRECTORY 's3://<your bucket name>/<datasource path>/' 
SELECT concat( customer_id['s'],',', 
 age['n'],',', job['s'],',', 
 marital['s'],',', education['s'],',', default['s'],',', 
 housing['s'],',', loan['s'],',', contact['s'],',', 
 month['s'],',', day_of_week['s'],',', duration['n'],',', 
 campaign['n'],',',pdays['n'],',',previous['n'],',', 
 poutcome['s'],',', emp_var_rate['n'],',', cons_price_idx['n'],',',
 cons_conf_idx['n'],',', euribor3m['n'],',', nr_employed['n'],',', y['n'] ) 
FROM blog_backup_data
WHERE customer_id['s'] > 0 ;
"

After creating the external table, you read the data and use the INSERT OVERWRITE DIRECTORY … SELECT command to write CSV data to the S3 path that you designated as the data source for Amazon SageMaker.

Depending on your requirements, you can eliminate or process the columns in the SELECT clause in this step to optimize data analysis. For example, you might remove columns that have unpredictable correlations with the target value, because keeping the wrong columns might expose your model to “overfitting” during training, which weakens your predictions. In this post, the customer_id column is removed. More information about overfitting can be found in the topic Model Fit: Underfitting vs. Overfitting in the Amazon ML documentation.

Automation script: Renew the Amazon SageMaker model

After the CSV data is replaced and ready to use, create a new model artifact for Amazon SageMaker with the updated dataset on S3. Renewing the model artifact requires a new training job. Training jobs can be run using the AWS SDK (for example, boto3 for Amazon SageMaker), the Amazon SageMaker Python SDK (installable with the pip install sagemaker command), or the AWS CLI for Amazon SageMaker, as described in this post.

In addition, consider how to smoothly renew your existing model without service impact, because your model is called by applications in real time. To do this, create a new endpoint configuration first, and then update the current endpoint with the newly created configuration.
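The same create-or-update branch can also be sketched in Python with boto3. In this sketch, the SageMaker client is passed in as a parameter; the function name and its return values are our own convention, not part of the AWS API:

```python
def renew_endpoint(sm_client, endpoint_name, config_name):
    """Point the endpoint at a new endpoint configuration. If the endpoint
    does not exist yet, create it; otherwise update it in place, which lets
    SageMaker swap the backing instances without downtime."""
    try:
        sm_client.describe_endpoint(EndpointName=endpoint_name)
    except Exception:  # endpoint not found yet
        sm_client.create_endpoint(EndpointName=endpoint_name,
                                  EndpointConfigName=config_name)
        return "created"
    sm_client.update_endpoint(EndpointName=endpoint_name,
                              EndpointConfigName=config_name)
    return "updated"

# Usage (requires AWS credentials):
# import boto3
# renew_endpoint(boto3.client("sagemaker"), "ServiceEndpoint", "CONFIG-2018-05-21")
```

Injecting the client as a parameter also makes the branch easy to exercise in tests without touching AWS.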

#!/bin/bash
## Define variable 
REGION=$2
DTTIME=`date +%Y-%m-%d-%H-%M-%S`
ROLE="<your AmazonSageMaker-ExecutionRole>" 


# Select containers image based on region.  
case "$REGION" in
"us-west-2" )
    IMAGE="174872318107.dkr.ecr.us-west-2.amazonaws.com/linear-learner:latest"
    ;;
"us-east-1" )
    IMAGE="382416733822.dkr.ecr.us-east-1.amazonaws.com/linear-learner:latest" 
    ;;
"us-east-2" )
    IMAGE="404615174143.dkr.ecr.us-east-2.amazonaws.com/linear-learner:latest" 
    ;;
"eu-west-1" )
    IMAGE="438346466558.dkr.ecr.eu-west-1.amazonaws.com/linear-learner:latest" 
    ;;
 *)
    echo "Invalid Region Name"
    exit 1 ;  
esac

# Start training job and creating model artifact 
TRAINING_JOB_NAME=TRAIN-${DTTIME} 
S3OUTPUT="s3://<your bucket name>/model/" 
INSTANCETYPE="ml.m4.xlarge"
INSTANCECOUNT=1
VOLUMESIZE=5 
aws sagemaker create-training-job --training-job-name ${TRAINING_JOB_NAME} --region ${REGION}  --algorithm-specification TrainingImage=${IMAGE},TrainingInputMode=File --role-arn ${ROLE}  --input-data-config '[{ "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://<your bucket name>/<datasource path>/", "S3DataDistributionType": "FullyReplicated" } }, "ContentType": "text/csv", "CompressionType": "None" , "RecordWrapperType": "None"  }]'  --output-data-config S3OutputPath=${S3OUTPUT} --resource-config  InstanceType=${INSTANCETYPE},InstanceCount=${INSTANCECOUNT},VolumeSizeInGB=${VOLUMESIZE} --stopping-condition MaxRuntimeInSeconds=120 --hyper-parameters feature_dim=20,predictor_type=binary_classifier  

# Wait until job completed 
aws sagemaker wait training-job-completed-or-stopped --training-job-name ${TRAINING_JOB_NAME}  --region ${REGION}

# Get newly created model artifact and create model
MODELARTIFACT=`aws sagemaker describe-training-job --training-job-name ${TRAINING_JOB_NAME} --region ${REGION}  --query 'ModelArtifacts.S3ModelArtifacts' --output text `
MODELNAME=MODEL-${DTTIME}
aws sagemaker create-model --region ${REGION} --model-name ${MODELNAME}  --primary-container Image=${IMAGE},ModelDataUrl=${MODELARTIFACT}  --execution-role-arn ${ROLE}

# create a new endpoint configuration 
CONFIGNAME=CONFIG-${DTTIME}
aws sagemaker  create-endpoint-config --region ${REGION} --endpoint-config-name ${CONFIGNAME}  --production-variants  VariantName=Users,ModelName=${MODELNAME},InitialInstanceCount=1,InstanceType=ml.m4.xlarge

# create or update the endpoint
STATUS=`aws sagemaker describe-endpoint --endpoint-name  ServiceEndpoint --query 'EndpointStatus' --output text --region ${REGION} `
if [[ "$STATUS" != "InService" ]] ;
then
    aws sagemaker  create-endpoint --endpoint-name  ServiceEndpoint  --endpoint-config-name ${CONFIGNAME} --region ${REGION}    
else
    aws sagemaker  update-endpoint --endpoint-name  ServiceEndpoint  --endpoint-config-name ${CONFIGNAME} --region ${REGION}
fi

Grant permission

Before you execute the script, you must grant proper permission to Data Pipeline. Data Pipeline uses the DataPipelineDefaultResourceRole role by default. I added the following policy to DataPipelineDefaultResourceRole to allow Data Pipeline to create, delete, and update the Amazon SageMaker model and data source in the script.

{
 "Version": "2012-10-17",
 "Statement": [
 {
 "Effect": "Allow",
 "Action": [
 "sagemaker:CreateTrainingJob",
 "sagemaker:DescribeTrainingJob",
 "sagemaker:CreateModel",
 "sagemaker:CreateEndpointConfig",
 "sagemaker:DescribeEndpoint",
 "sagemaker:CreateEndpoint",
 "sagemaker:UpdateEndpoint",
 "iam:PassRole"
 ],
 "Resource": "*"
 }
 ]
}

Use real-time prediction

After you deploy a model into production using Amazon SageMaker hosting services, your client applications use this API to get inferences from the model hosted at the specified endpoint. This approach is useful for interactive web, mobile, or desktop applications.

Following is a simple Python code example that queries the Amazon SageMaker endpoint by name (“ServiceEndpoint”) and uses the response for real-time prediction.

=== Python sample for real-time prediction ===

#!/usr/bin/env python
import boto3
import json 

client = boto3.client('sagemaker-runtime', region_name='<your region>')
new_customer_info = '34,10,2,4,1,2,1,1,6,3,190,1,3,4,3,-1.7,94.055,-39.8,0.715,4991.6'
response = client.invoke_endpoint(
    EndpointName='ServiceEndpoint',
    Body=new_customer_info, 
    ContentType='text/csv'
)
result = json.loads(response['Body'].read().decode())
print(result)
--- output(response) ---
{u'predictions': [{u'score': 0.7528127431869507, u'predicted_label': 1.0}]}
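Applications usually turn that response into a yes/no decision. A small sketch that parses the payload shown above (the 0.5 threshold is an assumption to tune for your use case):

```python
import json

def parse_prediction(payload, threshold=0.5):
    """Extract the linear-learner score from the endpoint response body
    and apply a decision threshold."""
    prediction = json.loads(payload)["predictions"][0]
    return prediction["score"], prediction["score"] >= threshold

score, will_subscribe = parse_prediction(
    '{"predictions": [{"score": 0.7528, "predicted_label": 1.0}]}'
)
print(score, will_subscribe)  # 0.7528 True
```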

Solution summary

The solution takes the following steps:

  1. Data Pipeline exports DynamoDB table data into S3. The original JSON data should be kept to recover the table in the rare event that this is needed. Data Pipeline then converts the JSON to CSV so that Amazon SageMaker can read it. Note: Select only meaningful attributes when you convert to CSV. For example, if you judge that the “campaign” attribute is not correlated with the target, you can eliminate it from the CSV.
  2. Train the Amazon SageMaker model with the new data source.
  3. When a new customer comes to your site, you can judge how likely this customer is to subscribe to your new product based on the “score” provided by Amazon SageMaker.
  4. If the new user subscribes to your new product, your application must update the attribute “y” to the value 1 (for yes). This updated data is provided for the next model renewal as a new data source, and serves to improve the accuracy of your predictions. With each new entry, your application can become smarter and deliver better predictions.
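Step 4 can be a single DynamoDB update. A sketch assuming a boto3 DynamoDB Table resource and a key named customer_id (both are assumptions about your table setup):

```python
def record_subscription(table, customer_id):
    """Set the target attribute "y" to 1 once the customer subscribes,
    so the next scheduled export feeds the outcome back into training.
    `table` is expected to behave like a boto3 DynamoDB Table resource."""
    table.update_item(
        Key={"customer_id": customer_id},
        # "#y" aliases the attribute name, which keeps the expression
        # safe even if the real attribute name were a reserved word.
        UpdateExpression="SET #y = :yes",
        ExpressionAttributeNames={"#y": "y"},
        ExpressionAttributeValues={":yes": 1},
    )

# Usage (requires AWS credentials):
# import boto3
# table = boto3.resource("dynamodb").Table("<your table name>")
# record_subscription(table, "1001")
```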

Running ad hoc queries using Amazon Athena

Amazon Athena is a serverless query service that makes it easy to analyze large amounts of data stored in Amazon S3 using standard SQL. Athena is useful for examining data and collecting statistics or informative summaries about data. You can also use the powerful analytic functions of Presto, as described in the topic Aggregate Functions of Presto in the Presto documentation.

With the Data Pipeline scheduled activity, recent CSV data is always located in S3 so that you can run ad hoc queries against the data using Amazon Athena. I show this with example SQL statements following. For an in-depth description of this process, see the post Interactive SQL Queries for Data in Amazon S3 on the AWS News Blog. 

Creating an Amazon Athena table and running it

You can simply create an EXTERNAL table for the CSV data on S3 in the Amazon Athena console.

=== Table Creation ===
CREATE EXTERNAL TABLE datasource (
 age int, 
 job string, 
 marital string , 
 education string, 
 default string, 
 housing string, 
 loan string, 
 contact string, 
 month string, 
 day_of_week string, 
 duration int, 
 campaign int, 
 pdays int , 
 previous int , 
 poutcome string, 
 emp_var_rate double, 
 cons_price_idx double,
 cons_conf_idx double, 
 euribor3m double, 
 nr_employed double, 
 y int 
)
ROW FORMAT DELIMITED 
FIELDS TERMINATED BY ',' ESCAPED BY '\\' LINES TERMINATED BY '\n' 
LOCATION 's3://<your bucket name>/<datasource path>/';

The following query calculates the correlation coefficient between the target attribute and other attributes using Amazon Athena.

=== Sample Query ===

SELECT corr(age,y) AS correlation_age_and_target, 
 corr(duration,y) AS correlation_duration_and_target, 
 corr(campaign,y) AS correlation_campaign_and_target,
 corr(contact,y) AS correlation_contact_and_target
FROM ( SELECT age , duration , campaign , y , 
 CASE WHEN contact = 'telephone' THEN 1 ELSE 0 END AS contact 
 FROM datasource 
 ) datasource ;

Conclusion

In this post, I introduce an example of how to analyze data in DynamoDB by using table data in Amazon S3 to optimize DynamoDB table read capacity. You can then use the analyzed data as a new data source to train an Amazon SageMaker model for accurate real-time prediction. In addition, you can run ad hoc queries against the data on S3 using Amazon Athena. I also present how to automate these procedures by using Data Pipeline.

You can adapt this example to your specific use case at hand, and hopefully this post helps you accelerate your development. You can find more examples and use cases for Amazon SageMaker in the video AWS 2017: Introducing Amazon SageMaker on the AWS website.

Additional Reading

If you found this post useful, be sure to check out Serving Real-Time Machine Learning Predictions on Amazon EMR and Analyzing Data in S3 using Amazon Athena.

About the Author

Yong Seong Lee is a Cloud Support Engineer for AWS Big Data Services. He is interested in every technology related to data/databases and helping customers who have difficulties in using AWS services. His motto is “Enjoy life, be curious and have maximum experience.”