Tag Archives: reputation

Amazon Is Losing the War on Fraudulent Sellers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/amazon_is_losin.html

Excellent article on fraudulent seller tactics on Amazon.

The most prominent black hat companies for US Amazon sellers offer ways to manipulate Amazon’s ranking system to promote products, protect accounts from disciplinary actions, and crush competitors. Sometimes, these black hat companies bribe corporate Amazon employees to leak information from the company’s wiki pages and business reports, which they then resell to marketplace sellers for steep prices. One black hat company charges as much as $10,000 a month to help Amazon sellers appear at the top of product search results. Other tactics to promote sellers’ products include removing negative reviews from product pages and exploiting technical loopholes on Amazon’s site to lift products’ overall sales rankings.

[…]

AmzPandora’s services ranged from small tasks to more ambitious strategies to rank a product higher using Amazon’s algorithm. While it was online, it offered to ping internal contacts at Amazon for $500 to get information about why a seller’s account had been suspended, as well as advice on how to appeal the suspension. For $300, the company promised to remove an unspecified number of negative reviews on a listing within three to seven days, which would help increase the overall star rating for a product. For $1.50, the company offered a service to fool the algorithm into believing a product had been added to a shopper’s cart or wish list by writing a super URL. And for $1,200, an Amazon seller could purchase a “frequently bought together” spot on another marketplace product’s page that would appear for two weeks, which AmzPandora promised would lead to a 10% increase in sales.

This was a good article on this from last year. (My blog post.)

Amazon has a real problem here, primarily because trust in the system is paramount to Amazon’s success. As much as they need to crack down on fraudulent sellers, they really want articles like these to not be written.

Slashdot thread. Boing Boing post.

Flight Sim Company Threatens Reddit Mods Over “Libelous” DRM Posts

Post Syndicated from Andy original https://torrentfreak.com/flight-sim-company-threatens-reddit-mods-over-libellous-drm-posts-180604/

Earlier this year, in an effort to deal with piracy of their products, flight simulator company FlightSimLabs took drastic action by installing malware on customers’ machines.

The story began when a Reddit user reported something unusual in his download of FlightSimLabs’ A320X module. A file – test.exe – was being flagged up as a ‘Chrome Password Dump’ tool, something which rang alarm bells among flight sim fans.

As additional information was made available, the story became even more sensational. After first dodging the issue with carefully worded statements, FlightSimLabs admitted that it had installed a password dumper onto ALL users’ machines – whether they were pirates or not – in an effort to catch a particular software cracker and launch legal action.

It was an incredible story that no doubt did damage to FlightSimLabs’ reputation. But now the company is at the center of a new storm, again involving anti-piracy measures and again playing out on Reddit.

Just before the weekend, Reddit user /u/walkday reported finding something unusual in his A320X module, the same module that caused the earlier controversy.

“The latest installer of FSLabs’ A320X puts two cmdhost.exe files under ‘system32\’ and ‘SysWOW64\’ of my Windows directory. Despite the name, they don’t open a command-line window,” he reported.

“They’re a part of the authentication because, if you remove them, the A320X won’t get loaded. Does someone here know more about cmdhost.exe? Why does FSLabs give them such a deceptive name and put them in the system folders? I hate them for polluting my system folder unless, of course, it is a dll used by different applications.”

Needless to say, the news that FSLabs were putting files into system folders named to make them look like system files was not well received.

“Hiding something named to resemble Window’s “Console Window Host” process in system folders is a huge red flag,” one user wrote.

“It’s a malware tactic used to deceive users into thinking the executable is a part of the OS, thus being trusted and not deleted. Really dodgy tactic, don’t trust it and don’t trust them,” opined another.

With a disenchanted Reddit userbase simmering away in the background, FSLabs took to Facebook with a statement to quieten down the masses.

“Over the past few hours we have become aware of rumors circulating on social media about the cmdhost file installed by the A320-X and wanted to clear up any confusion or misunderstanding,” the company wrote.

“cmdhost is part of our eSellerate infrastructure – which communicates between the eSellerate server and our product activation interface. It was designed to reduce the number of product activation issues people were having after the FSX release – which have since been resolved.”

The company noted that the file had been checked by all major anti-virus companies and everything had come back clean, which does indeed appear to be the case. Nevertheless, the critical Reddit thread remained, bemoaning the actions of a company which probably should have known better than to irritate fans after February’s debacle. In response, however, FSLabs did just that once again.

In private messages to the moderators of the /r/flightsim sub-Reddit, FSLabs’ Marketing and PR Manager Simon Kelsey suggested that the mods should do something about the thread in question or face possible legal action.

“Just a gentle reminder of Reddit’s obligations as a publisher in order to ensure that any libelous content is taken down as soon as you become aware of it,” Kelsey wrote.

Noting that FSLabs welcomes “robust fair comment and opinion”, Kelsey gave the following advice.

“The ‘cmdhost.exe’ file in question is an entirely above board part of our anti-piracy protection and has been submitted to numerous anti-virus providers in order to verify that it poses no threat. Therefore, ANY suggestion that current or future products pose any threat to users is absolutely false and libelous,” he wrote, adding:

“As we have already outlined in the past, ANY suggestion that any user’s data was compromised during the events of February is entirely false and therefore libelous.”

Noting that FSLabs would “hate for lawyers to have to get involved in this”, Kelsey advised the /r/flightsim mods to ensure that no such claims were allowed to remain on the sub-Reddit.

But after not receiving the response he would’ve liked, Kelsey wrote once again to the mods. He noted that “a number of unsubstantiated and highly defamatory comments” remained online and warned that if something wasn’t done to clean them up, he would have “no option” than to pass the matter to FSLabs’ legal team.

Like the first message, this second effort also failed to have the desired effect. In fact, the moderators’ response was to post an open letter to Kelsey and FSLabs instead.

“We sincerely disagree that you ‘welcome robust fair comment and opinion’, demonstrated by the censorship on your forums and the attempted censorship on our subreddit,” the mods wrote.

“While what you do on your forum is certainly your prerogative, your rules do not extend to Reddit nor the r/flightsim subreddit. Removing content you disagree with is simply not within our purview.”

The letter, which is worth reading in full, refutes Kelsey’s claims and also suggests that critics of FSLabs may have been subjected to Reddit vote manipulation and coordinated efforts to discredit them.

What will happen next is unclear but the matter has now been placed in the hands of Reddit’s administrators, who have agreed to deal with Kelsey and FSLabs personally.

It’s a little early to say for sure but it seems unlikely that this will end in a net positive for FSLabs, no matter what decision Reddit’s admins take.

Singapore ISPs Block 53 Pirate Sites Following MPAA Legal Action

Post Syndicated from Andy original https://torrentfreak.com/singapore-isps-block-53-pirate-sites-following-mpaa-legal-action-180521/

Under increasing pressure from copyright holders, in 2014 Singapore passed amendments to copyright law that allow ISPs to block ‘pirate’ sites.

“The prevalence of online piracy in Singapore turns customers away from legitimate content and adversely affects Singapore’s creative sector,” said then Senior Minister of State for Law Indranee Rajah.

“It can also undermine our reputation as a society that respects the protection of intellectual property.”

After the amendments took effect in December 2014, there was a considerable pause before any websites were targeted. However, in September 2016, at the request of the MPA(A), Solarmovie.ph became the first website ordered to be blocked under Singapore’s amended Copyright Act, with the High Court subsequently ordering several major ISPs to disable access to the site.

A new wave of blocks announced this morning is the country’s most significant so far, with dozens of ‘pirate’ sites targeted following a successful application by the MPAA earlier this year.

In total, 53 sites across 154 domains – including those operated by The Pirate Bay plus KickassTorrents and Solarmovie variants – have been rendered inaccessible by ISPs including Singtel, StarHub, M1, MyRepublic and ViewQwest.

“In Singapore, these sites are responsible for a major portion of copyright infringement of films and television shows,” an MPAA spokesman told The Straits Times (paywall).

“This action by rights owners is necessary to protect the creative industry, enabling creators to create and keep their jobs, protect their works, and ensure the continued provision of high-quality content to audiences.”

Before granting a blocking injunction, the High Court must satisfy itself that the proposed online locations meet the threshold of being “flagrantly infringing”. This means that a site like YouTube, which carries a lot of infringing content but is not dedicated to infringement, would not ordinarily get caught up in the dragnet.

Sites considered for blocking must have a primary purpose to infringe, a threshold that is tipped in copyright holders’ favor when the sites’ operators display a lack of respect for copyright law and have already had their domains blocked in other jurisdictions.

The Court also weighs a number of additional factors including whether blocking would place an unacceptable burden on the shoulders of ISPs, whether the blocking demand is technically possible, and whether it will be effective.

In common with other jurisdictions such as the UK and Australia, sites targeted for blocking must be informed of the applications made against them, to ensure they’re given a chance to defend themselves in court. No fully-fledged ‘pirate’ site has ever defended a blocking application in Singapore or, indeed, in any other jurisdiction in the world.

Finally, should any measures be taken by ‘pirate’ sites to evade an ISP blockade, copyright holders can apply to the Singapore High Court to amend the blocking order. This is similar to the Australian model where each application must be heard on its merits, rather than the UK model where a more streamlined approach is taken.

According to a recent report by Motion Picture Association Canada, at least 42 countries are now obligated to block infringing sites. In Europe alone, 1,800 sites and 5,300 domains have been rendered inaccessible, with Portugal, Italy, the UK, and Denmark leading the way.

In Canada, where copyright holders are lobbying hard for a site-blocking regime of their own, there’s pressure to avoid the “uncertain, slow and expensive” route of going through the courts.

AWS Documentation is Now Open Source and on GitHub

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-documentation-is-now-open-source-and-on-github/

Earlier this year we made the AWS SDK developer guides available as GitHub repos (all found within the awsdocs organization) and invited interested parties to contribute changes and improvements in the form of pull requests.

Today we are adding over 138 additional developer and user guides to the organization, and we are looking forward to receiving your requests. You can fix bugs, improve code samples (or submit new ones), add detail, and rewrite sentences and paragraphs in the interest of accuracy or clarity. You can also look at the commit history in order to learn more about new feature and service launches and to track improvements to the documents.

Making a Contribution
Before you get started, read the Amazon Open Source Code of Conduct and take a look at the Contributing Guidelines document (generally named CONTRIBUTING.md) for the AWS service of interest. Then create a GitHub account if you don’t already have one.

Once you find something to change or improve, visit the HTML version of the document and click the Edit on GitHub button at the top of the page:

This will allow you to edit the document in source form (typically Markdown or reStructuredText). The source code is used to produce the HTML, PDF, and Kindle versions of the documentation.

Once you are in GitHub, click on the pencil icon:

This creates a “fork” — a separate copy of the file that you can edit in isolation.

Next, make an edit. In general, as a new contributor to an open source project, you should gain experience and build your reputation by making small, high-quality edits. I’ll change “dozens of services” to “over one hundred services” in this document:

Then I summarize my change and click Propose file change:

I examine the differences to verify my changes and then click Create pull request:

Then I review the details and click Create pull request again:

The pull request (also known as a PR) makes its way to the Elastic Beanstalk documentation team and they get to decide if they want to accept it, reject it, or to engage in a conversation with me to learn more. The teams endeavor to respond to PRs within 48 hours, and I’ll be notified via GitHub whenever the status of the PR changes.

As is the case with most open source projects, a steady stream of focused, modest-sized pull requests is preferable to the occasional king-sized request with dozens of edits inside.

If I am interested in tracking changes to a repo over time, I can Watch and/or Star it:

If I Watch a repo, I’ll receive an email whenever there’s a new release, issue, or pull request for that service guide.

Go Fork It
This launch gives you another way to help us to improve AWS. Let me know what you think!

Jeff;

Community Profile: Estefannie Explains It All

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-estefannie/

This column is from The MagPi issue 59. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.

“Hey, world!” Estefannie exclaims, a wide grin across her face as the camera begins to roll for another YouTube tutorial video. With a growing number of followers and wonderful support from her fans, Estefannie is building a solid reputation as an online maker, creating unique, fun content accessible to all.

A woman sitting at a desk with a laptop and papers — Estefannie Explains it All Raspberry Pi

It’s as if she was born into performing and making for an audience, but this fun, enjoyable journey to social media stardom came not from a desire to be in front of the camera, but rather as a unique approach to her own learning. While studying, Estefannie decided the best way to confirm her knowledge of a subject was to create an educational video explaining it. If she could teach a topic successfully, she knew she’d retained the information. And so her YouTube channel, Estefannie Explains It All, came into being.

Note taking — Estefannie Explains it All

Her first videos featured pages of notes with voice-over explanations of data structure and algorithm analysis. Then she moved in front of the camera, and expanded her skills in the process.

But YouTube isn’t her only outlet. With nearly 50,000 followers, Estefannie’s Instagram game is strong, adding to an increasing number of female coders taking to the platform. Across her Instagram grid, you’ll find insights into her daily routine, from programming on location for work to behind-the-scenes troubleshooting as she begins to create another tutorial video. It’s hard work, with content creation for both Instagram and YouTube forever on her mind as she continues to work and progress successfully as a software engineer.

A woman showing off a game on a tablet — Estefannie Explains it All Raspberry Pi

As a thank you to her Instagram fans for helping her reach 10,000 followers, Estefannie created a free game for Android and iOS called Gravitris — imagine Tetris with balance issues!

Estefannie was born and raised in Mexico, with ambitions to become a graphic designer and animator. However, a documentary on coding at Pixar, and the beauty of Merida’s hair in Brave, opened her mind to the opportunities of software engineering in animation. She altered her career path, moved to the United States, and switched to a Computer Science course.

A woman wearing safety goggles hugging a keyboard Estefannie Explains it All Raspberry Pi

With a constant desire to make and to learn, Estefannie combines her software engineering profession with her hobby to create fun, exciting content for YouTube.

While studying, Estefannie started a Computer Science Girls Club at the University of Houston, Texas, and she found herself eager to put more time and effort into the movement to increase the percentage of women in the industry. The club was a success, and still is to this day. While Estefannie has handed over the reins, she’s still very involved in the cause.

Through her YouTube videos, Estefannie continues the theme of inclusion, with every project offering a warm sense of approachability for all, regardless of age, gender, or skill. From exploring Scratch and Makey Makey with her young niece and nephew to creating her own Disney ‘Made with Magic’ backpack for a trip to Disney World, Florida, Estefannie’s videos are essentially a documentary of her own learning process, produced so viewers can learn with her — and learn from her mistakes — to create their own tech wonders.

Using the Raspberry Pi, she’s been able to broaden her skills and, in turn, her projects, creating a home-automated gingerbread house at Christmas, building a GPS-controlled GoPro for her trip to London, and making everyone’s life better with an Internet Button–controlled French press.

Estefannie Explains it All Raspberry Pi Home Automated Gingerbread House

Estefannie’s automated gingerbread house project was a labour of love, with electronics, wires, and candy strewn across both her living room and kitchen for weeks before completion. While she was already a skilled programmer, the world of physical digital making was still fairly new to Estefannie. Having ditched her hot glue gun in favour of a soldering iron in a previous video, she continued to experiment and try out new, interesting techniques that are now second nature to many members of the maker community. With the gingerbread house, Estefannie was able to research and apply techniques such as light controls, servos, and app making, although the latter was already firmly within her skill set. The result? A fun video of ups and downs that resulted in a wonderful, festive treat. She even gave her holiday home its own solar panel!

A DAY AT RASPBERRY PI TOWERS!! LINK IN BIO ⚡🎥 @raspberrypifoundation

And that’s just the beginning of her adventures with Pi…but we won’t spoil her future plans by telling you what’s coming next. Sorry! However, since this article was written last year, Estefannie has released a few more Pi-based project videos, plus some awesome interviews and live-streams with other members of the maker community such as Simone Giertz. She even made us an awesome video for our Raspberry Pi YouTube channel! So be sure to check out her latest releases.

Best day yet!! I got to hangout, play Jenga with a huge arm robot, and have afternoon tea with @simonegiertz and robots!! 🤖👯 #shittyrobotnation

While many wonderful maker videos show off a project without much explanation, or expect a certain level of skill from viewers hoping to recreate the project, Estefannie’s videos exist almost within their own category. We can’t wait to see where Estefannie Explains It All goes next!

Protect your Reputation with Email Pausing and Configuration Set Metrics

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/ses/protect-your-reputation-with-email-pausing-and-configuration-set-metrics/

In August, we launched the reputation dashboard, which helps you track important metrics that could impact your ability to send emails. By monitoring the metrics in this dashboard, you can protect your sender reputation, which can increase the likelihood that the emails you send will reach your customers’ inboxes.

Today, we’re launching two features that build upon the capabilities of the reputation dashboard. The first is the ability to temporarily pause email sending, either at the configuration set level, or across your entire Amazon SES account. The second is the ability to export reputation metrics for individual configuration sets.

Email Pausing

Today’s update includes new API operations that can temporarily pause your ability to send email using Amazon SES. To disable email sending across your entire Amazon SES account, you can use the UpdateAccountSendingEnabled operation. To pause sending only for emails sent using a specific configuration set, you can use the UpdateConfigurationSetSendingEnabled operation.

Email pausing is helpful because Amazon SES uses automatic enforcement policies. If the bounce or complaint rates for your account are too high, your account is automatically placed on probation. If the bounce or complaint issues continue after the probation period has ended, your account may be suspended.

With email pausing, you can temporarily halt your ability to send email before your account is placed on probation. While your ability to send email is paused, you can identify the issues that were causing your account to register high bounce or complaint rates. You can then resume sending after the issues are resolved.

Email pausing helps ensure that your ability to send email using Amazon SES is not interrupted because of enforcement issues, and that your sender reputation won’t be damaged by mistakes or unforeseen issues.

You can learn more about the UpdateAccountSendingEnabled and UpdateConfigurationSetSendingEnabled operations in the Amazon Simple Email Service API Reference.
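
As a rough sketch, here is how these operations might be called with the AWS SDK for Python (Boto3); the configuration set name is a placeholder:

import boto3

ses = boto3.client('ses')

# Pause email sending for the entire Amazon SES account
ses.update_account_sending_enabled(Enabled=False)

# Pause sending only for emails that use a specific configuration set
# ('my-marketing-configset' is a placeholder name)
ses.update_configuration_set_sending_enabled(
    ConfigurationSetName='my-marketing-configset',
    Enabled=False
)

# Resume account-level sending once the underlying issue is resolved
ses.update_account_sending_enabled(Enabled=True)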

Configuration Set Reputation Metrics

Amazon SES automatically publishes the bounce and complaint rates for your account to Amazon CloudWatch. In CloudWatch, you can monitor these metrics over time, and create alarms that notify you when your reputation metrics cross certain thresholds.

With today’s update, you can also publish reputation metrics for individual configuration sets to CloudWatch. This feature gives you additional information about the messages you send using Amazon SES. For example, if you send all of your marketing emails using one configuration set, and your transactional emails using a different configuration set, you can view distinct reputation metrics for each type of email.

Because we anticipate that this feature will lead to the creation of many new configuration sets, we’re increasing the maximum number of configuration sets you can create from 50 to 10,000.

For more information about exporting reputation metrics for configuration sets, see Exporting Reputation Metrics for a Configuration Set to CloudWatch in the Amazon Simple Email Service Developer Guide.

Automating These Features

You can use AWS services—including Amazon SNS, AWS Lambda, and Amazon CloudWatch—to create a solution that automatically pauses email sending for your account when your overall reputation metrics cross a certain threshold. Or, to minimize disruption to your email sending program, you can pause email sending for a specific configuration set when the metrics for that configuration set cross a threshold. The following image illustrates the processes that occur when you implement these solutions.

A flow diagram that illustrates a solution for automatically pausing Amazon SES email sending. Amazon SES provides reputation metrics to CloudWatch. If those metrics exceed a threshold, a CloudWatch alarm is triggered, which triggers an SNS topic. The SNS topic sends notifications (email, SMS), and executes a Lambda function, which pauses email sending in SES.

For more information on both of these solutions, see Automatically Pausing Email Sending in the Amazon Simple Email Service Developer Guide.
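
As a minimal sketch (not the exact solution from the guide), the Lambda function at the end of that chain could pause account-level sending with a single call:

import boto3

ses = boto3.client('ses')

def lambda_handler(event, context):
    # Invoked via SNS when the CloudWatch reputation alarm fires;
    # temporarily pause all email sending for this Amazon SES account.
    ses.update_account_sending_enabled(Enabled=False)
    return 'Amazon SES email sending has been paused.'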

We’re always looking for ways to help safeguard the reputation you’ve worked hard to build. If you have suggestions, questions, or comments, we’d love to hear from you in the comments below, or in the Amazon SES Forum.

These features are now available in the following AWS Regions: US West (Oregon), US East (N. Virginia), and EU (Ireland).

Just in Case You Missed It: Catching Up on Some Recent AWS Launches

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/just-in-case-you-missed-it-catching-up-on-some-recent-aws-launches/

There have been so many launches and cloud innovations that you simply may not believe it. To help you catch up on some recent service launches and features, this post is a round-up of some cool releases that happened this summer and through the end of September.

The launches and features I want to share with you today are:

  • AWS IAM for Authenticating Database Users for RDS MySQL and Amazon Aurora
  • Amazon SES Reputation Dashboard
  • Amazon SES Open and Click Tracking Metrics
  • Serverless Image Handler by the Solutions Builder Team
  • AWS Ops Automator by the Solutions Builder Team

Let’s dive in, shall we?

AWS IAM for Authenticating Database Users for RDS MySQL and Amazon Aurora

Wished you could manage access to your Amazon RDS database instances and clusters using AWS IAM? Well, wish no longer. Amazon RDS has launched the ability for you to use IAM to manage database access for Amazon RDS for MySQL and Amazon Aurora DB.

What I like most about this new feature is that it’s very easy to get started. To enable database user authentication using IAM, you simply select the Enable IAM DB Authentication checkbox when creating, modifying, or restoring your DB instance or cluster. You can enable IAM access using the RDS console, the AWS CLI, or the Amazon RDS API.

After configuring the database for IAM authentication, client applications authenticate to the database engine by providing temporary security credentials generated by the IAM Security Token Service. These credentials can be used instead of providing a password to the database engine.
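
To illustrate, an application can request a short-lived authentication token with the AWS SDK for Python (Boto3) and present it instead of a password; the endpoint, port, and database user below are placeholders:

import boto3

rds = boto3.client('rds')

# Generate a temporary authentication token (valid for 15 minutes)
token = rds.generate_db_auth_token(
    DBHostname='mydb.abc123xyz.us-east-1.rds.amazonaws.com',  # placeholder endpoint
    Port=3306,
    DBUsername='iam_db_user',                                 # placeholder database user
    Region='us-east-1'
)

# The token is then supplied as the password when the MySQL client
# connects to the database engine over SSL.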

You can learn more about using IAM to provide targeted permissions and authentication to MySQL and Aurora by reviewing the Amazon RDS user guide.

Amazon SES Reputation Dashboard

To help Amazon Simple Email Service customers follow best-practice guidelines for sending email, I am thrilled to announce that we have launched the Reputation Dashboard to provide comprehensive reporting on email sending health. Customers now have visibility into overall account health, sending metrics, and compliance or enforcement status, making it easier to proactively manage the email they send.

The Reputation Dashboard will provide the following information:

  • Account status: A description of your account health status.
    • Healthy – No issues are currently impacting your account.
    • Probation – Your account is on probation; the issues that caused the probation must be resolved to prevent suspension.
    • Pending end of probation decision – Your account is on probation, and an Amazon SES team member must review your account before further action is taken.
    • Shutdown – Your account has been shut down, and no email can be sent using Amazon SES.
    • Pending shutdown – Your account is on probation, and the issues that caused the probation remain unresolved.
  • Bounce Rate: The percentage of emails sent that have bounced, along with bounce rate status messages.
  • Complaint Rate: The percentage of emails sent that recipients have reported as spam, along with complaint rate status messages.
  • Notifications: Messages about other account reputation issues.

Amazon SES Open and Click Tracking Metrics

Another exciting feature recently added to Amazon SES is support for Email Open and Click Tracking Metrics. With this feature, SES customers can now track when an email they’ve sent has been opened and when links within it have been clicked. Using it will allow you to better track email campaign engagement and effectiveness.

How does this work?

When you use the email open tracking feature, SES adds a transparent, miniature image to the emails that you choose to track. When the email is opened, the mail client loads the tracking image, which triggers an open event with Amazon SES. For email click (link) tracking, links in the email and/or email templates are replaced with custom links. When a custom link is clicked, a click event is recorded in SES and the link redirects the reader to the destination of the original link.

You can take advantage of the new open and click tracking features by creating a new configuration set or altering an existing configuration set within SES. After choosing Amazon SNS, Amazon CloudWatch, or Amazon Kinesis Firehose as the AWS service to receive the open and click metrics, you simply specify that configuration set in any emails you want to track.
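
As a rough sketch with the AWS SDK for Python (Boto3), attaching a CloudWatch event destination that records open and click events to a configuration set might look like this; the configuration set, destination name, and dimension values are illustrative placeholders:

import boto3

ses = boto3.client('ses')

ses.create_configuration_set_event_destination(
    ConfigurationSetName='my-campaign-configset',   # placeholder name
    EventDestination={
        'Name': 'engagement-to-cloudwatch',
        'Enabled': True,
        'MatchingEventTypes': ['send', 'delivery', 'open', 'click'],
        'CloudWatchDestination': {
            'DimensionConfigurations': [{
                'DimensionName': 'ses:configuration-set',
                'DimensionValueSource': 'messageTag',
                'DefaultDimensionValue': 'my-campaign-configset'
            }]
        }
    }
)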

AWS Solutions: Serverless Image Handler & AWS Ops Automator

The AWS Solution Builder team has been hard at work making it easier for you to find answers to common architectural questions and to build and run applications on AWS. You can find these solutions on the AWS Answers page. Two new solutions released earlier this fall on AWS Answers are the Serverless Image Handler and the AWS Ops Automator.

Serverless Image Handler was developed to help customers dynamically process, manipulate, and optimize images in the AWS Cloud. The solution combines Amazon CloudFront for caching, AWS Lambda to dynamically retrieve and modify images, and an Amazon S3 bucket to store them. Additionally, the Serverless Image Handler leverages the open source image-processing suite Thumbor for further image manipulation, processing, and optimization.

The AWS Ops Automator solution helps you automate manual tasks, such as snapshot scheduling, using time-based or event-based triggers. It provides a framework for automated tasks and includes task audit trails, logging, resource selection, scaling, concurrency handling, task completion handling, and API request retries. The solution includes the following AWS services:

  • AWS CloudFormation: a template that launches the core framework of microservices and the solution-generated task configurations
  • Amazon DynamoDB: a table that stores task configuration data defining the event triggers and resources, and that records the results of actions and any errors
  • Amazon CloudWatch Logs: provides logging to track warning and error messages
  • Amazon SNS: a topic that sends logging information from the solution to a subscribed email address

Have fun exploring and coding.

Tara

Announcing the Reputation Dashboard

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/ses/announcing-the-reputation-dashboard/

The Amazon SES team is pleased to announce the addition of a reputation dashboard to the Amazon SES console. This new feature helps you track issues that could impact the sender reputation of your Amazon SES account.

What information does the reputation dashboard provide?

Amazon SES users must maintain bounce and complaint rates below a certain threshold. We put these rules in place to protect the sender reputations of all Amazon SES users, and to prevent Amazon SES from being used to deliver spam or malicious content. Users with very high rates of bounces or complaints may be put on probation. If the bounce or complaint rates are not within acceptable limits by the end of the probation period, these accounts may be shut down completely.

Previous versions of Amazon SES provided basic sending metrics, including information about bounces and complaints. However, those bounce and complaint metrics only covered the past few days of email sent from your account, as opposed to an overall rate.

The new reputation dashboard provides overall bounce and complaint rates for your entire account. This enables you to more closely monitor the health of your account and adjust your email sending practices as needed.

Can’t I just calculate these values myself?

Because each Amazon SES account sends different volumes of email at different rates, we do not calculate bounce and complaint rates based on a fixed time period. Instead, we use a representative volume of email. This representative volume is the basis for the bounce and complaint rate calculations.

Why do we use representative volume in our calculations? Let’s imagine that you sent 1,000 emails one week, and 5 of them bounced. If we only considered a week of email sending, your metrics look good. Now imagine that the next week you only sent 5 emails, and one of them bounced. Suddenly, your bounce rate jumps from half a percent to 20%, and your account is automatically placed on probation. This example may be an extreme case, but it illustrates the reason that we don’t use fixed time periods when calculating bounce and complaint rates.

When you open the new reputation dashboard, you will see bounce and complaint rates calculated using the representative volume for your account. We automatically recalculate these rates every time you send email through Amazon SES.

What else can I do with these metrics?

The Bounce and Complaint Rate metrics in the reputation dashboard are automatically sent to Amazon CloudWatch. You can use CloudWatch to create dashboards that track your bounce and complaint rates over time, and to create alarms that send you notifications when these metrics cross certain thresholds. To learn more, see Creating Reputation Monitoring Alarms Using CloudWatch in the Amazon SES Developer Guide.
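
As a sketch in Python (Boto3), an alarm on the account-level bounce rate might look like the following; the metric name (Reputation.BounceRate in the AWS/SES namespace), the 5% threshold, and the SNS topic ARN are assumptions and placeholders you should adjust to your own setup:

import boto3

cloudwatch = boto3.client('cloudwatch')

cloudwatch.put_metric_alarm(
    AlarmName='ses-bounce-rate-too-high',
    Namespace='AWS/SES',
    MetricName='Reputation.BounceRate',   # assumed metric name; published as a fraction (0.0-1.0)
    Statistic='Average',
    Period=3600,                          # evaluate hourly
    EvaluationPeriods=1,
    Threshold=0.05,                       # alarm when the bounce rate exceeds 5%
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:ses-alerts']  # placeholder topic
)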

How can I see the reputation dashboard?

The reputation dashboard is now available to all Amazon SES users. To view the reputation dashboard, sign in to the Amazon SES console. On the left navigation menu, choose Reputation Dashboard. For more information, see Monitoring Your Sender Reputation in the Amazon SES Developer Guide.

We hope you find the information in the reputation dashboard to be useful in managing your email sending programs and campaigns. If you have any questions or comments, please leave a comment on this post, or let us know in the Amazon SES forum.

New – SES Dedicated IP Pools

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-ses-dedicated-ip-pools/

Today we released Dedicated IP Pools for Amazon Simple Email Service (SES). With dedicated IP pools, you can specify which dedicated IP addresses to use for sending different types of email. Dedicated IP pools let you dedicate sets of IP addresses within SES to different tasks. For instance, you can send transactional emails from one set of IPs and marketing emails from another.

If you’re not familiar with Amazon SES these concepts may not make much sense. We haven’t had the chance to cover SES on this blog since 2016, which is a shame, so I want to take a few steps back and talk about the service as a whole and some of the enhancements the team has made over the past year. If you just want the details on this new feature I strongly recommend reading the Amazon Simple Email Service Blog.

What is SES?

So, what is SES? If you’re a customer of Amazon.com you know that we send a lot of emails. Bought something? You get an email. Order shipped? You get an email. Over time, as both email volumes and types increased, Amazon.com needed to build an email platform that was flexible, scalable, reliable, and cost-effective. SES is the result of years of Amazon’s own work in dealing with email and maximizing deliverability.

In short: SES gives you the ability to send and receive many types of email with the monitoring and tools to ensure high deliverability.

Sending an email is easy; one simple API call:

import boto3
ses = boto3.client('ses')
ses.send_email(
    Source='sender@example.com',        # placeholder sender (must be verified in SES)
    Destination={'ToAddresses': ['recipient@example.com']},  # placeholder recipient
    Message={
        'Subject': {'Data': 'Hello, World!'},
        'Body': {'Text': {'Data': 'Hello, World!'}}
    }
)

Receiving and reacting to emails is easy too. You can set up rulesets that forward received emails to Amazon Simple Storage Service (S3), Amazon Simple Notification Service (SNS), or AWS Lambda – you could even trigger an Amazon Lex bot through Lambda to communicate with your customers over email. SES is a powerful tool for building applications. The image below shows just a fraction of the capabilities:
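
As a rough sketch (the rule set, bucket, and address are placeholders, and your domain must already be verified for receiving), a receipt rule that stores incoming mail in S3 could be created like this:

import boto3

ses = boto3.client('ses')

# Create a rule set (only needed once) and make it active.
ses.create_receipt_rule_set(RuleSetName='default-rule-set')
ses.set_active_receipt_rule_set(RuleSetName='default-rule-set')

# Store mail sent to support@example.com in an S3 bucket
# (the bucket policy must allow Amazon SES to write objects).
ses.create_receipt_rule(
    RuleSetName='default-rule-set',
    Rule={
        'Name': 'store-support-mail',
        'Enabled': True,
        'Recipients': ['support@example.com'],
        'ScanEnabled': True,
        'Actions': [{
            'S3Action': {
                'BucketName': 'my-inbound-email-bucket',
                'ObjectKeyPrefix': 'support/'
            }
        }]
    }
)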

Deliverability 101

Deliverability is the percentage of your emails that arrive in your recipients’ inboxes. Maintaining deliverability is a shared responsibility between AWS and the customer. AWS takes the fight against spam very seriously and works hard to make sure services aren’t abused. To learn more about deliverability I recommend the deliverability docs. For now, understand that deliverability is an important aspect of email campaigns and SES has many tools that enable a customer to manage their deliverability.

Dedicated IPs and Dedicated IP pools

When you’re starting out with SES, your emails are sent through a shared IP. That IP is responsible for sending mail on behalf of many customers, and AWS works to maintain appropriate volume and deliverability on each of those IPs. However, when you reach a sufficient volume, shared IPs may not be the right solution.

By leasing dedicated IPs you’re able to fully control their reputations. This makes it vastly easier to troubleshoot any deliverability or reputation issues. It’s also useful for many email certification programs, which require a dedicated IP as a commitment to maintaining your email reputation. Using the shared IPs of the Amazon SES service is still the right move for many customers, but if you have a sustained daily sending volume greater than hundreds of thousands of emails per day you might want to consider a dedicated IP. One caveat to be aware of: if you’re not sending a sufficient volume of email with a consistent pattern, a dedicated IP can actually hurt your reputation. Dedicated IPs are $24.95 per address per month at the time of this writing – but you can find out more at the pricing page.

Before you can use a Dedicated IP you need to “warm” it. You do this by gradually increasing the volume of emails you send through a new address. Each IP needs time to build a positive reputation. In March of this year SES released the ability to automatically warm your IPs over the course of 45 days. This feature is on by default for all new dedicated IPs.

Customers who send high volumes of email will typically have multiple dedicated IPs. Today the SES team released dedicated IP pools to make managing those IPs easier. Now when you send email you can specify a configuration set which will route your email to an IP in a pool based on the pool’s association with that configuration set.

One of the other major benefits of this feature is that it allows customers who previously split their email sending across several AWS accounts (to manage their reputation for different types of email) to consolidate into a single account.

You can read the documentation and blog for more info.

Announcing Dedicated IP Pools

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/ses/announcing-dedicated-ip-pools/

The Amazon SES team is pleased to announce that you can now create groups of dedicated IP addresses, called dedicated IP pools, for your email sending activities.

Prior to the availability of this feature, if you leased several dedicated IP addresses to use with Amazon SES, there was no way to specify which dedicated IP address to use for a specific email. Dedicated IP pools solve this problem by allowing you to send emails from specific IP addresses.

This post includes information and procedures related to dedicated IP pools.

What are dedicated IP pools?

In order to understand dedicated IP pools, you should first be familiar with the concept of dedicated IP addresses. Customers who send large volumes of email will typically lease one or more dedicated IP addresses to use when sending mail from Amazon SES. To learn more, see our blog post about dedicated IP addresses.

If you lease several dedicated IP addresses for use with Amazon SES, you can organize these addresses into groups, called pools. You can then associate each pool with a configuration set. When you send an email that specifies a configuration set, that email will be sent from the IP addresses in the associated pool.

When should I use dedicated IP pools?

Dedicated IP pools are especially useful for customers who send several different types of email using Amazon SES. For example, if you use Amazon SES to send both marketing emails and transactional emails, you can create a pool for marketing emails and another for transactional emails.

By using dedicated IP pools, you can isolate the sender reputations for each of these types of communications. Using dedicated IP pools gives you complete control over the sender reputations of the dedicated IP addresses you lease from Amazon SES.

How do I create and use dedicated IP pools?

There are two basic steps for creating and using dedicated IP pools. First, create a dedicated IP pool in the Amazon SES console and associate it with a configuration set. Next, when you send email, be sure to specify the configuration set associated with the IP pool you want to use.

For step-by-step procedures, see Creating Dedicated IP Pools in the Amazon SES Developer Guide.
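
For example, with the AWS SDK for Python (Boto3), you pass the configuration set name when sending; the addresses and configuration set below are placeholders:

import boto3

ses = boto3.client('ses')

ses.send_email(
    Source='marketing@example.com',                      # placeholder sender
    Destination={'ToAddresses': ['customer@example.com']},
    Message={
        'Subject': {'Data': 'Our latest offers'},
        'Body': {'Text': {'Data': 'Hello from the marketing team!'}}
    },
    # The email is sent from the dedicated IP pool associated
    # with this configuration set.
    ConfigurationSetName='marketing-pool-configset'
)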

Will my email sending process change?

If you do not use dedicated IP addresses with Amazon SES, then your email sending process will not change.

If you use dedicated IP pools, your email sending process may change slightly. In most cases, you will need to specify a configuration set in the emails you send. To learn more about using configuration sets, see Specifying a Configuration Set When You Send Email in the Amazon SES Developer Guide.

Any dedicated IP addresses that you lease that are not part of a dedicated IP pool will automatically be added to a default pool. If you send email without specifying a configuration set that is associated with a pool, then that email will be sent from one of the addresses in the default pool.

Dedicated IP pools are now available in the following AWS Regions: us-west-2 (Oregon), us-east-1 (Virginia), and eu-west-1 (Ireland).

We hope you enjoy this feature. If you have any questions or comments, please leave a comment on this post, or let us know in the Amazon SES Forum.

Open and Click Tracking Have Arrived

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/ses/open-and-click-tracking-have-arrived/

We’re pleased to announce the addition of open and click tracking metrics to Amazon SES. These metrics will help you measure the effectiveness of the email campaigns you send using Amazon SES.

We’re also adding the ability to publish email sending metrics to Amazon Simple Notification Service (Amazon SNS) using event publishing. This feature gives you greater control over the sending notifications you receive through Amazon SNS.

What’s new in this release?

When you send an email using Amazon SES, we now collect metrics related to opens and clicks. Opens, in this sense, refers to the number of users who successfully received your email and opened it in their email clients; clicks refers to the number of users who received an email and clicked one or more links in it.

Additionally, you can now use event publishing to push email sending notifications—including open and click notifications—using Amazon SNS. Previously, you could send account-level notifications through Amazon SNS. These notifications were pretty limited: you could only receive notifications about bounces, complaints, and deliveries, and you would receive notifications about all of these events across your entire Amazon SES account. Now you can use event publishing to send notifications about deliveries, opens, clicks, bounces, and complaints. Furthermore, you can set up event publishing so that you only receive notifications about emails sent using the configuration sets you specify in those emails.

Why should I use open and click tracking?

Whether you are sending marketing emails, transactional emails, or notifications, you need to know how effective your communications are. The email sending metrics feature of Amazon SES gives you data about the entire email response funnel—the total number of emails that were sent, bounced, viewed, and clicked. You can then transform those insights into action.

For example, the open and click tracking feature can help you identify the customers who are most interested in receiving the messages you send. By narrowing down your list of recipients and focusing on your most engaged customers, you can save money (by sending fewer messages), improve the response rates of your marketing campaigns (by targeting only the customers who are most interested in what you have to say), and protect your sender reputation (by reducing the number of bounces and complaints against your sending domain).

How do I enable open and click tracking?

If you’ve set up Sending Metrics in the past, then you can easily add open and click tracking to your existing configuration sets. On the Configuration Sets page, choose the configuration set that contains your sending event destination; edit the event destination, check the boxes for Open and Click (as shown in the image below), and then choose Save.

How does open and click tracking work?

Amazon SES makes very minor changes to your emails in order to make open and click tracking work. At the bottom of each message, we insert a 1 pixel by 1 pixel transparent GIF image. Each email includes a unique link to this image file; when the image is opened, we can tell exactly which message was opened and by whom.

To track clicks, we set up a redirect for each link in the message. When a recipient clicks a link, they are sent to an Amazon SES server, and are immediately forwarded to the destination address. As with open tracking, each of these redirect links is unique, allowing us to easily determine which recipient clicked the link, when they clicked it, and the email from which they arrived at the link.

Can I disable click tracking?

You can disable click tracking by adding a special tag to the anchor tags in your HTML. For example, if you were linking to the AWS home page, a normal anchor link would look something like this:

<a href="https://aws.amazon.com/">Amazon Web Services</a>

To disable click tracking for that same link, you would modify it to look like this:

<a ses:no-track href="https://aws.amazon.com/">Amazon Web Services</a>

Because the ses:no-track attribute is non-standard HTML, we automatically remove it from the version of the email that arrives in your recipients’ inboxes.

How do I use event publishing with Amazon SNS?

If you’ve set up event destinations in the past, then the process of setting up an Amazon SNS event destination will be very familiar. You can add an Amazon SNS destination to an existing configuration set, or create a new configuration set that uses Amazon SNS as its event destination. To learn more, see “Set Up an Amazon SNS Event Destination for Amazon SES Event Publishing” in our Developer Guide.
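
As a quick sketch with Boto3 (the configuration set name and topic ARN are placeholders), adding an Amazon SNS event destination to a configuration set looks roughly like this:

import boto3

ses = boto3.client('ses')

ses.create_configuration_set_event_destination(
    ConfigurationSetName='my-campaign-configset',   # placeholder name
    EventDestination={
        'Name': 'engagement-to-sns',
        'Enabled': True,
        'MatchingEventTypes': ['open', 'click', 'bounce', 'complaint'],
        'SNSDestination': {
            'TopicARN': 'arn:aws:sns:us-east-1:123456789012:ses-events'  # placeholder topic
        }
    }
)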

We’re excited about this release. Let us know what you think of these new features in the SES Forum, or in the comments for this post.

Concerns About The Blockchain Technology

Post Syndicated from Bozho original https://techblog.bozho.net/concerns-blockchain-technology/

The so-called (and marketing-branded) “blockchain technology” is promised to revolutionize every industry. Anything, they say, will become decentralized, free from middlemen or government control. Services will thrive on various incarnations of the blockchain, and smart contracts will automatically enforce any logic that is related to the particular domain.

I don’t mind having another technological leap (after the internet), and given that I’m technically familiar with the blockchain, I may even be part of it. But I’m not convinced it will happen, and I’m not convinced it’s going to be the next internet.

If we strip away the hype, the technology behind Bitcoin is indeed a technical masterpiece. It combines existing techniques (like hash chains and Merkle trees) with a very good proof-of-work-based consensus algorithm. And it creates a digital currency, which, on top of being worth billions now, is simply cool.

But will this technology be mass-adopted, and will mass adoption allow it to retain the technological benefits it has?

First, I’d like to nitpick a little bit – if anyone is speaking about “decentralized software” when referring to “the blockchain”, be suspicious. Bitcoin and other peer-to-peer overlay networks are in fact “distributed” (see the pictures here). “Decentralized” means having multiple providers, but it doesn’t mean each user will be a full-featured node on the network. This nitpicking is actually part of another argument, but we’ll get to that.

If blockchain-based applications want to reach mass adoption, they have to be user-friendly. I know I’m being captain obvious here (and fortunately some of the people in the area have realized that), but with the current state of the technology, it’s impossible for end users to even get it, let alone use it.

My first serious concern is usability. To begin with, you need to download the whole blockchain onto your machine. When I got my first bitcoin several years ago (when it was still 10 euro), the blockchain was fairly small and I didn’t notice that problem. Nowadays both the Bitcoin and Ethereum blockchains take ages to download. I still haven’t managed to download the Ethereum one – after several bugs and reinstalls of the client, I’m still at 15%. And we are just at the beginning. A user simply will not wait for days to download something in order to be able to start using a piece of technology.

I recently proposed including the ability to download snapshots of the blockchain via BitTorrent in the Ethereum protocol itself. I know that snapshots of the Bitcoin blockchain have been distributed that way, but it has been a manual process. If a client can quickly download the huge file up to a recent point, and then only download the latest blocks in the traditional way, starting up may be easier. Of course, the whole chain would have to be verified, but maybe that can be a background process that doesn’t stop you from using whatever is built on top of the particular blockchain. (I’m not sure whether that would be secure enough, and whether, say, potential Sybil attacks on the BitTorrent part would make it undesirable; it’s just an idea.)

But even if such an approach works and is adopted, that would still mean downloading a separate blockchain for every service. Of course, projects like Ethereum may seem like the “one stop shop” for cool blockchain-based applications, but fragmentation is already happening – there are alt-coins bundled with various services like file storage, DNS, etc. That will not be workable for end users. And it’s certainly not an option for mobile, which is the dominant client now. If, instead of downloading the entire chain, something like consistent hashing were used to distribute the content in small portions among clients, it might be workable. But how trust would work in that case, I don’t know. Maybe it’s possible, maybe not.

And yes, I know that you don’t necessarily have to install a wallet/client in order to make use of a given blockchain – you can just have a cloud-based wallet. Which is fairly convenient, but that gets me back to my nitpicking from a few paragraphs above and to my second concern – this effectively turns a distributed system into a decentralized one. A limited number of cloud providers hold most of the data (just as a limited number of miners hold most of the processing power). And then, even though the underlying technology allows for a distributed deployment, we’ll end up again with something merely decentralized, or even de facto centralized if mergers and acquisitions lead us there (and they probably will). And in order to be able to access our wallets/accounts from multiple devices, we’d use a convenient cloud service where we’d log in with our username and password (because the private key is just too technical and hard for regular users). And that seems to defeat the whole idea.

Not only that, but there is an inevitable centralization of decisions (who decides on the size of the block, who has commit rights to the client repository) as well as a hidden centralization of power – how much GPU power do the Chinese mining “farms” control, and can they influence the network significantly? And will the average user ever know that, or care (just as they don’t care that Google is centralized)? I think that overall, distributed technologies will follow the power law, and the majority of data, processing power, and decision power will be controlled by a minority of actors. And so our distributed utopia will not happen in the pure form we dream of.

My third concern is incentive. Distributed technologies that have been successful so far have had a pretty narrow set of incentives. The internet was promoted by large public institutions, including government agencies and big universities. BitTorrent was successful mainly because it allowed free movies and songs with two clicks of the mouse. And Bitcoin was successful because it offered financial benefits. I’m oversimplifying, of course, but “government effort”, “free & easy” and “source of more money” seem to have been the successful incentives. On the other side of the fence there are dozens of failed distributed technologies. I’ve tried many of them – alternative search engines, alternative file storage, alternative ride-sharing, alternative social networks, even alternative “internets”. None have gained traction. Because they are not easier to use than their free competitors, and you can’t make money out of them (and no government bothers promoting them).

Will blockchain-based services have sufficient incentives to draw customers? Will centralized competitors simply crush the distributed alternatives by being cheaper, more user-friendly, and having sales departments that can target more than the hardcore geeks who have no problem syncing their blockchain via the command line? The utopian slogans sound very cool to idealists and futurists, but they don’t sell. “Free from centralized control, full control over your data” – we’d have to go through a long process of cultural change before these things make sense to more than a handful of people.

Speaking of services, the examples often given involve "the sharing economy", where one stranger offers a service to another stranger. Blockchain technology seems like a good fit here indeed – the services are by nature distributed, so why should the technology be centralized? Here comes my fourth concern – identity. While for cryptocurrencies it’s actually beneficial to be anonymous, for most real-world services (i.e. the industries that ought to be revolutionized) this is not an option. You can’t just get into the car of publicKey=5389BC989A342…. "But there are already distributed reputation systems", you may say. Yes, and they are based on technical, not real-world, identities. That doesn’t build trust. I don’t trust that publicKey=5389BC989A342… is the same person that earned the high reputation. There may be five people behind that private key. The private key may have been stolen (e.g. in a cloud-provider breach).

The value of companies like Uber and Airbnb is that they serve as trust brokers. They verify and vouch for their drivers and hosts (and passengers and guests). They verify identities through government-issued documents, Skype calls, and selfies; they compare pictures to documents, and they get access to government databases, credit records, etc. Can a fully distributed service do that? No. You’d need a centralized provider to do it. And how would the blockchain make any difference then? Well, I may not be entirely correct here. I’ve actually been thinking quite a lot about decentralized identity – e.g. a way to predictably generate a private key based on, say, biometrics + password + government-issued documents, and use the corresponding public key as your identifier, which is then fed into reputation schemes and ultimately into real-world services. But we’re not there yet.
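To make the idea a bit more concrete, here is a minimal sketch of deterministic key derivation, using Python’s standard hashlib and the third-party cryptography package. Everything in it is an assumption for illustration – in particular it pretends the biometric reading can be canonicalized to a stable byte string, which is exactly the hard, unsolved part.

```python
# Minimal sketch of deterministic key derivation for a decentralized identity.
# ASSUMPTION: the biometric template can be canonicalized into a stable byte
# string; real biometric readings are noisy, so this is the hard part.
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def derive_identity_key(biometric_template: bytes, password: str, document_id: str) -> Ed25519PrivateKey:
    # Combine the three factors; the document ID doubles as a salt so that
    # two people with the same password still get different keys.
    material = biometric_template + password.encode() + document_id.encode()
    seed = hashlib.scrypt(material, salt=document_id.encode(), n=2**14, r=8, p=1, dklen=32)
    return Ed25519PrivateKey.from_private_bytes(seed)


# Hypothetical usage: the public key becomes the stable identifier fed into
# reputation systems; the private key is re-derived on any device, never stored.
key = derive_identity_key(b"canonical-biometric-bytes", "correct horse battery staple", "ID-1234567")
identifier = key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw).hex()
print(identifier)
```

The appeal of such a scheme is that the private key never has to be stored anywhere – it can be re-derived on any device from the same inputs – but losing or changing any of the inputs means losing the identity, which is why this remains a sketch rather than a product.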

And that is part of my fifth concern – the technology itself. We are not there yet. There are bugs, there are thefts and leaks. There are hard forks. There isn’t sufficient understanding of the technology (I confess I don’t fully grasp all the implementation details, and they are always the key). The technology is often advertised as "just working", but it isn’t. The other day I read an article (I lost the link) that clarifies a common misconception about smart contracts – they cannot interact with the outside world. They can’t call external APIs (e.g. stock market prices, bank APIs), and they can’t push or fetch data from anywhere but the blockchain. That mandates the need, again, for a centralized service that pushes the relevant information onto the chain before smart contracts can pick it up. I’m pretty sure the cool-sounding applications are not possible without extensive research. And even if/when they are, writing distributed code is hard. Debugging a smart contract is hard. Yes, hard is cool, but it doesn’t by itself create economic value.
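As a rough illustration of that "oracle" pattern – and only an illustration, since the contract address, ABI, updatePrice function, and price API below are all hypothetical placeholders – this is roughly what the centralized push looks like with the web3.py library:

```python
# Hypothetical off-chain "oracle": fetch a price from a regular web API and
# push it into a smart contract, because the contract cannot fetch it itself.
import requests
from web3 import Web3

PRICE_API = "https://example.com/api/stock-price?symbol=ACME"   # placeholder URL
ORACLE_ADDRESS = "0x0000000000000000000000000000000000000000"   # placeholder contract
ORACLE_ABI = [{
    "name": "updatePrice", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "priceCents", "type": "uint256"}], "outputs": [],
}]

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))           # assumes a local node
contract = w3.eth.contract(address=ORACLE_ADDRESS, abi=ORACLE_ABI)

# The centralized step: some trusted party runs this periodically.
price_cents = int(requests.get(PRICE_API).json()["price"] * 100)
tx_hash = contract.functions.updatePrice(price_cents).transact({"from": w3.eth.accounts[0]})
print("pushed price, tx:", tx_hash.hex())
```

The point of the sketch is the trust question, not the code: whoever runs this script decides what the contract believes the price to be.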

I have mostly been referring to public blockchains so far. Private blockchains may have their practical applications, but there’s one catch – they are not exactly the cool distributed technology that Bitcoin uses. They may be called "blockchains" because they… chain blocks, but they usually centralize trust. For example, the Hyperledger project uses PKI, with all its benefits and risks. In these cases, a centralized authority issues the identity "tokens", and then the nodes communicate and form a shared ledger. That’s a somewhat easier problem to solve, and the nodes would usually run on actual servers in real datacenters, not on your uncle’s Windows XP.

That said, hash chaining has been around for quite a long time. I did research on the matter because of a side project of mine, and it turns out that providing a tamper-proof/tamper-evident log or database on semi-trusted machines has been discussed in computer science papers since the 90s. That alone is not "the magic blockchain" that will solve all of our problems, no matter what gossip protocols you sprinkle on top. I’m not saying that’s bad – on the contrary, any variation and combination of the building blocks of the blockchain (the hash chain, the consensus algorithm, proof-of-work or proof-of-stake, possibly smart contracts) has potential for making useful products.
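For readers who haven’t seen it spelled out, here is a toy sketch of the hash-chaining building block on its own, with nothing Bitcoin-specific about it; the record format is made up for the example:

```python
# Toy hash chain (the "tamper-evident log" building block): each entry commits
# to the previous one, so rewriting any record changes every hash after it and
# is detectable by anyone holding the latest hash.
import hashlib
import json


def append(log, record):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"record": record, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry


def verify(log):
    prev_hash = "0" * 64
    for entry in log:
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True


log = []
append(log, {"event": "user registered", "id": 42})
append(log, {"event": "payment", "amount": 10})
assert verify(log)

log[0]["record"]["id"] = 43      # tamper with history...
assert not verify(log)           # ...and the chain no longer verifies
```

Anyone holding the latest hash can detect a rewritten history, which is the tamper-evidence property those papers describe; what the hash chain alone does not give you is agreement among mutually distrusting parties about which history is the right one – that is what the consensus layer is for.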

I know I sound like a naysayer here, but I hope I’ve pointed out particular issues rather than aimlessly ranting at the hype (though that’s tempting as well). I’m confident that blockchain-like technologies will have their practical applications, and we will see some successful, widely adopted services and solutions based on them, just as pointed out in this detailed report. But I’m not convinced it will be revolutionizing.

I hope I’m proven wrong, though, because watching a revolutionizing technology closely and even being part of it would be quite cool.

The post Concerns About The Blockchain Technology appeared first on Bozho's tech blog.

How To Get Your First 1,000 Customers

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-get-your-first-1000-customers/

PR for getting your first 1000 customers

If you launch your startup and no one knows, did you actually launch? As mentioned in my last post, our initial launch target was to get 1,000 people to use our service. But how do you get even 1,000 people to sign up for your service when no one knows who you are?

There are a variety of methods to attract your first 1,000 customers, but launching with the press is my favorite. I’ll explain why and how to do it below.

Paths to Attract Your First 1,000 Customers

Social following: If you have a massive social following, those people are a reasonable target for what you’re offering. In particular if your relationship with them is one where they would buy something you recommend, this can be one of the easiest ways to get your initial customers. However, building this type of following is non-trivial and often is done over several years.

Press not only provides awareness and customers, but credibility and SEO benefits as well

Paid advertising: The advantage of paid ads is you have control over when they are presented and what they say. The primary disadvantage is they tend to be expensive, especially before you have your positioning, messaging, and funnel nailed.

Viral: There are certainly examples of companies that launched with a hugely viral video, blog post, or promotion. While fantastic if it happens, even if you do everything right, the likelihood of massive virality is miniscule and the conversion rate is often low.

Press: As I said, this is my favorite. You don’t need to pay a PR agency and can go from nothing to launched in a couple weeks. Press not only provides awareness and customers, but credibility and SEO benefits as well.

How to Pitch the Press

It’s easy: Have a compelling story, find the right journalists, make their life easy, pitch, and follow up. Of course, each one of those has some nuance, so let’s dig in.

Have a compelling story

How to Get Attention

When you’ve been working for months on your startup, it’s easy to get lost in the minutiae when talking to others. Stories that a journalist will write about need to be something their readers will care about. Knowing what story to tell and how to tell it is part science and part art. Here’s how you can get there:

The basics of your story

Ask yourself the following questions, and write down the answers:

  • What are we doing? What product or service are we offering?
  • Why? What problem are we solving?
  • What is interesting or unique? Either about what we’re doing, how we’re doing it, or who we’re doing it for.

“But my story isn’t that exciting”

Neither was announcing a data backup company, believe me. Look for angles that make it compelling. Here are some:

  • Did someone on your team do something major before? (build a successful company/product, create some innovation, market something we all know, etc.)
  • Do you have an interesting investor or board member?
  • Is there a personal story that drove you to start this company?
  • Are you starting it in a unique place?
  • Did you come upon the idea in a unique way?
  • Can you share something people want to know that’s not usually shared?
  • Are you partnered with a well-known company?
  • …is there something interesting/entertaining/odd/shocking/touching/etc.?

It doesn’t get much less exciting than, “We’re launching a company that will back up your data.” But there were still a lot of compelling stories:

  • Founded by serial entrepreneurs, bootstrapped a capital-intensive company, committed to each other for a year without salary.
  • Challenging the way every backup company before us was set up, by not asking customers to pick and choose files to back up.
  • Designing our own storage system.
  • Etc. etc.

For the initial launch, we focused on “unlimited for $5/month” and statistics from a survey we ran with Harris Interactive, which found that 94% of people did not regularly back up their data.

It’s an old adage that “Everyone has a story.” Regardless of what you’re doing, there is always something interesting to share. Dig for that.

The headline

Once you’ve captured what you think the interesting story is, you’ve got to boil it down. Yes, you need the elevator pitch, but this is shorter…it’s the headline pitch. Write the headline that you would love to see a journalist write.

Regardless of what you’re doing, there is always something interesting to share. Dig for that.

Now comes the part where you have to be really honest with yourself: if you weren’t involved, would you care?

The “Techmeme Test”

One way I try to ground myself is what I call the “Techmeme Test”. Techmeme lists the top tech articles. Read the headlines. Imagine the headline you wrote in the middle of the page. If you weren’t involved, would you click on it? Is it more or less compelling than the others? Much of tech news is dominated by the largest companies. If you want to get written about, your story should be more compelling. If not, go back above and explore your story some more.

Embargoes, exclusives and calls-to-action

Journalists write about news. Thus, if you’ve already announced something and are then pitching a journalist to cover it, unless you’re giving her something significant that hasn’t been said, it’s no longer news. As a result, there are ‘embargoes’ and ‘exclusives’.

Embargoes: An embargo simply means that you are sharing news with a journalist that they need to keep private until a certain date and time.

If you’re Apple, this may be a formal and legal document. In our case, it’s as simple as saying “Please keep embargoed until 4/13/17 at 8am California time” in the pitch. Some sites explicitly will not keep embargoes; for example, The Information will only break news. If you want to launch something later, do not share information with journalists at these sites. If you are only working with a single journalist for a story, and your announcement time is flexible, you can jointly work out a date and time to announce. However, if you have a fixed launch time or are working with a few journalists, embargoes are key.

Exclusives: An exclusive means you’re giving something specifically to that journalist. Most journalists love an exclusive as it means readers have to come to them for the story. One option is to give a journalist an exclusive on the entire story. If it is your dream journalist, this may make sense. Another option, however, is to give exclusivity on certain pieces. For example, for your launch you could give an exclusive on funding detail & a VC interview to a more finance-focused journalist and insight into the tech & a CTO interview to a more tech-focused journalist.

Call-to-Action: With our launch we gave TechCrunch, Ars Technica, and SimplyHelp URLs that gave the first few hundred of their readers access to the private beta. Once those first few hundred users from each site downloaded, the beta would be turned off.

Thus, we used a combination of embargoes, exclusives, and a call-to-action during our initial launch to be able to brief journalists on the news before it went live, give them something they could announce as exclusive, and provide a time-sensitive call-to-action to the readers so that they would actually sign up and not just read and go away.

How to Find the Most Authoritative Sites / Authors

“If a press release is published and no one sees it, was it published?” Perhaps the time existed when sending a press release out over the wire meant journalists would read it and write about it. That time has long been forgotten. Over 1,000 unread press releases are published every day. If you want your compelling story to be covered, you need to find the handful of journalists that will care.

Determine the publications

Find the publications that cover the type of story you want to share. If you’re in tech, Techmeme has a leaderboard of publications ranked by leadership and presence. This list will tell you which publications are likely to have influence. Visit the sites and see if your type of story appears on their site. But, once you’ve determined the publication, do NOT send a pitch to their “[email protected]” or “[email protected]” email addresses. In all the times I’ve done that, I have never had a single response. Those email addresses are likely on every PR, press release, and spam list and unlikely to get read. Instead…

Determine the journalists

Once you’ve determined which publications cover your area, check which journalists are doing the writing. Skim the articles and search for keywords and competitor names.

Over 1,000 unread press releases are published every day.

Identify one primary journalist at the publication that you would love to have cover you, and secondary ones if there are a few good options. If you’re not sure which one should be the primary, consider a few tests:

  • Do they truly seem to care about the space?
  • Do they write interesting/compelling stories that ‘get it’?
  • Do they appear on the Techmeme leaderboard?
  • Do their articles get liked/tweeted/shared and commented on?
  • Do they have a significant social presence?

Leveraging Google

Google author search by date

In addition to Techmeme (or if you aren’t in the tech space), Google will become a must-have tool for finding the right journalists to pitch. Below the search box you will find a number of tabs. Click on Tools and change the Any time setting to Custom range. I like to use the past six months to ensure I find authors who are actively writing about my market. I start with All results. This will return a combination of product sites and articles, depending upon your search term.

Scan for articles and click on the link to see if the article is on topic. If it is, find the author’s name. Often, clicking on the author’s name will take you to a bio page that includes their Twitter, LinkedIn, and/or Facebook profile. Many times you will find their email address in the bio. Collect all the information and add it to your outreach spreadsheet. Click here to get a copy. It’s always a good idea to comment on the article to start building awareness of your name. Another good idea is to tweet or like the article.

Next click on the News tab and set the same search parameters. You will get a different set of results. Repeat the same steps. Between the two searches you will have a list of authors that actively write for the websites that Google considers the most authoritative on your market.

How to find the most socially shared authors

Buzzsumo search for most shared by date

Your next step is to find the writers whose articles get shared the most socially. Go to Buzzsumo and click on the Most Shared tab. Enter search terms for your market as well as competitor names. Again I like to use the past 6 months as the time range. You will get a list of articles that have been shared the most across Facebook, LinkedIn, Twitter, Pinterest, and Google+. In addition to finding the most shared articles and their authors you can also see some of the Twitter users that shared the article. Many of those Twitter users are big influencers in your market so it’s smart to start following and interacting with them as well as the authors.

How to Find Author Email Addresses

Some journalists publish their contact info right on the stories. For those that don’t, a bit of googling will often get you the email. For example, TechCrunch wrote a story a few years ago where they published all of their email addresses, in response to a new service that charges a small fee to provide journalist email addresses. Sometimes their Twitter pages will link to a personal site where they share an email address.

Of course, all is not lost if you don’t find an email in the bio. There are two good services for finding emails, https://app.voilanorbert.com/ and https://hunter.io/. For Voila Norbert, enter the author’s name and the website you found their article on. The majority of the time you search for an author at a major publication, Norbert will return an accurate email address. If it doesn’t, try Hunter.io.

On Hunter.io, enter the domain name and click on Personal Only. Then scroll through the results to find the author’s email. I’ve found Norbert to be more accurate overall, but between the two you will find most major authors’ email addresses.

Email, by the way, is not necessarily the best way to engage a journalist. Many are avid Twitter users. Follow them and engage – that means read/retweet/favorite their tweets; reply to their questions, and generally be helpful BEFORE you pitch them. Later when you email them, you won’t be just a random email address.

Don’t spam

Now that you have all these email addresses (possibly thousands if you purchased a list) – do NOT spam. It is incredibly tempting to think “I could try to figure out which of these folks would be interested, but if I just email all of them, I’ll save myself time and be more likely to get some of them to respond.” Don’t do it.

Follow them and engage – that means read/retweet/favorite their tweets; reply to their questions, and generally be helpful BEFORE you pitch them.

First, you’ll want to tailor your pitch to the individual. Second, it’s a small world and you’ll be known as someone who spams – reputation is golden. Also, don’t call journalists. Unless you know them or they’ve said they’re open to calls, you’re most likely to just annoy them.

Build a relationship

Build Trust with reporters

Play the long game. You may be focusing just on the launch and hoping to get this one story covered, but if you don’t quickly flame out, you will have many more opportunities to tell interesting stories that you’ll want the press to cover. Be honest and don’t exaggerate.
When you have 500 users it’s tempting to say, “We’ve got thousands!” Don’t. The good journalists will see through it and it’ll likely come back to bite you later. If you don’t know something, say “I don’t know but let me find out for you.” Most journalists want to write interesting stories that their readers will appreciate. Help them do that. Build deeper relationships with 5 – 10 journalists, rather than spamming thousands.

Stay organized

It doesn’t need to be complicated, but keep a spreadsheet that includes the name, publication, and contact info of the journalists you care about. Then, use it to keep track of who you’ve pitched, who’s responded, whether you’ve sent them the materials they need, and whether they intend to write/have written.

Make their life easy

Journalists have a million PR people emailing them, are actively engaging with readers on Twitter and in the comments, are tracking their metrics, are working their sources…and all the while needing to publish new articles. They’re busy. Make their life easy and they’re more likely to engage with yours.

Get to know them

Before sending them a pitch, know what they’ve written in the space. If you tell them how your story relates to ones they’ve written, it’ll help them put the story in context, and enable them to possibly link back to a story they wrote before.

Prepare your materials

Journalists will need somewhere to get more info (prepare a fact sheet), a URL to link to, and at least one image (ideally a few to choose from). A fact sheet gives bite-sized snippets of information they may need about your startup or product: what it is, how big the market is, what’s the pricing, who’s on the team, etc. The URL is where their reader will get the product or more information from you. It doesn’t have to be live when you’re pitching, but you should be able to tell what the URL will be. The images are ones that they could embed in the article: a product screenshot, a CEO or team photo, an infographic. Scan the types of images included in their articles. Don’t send any of these in your pitch, but have them ready. Studies, stats, customer/partner/investor quotes are also good to have.

Pitch

A pitch has to be short and compelling.

Subject Line

Think back to the headline you want. Is it really compelling? Can you shorten it to a subject line? Include what’s happening and when. For Mike Arrington at TechCrunch, our first subject line was “Startup doing an ‘online time machine’”. Later I would include, “launching June 6th”.

For John Timmer at Ars Technica, it was “Demographics data re: your 4/17 article”. Why? Because he wrote an article titled “WiFi popular with the young people; backups, not so much”. Since we had run a demographics survey on backups, I figured that as a science editor he’d be interested in this additional data.

Body

A few key things about the body of the email. It should be short and to the point, no more than a few sentences. Here was my actual, original pitch email to John:

Hey John,

We’re launching Backblaze next week which provides a Time Machine-online type of service. As part of doing some research I read your article about backups not being popular with young people and that you had wished Accenture would have given you demographics. In prep for our invite-only launch I sponsored Harris Interactive to get demographic data on who’s doing backups and if all goes well, I should have that data on Friday.

Next week starts Backup Awareness Month (and yes, probably Clean Your House Month and Brush Your Teeth Month)…but nonetheless…good time to remind readers to backup with a bit of data?

Would you be interested in seeing/talking about the data when I get it?

Would you be interested in getting a sneak peak at Backblaze? (I could give you some invite codes for your readers as well.)

Gleb Budman        

CEO and Co-Founder

Backblaze, Inc.

Automatic, Secure, High-Performance Online Backup

Cell: XXX-XXX-XXXX

The Good: It said what we’re doing, why this relates to him and his readers, provides him information he had asked for in an article, ties to something timely, is clearly tailored for him, is pitched by the CEO and Co-Founder, and provides my cell.

The Bad: It’s too long.

I got better later. Here’s an example:

Subject: Does temperature affect hard drive life?

Hi Peter, there has been much debate about whether temperature affects how long a hard drive lasts. Following up on the Backblaze analyses of how long do drives last & which drives last the longest (that you wrote about) we’ve now analyzed the impact of heat on the nearly 40,000 hard drives we have and found that…

We’re going to publish the results this Monday, 5/12 at 5am California-time. Want a sneak peak of the analysis?

Timing

A common question is “When should I launch?” What day, what time? I prefer to launch on Tuesday at 8am California-time. Launching earlier in the week gives breathing room for the news to live longer. While your launch may be a single article posted and that’s that, if it ends up a larger success, earlier in the week allows other journalists (including ones who are in other countries) to build on the story. Monday announcements can be tough because the journalists generally need to have their stories finished by Friday, and while ideally everything is buttoned up beforehand, startups sometimes use the weekend as overflow before a launch.

The 8am California-time is because it allows articles to be published at the beginning of the day on the West Coast and around lunchtime on the East Coast. Later and you risk it being past publishing time for the day. We used to launch at 5am in order to catch the morning on the East Coast, but it did not seem to provide a significant benefit in coverage or impact, and it meant that the entire internal team needed to be up at 3am or 4am. Sometimes that’s critical, but I prefer not to burn the team out when it’s not.

Finally, try to stay clear of holidays, major announcements and large conferences. If Apple is coming out with their next iPhone, many of the tech journalists will be busy at least a couple days prior and possibly a week after. Not always obvious, but if you can, find times that are otherwise going to be slow for news.

Follow-up

There is a fine line between persistence and annoyance. I once had a journalist write me after we had an announcement that was covered by the press, “Why didn’t you let me know?! I would have written about that!” I had sent him three emails about the upcoming announcement to which he never responded.

My general rule is 3 emails.

Ugh. However, my takeaway from this isn’t that I should send 10 emails to every journalist. It’s that sometimes these things happen.

My general rule is 3 emails. If I’ve identified a specific journalist that I think would be interested and have a pitch crafted for her, I’ll send her the email ideally 2 weeks prior to the announcement. I’ll follow-up a week later, and one more time 2 days prior. If she ever says, “I’m not interested in this topic,” I note it and don’t email her on that topic again.

If a journalist wrote about us, I read the article and engage in the comments (or someone on our team, such as our social guy, @YevP, does). We’ll often promote the story through our social channels and email our employees, who may choose to share the story as well. This helps us, but also helps the journalist get broader reach for their story. Again, the goal is to build a relationship with the journalists in your space. If the journalist wrote something relevant to your customers, you’re providing a service to your customers AND helping the journalist get the word out about the article.

At times the stories also end up shared on sites such as Hacker News, Reddit, Slashdot, or become active conversations on Twitter. Again, we try to engage there and respond to questions (when we do, we are always clear that we’re from Backblaze.)

And finally, I’ll often send a short thank you to the journalist.

Getting Your First 1,000 Customers With Press

As I mentioned at the beginning, there is more than one way to get your first 1,000 customers. My favorite is working with the press to share your story. If you figure out your compelling story, find the right journalists, make their life easy, pitch, and follow up, you stand a high likelihood of getting coverage and customers. Better yet, that coverage will provide credibility for your company, and if done right, will establish you as a resource for the press for the future.

Like any muscle, this process takes working out. The first time may feel a bit daunting, but just take the steps one at a time. As you do this a few times, the process will get easier and you’ll know who to reach out to and how to quickly determine which stories will be compelling.

The post How To Get Your First 1,000 Customers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

[$] Emacs and Magit

Post Syndicated from corbet original https://lwn.net/Articles/727550/rss

The Git source-code management system is widely known for its flexibility
and for the distributed development model that it supports. Its reputation
for ease of use is … less well established. There should, thus, be
an opening for front-end systems that can make Git easier to use. One of
the most comprehensive Git front ends, Magit, works within the Emacs editor and has a
wide following. But Magit has run into some turbulence within the Emacs
development community that is blocking its wider distribution.

Creating a Daily Dashboard to Track Bounces and Complaints

Post Syndicated from Rubem De Lima Savordelli original https://aws.amazon.com/blogs/ses/creating-a-daily-dashboard-to-track-bounces-and-complaints/

Bounce and complaint rates can have a negative impact on your sender reputation, and a bad sender reputation makes it less likely that the emails you send will reach your recipients’ inboxes. Further, if your bounce or complaint rate is too high, we may have to suspend your Amazon SES account to protect other users. For these reasons, it is very important that you have a process in place to remove email addresses that have bounced or complained from your recipient list.

This article includes background information about bounces and complaints. It also discusses a sample solution that you can use to keep track of the bounce and complaint notifications that you receive.

What is a Bounce?

A bounce occurs when a message cannot be delivered to the intended recipient. There are two types of bounces:

  • A hard bounce occurs when a persistent issue prevents the message from being delivered. Hard bounces can occur when the recipient’s email address does not exist or the receiving domain does not exist. When an email hard bounces, it means that the recipient did not receive the message, and Amazon SES will no longer attempt to deliver the message.
  • A soft bounce occurs when a temporary issue prevents a message from being delivered. Soft bounces can occur when the recipient’s mailbox is full, when the connection to the receiving email server times out, or when there are too many simultaneous connections to the receiving mail server. When an email soft bounces, Amazon will attempt to redeliver it. If the issue persists, Amazon SES will stop trying to deliver the message, and the soft bounce will be converted to a hard bounce.

To learn more about bounces, see the Amazon SES Bounce FAQ in the Amazon SES Developer Guide.

What is a Complaint?

When an email recipient clicks the Mark as Spam (or similar) button in his or her email client, the ISP records the event as a complaint. If the emails that you send generate too many of these complaint events, the ISP may conclude that you’re sending spam. Many ISPs provide feedback loops, in which the ISP provides you with information about the message that generated the complaint event.

For more information about complaints, see the Amazon SES Complaint FAQ in the Amazon SES Developer Guide.

Building a Daily Dashboard

We recently added a section to the Amazon SES Developer Guide that documents the process of creating a daily bounce and complaint tracking dashboard. You can find the procedures for creating this daily dashboard at http://docs.aws.amazon.com/ses/latest/DeveloperGuide/bouncecomplaintdashboard.html.

This solution uses several AWS components—including Simple Notification Service (SNS), Simple Queue Service (SQS), Identity and Access Management (IAM), Simple Storage Service (S3), Lambda, and CloudWatch—to create a dashboard that is emailed to you every day. The daily dashboard, illustrated in the following image, contains a list of the messages that generated bounces and complaints over the past 24 hours.

This solution uses SNS to track bounce and complaint notifications. Those notifications are then collected in an SQS queue. A CloudWatch trigger initiates a Lambda function, which collects the notification events from SQS, processes them, publishes a dashboard to an S3 bucket, and sends you an email when the dashboard is ready to view. The following image illustrates the architecture of this solution.
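As a rough illustration of that flow (not the solution from the developer guide, which remains the authoritative walkthrough), here is a hedged sketch of a Lambda handler that drains the SQS queue, tallies bounces and complaints, publishes a simple HTML summary to S3, and emails the totals via SES. The queue URL, bucket name, and addresses are placeholders.

```python
# Rough sketch of the Lambda step: drain SES notifications from SQS, tally
# bounces/complaints, publish an HTML summary to S3, and email the totals.
# The queue URL, bucket, and email addresses below are placeholders, and the
# parsing assumes SNS "raw message delivery" is disabled (the default).
import json
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")
ses = boto3.client("ses")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ses-notifications"
BUCKET = "my-ses-dashboard-bucket"
REPORT_RECIPIENT = "ops@example.com"


def handler(event, context):
    bounces, complaints = [], []
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=1)
        messages = resp.get("Messages", [])
        if not messages:
            break
        for msg in messages:
            # SNS wraps the SES notification, so the JSON is nested twice.
            notification = json.loads(json.loads(msg["Body"])["Message"])
            if notification.get("notificationType") == "Bounce":
                bounces.append(notification)
            elif notification.get("notificationType") == "Complaint":
                complaints.append(notification)
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

    html = "<h1>SES daily report</h1><p>Bounces: %d, Complaints: %d</p>" % (len(bounces), len(complaints))
    s3.put_object(Bucket=BUCKET, Key="dashboard.html", Body=html, ContentType="text/html")

    ses.send_email(
        Source=REPORT_RECIPIENT,
        Destination={"ToAddresses": [REPORT_RECIPIENT]},
        Message={
            "Subject": {"Data": "Daily bounce/complaint dashboard"},
            "Body": {"Text": {"Data": "Bounces: %d, Complaints: %d" % (len(bounces), len(complaints))}},
        },
    )
```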

When you receive the daily dashboard, you should use it to remove the addresses that hard bounced or complained from your recipient list. This measure will help protect your deliverability and inbox placement rates.

This solution is just one method of tracking the bounces and complaints that you receive when sending email using Amazon SES. We hope you find this sample solution useful. If you have any questions about this solution, please leave a comment below, or start a discussion in the Amazon SES forum.

Digital painter rundown

Post Syndicated from Eevee original https://eev.ee/blog/2017/06/17/digital-painter-rundown/

Another patron post! IndustrialRobot asks:

You should totally write about drawing/image manipulation programs! (Inspired by https://eev.ee/blog/2015/05/31/text-editor-rundown/)

This is a little trickier than a text editor comparison — while most text editors are cross-platform, quite a few digital art programs are not. So I’m effectively unable to even try a decent chunk of the offerings. I’m also still a relatively new artist, and image editors are much harder to briefly compare than text editors…

Right, now that your expectations have been suitably lowered:

Krita

I do all of my digital art in Krita. It’s pretty alright.

Okay so Krita grew out of Calligra, which used to be KOffice, which was an office suite designed for KDE (a Linux desktop environment). I bring this up because KDE has a certain… reputation. With KDE, there are at least three completely different ways to do anything, each of those ways has ludicrous amounts of customization and settings, and somehow it still can’t do what you want.

Krita inherits this aesthetic by attempting to do literally everything. It has 17 different brush engines, more than 70 layer blending modes, seven color picker dockers, and an ungodly number of colorspaces. It’s clearly intended primarily for drawing, but it also supports animation and vector layers and a pretty decent spread of raster editing tools. I just right now discovered that it has Photoshop-like “layer styles” (e.g. drop shadow), after a year and a half of using it.

In fairness, Krita manages all of this stuff well enough, and (apparently!) it manages to stay out of your way if you’re not using it. In less fairness, they managed to break erasing with a Wacom tablet pen for three months?

I don’t want to rag on it too hard; it’s an impressive piece of work, and I enjoy using it! The emotion it evokes isn’t so much frustration as… mystified bewilderment.

I once filed a ticket suggesting the addition of a brush size palette — a panel showing a grid of fixed brush sizes that makes it easy to switch between known sizes with a tablet pen (and increases the chances that you’ll be able to get a brush back to the right size again). It’s a prominent feature of Paint Tool SAI and Clip Studio Paint, and while I’ve never used either of those myself, I’ve seen a good few artists swear by it.

The developer response was that I could emulate the behavior by creating brush presets. But that’s flat-out wrong: getting the same effect would require creating a ton of brush presets for every brush I have, plus giving them all distinct icons so the size is obvious at a glance. Even then, it would be much more tedious to use and would fill my presets with junk.

And that sort of response is what’s so mysterious to me. I’ve never even been able to use this feature myself, but a year of amateur painting with Krita has convinced me that it would be pretty useful. But a developer didn’t see the use and suggested an incredibly tedious alternative that only half-solves the problem and creates new ones. Meanwhile, of the 28 existing dockable panels, a quarter of them are different ways to choose colors.

What is Krita trying to be, then? What does Krita think it is? Who precisely is the target audience? I have no idea.


Anyway, I enjoy drawing in Krita well enough. It ships with a respectable set of brushes, and there are plenty more floating around. It has canvas rotation, canvas mirroring, perspective guide tools, and other art goodies. It doesn’t colordrop on right click by default, which is arguably a grave sin (it shows a customizable radial menu instead), but that’s easy to rebind. It understands having a background color beneath a bottom transparent layer, which is very nice. You can also toggle any brush between painting and erasing with the press of a button, and that turns out to be very useful.

It doesn’t support infinite canvases, though it does offer a one-click button to extend the canvas in a given direction. I’ve never used it (and didn’t even know what it did until just now), but would totally use an infinite canvas.

I haven’t used the animation support too much, but it’s pretty nice to have. Granted, the only other animation software I’ve used is Aseprite, so I don’t have many points of reference here. It’s a relatively new addition, too, so I assume it’ll improve over time.

The one annoyance I remember with animation was really an interaction with a larger annoyance, which is: working with selections kind of sucks. You can’t drag a selection around with the selection tool; you have to switch to the move tool. That would be fine if you could at least drag the selection ring around with the selection tool, but you can’t do that either; dragging just creates a new selection.

If you want to copy a selection, you have to explicitly copy it to the clipboard and paste it, which creates a new layer. Ctrl-drag with the move tool doesn’t work. So then you have to merge that layer down, which I think is where the problem with animation comes in: a new layer is non-animated by default, meaning it effectively appears in every frame, so simply merging it down will merge it onto every single frame of the layer below. And you won’t even notice until you switch frames or play back the animation. Not ideal.

This is another thing that makes me wonder about Krita’s sense of identity. It has a lot of fancy general-purpose raster editing features that even GIMP is still struggling to implement, like high color depth support and non-destructive filters, yet something as basic as working with selections is clumsy. (In fairness, GIMP is a bit clumsy here too, but it has a consistent notion of “floating selection” that’s easy enough to work with.)

I don’t know how well Krita would work as a general-purpose raster editor; I’ve never tried to use it that way. I can’t think of anything obvious that’s missing. The only real gotcha is that some things you might expect to be tools, like smudge or clone, are just types of brush in Krita.

GIMP

Ah, GIMP — open source’s answer to Photoshop.

It’s very obviously intended for raster editing, and I’m pretty familiar with it after half a lifetime of only using Linux. I even wrote a little Scheme script for it ages ago to automate some simple edits to a couple hundred files, back before I was aware of ImageMagick. I don’t know what to say about it, specifically; it’s fairly powerful and does a wide variety of things.

In fact I’d say it’s almost frustratingly intended for raster editing. I used GIMP in my first attempts at digital painting, before I’d heard of Krita. It was okay, but so much of it felt clunky and awkward. Painting is split between a pencil tool, a paintbrush tool, and an airbrush tool; I don’t really know why. The default brushes are largely uninteresting. Instead of brush presets, there are tool presets that can be saved for any tool; it’s a neat idea, but doesn’t feel like a real substitute for brush presets.

Much of the same functionality as Krita is there, but it’s all somehow more clunky. I’m sure it’s possible to fiddle with the interface to get something friendlier for painting, but I never really figured out how.

And then there’s the surprising stuff that’s missing. There’s no canvas rotation, for example. There’s only one type of brush, and it just stamps the same pattern along a path. I don’t think it’s possible to smear or blend or pick up color while painting. The only way to change the brush size is via the very sensitive slider on the tool options panel, which I remember being a little annoying with a tablet pen. Also, you have to specifically enable tablet support? It’s not difficult or anything, but I have no idea why the default is to ignore tablet pressure and treat it like a regular mouse cursor.

As I mentioned above, there’s also no support for high color depth or non-destructive editing, which is honestly a little embarrassing. Those are the major things Serious Professionals™ have been asking for for ages, and GIMP has been trying to provide them, but it’s taking a very long time. The first signs of GEGL, a new library intended to provide these features, appeared in GIMP 2.6… in 2008. The last major release was in 2012. GIMP has been working on this new plumbing for almost as long as Krita’s entire development history. (To be fair, Krita has also raised almost €90,000 from three Kickstarters to fund its development; I don’t know that GIMP is funded at all.)

I don’t know what’s up with GIMP nowadays. It’s still under active development, but the exact status and roadmap are a little unclear. I still use it for some general-purpose editing, but I don’t see any reason to use it to draw.

I do know that canvas rotation will be in the next release, and there was some experimentation with embedding MyPaint’s brush engine (though when I tried it it was basically unusable), so maybe GIMP is interested in wooing artists? I guess we’ll see.

MyPaint

Ah, MyPaint. I gave it a try once. Once.

It’s a shame, really. It sounds pretty great: specifically built for drawing, has very powerful brushes, supports an infinite canvas, supports canvas rotation, has a simple UI that gets out of your way. Perfect.

Or so it seems. But in MyPaint’s eagerness to shed unnecessary raster editing tools, it forgot a few of the more useful ones. Like selections.

MyPaint has no notion of a selection, nor of copy/paste. If you want to move a head to align better to a body, for example, the sanctioned approach is to duplicate the layer, erase the head from the old layer, erase everything but the head from the new layer, then move the new layer.

I can’t find anything that resembles HSL adjustment, either. I guess the workaround for that is to create H/S/L layers and floodfill them with different colors until you get what you want.

I can’t work seriously without these basic editing tools. I could see myself doodling in MyPaint, but Krita works just as well for doodling as for serious painting, so I’ve never gone back to it.

Drawpile

Drawpile is the modern equivalent to OpenCanvas, I suppose? It lets multiple people draw on the same canvas simultaneously. (I would not recommend it as a general-purpose raster editor.)

It’s a little clunky in places — I sometimes have bugs where keyboard focus gets stuck in the chat, or my tablet cursor becomes invisible — but the collaborative part works surprisingly well. It’s not a brush powerhouse or anything, and I don’t think it allows textured brushes, but it supports tablet pressure and canvas rotation and locked alpha and selections and whatnot.

I’ve used it a couple times, and it’s worked well enough that… well, other people made pretty decent drawings with it? I’m not sure I’ve managed yet. And I wouldn’t use it single-player. Still, it’s fun.

Aseprite

Aseprite is for pixel art so it doesn’t really belong here at all. But it’s very good at that and I like it a lot.

That’s all

I can’t name any other serious contender that exists for Linux.

I’m dimly aware of a thing called “Photo Shop” that’s more intended for photos but functions as a passable painter. More artists seem to swear by Paint Tool SAI and Clip Studio Paint. Also there’s Paint.NET, but I have no idea how well it’s actually suited for painting.

And that’s it! That’s all I’ve got. Krita for drawing, GIMP for editing, Drawpile for collaborative doodling.

Building a Competitive Moat: Turning Challenges Into Advantages

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/turning-challenges-into-advantages/

castle on top of a storage pod

In my previous post on how Backblaze got started, I mentioned that “just because we knew the right solution, didn’t mean that it was possible.” I’ll dig into that here. The right solution was to offer unlimited backup for $5 per month. The price of storage at the time, however, would have likely forced us to price our unlimited backup service at 2x – 5x that.

We were faced with a difficult challenge. We could compromise a fundamental feature of our product by removing the unlimited storage element; increase our price point to cover our costs, but likely limit our potential customer base; seek funding in order to run at a loss while we built market share, with a hope/prayer we could make a profit in the future; or find another way (a huge unknown that might not have a solution). Below I’ll dig into the options that were available, the paths we tried, and how this challenge completely transformed our company and ended up being our greatest technological advantage.

Available Options:

Use a Storage Service

Originally we intended to build the backup application, but leave the back-end storage to others; likely Amazon S3. This had many advantages:

  1. We would not have to worry about the storage at all
  2. It would scale up or down as we needed it
  3. We would pay only for what we used

Especially as a small, bootstrapped company with limited resources – these were incredible benefits.

There was just one problem. At S3’s then-current pricing ($0.15/GB/month), a customer storing just 33 GB would cost us 100% of the $5 per month we would collect. Additionally, we would need to pay S3 transaction and download charges, along with our engineering/support/marketing and other expenses. The conclusion: even if the average customer stored just 33 GB, it would cost us at least $10/month for a customer we were charging just $5/month.
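As a quick sanity check on that math (using the numbers quoted above, not current pricing):

```python
# Back-of-envelope check of the 2007 S3 economics described above.
s3_price_per_gb_month = 0.15          # $/GB/month, as quoted in the post
monthly_revenue_per_customer = 5.00   # unlimited backup price

break_even_gb = monthly_revenue_per_customer / s3_price_per_gb_month
print(f"Storage alone consumes the full $5 at about {break_even_gb:.1f} GB")  # ~33.3 GB
```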

In 2007, when we were getting started, there were a few other storage services available. But all were more expensive. Despite the fantastic benefits of using such a service, it simply didn’t work for us.

Buy Storage Systems

Buying storage systems didn’t have all the benefits of using a storage service – we would have to forecast need, buy in big blocks up front, manage data centers, etc. – but it seemed the second-best option. Companies such as EMC, NetApp, Dell, and others sold hundreds of petabytes of storage systems where they provide the servers, software, and support.

Alas, there were two problems: one temporary, the other permanent (and fatal). The temporary problem was that these systems cost hundreds of thousands of dollars just to get started. This was challenging for us from a cash-flow perspective, but it was just a question of coming up with the cash. The permanent problem was that these systems cost ~$1,000/TB of storage. Hard drives were selling for ~$100/TB, so there was a 10x markup for the storage system. That markup eliminated this path. What if the average customer had 100 GB to store? It would take us 20 months just to pay off the purchase. We weren’t sure how much data the average customer would have, but the scenarios we were running made it seem like a $5/month price point was unsustainable.

Our Choices Were:

Don’t Offer the Right Solution

If it’s impossible to offer unlimited backup for $5/month, there are certainly choices. We could have raised the price to $10/month, not made the backup unlimited, or closed up shop altogether. All doable, none ideal.

Raise Funding

Plenty of companies raise funding before they can be self-sustaining, and it can work out great for everyone. We had raised funding for a previous company and believed we could have done it for Backblaze. And raising funding would have taken care of the cash-flow issue if we chose to buy storage systems.

However, it would have left us with a business with negative unit economics – we would lose money on every customer, and the faster we grew, the more money we would lose. VCs do fund these types of companies often (many of the delivery companies today fall in this realm) with the idea that, at scale, you improve your cost structure and possibly also charge more. But it’s a dangerous game since not only is the business not self-sustaining, it inevitably must be significantly altered in order to survive.

Find a Way to Store Data for Less

If there were some way to store data for less, significantly less, it could all work. We had a tiny glimmer of hope that it would be possible: Since hard drives only cost ~$100/TB, if we could somehow use those drives without adding much overhead, that would be quite affordable.

“we wanted to build a sustainable business from day one and build a culture that believes dollars come from customers.”

Our first decision was to not compromise our product by restricting the amount of storage. Although this would have been a much easier solution, it violated our core mission: Create a simple and inexpensive solution to backup all of your important data.

We had previously also decided not to raise funding to get started because we wanted to build a sustainable business from day one and build a culture that believes dollars come from customers. With those decisions made, we moved onto finding the best solution to fulfill our mission and create a viable company.

Experimentation

All we wanted was to attach hard drives to the Internet. If we could do that inexpensively, our backup application could store the data there and we could offer our unlimited backup service.

A hard drive needs to be connected to a server to be available on the Internet. It certainly wouldn’t be very cost effective to have one server for every hard drive, as the server costs would dominate the equation. Alternatively, trying to attach a lot of drives to a server resulted in needing expensive “enterprise” servers. The goal then became cost-efficiently attaching as many hard drives as possible to one server. According to its spec, USB is supposed to allow for 127 devices to be daisy-chained to a single port. We tried; it didn’t work. We considered Firewire, which could connect 63 devices, but the connectors are aimed at graphic designers and ended up too expensive on a unit-basis. Our next thought was to use small consumer-grade DAS (Direct-attached storage) devices and connect those to a server. We managed to attach 8 DAS devices with 4 drives each for a total of 32 hard drives connected to one server.

DAS units attached to a server
This worked well, but it was operationally challenging as none of these devices were meant to fit in a data center rack. Further complicating matters was that moving one of these setups required cabling 10 power cords, and separately moving 9 boxes. Fine at small scale, but very hard to scale up.

We realized that we didn’t need all the boxes, we just needed backplanes to connect the drives from the DAS boxes to the motherboard from the server. We found a different DAS box that supports port multipliers and took that backplane. How did we decide on that DAS box? Tim, co-founder & Chief Cloud Officer, remembers going to Fry’s and picking the box that looked “about right”.

That all laid the path for our eventual 45 drive design. The next thought was: If we could put all that in one box, it might be the solution we were looking for. The first iteration of this was a plywood box.

the first wooden storage pod

That eventually evolved into a steel server and what we refer to as a Storage Pod.

steel storage pod chassis

Building a Storage Platform

The Storage Pod became our key building block, but it was just a tiny component of the ‘storage platform’. We had to write software that would run on each Storage Pod, software that would create redundancy between the Storage Pods, and central software and systems that would coordinate other aspects of the system to accept/load balance/validate/clean up data. We had to find and train contract manufacturers to build the Storage Pods, find and negotiate data center space and bandwidth, set up processes to buy drives and track their reliability, hire people to maintain the systems, and set up the business processes to do all of this and more at scale.

All of this ended up taking tremendous technical effort, management engagement, and work from all corners of Backblaze. But it has also paid enormous dividends.

The Transformation

We started Backblaze thinking of ourselves as a backup company. In reality, we became a storage company with ‘backup’ as the first service we offered on our storage platform. Our backup service relies on the storage platform as, without the storage platform, we couldn’t offer unlimited backup. To enable the backup service, storage became the foundation of our company and is still what we live and breathe every day.

It didn’t just change how we built the service, it changed the fundamental DNA of the company.

Dividends

Creating our own storage platform was certainly hard. But it enabled us to offer our unlimited backup for a low price and do that while running a sustainable business.

“It didn’t just change how we built the service, it changed the fundamental DNA of the company.”

We felt that we had a service and price point that customers wanted, and we “unlocked” the way to build it. Having our own storage platform also provides us with a deep connection to our customers and the storage community – we share how we build Storage Pods and how reliable hard drives have been in our environment. That content, in turn, helps bring awareness to Backblaze; the awareness helps establish the company as a tech leader; and that reputation helps us recruit to our growing team and earns customers who are evaluating our solutions vs. Storage Company X.

And after years of being a storage company with a backup service, and being asked all the time to just offer our storage directly, we launched our Backblaze B2 Cloud Storage service. We offer this raw storage at a price of $0.005/GB/month – that’s less than 1/4th of the price of S3.

If we had built our backup service on one of the existing storage services or storage systems, it would have been easier – but none of this would have been possible. This challenge, which we have spent a decade working to overcome, has also transformed our company and became our greatest technological advantage.

The post Building a Competitive Moat: Turning Challenges Into Advantages appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Hiring a Content Director

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/hiring-content-director/


Backblaze is looking to hire a full time Content Director. This role is an essential piece of our team, reporting directly to our VP of Marketing. As the hiring manager, I’d like to tell you a little bit more about the role, how I’m thinking about the collaboration, and why I believe this to be a great opportunity.

A Little About Backblaze and the Role

Since 2007, Backblaze has earned a strong reputation as a leader in data storage. Our products are astonishingly easy to use and affordable to purchase. We have engaged customers and an involved community that helps drive our brand. Our audience numbers in the millions, and our primary interaction point is the Backblaze blog. We publish content for engineers (data infrastructure, topics in the data storage world), consumers (how-tos, merits of backing up), and entrepreneurs (business insights). In all categories, our Content Director drives our earned position as leaders.

Backblaze has a culture focused on being fair and good (to each other and our customers). We have created a sustainable business that is profitable and growing. Our team places a premium on open communication, being cleverly unconventional, and helping each other out. The Content Director, specifically, balances our needs as a commercial enterprise (at the end of the day, we want to sell our products) with the custodianship of our blog (and the trust of our audience).

There’s a lot of ground to be covered at Backblaze. We have three discrete business lines:

  • Computer Backup -> a 10-year-old business focused on backing up consumer computers.
  • B2 Cloud Storage -> Competing with Amazon, Google, and Microsoft… just at ¼ of the price (but with the same performance characteristics).
  • Business Backup -> Both Computer Backup and B2 Cloud Storage, but focused on SMBs and enterprise.

The Best Candidate Is…

An excellent writer – possessing a solid academic understanding of writing, the creative process, and delivering against deadlines. You know how to write with multiple voices for multiple audiences. We do not expect our Content Director to be a storage infrastructure expert; we do expect a facility with researching topics, accessing our engineering and infrastructure team for guidance, and generally translating the technical into something easy to understand. The best Content Director must be an active participant in the business, strategy, and editorial debates, and then must execute with ruthless precision.

Our Content Director’s “day job” is making sure the blog is running smoothly and the sales team has compelling collateral (emails, case studies, white papers).

Specifically, the Perfect Content Director Excels at:

  • Creating well researched, elegantly constructed content on deadline. For example, each week, 2 articles should be published on our blog. Blog posts should rotate to address the constituencies for our 3 business lines – not all blog posts will appeal to everyone, but over the course of a month, we want multiple compelling pieces for each segment of our audience. Similarly, case studies (and outbound emails) should be tailored to our sales team’s proposed campaigns / audiences. The Content Director creates ~75% of all content but is responsible for editing 100%.
  • Understanding organic methods for weaving business needs into compelling content. The majority of our content (but not EVERY piece) must tie to some business strategy. We hate fluff and hold our promotional content to a standard of being worth someone’s time to read. To be effective, the Content Director must understand the target customer segments and use cases for our products.
  • Straddling both Consumer & SaaS mechanics. A key part of the job will be working to augment the collateral used by our sales team for both B2 Cloud Storage and Business Backup. This content should be compelling and optimized for converting leads. And our foundational business line, Computer Backup, deserves to be nurtured and grown.
  • Product marketing. The Content Director “owns” the blog, but also assists in writing case studies / white papers and creating collateral (email, trade show). Each of these has a variety of calls to action and audiences. Direct experience is a plus; experience that will plausibly translate to these areas is a requirement.
  • Articulating views on storage, backup, and cloud infrastructure. Not everyone has experience with this. That’s fine, but if you do, it’s strongly beneficial.

A Thursday In The Life:

  • Coordinate Collaborators – We have a deliverables-driven culture, not a meeting-driven one. We expect you to collaborate with internal blog authors and the occasional guest poster.
  • Collaborate with Design – Ensure imagery for upcoming posts / collateral are on track.
  • Augment Sales team – Lock content for next week’s outbound campaign.
  • Self-directed blog agenda – Feedback on next Tuesday’s post is addressed; next Thursday’s post is circulated to the marketing team for feedback & SEO polish.
  • Review Editorial calendar, make any changes.

Oh! And We Have Great Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive Option grants
  • Unlimited vacation days
  • Strong coffee & fully stocked Micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • Childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office – located near Caltrain and Highways 101 & 280.

Interested in Joining Our Team?

Send us an email to [email protected] with the subject “Content Director”. Please include your resume and 3 brief abstracts for content pieces.
Some hints for each of your three abstracts:

  • Create a compelling headline
  • Write clearly and concisely
  • Be brief, each abstract should be 100 words or less – no longer
  • Target each abstract to a different specific audience that is relevant to our business lines

Thank you for taking the time to read and consider all this. I hope it sounds like a great opportunity for you or someone you know. Principals only need apply.

The post Hiring a Content Director appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

RFD: the alien abduction prophecy protocol

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2017/05/rfd-alien-abduction-prophecy-protocol.html

“It’s tough to make predictions, especially about the future.”
– variously attributed to Yogi Berra and Niels Bohr

Right. So let’s say you are visited by transdimensional space aliens from outer space. There’s some old-fashioned probing, but eventually, they get to the point. They outline a series of apocalyptic prophecies, beginning with the surprise 2032 election of Dwayne Elizondo Mountain Dew Herbert Camacho as the President of the United States, followed by a limited-scale nuclear exchange with the Grand Duchy of Ruritania in 2036, and culminating with the extinction of all life due to a series of cascading Y2K38 failures that start at an Ohio pretzel reprocessing plant. Long story short, if you want to save mankind, you have to warn others of what’s to come.

But there’s a snag: when you wake up in a roadside ditch in Alabama, you realize that nobody is going to believe your story! If you come forward, your professional and social reputation will be instantly destroyed. If you’re lucky, the vindication of your claims will come fifteen years later; if not, it might turn out that you were pranked by some space alien frat boys who just wanted to have some cheap space laughs. The bottom line is, you need to be certain before you make your move. You figure this means staying mum until the Election Day of 2032.

But wait, this plan is also not very good! After all, how could your future self convince others that you knew about President Camacho all along? Well… if you work in information security, you are probably familiar with a neat solution: write down your account of events in a text file, calculate a cryptographic hash of this file, and publish the resulting value somewhere permanent. Fifteen years later, reveal the contents of your file and point people to your old announcement. Explain that you must have been in possession of this very file back in 2017; otherwise, you would not have known its hash. Voila – a commitment scheme!
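
For the sake of illustration, here is a minimal sketch of that commitment step in Python; the filename and the choice of SHA-256 are placeholders, not recommendations:

```python
import hashlib

def commit(path: str, algorithm: str = "sha256") -> str:
    """Hash the prophecy file; publish the hex digest now, reveal the file later."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Publish this digest somewhere public and durable today...
print(commit("prophecy-2017.txt"))

# ...then, in 2032, release prophecy-2017.txt so that anyone can re-run
# commit() and confirm it matches the value published years earlier.
```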

Although elegant, this approach can be risky: historically, the usable life of cryptographic hash functions seemed to hover somewhere around 15 years – so even if you pick a very modern algorithm, there is a real risk that future advances in cryptanalysis could severely undermine the strength of your proof. No biggie, though! For extra safety, you could combine several independent hash functions, or increase the computational cost of the hash by running it in a loop. There are also some less-known hash-based designs, such as SPHINCS, that are built with different trade-offs in mind and may offer longer-term security guarantees.
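
A hedged sketch of that belt-and-suspenders idea: concatenate digests from two unrelated constructions, then iterate the result to make recomputation more expensive. The iteration count is arbitrary and illustrative, and this is a sketch rather than a vetted design:

```python
import hashlib

def hardened_commitment(data: bytes, rounds: int = 1_000_000) -> str:
    """Combine two unrelated hash constructions, then iterate to add CPU cost."""
    # SHA-256 (Merkle-Damgard) and SHA3-256 (sponge) rest on different designs,
    # so a single cryptanalytic break is less likely to doom the commitment.
    digest = hashlib.sha256(data).digest() + hashlib.sha3_256(data).digest()
    for _ in range(rounds):  # the loop adds cost, not extra security margin
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

with open("prophecy-2017.txt", "rb") as f:
    print(hardened_commitment(f.read()))
```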

Of course, the computation of the hash is not enough; it needs to become an immutable part of the public record and remain easy to look up for years to come. There is no guarantee that any particular online publishing outlet is going to stay afloat that long and continue to operate in its current form. The survivability of more specialized and experimental platforms, such as blockchain-based notaries, seems even less clear. Thankfully, you can resort to another kludge: if you publish the hash through a large number of independent online venues, there is a good chance that at least one of them will be around in 2032.

(Offline notarization – whether of the pen-and-paper or the PKI-based variety – offers an interesting alternative. That said, in the absence of an immutable, public ledger, accusations of forgery or collusion would be very easy to make – especially if the fate of the entire planet is at stake.)

Even with this out of the way, there is yet another profound problem with the plan: a current-day scam artist could conceivably generate hundreds or thousands of political predictions, publish the hashes, and then simply discard or delete the ones that do not come true by 2032 – thus creating an illusion of prescience. To convince skeptics that you are not doing just that, you could incorporate a cryptographic proof of work into your approach, attaching a particular CPU time “price tag” to every hash. The future you could then claim that it would have been prohibitively expensive for the former you to attempt the “prediction spam” attack. But this argument seems iffy: a $1,000 proof may already be too costly for a lower middle class abductee, while a determined tech billionaire could easily spend $100,000 to pull off an elaborate prank on the entire world. Not to mention, massive CPU resources can be commandeered with little or no effort by the operators of large botnets and many other actors of this sort.
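
To make the “price tag” idea concrete, here is a rough hashcash-style sketch under the same caveats: the prover burns CPU time hunting for a nonce that pushes a SHA-256 digest below a difficulty target, and anyone can verify the claim with a single hash. The difficulty value is illustrative; calibrating the real-world cost is exactly the problem described above.

```python
import hashlib
from itertools import count

def prove_work(message: bytes, difficulty_bits: int = 24) -> int:
    """Find a nonce whose SHA-256 digest falls below the difficulty target."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(message + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # takes ~2**difficulty_bits hashes on average to find

def verify_work(message: bytes, nonce: int, difficulty_bits: int = 24) -> bool:
    """Checking the claimed work costs a single hash."""
    digest = hashlib.sha256(message + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```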

In the end, my best idea is to rely on an inherently low-bandwidth publication medium, rather than a high-cost one. For example, although a determined hoaxer could place thousands of hash-bearing classifieds in some of the largest-circulation newspapers, such sleight-of-hand would be trivial for future sleuths to spot (at least compared to combing through the entire Internet for an abandoned hash). Or, as per an anonymous suggestion relayed by Thomas Ptacek: just tattoo the signature on your body, then post some pics; there are only so many places for a tattoo to go.

Still, what was supposed to be a nice, scientific proof devolved into a bunch of hand-wavy arguments and poorly-quantified probabilities. For the sake of future abductees: is there a better way?