Tag Archives: beta

New – AWS SAM Local (Beta) – Build and Test Serverless Applications Locally

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-aws-sam-local-beta-build-and-test-serverless-applications-locally/

Today we’re releasing a beta of a new tool, SAM Local, that makes it easy to build and test your serverless applications locally. In this post we’ll use SAM Local to build, debug, and deploy a quick application that allows us to vote on tabs or spaces by curling an endpoint. AWS introduced the Serverless Application Model (SAM) last year to make it easier for developers to deploy serverless applications. If you’re not already familiar with SAM, my colleague Orr wrote a great post on how to use SAM that you can read in about 5 minutes. At its core, SAM is a powerful open source specification built on AWS CloudFormation that makes it easy to keep your serverless infrastructure as code – and it has the cutest mascot.

SAM Local takes all the good parts of SAM and brings them to your local machine.

There are a couple of ways to install SAM Local but the easiest is through NPM. A quick npm install -g aws-sam-local should get us going but if you want the latest version you can always install straight from the source: go get github.com/awslabs/aws-sam-local (this will create a binary named aws-sam-local, not sam).

I like to vote on things so let’s write a quick SAM application to vote on Spaces versus Tabs. We’ll use a very simple, but powerful, architecture of API Gateway fronting a Lambda function, and we’ll store our results in DynamoDB. In the end, a user should be able to curl our API (curl https://SOMEURL/ -d '{"vote": "spaces"}') and get back the number of votes.

Let’s start by writing a simple SAM template.yaml:

AWSTemplateFormatVersion : '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  VotesTable:
    Type: "AWS::Serverless::SimpleTable"
  VoteSpacesTabs:
    Type: "AWS::Serverless::Function"
    Properties:
      Runtime: python3.6
      Handler: lambda_function.lambda_handler
      Policies: AmazonDynamoDBFullAccess
      Environment:
        Variables:
          TABLE_NAME: !Ref VotesTable
      Events:
        Vote:
          Type: Api
          Properties:
            Path: /
            Method: post

So we create a DynamoDB table that we expose to our Lambda function through an environment variable called TABLE_NAME.

To test that this template is valid I’ll go ahead and call sam validate to make sure I haven’t fat-fingered anything. It returns Valid! so let’s go ahead and get to work on our Lambda function.

import os
import json
import boto3
votes_table = boto3.resource('dynamodb').Table(os.getenv('TABLE_NAME'))

def lambda_handler(event, context):
    print(event)
    if event['httpMethod'] == 'GET':
        resp = votes_table.scan()
        return {'body': json.dumps({item['id']: int(item['votes']) for item in resp['Items']})}
    elif event['httpMethod'] == 'POST':
        try:
            body = json.loads(event['body'])
        except (TypeError, ValueError):
            return {'statusCode': 400, 'body': 'malformed json input'}
        if 'vote' not in body:
            return {'statusCode': 400, 'body': 'missing vote in request body'}
        if body['vote'] not in ['spaces', 'tabs']:
            return {'statusCode': 400, 'body': 'vote value must be "spaces" or "tabs"'}

        resp = votes_table.update_item(
            Key={'id': body['vote']},
            UpdateExpression='ADD votes :incr',
            ExpressionAttributeValues={':incr': 1},
            ReturnValues='ALL_NEW'
        )
        return {'body': "{} now has {} votes".format(body['vote'], resp['Attributes']['votes'])}
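
Since the handler is just a Python function, you can also exercise it without SAM at all. Here’s a minimal harness – a sketch that assumes TABLE_NAME points at a real DynamoDB table and that your AWS credentials and region are configured:

import json
import os

# lambda_function.py reads TABLE_NAME at import time, so set it first
os.environ.setdefault('TABLE_NAME', 'vote-spaces-tabs')

from lambda_function import lambda_handler

# Mirror the event API Gateway would deliver for our curl example
event = {'httpMethod': 'POST', 'body': json.dumps({'vote': 'spaces'})}
print(lambda_handler(event, None))  # the handler never touches the context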

So let’s test this locally. I’ll need to create a real DynamoDB table to talk to and I’ll need to provide the name of that table through the environment variable TABLE_NAME. I could do that with an env.json file or I can just pass it on the command line. First, I can call:
$ echo '{"httpMethod": "POST", "body": "{\"vote\": \"spaces\"}"}' |\
TABLE_NAME="vote-spaces-tabs" sam local invoke "VoteSpacesTabs"

to test the Lambda – it returns the number of votes for spaces so theoretically everything is working. Typing all of that out is a pain so I could generate a sample event with sam local generate-event api and pass that in to the local invocation. Far easier than all of that is just running our API locally. Let’s do that: sam local start-api. Now I can curl my local endpoints to test everything out.
I’ll run the command: $ curl -d '{"vote": "tabs"}' http://127.0.0.1:3000/ and it returns: “tabs now has 12 votes”. Now, of course I did not write this function perfectly on my first try. I edited and saved several times. One of the benefits of hot-reloading is that as I change the function I don’t have to do any additional work to test the new function. This makes iterative development vastly easier.

Let’s say we don’t want to deal with accessing a real DynamoDB database over the network though. What are our options? Well we can download DynamoDB Local and launch it with java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb. Then we can have our Lambda function use the AWS_SAM_LOCAL environment variable to make some decisions about how to behave. Let’s modify our function a bit:

import os
import json
import boto3
if os.getenv("AWS_SAM_LOCAL"):
    votes_table = boto3.resource(
        'dynamodb',
        endpoint_url="http://docker.for.mac.localhost:8000/"
    ).Table("spaces-tabs-votes")
else:
    votes_table = boto3.resource('dynamodb').Table(os.getenv('TABLE_NAME'))

Now we’re using a local endpoint to connect to our local database, which makes working without Wi-Fi a little easier.
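
One wrinkle: DynamoDB Local starts out empty, so the spaces-tabs-votes table has to exist before the function can write to it. Here’s a sketch for creating it, assuming DynamoDB Local is listening on port 8000 (the dummy credentials are fine because DynamoDB Local doesn’t validate them):

import boto3

local_ddb = boto3.resource(
    'dynamodb',
    endpoint_url='http://localhost:8000',
    region_name='us-east-1',
    aws_access_key_id='local',
    aws_secret_access_key='local',
)

# Match the schema that SAM's SimpleTable creates: a string hash key named "id"
local_ddb.create_table(
    TableName='spaces-tabs-votes',
    KeySchema=[{'AttributeName': 'id', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'id', 'AttributeType': 'S'}],
    ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5},
)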

SAM Local even supports interactive debugging! In Java and Node.js I can just pass the -d flag and a port to immediately enable the debugger. For Python I could use a library like import epdb; epdb.serve() and connect that way. Then we can call sam local invoke -d 8080 "VoteSpacesTabs" and our function will pause execution, waiting for us to step through with the debugger.
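
The serve call just goes wherever you want execution to pause – a sketch, assuming epdb is installed in the function’s environment and the port matches the one passed to -d:

import epdb

def lambda_handler(event, context):
    # Block here until a remote debugger attaches, then step through as usual
    epdb.serve(port=8080)
    # ... rest of the handler as above ...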

Alright, I think we’ve got everything working so let’s deploy this!

First I’ll call the sam package command which is just an alias for aws cloudformation package and then I’ll use the result of that command to sam deploy.

$ sam package --template-file template.yaml --s3-bucket MYAWESOMEBUCKET --output-template-file package.yaml
Uploading to 144e47a4a08f8338faae894afe7563c3  90570 / 90570.0  (100.00%)
Successfully packaged artifacts and wrote output template to file package.yaml.
Execute the following command to deploy the packaged template
aws cloudformation deploy --template-file package.yaml --stack-name 
$ sam deploy --template-file package.yaml --stack-name VoteForSpaces --capabilities CAPABILITY_IAM
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - VoteForSpaces

Which brings us to our API, now live on an API Gateway endpoint.

I’m going to hop over into the production stage and add some rate limiting in case you guys start voting a lot – but otherwise we’ve taken our local work and deployed it to the cloud without much effort at all. I always enjoy it when things work on the first deploy!

You can vote now and watch the results live! http://spaces-or-tabs.s3-website-us-east-1.amazonaws.com/

We hope that SAM Local makes it easier for you to test, debug, and deploy your serverless apps. We have a CONTRIBUTING.md guide and we welcome pull requests. Please tweet at us to let us know what cool things you build. You can see our What’s New post here and the documentation is live here.

Randall

Trouble at the Krita Foundation

Post Syndicated from corbet original https://lwn.net/Articles/729412/rss

The Krita Foundation is having some unexpected financial difficulties and is looking for help. “Even while we’re working on a new beta for Krita 3.2 and a new development build for 4.0 (with Python, on Windows!), we have to release some bad news as well. The Krita Foundation is having trouble with the Dutch tax authorities.”

New – GPU-Powered Streaming Instances for Amazon AppStream 2.0

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-gpu-powered-streaming-instances-for-amazon-appstream-2-0/

We launched Amazon AppStream 2.0 at re:Invent 2016. This application streaming service allows you to deliver Windows applications to a desktop browser.

AppStream 2.0 is fully managed and provides consistent, scalable performance by running applications on general purpose, compute optimized, and memory optimized streaming instances, with delivery via NICE DCV – a secure, high-fidelity streaming protocol. Our enterprise and public sector customers have started using AppStream 2.0 in place of legacy application streaming environments that are installed on-premises. They use AppStream 2.0 to deliver both commercial and line of business applications to a desktop browser. Our ISV customers are using AppStream 2.0 to move their applications to the cloud as-is, with no changes to their code. These customers focus on demos, workshops, and commercial SaaS subscriptions.

We are getting great feedback on AppStream 2.0 and have been adding new features very quickly (even by AWS standards). So far this year we have added an image builder, federated access via SAML 2.0, CloudWatch monitoring, Fleet Auto Scaling, Simple Network Setup, persistent storage for user files (backed by Amazon S3), support for VPC security groups, and built-in user management including web portals for users.

New GPU-Powered Streaming Instances
Many of our customers have told us that they want to use AppStream 2.0 to deliver specialized design, engineering, HPC, and media applications to their users. These applications are generally graphically intensive and are designed to run on expensive, high-end PCs in conjunction with a GPU (Graphics Processing Unit). Due to the hardware requirements of these applications, cost considerations have traditionally kept them out of situations where part-time or occasional access would otherwise make sense. Recently, another requirement has come to the forefront. These applications almost always need shared, read-write access to large amounts of sensitive data that is best stored, processed, and secured in the cloud. In order to meet the needs of these users and applications, we are launching two new types of streaming instances today:

Graphics Desktop – Based on the G2 instance type, Graphics Desktop instances are designed for desktop applications that use CUDA, DirectX, or OpenGL for rendering. These instances are equipped with 15 GiB of memory and 8 vCPUs. You can select this instance family when you build an AppStream image or configure an AppStream fleet.

Graphics Pro – Based on the brand-new G3 instance type, Graphics Pro instances are designed for high-end, high-performance applications that can use the NVIDIA APIs and/or need access to large amounts of memory. These instances are available in three sizes, with 122 to 488 GiB of memory and 16 to 64 vCPUs. Again, you can select this instance family when you configure an AppStream fleet.

To learn more about how to launch, run, and scale a streaming application environment, read Scaling Your Desktop Application Streams with Amazon AppStream 2.0.

As I noted earlier, you can use either of these two instance types to build an AppStream image. This will allow you to test and fine tune your applications and to see the instances in action.

Streaming Instances in Action
We’ve been working with several customers during a private beta program for the new instance types. Here are a few stories (and some cool screen shots) to show you some of the applications that they are streaming via AppStream 2.0:

AVEVA is a world-leading provider of engineering design and information management software solutions for the marine, power, plant, offshore and oil & gas industries. As part of their work on massive capital projects, their customers need to bring many groups of specialist engineers together to collaborate on the creation of digital assets. In order to support this requirement, AVEVA is building SaaS solutions that combine the streamed delivery of engineering applications with access to a scalable project data environment that is shared between engineers across the globe. The new instances will allow AVEVA to deliver their engineering design software in SaaS form while maximizing quality and performance. Here’s a screenshot of their Everything 3D app being streamed from AppStream.

Nissan, a Japanese multinational automobile manufacturer, trains its automotive specialists using 3D simulation software running on expensive graphics workstations. The training software, developed by The DiSti Corporation, allows its specialists to simulate maintenance processes by interacting with realistic 3D models of the vehicles they work on. AppStream 2.0’s new graphics capability now allows Nissan to deliver these training tools in real time, with up to date content, to a desktop browser running on low-cost commodity PCs. Their specialists can now interact with highly realistic renderings of a vehicle that allows them to train for and plan maintenance operations with higher efficiency.

Cornell University is an American private Ivy League and land-grant doctoral university located in Ithaca, New York. They deliver advanced 3D tools such as Autodesk AutoCAD and Inventor to students and faculty to support their course work, teaching, and research. Until now, these tools could only be used on GPU-powered workstations in a lab or classroom. AppStream 2.0 allows them to deliver the applications to a web browser running on any desktop, where they run as if they were on a local workstation. Their users are no longer limited by available workstations in labs and classrooms, and can bring their own devices and have access to their course software. This increased flexibility also means that faculty members no longer need to take lab availability into account when they build course schedules. Here’s a copy of Autodesk Inventor Professional running on AppStream at Cornell.

Now Available
Both of the graphics streaming instance families are available in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo) Regions and you can start streaming from them today. Your applications must run in a Windows 2012 R2 environment, and can make use of DirectX, OpenGL, CUDA, OpenCL, and Vulkan.

With prices in the US East (Northern Virginia) Region starting at $0.50 per hour for Graphics Desktop instances and $2.05 per hour for Graphics Pro instances, you can now run your simulation, visualization, and HPC workloads in the AWS Cloud on an economical, pay-by-the-hour basis. You can also take advantage of fast, low-latency access to Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), AWS Lambda, Amazon Redshift, and other AWS services to build processing workflows that handle pre- and post-processing of your data.

Jeff;


How To Get Your First 1,000 Customers

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-get-your-first-1000-customers/


If you launch your startup and no one knows, did you actually launch? As mentioned in my last post, our initial launch target was to get 1,000 people to use our service. But how do you get even 1,000 people to sign up for your service when no one knows who you are?

There are a variety of methods to attract your first 1,000 customers, but launching with the press is my favorite. I’ll explain why and how to do it below.

Paths to Attract Your First 1,000 Customers

Social following: If you have a massive social following, those people are a reasonable target for what you’re offering. In particular if your relationship with them is one where they would buy something you recommend, this can be one of the easiest ways to get your initial customers. However, building this type of following is non-trivial and often is done over several years.


Paid advertising: The advantage of paid ads is you have control over when they are presented and what they say. The primary disadvantage is they tend to be expensive, especially before you have your positioning, messaging, and funnel nailed.

Viral: There are certainly examples of companies that launched with a hugely viral video, blog post, or promotion. While fantastic if it happens, even if you do everything right, the likelihood of massive virality is miniscule and the conversion rate is often low.

Press: As I said, this is my favorite. You don’t need to pay a PR agency and can go from nothing to launched in a couple weeks. Press not only provides awareness and customers, but credibility and SEO benefits as well.

How to Pitch the Press

It’s easy: Have a compelling story, find the right journalists, make their life easy, pitch and follow-up. Of course, each one of those has some nuance, so let’s dig in.

Have a compelling story

When you’ve been working for months on your startup, it’s easy to get lost in the minutiae when talking to others. Stories that a journalist will write about need to be something their readers will care about. Knowing what story to tell and how to tell it is part science and part art. Here’s how you can get there:

The basics of your story

Ask yourself the following questions, and write down the answers:

  • What are we doing? What product or service are we offering?
  • Why? What problem are we solving?
  • What is interesting or unique? Either about what we’re doing, how we’re doing it, or for whom we’re doing it.

“But my story isn’t that exciting”

Neither was announcing a data backup company, believe me. Look for angles that make it compelling. Here are some:

  • Did someone on your team do something major before? (build a successful company/product, create some innovation, market something we all know, etc.)
  • Do you have an interesting investor or board member?
  • Is there a personal story that drove you to start this company?
  • Are you starting it in a unique place?
  • Did you come upon the idea in a unique way?
  • Can you share something people want to know that’s not usually shared?
  • Are you partnered with a well-known company?
  • …is there something interesting/entertaining/odd/shocking/touching/etc.?

It doesn’t get much less exciting than, “We’re launching a company that will back up your data.” But there were still a lot of compelling stories:

  • Founded by serial entrepreneurs, bootstrapped a capital-intensive company, committed to each other for a year without salary.
  • Challenging the way that every backup company before us was set up by not asking customers to pick and choose files to back up.
  • Designing our own storage system.
  • Etc. etc.

For the initial launch, we focused on “unlimited for $5/month” and statistics from a survey we ran with Harris Interactive that said that 94% of people did not regularly back up their data.

It’s an old adage that “Everyone has a story.” Regardless of what you’re doing, there is always something interesting to share. Dig for that.

The headline

Once you’ve captured what you think the interesting story is, you’ve got to boil it down. Yes, you need the elevator pitch, but this is shorter…it’s the headline pitch. Write the headline that you would love to see a journalist write.


Now comes the part where you have to be really honest with yourself: if you weren’t involved, would you care?

The “Techmeme Test”

One way I try to ground myself is what I call the “Techmeme Test”. Techmeme lists the top tech articles. Read the headlines. Imagine the headline you wrote in the middle of the page. If you weren’t involved, would you click on it? Is it more or less compelling than the others? Much of tech news is dominated by the largest companies. If you want to get written about, your story should be more compelling. If not, go back above and explore your story some more.

Embargoes, exclusives and calls-to-action

Journalists write about news. Thus, if you’ve already announced something and are then pitching a journalist to cover it, unless you’re giving her something significant that hasn’t been said, it’s no longer news. As a result, there are ‘embargoes’ and ‘exclusives’.

Embargoes: An embargo simply means that you are sharing news with a journalist that they need to keep private until a certain date and time.

If you’re Apple, this may be a formal and legal document. In our case, it’s as simple as saying, “Please keep embargoed until 4/13/17 at 8am California time.” in the pitch. Some sites explicitly will not keep embargoes; for example The Information will only break news. If you want to launch something later, do not share information with journalists at these sites. If you are only working with a single journalist for a story, and your announcement time is flexible, you can jointly work out a date and time to announce. However, if you have a fixed launch time or are working with a few journalists, embargoes are key.

Exclusives: An exclusive means you’re giving something specifically to that journalist. Most journalists love an exclusive as it means readers have to come to them for the story. One option is to give a journalist an exclusive on the entire story. If it is your dream journalist, this may make sense. Another option, however, is to give exclusivity on certain pieces. For example, for your launch you could give an exclusive on funding detail & a VC interview to a more finance-focused journalist and insight into the tech & a CTO interview to a more tech-focused journalist.

Call-to-Action: With our launch we gave TechCrunch, Ars Technica, and SimpleHelp URLs that gave the first few hundred of their readers access to the private beta. Once those first few hundred users from each site downloaded, the beta would be turned off.

Thus, we used a combination of embargoes, exclusives, and a call-to-action during our initial launch to be able to brief journalists on the news before it went live, give them something they could announce as exclusive, and provide a time-sensitive call-to-action to the readers so that they would actually sign up and not just read and go away.

How to Find the Most Authoritative Sites / Authors

“If a press release is published and no one sees it, was it published?” Perhaps the time existed when sending a press release out over the wire meant journalists would read it and write about it. That time has long been forgotten. Over 1,000 unread press releases are published every day. If you want your compelling story to be covered, you need to find the handful of journalists that will care.

Determine the publications

Find the publications that cover the type of story you want to share. If you’re in tech, Techmeme has a leaderboard of publications ranked by leadership and presence. This list will tell you which publications are likely to have influence. Visit the sites and see if your type of story appears on their site. But, once you’ve determined the publication, do NOT send a pitch to their [email protected] or [email protected] email addresses. In all the times I’ve done that, I have never had a single response. Those email addresses are likely on every PR, press release, and spam list and unlikely to get read. Instead…

Determine the journalists

Once you’ve determined which publications cover your area, check which journalists are doing the writing. Skim the articles and search for keywords and competitor names.


Identify one primary journalist at the publication that you would love to have cover you, and secondary ones if there are a few good options. If you’re not sure which one should be the primary, consider a few tests:

  • Do they truly seem to care about the space?
  • Do they write interesting/compelling stories that ‘get it’?
  • Do they appear on the Techmeme leaderboard?
  • Do their articles get liked/tweeted/shared and commented on?
  • Do they have a significant social presence?

Leveraging Google


In addition to Techmeme – or if you aren’t in the tech space – Google will become a must-have tool for finding the right journalists to pitch. Below the search box you will find a number of tabs. Click on Tools and change the “Any time” setting to “Custom range”. I like to use the past six months to ensure I find authors that are actively writing about my market. I start with the “All” results, which will return a combination of product sites and articles depending upon your search term.

Scan for articles and click on the link to see if the article is on topic. If it is, find the author’s name. Often if you click on the author’s name it will take you to a bio page that includes their Twitter, LinkedIn, and/or Facebook profile. Many times you will find their email address in the bio. You should collect all the information and add it to your outreach spreadsheet. Click here to get a copy. It’s always a good idea to comment on the article to start building awareness of your name. Another good idea is to Tweet or Like the article.

Next click on the News tab and set the same search parameters. You will get a different set of results. Repeat the same steps. Between the two searches you will have a list of authors that actively write for the websites that Google considers the most authoritative on your market.

How to find the most socially shared authors


Your next step is to find the writers whose articles get shared the most socially. Go to Buzzsumo and click on the Most Shared tab. Enter search terms for your market as well as competitor names. Again, I like to use the past six months as the time range. You will get a list of articles that have been shared the most across Facebook, LinkedIn, Twitter, Pinterest, and Google+. In addition to finding the most shared articles and their authors, you can also see some of the Twitter users that shared the article. Many of those Twitter users are big influencers in your market, so it’s smart to start following and interacting with them as well as the authors.

How to Find Author Email Addresses

Some journalists publish their contact info right on the stories. For those that don’t, a bit of googling will often get you the email. For example, TechCrunch wrote a story a few years ago where they published all of their email addresses, which was in response to this new service that charges a small fee to provide journalist email addresses. Sometimes visiting their twitter pages will link to a personal site, upon which they will share an email address.

Of course, all is not lost if you don’t find an email in the bio. There are two good services for finding emails, https://app.voilanorbert.com/ and https://hunter.io/. For Voila Norbert, enter the author name and the website you found their article on. The majority of the time you search for an author on a major publication, Norbert will return an accurate email address. If it doesn’t, try Hunter.io.

On Hunter.io, enter the domain name and click on Personal Only. Then scroll through the results to find the author’s email. I’ve found Norbert to be more accurate overall, but between the two you will find most major authors’ email addresses.

Email, by the way, is not necessarily the best way to engage a journalist. Many are avid Twitter users. Follow them and engage – that means read/retweet/favorite their tweets; reply to their questions, and generally be helpful BEFORE you pitch them. Later when you email them, you won’t be just a random email address.

Don’t spam

Now that you have all these email addresses (possibly thousands if you purchased a list) – do NOT spam. It is incredibly tempting to think “I could try to figure out which of these folks would be interested, but if I just email all of them, I’ll save myself time and be more likely to get some of them to respond.” Don’t do it.


First, you’ll want to tailor your pitch to the individual. Second, it’s a small world and you’ll be known as someone who spams – reputation is golden. Also, don’t call journalists. Unless you know them or they’ve said they’re open to calls, you’re most likely to just annoy them.

Build a relationship

Play the long game. You may be focusing just on the launch and hoping to get this one story covered, but if you don’t quickly flame-out, you will have many more opportunities to tell interesting stories that you’ll want the press to cover. Be honest and don’t exaggerate.
When you have 500 users it’s tempting to say, “We’ve got thousands!” Don’t. The good journalists will see through it and it’ll likely come back to bite you later. If you don’t know something, say “I don’t know but let me find out for you.” Most journalists want to write interesting stories that their readers will appreciate. Help them do that. Build deeper relationships with 5 – 10 journalists, rather than spamming thousands.

Stay organized

It doesn’t need to be complicated, but keep a spreadsheet that includes the name, publication, and contact info of the journalists you care about. Then, use it to keep track of who you’ve pitched, who’s responded, whether you’ve sent them the materials they need, and whether they intend to write/have written.

Make their life easy

Journalists have a million PR people emailing them, are actively engaging with readers on Twitter and in the comments, are tracking their metrics, are working their sources…and all the while needing to publish new articles. They’re busy. Make their life easy and they’re more likely to engage with yours.

Get to know them

Before sending them a pitch, know what they’ve written in the space. If you tell them how your story relates to ones they’ve written, it’ll help them put the story in context, and enable them to possibly link back to a story they wrote before.

Prepare your materials

Journalists will need somewhere to get more info (prepare a fact sheet), a URL to link to, and at least one image (ideally a few to choose from.) A fact sheet gives bite-sized snippets of information they may need about your startup or product: what it is, how big the market is, what’s the pricing, who’s on the team, etc. The URL is where their reader will get the product or more information from you. It doesn’t have to be live when you’re pitching, but you should be able to tell what the URL will be. The images are ones that they could embed in the article: a product screenshot, a CEO or team photo, an infographic. Scan the types of images included in their articles. Don’t send any of these in your pitch, but have them ready. Studies, stats, customer/partner/investor quotes are also good to have.

Pitch

A pitch has to be short and compelling.

Subject Line

Think back to the headline you want. Is it really compelling? Can you shorten it to a subject line? Include what’s happening and when. For Mike Arrington at TechCrunch, our first subject line was “Startup doing an ‘online time machine’”. Later I would include, “launching June 6th”.

For John Timmer at Ars Technica, it was “Demographics data re: your 4/17 article”. Why? Because he wrote an article titled “WiFi popular with the young people; backups, not so much”. Since we had run a demographics survey on backups, I figured as a science editor he’d be interested in this additional data.

Body

A few key things about the body of the email. It should be short and to the point, no more than a few sentences. Here was my actual, original pitch email to John:

Hey John,

We’re launching Backblaze next week which provides a Time Machine-online type of service. As part of doing some research I read your article about backups not being popular with young people and that you had wished Accenture would have given you demographics. In prep for our invite-only launch I sponsored Harris Interactive to get demographic data on who’s doing backups and if all goes well, I should have that data on Friday.

Next week starts Backup Awareness Month (and yes, probably Clean Your House Month and Brush Your Teeth Month)…but nonetheless…good time to remind readers to backup with a bit of data?

Would you be interested in seeing/talking about the data when I get it?

Would you be interested in getting a sneak peek at Backblaze? (I could give you some invite codes for your readers as well.)

Gleb Budman        

CEO and Co-Founder

Backblaze, Inc.

Automatic, Secure, High-Performance Online Backup

Cell: XXX-XXX-XXXX

The Good: It said what we’re doing, why this relates to him and his readers, provides him information he had asked for in an article, ties to something timely, is clearly tailored for him, is pitched by the CEO and Co-Founder, and provides my cell.

The Bad: It’s too long.

I got better later. Here’s an example:

Subject: Does temperature affect hard drive life?

Hi Peter, there has been much debate about whether temperature affects how long a hard drive lasts. Following up on the Backblaze analyses of how long do drives last & which drives last the longest (that you wrote about) we’ve now analyzed the impact of heat on the nearly 40,000 hard drives we have and found that…

We’re going to publish the results this Monday, 5/12 at 5am California-time. Want a sneak peek of the analysis?

Timing

A common question is “When should I launch?” What day, what time? I prefer to launch on Tuesday at 8am California-time. Launching earlier in the week gives breathing room for the news to live longer. While your launch may be a single article posted and that’s that, if it ends up a larger success, earlier in the week allows other journalists (including ones who are in other countries) to build on the story. Monday announcements can be tough because the journalists generally need to have their stories finished by Friday, and while ideally everything is buttoned up beforehand, startups sometimes use the weekend as overflow before a launch.

The 8am California-time is because it allows articles to be published at the beginning of the day West Coast and around lunch-time East Coast. Later and you risk it being past publishing time for the day. We used to launch at 5am in order to be morning for the East Coast, but it did not seem to have a significant benefit in coverage or impact, but did mean that the entire internal team needed to be up at 3am or 4am. Sometimes that’s critical, but I prefer to not burn the team out when it’s not.

Finally, try to stay clear of holidays, major announcements and large conferences. If Apple is coming out with their next iPhone, many of the tech journalists will be busy at least a couple days prior and possibly a week after. Not always obvious, but if you can, find times that are otherwise going to be slow for news.

Follow-up

There is a fine line between persistence and annoyance. I once had a journalist write me after we had an announcement that was covered by the press, “Why didn’t you let me know?! I would have written about that!” I had sent him three emails about the upcoming announcement to which he never responded.


Ugh. However, my takeaway from this isn’t that I should send 10 emails to every journalist. It’s that sometimes these things happen.

My general rule is 3 emails. If I’ve identified a specific journalist that I think would be interested and have a pitch crafted for her, I’ll send her the email ideally 2 weeks prior to the announcement. I’ll follow-up a week later, and one more time 2 days prior. If she ever says, “I’m not interested in this topic,” I note it and don’t email her on that topic again.

If a journalist writes about us, I read the article and engage in the comments (or someone on our team, such as our social guy, @YevP, does). We’ll often promote the story through our social channels and email our employees, who may choose to share the story as well. This helps us, but also helps the journalist get their story broader reach. Again, the goal is to build a relationship with the journalists in your space. If there’s something relevant to your customers that the journalist wrote, you’re providing a service to your customers AND helping the journalist get the word out about the article.

At times the stories also end up shared on sites such as Hacker News, Reddit, Slashdot, or become active conversations on Twitter. Again, we try to engage there and respond to questions (when we do, we are always clear that we’re from Backblaze.)

And finally, I’ll often send a short thank you to the journalist.

Getting Your First 1,000 Customers With Press

As I mentioned at the beginning, there is more than one way to get your first 1,000 customers. My favorite is working with the press to share your story. If you figure out your compelling story, find the right journalists, make their life easy, pitch and follow-up, you stand a high likelihood of getting coverage and customers. Better yet, that coverage will provide credibility for your company, and if done right, will establish you as a resource for the press for the future.

Like any muscle, this process takes working out. The first time may feel a bit daunting, but just take the steps one at a time. As you do this a few times, the process will be easier and you’ll know whom to reach out to and quickly determine which stories will be compelling.

The post How To Get Your First 1,000 Customers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Tumbleweed at Vuze as Torrent Client Development Grinds to a Halt

Post Syndicated from Andy original https://torrentfreak.com/tumbleweed-at-vuze-as-torrent-client-development-grinds-to-a-halt-170710/

Back in the summer of 2003 when torrenting was still in its infancy, a new torrent client hit the web promising big things.

Taking the Latin name of the blue poison dart frog and deploying a logo depicting its image, the Azureus client aimed to carve out a niche in what would become a market of several hundred million users.

Written in Java and available on Windows, Linux, OSX, and Android, Azureus (latterly ‘Vuze’) always managed to divide the community. Heralded by many as a feature-rich powerhouse that left no stone unturned, others saw the client as bloated when compared to the more streamlined uTorrent.

All that being said, Vuze knew its place in the market and on the bells-and-whistles front, it always delivered. Its features included swarm-merging, built-in search, DVD-burning capabilities, and device integration. It felt like Vuze was always offering something new.

Indeed, for the past several years and like clockwork, every month new additions and fixes have been deployed to Vuze. Since at least 2012 and up to early 2017, not a single month passed without Vuze being tuned up or improved in some manner via beta or full versions. Now, however, all of that seems to have ground to a halt.

The last full release of Vuze (v5.7.5.0) containing plenty of tweaks and fixes was released on February 28 this year. It followed the previous full release by roughly three months, a pattern its developers have kept up for some time with earlier versions. As expected, the Vuze 5.7.5.1 beta versions followed but on April 10, everything stopped.

It’s now three whole months since the last Vuze beta release, which may not sound like a long time unless one considers the history. Vuze has been actively developed for 14 years and its developers have posted communications on their devblog archives every single month, at least as far back as July 2012. Since April – nothing.

Back in May, a user on Vuze forums noted that none of Vuze’s featured content (such as TED Talks) could be downloaded, while another reported that the client’s anti-virus definitions weren’t updating. Given past scheduling, a new version of the client should have been released about a month ago. Nothing appeared.

To illustrate, this is a screenshot of the Vuze source code repository, which shows the number of code changes committed since 2012. The drastic drop-off in April 2017 (12 commits) versus dozens to even hundreds in preceding months is punctuated by zero commits for the past three months.

Of course, even avid developers have offline lives, and it’s certainly possible that an unusual set of outside circumstances have conspired to give the impression that development has stopped. However, posting a note to the Vuze blog or Vuze forum shouldn’t be too difficult, so people are naturally worried about the future.

TorrentFreak has reached out to the respected developer identified by Vuze forum users as the most likely to respond to questions. At the time of publication, we had received no response.

As mentioned earlier, torrent users have a love/hate relationship with Vuze and Azureus, but there is no mistaking this client’s massive contribution to the torrent landscape. Millions will be hoping that the current radio silence is nothing sinister but until that confirmation is received, the concerns will continue.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

timeShift(GrafanaBuzz, 1w) Issue 2

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/06/30/timeshiftgrafanabuzz-1w-issue-2/

A big thank you to everyone for the likes, retweets, comments and questions from last week’s timeShift debut. We were delighted to learn that people found this new resource useful, and are excited to continue to publish weekly issues. If you know of a recent article about Grafana, or are writing one yourself, please get in touch, we’d be happy to feature it here.

From the Blogosphere

Plugins and Dashboards

We are excited that there have been over 100,000 plugin installations since we launched the new pluggable architecture in Grafana v3. You can discover and install plugins in your own on-premises or Hosted Grafana instance from our website. Below are some recent additions and updates.

SimpleJson – a generic backend datasource that has been the foundation of a number of Grafana data source plugins. It’s also a mechanism by which any application can expose metrics over HTTP directly to Grafana (see the sketch after this list). The newest version adds basic auth.

NetXMS – a Grafana datasource for the NetXMS open source monitoring system.

GoogleCalendar – shows the event description as an annotation on your graphs.

Discrete Panel – shows discrete values in a horizontal graph. This panel now supports results from the table format.

Alarm Box – shows the total count of values across all series. This update adds a new option to customize how the display and color values are calculated.

Status Dot – shows a colored dot for each series; useful to monitor latest values at a glance.
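
Since SimpleJson just speaks HTTP, any small web app can act as a Grafana datasource. Here’s a minimal sketch of the two core endpoints it polls, using Flask (the metric name and values are invented for illustration):

import time

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/')
def health():
    # SimpleJson uses the root URL as its "test connection" check
    return 'OK'

@app.route('/search', methods=['POST'])
def search():
    # Return the metric names this app exposes
    return jsonify(['requests_per_second'])

@app.route('/query', methods=['POST'])
def query():
    req = request.get_json()
    now_ms = int(time.time() * 1000)
    # One series per requested target; datapoints are [value, unix_ms] pairs
    return jsonify([
        {'target': t['target'], 'datapoints': [[42.0, now_ms]]}
        for t in req['targets']
    ])

if __name__ == '__main__':
    app.run(port=8080)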

This week’s MVC (Most Valuable Contributor)

Each week we’ll recognize a Grafana contributor and thank them for all of their PRs, bug reports and feedback. A majority of fixes and improvements come from our fantastic community!

mtanda (Mitsuhiro Tanda)

159 PRs during the last 2 years and still going strong. Thank you for your contributions, mtanda!

What do you think?

Anything in particular you’d like to see in this series of posts? Too long? Too short? Boring? Let us know. Comment on this article below, or post something at our community forum. With your help, we can make this a worthwhile resource.

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Backblaze B2, Cloud Storage on a Budget: One Year Later

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/backblaze-b2-cloud-storage-on-a-budget-one-year-later/


A year ago, Backblaze B2 Cloud Storage came out of beta and became available for everyone to use. We were pretty excited, even though it seemed like everyone and their brother had a cloud storage offering. Now that we are a year down the road let’s see how B2 has fared in the real world of tight budgets, maxed-out engineering schedules, insanely funded competition, and more. Spoiler alert: We’re still pretty excited…

Cloud Storage on a Budget

There are dozens of companies offering cloud storage and the landscape is cluttered with incomprehensible pricing models, cleverly disguised transfer and download charges, and differing levels of service that seem to be driven more by marketing departments than customer needs.

Backblaze B2 keeps things simple: A single performant level of service, a single affordable price for storage ($0.005/GB/month), a single affordable price for downloads ($0.02/GB), and a single list of transaction charges – all on a single pricing page.

Who’s Using B2?

By making cloud storage affordable, companies and organizations now have a way to store their data in the cloud and still be able to access and restore it as quickly as needed. You don’t have to choose between price and performance. Here are a few examples:

  • Media & Entertainment: KLRU-TV, Austin PBS, is using B2 to preserve their video catalog of the world-renowned musical anthology series, Austin City Limits.
  • LTO Migration: The Girl Scouts San Diego were able to move their daily incremental backups from LTO tape to the cloud, saving money and time, while helping automate their entire backup process.
  • Cloud Migration: Vintage Aerial found it cost effective to discard their internal data server and store their unique high-resolution images in B2 Cloud Storage.
  • Backup: Ahuja and Clark, a boutique accounting firm, was able to save over 80% on the cost to back up all their corporate and client data.

How is B2 Being Used?

B2 Cloud Storage can be accessed in four ways: using the Web GUI, using the CLI, using the API library, and using a product or service integrated with B2. While many customers are using the Web GUI, CLI and API to store and retrieve data, the most prolific use of B2 occurs via our integration partners. Each integration partner has certified they have met our best practices for integrating to B2 and we’ve tested each of the integrations submitted to us. Here are a few of the highlights.

  • NAS Devices – Synology and QNAP have integrations which allow their NAS devices to sync their data to/from B2.
  • Backup and Sync – CloudBerry, GoodSync, and Retrospect are just a few of the services that can backup and/or sync data to/from B2.
  • Hybrid Cloud – 45 Drives and OpenIO are solutions that allow you to setup and operate a hybrid data storage cloud environment.
  • Desktop Apps – CyberDuck, MountainDuck, Dropshare, and more give users an easy way to store and use data in B2 right from their desktop.
  • Digital Asset Management – Cantemo, Cubix, CatDV, and axle Video, let you catalog your digital assets and then store them in B2 for fast retrieval when they are needed.

If you have an application or service that stores data in the cloud and it isn’t integrated with Backblaze B2, then your customers are probably paying too much for cloud storage.
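
To illustrate the API route mentioned above, here’s a minimal sketch using the b2sdk Python client – the bucket name and application key are placeholders, and error handling is omitted:

from b2sdk.v2 import B2Api, InMemoryAccountInfo

api = B2Api(InMemoryAccountInfo())
# 'production' is the standard realm; pass your own application key pair
api.authorize_account('production', '<applicationKeyId>', '<applicationKey>')

bucket = api.get_bucket_by_name('my-bucket')
bucket.upload_bytes(b'hello from B2', 'hello.txt')  # data, then remote file name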

What’s New in B2?

B2 Fireball – our rapid data ingest service. We send you a storage device, and you load it up with up to 40 TB of data and send it back, then we load the data into your B2 account. The cost is $550 per trip plus shipping. Save your network bandwidth with the B2 Fireball.

Lowered the download price – When we introduced B2, we set the price to download a gigabyte of data to be $0.05/GB – the same as most competitors. A year in, we reevaluated the price based on usage and decided to lower the price to $0.02/GB.

B2 User Groups – Backblaze Groups functionality is now available in B2. An administrator can invite users to a B2 centric Group to centralize the storage location for that group of users. For example, multiple members of a department working on a project will be able to archive their work-in-process activities into a single B2 bucket.

Time Machine backup – You may know that you can use your Synology NAS as the destination for your Time Machine backup. With B2 you can also sync your Synology NAS to B2 for a true 3-2-1 backup solution. If your system crashes or is lost, you can restore your Time Machine image directly from B2 to your new machine.

Life Cycle Rules – Create rules that allow you to manage the length of time deleted files will remain in your B2 bucket before they are deleted. A great option for managing the cleanup of outdated file versions to save on storage costs.
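
As an illustration, here’s the shape of one rule as a Python literal mirroring the JSON the B2 API accepts – a sketch, so check the B2 docs for the authoritative field names:

lifecycle_rules = [
    {
        'fileNamePrefix': 'logs/',          # apply only to files under logs/
        'daysFromUploadingToHiding': None,  # never auto-hide live versions
        'daysFromHidingToDeleting': 7,      # purge hidden versions after a week
    }
]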

Large Files – In the B2 Web GUI you can upload files as large as 500 MB using either the upload or drag-and-drop functionality. The B2 CLI and API support the ability to upload/download files as large as 10 TB.

5 MB file part size – When working with large files, the minimum file part size can now be set as low as 5 MB versus the previous low setting of 100 MB. Now the range of a file part when working with large files can be from 5 MB to 5 GB. This increases the throughput of your data uploads and downloads.

SHA-1 at the end – This feature allows you to compute the SHA-1 checksum and append it to the end of the request body versus doing the computation before the file is sent. This is especially useful for those applications which stream data to/from B2.
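
The checksum itself is nothing exotic – a streaming SHA-1 that can be computed as the body goes out. A quick sketch of the incremental computation (the file name is invented):

import hashlib

def streaming_sha1(chunks):
    # Build the digest incrementally so the whole file never sits in memory
    sha1 = hashlib.sha1()
    for chunk in chunks:
        sha1.update(chunk)
    return sha1.hexdigest()

with open('video.mov', 'rb') as f:
    # Feed the file in 1 MiB pieces, e.g. while streaming the upload body
    print(streaming_sha1(iter(lambda: f.read(1024 * 1024), b'')))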

Cache-Control – When data is downloaded from B2 into a browser, the length of time the file remains in the browser cache can be set at the bucket level using the b2_create_bucket and b2_update_bucket API calls. Setting this policy is optional.

Customized delimiters – Used in the API, this allows you to specify a delimiter to use for a given purpose. A common use is to set a delimiter in the file name string. Then use that delimiter to detect a folder name within the string.
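
The classic use is simulating folders in a flat namespace. A tiny sketch of the idea, with invented file names:

# B2 file names are flat strings; a delimiter lets you treat them as folders
files = ['photos/2017/img1.jpg', 'photos/2016/img2.jpg', 'docs/readme.txt']

top_level = {name.split('/', 1)[0] for name in files}
print(top_level)  # {'photos', 'docs'}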

Looking Ahead

Over the past year we added nearly 30,000 new B2 customers to the fold and are welcoming more and more each day as B2 continues to grow. We have plans to expand our storage footprint by adding more data centers as we look forward to moving towards a multi-region environment.

For those of you who are B2 customers – thank you for helping build B2. If you have an interesting way you are using B2, tell us in the comments below.

The post Backblaze B2, Cloud Storage on a Budget: One Year Later appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

From Idea to Launch: Getting Your First Customers

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-get-your-first-customers/


After deciding to build an unlimited backup service and developing our own storage platform, the next step was to get customers and feedback. Not all customers are created equal. Let’s talk about the types, and when and how to attract them.

How to Get Your First Customers

First Step – Don’t Launch Publicly
Launch when you’re ready for the judgments of people who don’t know you at all. Until then, don’t launch. Sign up users and customers that you know, that you can trust to cut you some slack (while providing you feedback), or, at minimum, for whom you can set expectations. For months the Backblaze website was a single page with no ability to get the product and minimal info on what it would be. This is not to counter the Lean Startup ‘iterate quickly with customer feedback’ advice. Rather, it is an acknowledgement that there are different types of feedback required based on your development stage.

Sign Up Your Friends
We knew all of our first customers; they were friends, family, and previous co-workers. Many knew what we were up to and were excited to help us. No magic marketing or tech savviness was required to reach them – we just asked that they try the service. We asked them to provide us feedback on their experience and collected it through email and conversations. While the feedback wasn’t unbiased, it was nonetheless wide-ranging, real, and often insightful. These people were willing to spend time carefully thinking about their feedback and delving deeper into the conversations.

Broaden to Beta
Unless you’re famous or your service costs $1 million per customer, you’ll probably need to expand quickly beyond your friends to build a business – and to get broader feedback. Our next step was to broaden the customer base to beta users.

Opening up the service in beta provides three benefits:

  1. Air cover for the early warts. There are going to be issues, bugs, unnecessarily complicated user flows, and poorly worded text. Beta tells people, “We don’t consider the product ‘done’ and you should expect some of these issues. Please be patient with us.”
  2. A request for feedback. Some people always provide feedback, but beta communicates that you want it.
  3. An awareness opportunity. Opening up in beta provides an early (but not only) opportunity to have an announcement and build awareness.

Pitching Beta to Press
Not all press cares about, or is even willing to cover, beta products. Much of the mainstream press wants to write about services that are fully live, have scale, and are important in the marketplace. However, there are a number of sites that like to cover the leading edge – and that means covering betas. TechCrunch, Ars Technica, and SimpleHelp covered our initial private beta launch. I’ll go into the details of how to work with the press to cover your announcements in a post next month.

Private vs. Public Beta
Both private and public beta provide all three of the benefits above. The difference between the two is that private betas are much more controlled, whereas public ones bring in more users. But this isn’t an either/or – I recommend doing both.

Private Beta
For our original beta in 2008, we decided that we were comfortable with about 1,000 users subscribing to our service. That would provide us with a healthy amount of feedback and get some early adoption, while not overwhelming us or our server capacity, and equally important not causing cash flow issues from having to buy more equipment. So we decided to limit the sign-up to only the first 1,000 people who signed up; then we would shut off sign-ups for a while.

But how do you even get 1,000 people to sign up for your service? In our case, get some major publications to write about our beta. (Note: In a future post I’ll explain exactly how to find and reach out to writers. Sign up to receive all of the entrepreneurial posts in this series.)

Public Beta
For our original service (computer backup), we did not have a public beta; but when we launched Backblaze B2, we had a private and then a public beta. The private beta allowed us to work out early kinks, while the public beta brought us a more varied set of use cases. In public beta, there is no cap on the number of users that may try the service.

While this is a first-class problem to have, if your service is flooded and stops working, it’s still a problem. Think through what you will do if that happens. In our early days, when our system could get overwhelmed by volume, we had a static web page hosted with a different registrar that wouldn’t let customers sign up but would tell them when our service would be open again. When we reached a critical volume level we would redirect to it in order to at least provide status for when we could accept more customers.

Collect Feedback
Since one of the goals of betas is to get feedback, we made sure that we had our email addresses clearly presented on the site so users could send us thoughts. We were most interested in broad qualitative feedback on users’ experience, so all emails went to an internal mailing list that would be read by everyone at Backblaze.

For our B2 public and private betas, we also added an optional short survey to the sign-up process. In order to be considered for the private beta you had to fill the survey out, though we found that 80% of users continued to fill out the survey even when it was not required. This survey had both closed-end questions (“how much data do you have”) and open-ended ones (“what do you want to use cloud storage for?”).

BTW, despite us getting a lot of feedback now via our support team, Twitter, and marketing surveys, we are always open to more – you can email me directly at gleb.budman {at} backblaze.com.

Don’t Throw Away Users
Initially our backup service was available only on Windows, but we had an email sign-up list for people who wanted it for their Mac. This provided us with a sense of market demand and a ready list of folks who could be beta users and early adopters when we had a Mac version. Have a service targeted at doctors but lawyers are expressing interest? Capture that.

Product Launch

When
The first question is “when” to launch. Presuming your service is in ‘public beta’, what is the advantage of moving out of beta and into a “version 1.0”, “gold”, or “public availability”? That depends on your service and customer base. Some services fly through public beta. Gmail, on the other hand, was (in)famous for being in beta for 5 years, despite having over 100 million users.

The term beta says to users, “give us some leeway, but feel free to use the service”. That’s fine for many consumer apps and will have near-zero impact on them. However, services aimed at businesses and government will often not be adopted with a beta label, as enterprise customers want to know the company feels the service is ‘ready’. While Backblaze started out as a purely consumer service, because it was a data backup service, it was important for customers to trust that the service was ready.

No product is bug-free. But from a product readiness perspective, the nomenclature should also be a reflection of the quality of the product. You can launch a product with one feature that works well out of beta. But a product with fifty features where half the users will bump into problems should likely stay in beta. The customer feedback, surveys, and your own internal testing should guide you in determining this quality during the beta. Be careful about “we’ve only seen that one time” or “I haven’t been able to reproduce that on my machine”; those issues are likely to scale with customers when you launch.

How
Launching out of beta can be as simple as removing the beta label from the website/product. However, this can be a great time to reach out to press, write a blog post, and send an email announcement to your customers.

Consider thanking your beta testers somehow; can they get some feature turned on for free, an extension of their trial, or premium support? If nothing else, remember to thank them for their feedback. Users who signed up during your beta are likely the ones who will propel your service. They had the need and interest to both be early adopters and deal with bugs. They are likely the key to getting 1,000 true fans.

The Beginning
The title of this post was “Getting your first customers”, because getting to launch may feel like the peak of your journey when you’re pre-launch, but it really is just the beginning. It’s a step along the journey of building your business. If your launch is wildly successful, enjoy it, work to build on the momentum, but don’t lose track of building your business. If your launch is a dud, go out for a coffee with your team, say “well that sucks”, and then get back to building your business. You can learn a tremendous amount from your early customers, and they can become your biggest fans, but the success of your business will depend on what you continue to do the months and years after your launch.

The post From Idea to Launch: Getting Your First Customers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

[$] A beta for PostgreSQL 10

Post Syndicated from jake original https://lwn.net/Articles/724871/rss

PostgreSQL version 10 had its first beta release on May 18, just in time for the annual PGCon developer conference. The latest annual release comes with a host of major features, including new versions of replication and partitioning, and enhanced parallel query. Version 10 includes 451 commits, nearly half a million lines of code and documentation, and over 150 new or changed features since version 9.6. The PostgreSQL community will find a lot to get excited about in this release, as the project has delivered a long list of enhancements to existing functionality. There are also a few features aimed at fulfilling new use cases, particularly in the “big data” industry sector.
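
To make the partitioning improvement concrete, here is a minimal sketch of version 10’s new declarative partitioning syntax, driven from Python via psycopg2. The table, columns, and connection string are illustrative, not taken from the release notes.

import psycopg2

# Connect to a local database; the DSN is illustrative.
conn = psycopg2.connect("dbname=test")
cur = conn.cursor()

# New in PostgreSQL 10: the parent table declares its partitioning
# scheme directly, and rows are routed to partitions automatically.
cur.execute("""
    CREATE TABLE measurements (
        logdate date NOT NULL,
        reading numeric
    ) PARTITION BY RANGE (logdate);
""")

# One partition per year; an INSERT into 'measurements' lands here
# if its logdate falls in the range.
cur.execute("""
    CREATE TABLE measurements_2017 PARTITION OF measurements
        FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
""")

cur.execute("INSERT INTO measurements VALUES ('2017-05-18', 42.0)")
conn.commit()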

[$] Language summit lightning talks

Post Syndicated from jake original https://lwn.net/Articles/723823/rss

Over the course of the day, the 2017 Python Language Summit hosted a handful of lightning talks, several of which were worked into the dynamic schedule when an opportunity presented itself. They ranged from the traditional “less than five minutes” format to some that strayed well outside of that time frame; some generated a fair amount of discussion as well. Topics were all over the map: board elections, beta releases, Python as a security vulnerability, Jython, and more.

IPTV Providers Counter Premier League Piracy Blocks

Post Syndicated from Andy original https://torrentfreak.com/iptv-providers-counter-premier-league-piracy-blocks-170520/

In the UK, top tier football is handled by The Premier League and its broadcasting partners Sky and BT Sport. All are facing problems with Internet piracy.

In a nutshell, official subscriptions are far from cheap, so people are always on the lookout for more affordable alternatives. As a result, large numbers of fans are turning to piracy-enabled set-top boxes for their fix.

These devices, often running Kodi with third-party addons, not only provide free or cheap football streams but also enable fans to watch matches at 3pm on Saturdays, a time traditionally covered by the blackout.

To mitigate this threat, earlier this year the Premier League obtained a rather special High Court injunction.

While similar in its aims to earlier orders targeting torrent sites including The Pirate Bay, this injunction enables the Premier League to act quickly, forcing local ISPs such as Sky, BT, and Virgin to block football streams in real-time.

“This will enable us to target the suppliers of illegal streams to IPTV boxes, and the internet, in a proportionate and precise manner,” the Premier League said at the time.

Ever since the injunction was issued, TF has monitored for signs that it has been achieving its stated aim of stopping or at least reducing stream availability. Based on information obtained from several popular IPTV suppliers, after several weeks we have concluded that Premier League streams are still easy to find, with some conditions.

HD sources for games across all Sky channels are commonplace on paid services, with SD sources available for free. High-quality streams have been consistently available on Saturday afternoons for the sensitive 3pm kick-off, with little to no interference or signs of disruption.

Of course, the Internet is a very big place, so it is certainly possible that disruption has been experienced by users elsewhere. However, what we do know is that some IPTV providers have been working behind the scenes to keep their services going.

According to a low-level contact at one IPTV provider who demanded total anonymity, servers used by his ‘company’ (he uses the term loosely) have seen their loads drop unexpectedly during match times, an indication that ISPs might be targeting their customers with blocks.

A re-seller for another well-known provider told TF that some intermittent disruption had been felt but that it was “being handled” as and when it “becomes a problem.” Complaint levels from customers are not yet considered a concern, he added.

That the Premier League’s efforts are having at least some effect doesn’t appear to be in doubt, but it’s pretty difficult to find evidence in public. That being said, an IPTV provider whose identity we were asked to conceal has taken more easily spotted measures.

After Premier League matches got underway this past Tuesday night, the provider in question launched a new beta service in its Kodi addon. Perhaps unsurprisingly, it allows users to cycle through proxy servers in order to bypass blocks put in place by ISPs on behalf of the Premier League.

Embedded proxy service in Kodi

As seen from the image above, the beta unblocking service is accessible via the service’s Kodi addon and requires no special skills to operate. Simply clicking on the “Find a Proxy to Use” menu item opens up the page below.

The servers used to bypass the blocks

Once a working proxy is found, access to the streams is facilitated indirectly, thereby evading the Premier League’s attempts at blocking IP addresses at the UK’s ISPs. With that achieved, the list of streams is accessible again.

Sky Sports streams ready, in HD
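
Mechanically, there is nothing exotic about this kind of unblocking feature. As a rough illustration only, here is a minimal Python sketch of the cycling idea; the proxy addresses and endpoint are invented, and the actual addon’s code is unknown to us.

import requests

# Invented values for illustration; not the provider's real servers.
PROXIES = ["http://203.0.113.10:8080", "http://203.0.113.11:8080"]
STREAM_INDEX = "http://example.com/streams"  # hypothetical stream list

def find_working_proxy():
    """Return the first proxy that can still reach the stream index."""
    for proxy in PROXIES:
        try:
            resp = requests.get(STREAM_INDEX,
                                proxies={"http": proxy},
                                timeout=5)
            if resp.ok:
                return proxy  # this one gets through the ISP blocks
        except requests.RequestException:
            continue  # blocked or down; try the next candidate
    return None  # every proxy failed; fall back to direct access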

The use of proxies for this kind of traffic is of interest, at least as far as the injunction goes.

What we know already is that the Premier League only has permission to block servers if it “reasonably believes” they have the “sole or predominant purpose of enabling or facilitating access to infringing streams of Premier League match footage.”

If any server “is being used for any other substantial purpose”, the football organization cannot block it, meaning that non-dedicated or multi-function proxies cannot be blocked by ISPs, legally at least.

On Thursday evening, however, a TF source monitoring a popular IPTV provider that uses proxies reported that the match between Southampton and Manchester United suddenly became blocked. Whether that was due to Premier League action is unclear, but switching to a VPN restored normal service.

The use of VPNs with IPTV services raises other issues, however. All Premier League blockades can be circumvented with the use of a VPN but many IPTV providers are known for being intolerant of them, since they can also be used by restreamers to ‘pirate’ their service.

The Premier League injunction came into force on March 18, 2017, and will run out this weekend when the football season ends.

It’s reasonable to presume that the period will have been used for testing and that the Premier League will be back in court again this year seeking a further injunction for the new season starting in August. Expect it to be more effective than it has been thus far.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

zetcd: running ZooKeeper apps without ZooKeeper

Post Syndicated from ris original https://lwn.net/Articles/723334/rss

The CoreOS Blog introduces the first beta release, v0.0.1, of zetcd. “Distributed systems commonly rely on a distributed consensus to coordinate work. Usually the systems providing distributed consensus guarantee information is delivered in order and never suffer split-brain conflicts. The usefulness, but rich design space, of such systems is evident by the proliferation of implementations; projects such as chubby, ZooKeeper, etcd, and consul, despite differing in philosophy and protocol, all focus on serving similar basic key-value primitives for distributed consensus. As part of making etcd the most appealing foundation for distributed systems, the etcd team developed a new proxy, zetcd, to serve ZooKeeper requests with an unmodified etcd cluster.”
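
The appeal is that existing ZooKeeper clients need no changes at all. Here is a hedged sketch of what that looks like from Python, using the kazoo client (our choice, not something from the announcement), assuming zetcd is proxying a local etcd cluster on ZooKeeper’s default port 2181; the key and value are illustrative.

from kazoo.client import KazooClient

# The client believes it is talking to ZooKeeper; zetcd translates
# the requests onto etcd behind the scenes.
zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Ordinary ZooKeeper operations, untouched.
zk.ensure_path("/demo")
zk.create("/demo/key", b"value")
data, stat = zk.get("/demo/key")
print(data)  # b'value'

zk.stop()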

Grafana 4.3 Beta Release

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/05/12/grafana-4.3-beta-release/

Grafana v4.3 Beta is now available for download.

Release Highlights

  • New Heatmap Panel
  • Graph Panel Histogram Mode
  • Elasticsearch Histogram Aggregation
  • Prometheus Table data format
  • New MySQL Data Source (alpha version to get some early feedback)
  • 60+ small fixes and improvements, most of them contributed by our fantastic community!

Check out the New Features in v4.3 Dashboard on the Grafana Play site for
a showcase of these new features.

Histogram Support

A Histogram is a kind of bar chart that groups numbers into ranges, often called buckets or bins. Taller bars show that more data falls in that range.

The Graph Panel now supports Histograms.
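
If bucketing is new to you, it takes only a few lines of Python to illustrate; the values and bucket width below are arbitrary.

from collections import Counter

values = [3, 7, 12, 15, 18, 21, 34, 36, 41]
width = 10  # each bucket covers a range of 10

# Map each value to the start of its bucket, then count per bucket.
buckets = Counter((v // width) * width for v in values)

for start in sorted(buckets):
    # One bar per bucket; taller bars where more values fall.
    print(f"{start:>2}-{start + width:<3} {'#' * buckets[start]}")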

Histogram Aggregation Support for Elasticsearch

Elasticsearch is the only supported data source that can return pre-bucketed data (data that is already grouped into ranges). With other data sources there is a risk of returning inaccurate data in a histogram due to using already aggregated data rather than raw data. This release adds support for Elasticsearch pre-bucketed data that can be visualized with the new Heatmap Panel.

Heatmap Panel

The Histogram support in the Graph Panel does not show changes over time – it aggregates all the data together for the chosen time range. To visualize a histogram over time, we have built a new Heatmap Panel.

Every column in a Heatmap is a histogram snapshot. Instead of visualizing higher values with higher bars, a heatmap visualizes higher values with color. The histogram shown above is equivalent to one column in the heatmap shown below.

The Heatmap panel also works with Elasticsearch Histogram Aggregations for more accurate server side bucketing.

MySQL Data Source (alpha)

This release includes a new core data source for MySQL. You can write any possible MySQL query and format it as either Time Series or Table Data, allowing it to be used with the Graph Panel, Table Panel, and SingleStat Panel.

We are still working on the MySQL data source. As it’s missing some important features, like templating and macros, and future changes could be breaking, we are labeling the data source as Alpha. Instead of holding up the release of v4.3, we are including it in its current shape to get some early feedback. So please try it out and let us know what you think on Twitter or on our community forum. Is this a feature that you would use? How can we make it better?

The query editor can show the generated and interpolated SQL that is sent to the MySQL server.

The query editor will also show any errors that resulted from running the query (very useful when you have a syntax error!).

Health Check Endpoint

Now you can monitor the monitoring with the Health Check Endpoint! The new /api/health endpoint returns HTTP 200 OK if everything is up and HTTP 503 Error if the Grafana database cannot be pinged.
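
A trivial probe of the endpoint might look like this; the host and port assume a default local Grafana install.

import requests

# /api/health returns 200 OK when Grafana can reach its database.
resp = requests.get("http://localhost:3000/api/health", timeout=5)
if resp.status_code == 200:
    print("Grafana is healthy")
else:
    print(f"Grafana reported a problem: HTTP {resp.status_code}")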

Lazy Load Panels

Grafana now delays loading panels until they become visible (scrolled into view). This means panels out of view are not sending requests thereby reducing the load on your time series database.

Prometheus – Table Data (column per label)

The Prometheus data source now supports the Table Data format by automatically assigning a column to a label. This makes it really easy to browse data in the table panel.

Other Highlights From The Changelog

Changes:

  • Table: Support to change column header text #3551
  • InfluxDB: query builder support for ORDER BY and LIMIT (allows TOP-N queries) #6065; support for InfluxDB’s SLIMIT feature #7232, thx @thuck
  • Graph: Support auto grid min/max when using log scale #3090, thx @bigbenhur
  • Prometheus: Make Prometheus query field a textarea #7663, thx @hagen1778
  • Server: Support listening on a UNIX socket #4030, thx @mitjaziv

Fixes:

  • MySQL: 4-byte UTF8 not supported when using MySQL database (allows Emojis in Dashboard Names) #7958
  • Dashboard: Description tooltip is not fully displayed #7970

Lots more enhancements and fixes can be found in the Changelog.

Download

Head to the v4.3 download page for download links & instructions.

Thanks

A big thanks to all the Grafana users who contribute by submitting PRs, bug reports, helping out on our community site and providing feedback!

Making sweet, sweet music with PiSound

Post Syndicated from Jonic original https://www.raspberrypi.org/blog/making-sweet-sweet-music-pisound/

I’d say I am a passable guitarist. Ever since I learnt about the existence of the Raspberry Pi in 2012, I’ve wondered how I could use one as a guitar effects unit. Unfortunately, I’m also quite lazy and have therefore done precisely nothing to make one. Now, though, I no longer have to beat myself up about this. Thanks to the PiSound board from Blokas, musicians can connect all manner of audio gear to their Raspberry Pi, bringing their projects to a whole new level. Essentially, it transforms your Pi into a complete audio workstation! What musician wouldn’t want a piece of that?

PiSound: a soundcard HAT for the Raspberry Pi

The PiSound in situ: do those dials go all the way to eleven?

PiSound is a HAT for the Raspberry Pi 3 which acts as a souped-up sound card. It allows you to send and receive audio signals from its jacks, and to send MIDI input/output signals to compatible devices. It features two 6.35mm (1/4-inch) in/out jacks, two standard DIN-5 MIDI in/out sockets, potentiometers for volume and gain, and ‘The Button’ (with emphatic capitals) for activating audio manipulation patches. Following an incredibly successful Indiegogo campaign, the PiSound team is preparing the board for sale later in the year.

Setting the board up was simple, thanks to the excellent documentation on the PiSound site. First, I mounted the board on my Raspberry Pi’s GPIO pins and secured it with the supplied screws. Next, I ran one script in a terminal window on a fresh installation of Raspbian, which downloaded, installed, and set up all the software I needed to get going. All I had to do after that was connect my instruments and get to work creating patches for Pure Data, a popular visual programming interface for manipulating media streams.
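
Pure Data patches are built visually rather than in text, but the signal processing behind an effect like tremolo is simple enough to sketch in a few lines of Python with NumPy. This is only an illustration of the idea, not the patch I actually used.

import numpy as np

SAMPLE_RATE = 44100
# Stand-in for two seconds of guitar signal; a real patch would
# read samples from the PiSound input instead.
signal = np.random.randn(SAMPLE_RATE * 2)

def tremolo(samples, rate=5.0, depth=0.5, sr=SAMPLE_RATE):
    """Pulse the volume by multiplying with a slow sine wave."""
    t = np.arange(len(samples)) / sr
    # The LFO sweeps between (1 - depth) and 1, a few times per second.
    lfo = 1.0 - depth * (0.5 + 0.5 * np.sin(2 * np.pi * rate * t))
    return samples * lfo

wobbly = tremolo(signal)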

Image from Blokas

Get creative with PiSound!

During my testing, I created some simple fuzz, delay, and tremolo guitar effects. The possibilities, though, are as broad as your imagination. I’ve come up with some ideas to inspire you:

  • You could create a web interface for the guitar effects, accessible over a local network on a smartphone or tablet.
  • How about controlling an interactive light show or projected visualisation on stage using the audio characteristics of the guitar signal?
  • Channel your inner Matt Bellamy and rig up some MIDI hardware on your guitar to trigger loops and samples while you play.
  • Use a tilt switch to increase the intensity of an effect when the angle of the guitar’s neck is changed (imagine you’re really going for it during a solo).
  • You could even use the audio input stream as a base for generating other non-audio results.

pisound – Audio & MIDI Interface for your Raspberry Pi

Now that I’ve had a taste of what this incredible little board can do, I’m very excited to see what new things it will enable me to do as a performer. It’s compact and practical, too: as the entire thing is about the size of a standard guitar pedal, I could embed it into one of my guitars if I wanted to. Alternatively, I could get creative and design a custom enclosure for it.

Using Sonic Pi with PiSound

Community favourite Sonic Pi will also support the board very soon, as Sam Aaron and Ben Smith ably demonstrated at our fifth birthday party celebrations. This means you don’t even need to be able to play an instrument to make something awesome with this clever little HAT.

The Future of @Sonic_Pi with Sam Aaron & Ben Smith at #PiParty

I’m incredibly impressed with the hardware and the support on the PiSound website. It’s going to be my go-to HAT for advanced audio projects, and, when it finally launches later this year, I’ll have all the motivation I need to create the guitar effects unit I’ve always wanted.

Find out more about PiSound over at the Blokas website, and take a deeper look at the tech specs and other information over at the PiSound documentation site.

Disclaimer: I am personally a backer of the Indiegogo campaign, and Blokas very kindly supplied a beta board for this review.

The post Making sweet, sweet music with PiSound appeared first on Raspberry Pi.

Spotify’s Beta Used ‘Pirate’ MP3 Files, Some From Pirate Bay

Post Syndicated from Andy original https://torrentfreak.com/spotifys-beta-used-pirate-mp3-files-some-from-pirate-bay-170509/

While some pirates will probably never be tempted away from the digital high seas, over the past decade millions have ditched or tapered down their habit with the help of Spotify.

It’s no coincidence that from the very beginning more than a decade ago, the streaming service had more than a few things in common with the piracy scene.

Spotify CEO Daniel Ek originally worked with uTorrent creator Ludvig ‘Ludde’ Strigeus before the pair sold uTorrent to BitTorrent Inc. and began work on Spotify. Later, the company told TF that pirates were their target.

“Spotify is a new way of enjoying music. We believe Spotify provides a viable alternative to music piracy,” the company said.

“We think the way forward is to create a service better than piracy, thereby converting users into a legal, sustainable alternative which also enriches the total music experience.”

The technology deployed by Spotify was also familiar. Like the majority of ‘pirate’ platforms at the time, Spotify operated a peer-to-peer (P2P) system which grew to become one of the largest on the Internet. It was shut down in 2011.

But in the clearest nod to pirates, Spotify was available for free, supported by ads if the user desired. This was the platform’s greatest asset as it sought to win over a generation that had grown accustomed to gorging on free MP3s. Interestingly, however, an early Pirate Bay figure has now revealed that Spotify also had a use for the free content floating around the Internet.

As one of the early members of Sweden’s infamous Piratbyrån (piracy bureau), Rasmus Fleischer was also one of the key figures at The Pirate Bay. Over the years he’s been a writer, researcher, debater, and musician, and in 2012 he finished his PhD thesis on “music’s political economy.”

As part of a five-person team, Fleischer is now writing a book about Spotify. Titled ‘Spotify Teardown – Inside the Black Box of Streaming Music’, the book aims to shine light on the history of the famous music service and also spills the beans on a few secrets.

In an interview with Sweden’s DI.se, Fleischer reveals that when Spotify was in early beta, the company used unlicensed music to kick-start the platform.

“Spotify’s beta version was originally a pirate service. It was distributing MP3 files that the employees happened to have on their hard drives,” he reveals.

Rumors that early versions of Spotify used ‘pirate’ MP3s have been floating around the Internet for years. People who had access to the service in the beginning later reported downloading tracks that contained ‘Scene’ labeling, tags, and formats, which are the tell-tale signs that content hadn’t been obtained officially.

Solid proof has been more difficult to come by but Fleischer says he knows for certain that Spotify was using music obtained not only from pirate sites, but the most famous pirate site of all.

According to the writer, a few years ago he was involved with a band that decided to distribute their music on The Pirate Bay instead of the usual outlets. Soon after, the album appeared on Spotify’s beta service.

“I thought that was funny. So I emailed Spotify and asked how they obtained it. They said that ‘now, during the test period, we will use music that we find’,” Fleischer recalls.

For a company that has attracting pirates built into its DNA, it’s perhaps fitting that it tempted them with the same bait found on pirate sites. Certainly, the company’s history of a pragmatic attitude towards piracy means that few will be shouting ‘hypocrites’ at the streaming platform now.

Indeed, according to Fleischer the successes and growth of Spotify are directly linked to the temporary downfall of The Pirate Bay following the raid on the site in 2006, and the lawsuits that followed.

“The entire Spotify beta period and its early launch history is in perfect sync with the Pirate Bay process,” Fleischer explains.

“They would not have had as much attention if they had not been able to surf that wave. The company’s early history coincides with the Pirate Party becoming a hot topic, and the trial of the Pirate Bay in the Stockholm District Court.”

In 2013, Fleischer told TF that The Pirate Bay had “helped catalyze so-called ‘new business models’,” and it now appears that Spotify is reaping the benefits and looks set to keep doing so into the future.

An in-depth interview with Rasmus Fleischer will be published here soon, including an interesting revelation detailing how TorrentFreak readers positively affected the launch of Spotify in the United States.

Spotify Teardown – Inside the Black Box of Streaming Music will be published early 2018.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

MariaDB 10.3-alpha released

Post Syndicated from Michael "Monty" Widenius original http://monty-says.blogspot.com/2017/04/mariadb-103-alpha-released.html

While most of the MariaDB developers have been working hard on getting MariaDB 10.2 out as GA, a small team, including me, has been working on the next release, MariaDB 10.3.

The theme of MariaDB 10.2 is complex operations, like window functions, common table expressions, and JSON functions; the theme of MariaDB 10.3 is compatibility.

Compatibility refers to functionality that exists in other databases but has been missing in MariaDB:

In MariaDB 10.2 ORACLE mode was limited to removing MariaDB specific options in SHOW CREATE TABLE, SHOW CREATE VIEW and setting SQL_MODE to “PIPES_AS_CONCAT, ANSI_QUOTES, IGNORE_SPACE, ORACLE, NO_KEY_OPTIONS, NO_TABLE_OPTIONS, NO_FIELD_OPTIONS, NO_AUTO_CREATE_USER”.

In MariaDB 10.3, SQL_MODE=ORACLE allows MariaDB to understand a large subset of Oracle’s PL/SQL language. The documentation for what is supported is still lacking, but the interested can find the details in the test suite, in the “mysql-test/suite/compat/oracle” directory.
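
The mode is a per-session setting, so trying it out is straightforward. Here is a minimal sketch from Python using mysql-connector-python; the connection details are illustrative.

import mysql.connector

# Illustrative credentials; point this at a MariaDB 10.3 server.
conn = mysql.connector.connect(host="localhost", user="root",
                               password="secret", database="test")
cur = conn.cursor()

# Per-session switch; later statements in this session are parsed
# in Oracle compatibility mode.
cur.execute("SET SESSION sql_mode = 'ORACLE'")

# Inspect the expanded mode string the server actually applies.
cur.execute("SELECT @@sql_mode")
print(cur.fetchone()[0])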

If things go as planned, the features we will add to 10.3 prior to beta are:

Most of the above features are already close to ready (to be added in future Alphas), so I expect that it will not take many months before we can make a first MariaDB 10.3 beta!

This is in line with what was discussed at the MariaDB developer conference in New York one week ago, where most attendees wanted to see new MariaDB releases more often.

MariaDB 10.3 can be downloaded here

Happy testing!

Acrophobia 1.0: don’t drop the ball!

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/acrophobia/

Using servomotors and shadow tracking, Acrophobia 1.0’s mission to give a Raspberry Pi a nervous disposition is a rolling success.

Acrophobia 1.0

Acrophobia, a nervous machine with no human-serving goal, but with a single fear: of dropping the ball. Unlike any other ball balancing machine, Acrophobia has no interest in keeping the ball centered. She is just afraid to drop it, getting trapped in near-infinite loops of her own making.

How to give a Raspberry Pi Acrophobia

At the heart of Acrophobia, controlling its MDF body and 3D-printed wheels, are a Raspberry Pi 2 and a Camera Module. The camera tracks a shadow across a square of semi-elastic synthetic cloth, moving the Turnigy S901D servomotors at each corner to keep it within a set perimeter.

Well-placed lighting creates the perfect shadow for the Raspberry Pi to track

The shadow is cast by a small ball, and the single goal of Acrophobia is to keep that ball from dropping off the edge.

To set up the build, the Raspberry Pi is accessed via VNC viewer on an iPad. Once the Python code is executed, Acrophobia is stuck in its near-infinite nightmare loop.
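
The control loop behind a build like this is conceptually simple. As a hedged sketch of the idea only (the threshold, GPIO pin, and geometry are invented, and the project’s real code may differ entirely), using OpenCV for the shadow and gpiozero for one of the servos:

import cv2
from gpiozero import Servo

servo = Servo(17)           # one corner's servo; pin number invented
cam = cv2.VideoCapture(0)   # the camera watching the cloth

MARGIN = 0.35               # central fraction of the frame that is "safe"

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The ball's shadow is the darkest region: threshold, then find
    # the centroid of the dark pixels.
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        continue            # no shadow visible in this frame
    cx = m["m10"] / m["m00"] / frame.shape[1]   # normalised 0..1
    if cx < MARGIN:
        servo.value = 0.5   # tilt to roll the ball back toward centre
    elif cx > 1 - MARGIN:
        servo.value = -0.5
    else:
        servo.value = 0.0   # shadow is safely inside; stay level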

This video for Acrophobia 1.0 has only recently been uploaded to Vimeo, but the beta recording has been available for some time. You can see the initial iteration, created by George Adamopoulos, Dafni Papadopoulou, Maria Papacharisi and Filippos Pappas for the National Technical University of Athens School of Architecture Undergraduate course here, and compare the two. The beta video includes the details of the original Arduino/webcam setup that was eventually replaced by the Raspberry Pi and Camera Module.

Team Building

I recently saw a similar build to this, again using a Raspberry Pi, which used tablet computers as game controllers. Instead of relying on a camera to track the ball, two players worked together to keep the ball within the boundaries of the sheet.

Naturally, now that I need the video for a blog post, I can’t find it. But if you know what I’m talking about, share the link in the comments below.

And if you don’t, it’s time to get making, my merry band of Pi builders. Who can turn Acrophobia into an interactive game?

The post Acrophobia 1.0: don’t drop the ball! appeared first on Raspberry Pi.