Tag Archives: new product

Analyze data in Amazon DynamoDB using Amazon SageMaker for real-time prediction

Post Syndicated from YongSeong Lee original https://aws.amazon.com/blogs/big-data/analyze-data-in-amazon-dynamodb-using-amazon-sagemaker-for-real-time-prediction/

Many companies across the globe use Amazon DynamoDB to store and query historical user-interaction data. DynamoDB is a fast NoSQL database used by applications that need consistent, single-digit millisecond latency.

Often, customers want to turn their valuable data in DynamoDB into insights by analyzing a copy of their table stored in Amazon S3. Doing this separates their analytical queries from their low-latency critical paths. This data can be the primary source for understanding customers’ past behavior, predicting future behavior, and generating downstream business value. Customers often turn to DynamoDB because of its great scalability and high availability. After a successful launch, many customers want to use the data in DynamoDB to predict future behaviors or provide personalized recommendations.

DynamoDB is a good fit for low-latency reads and writes, but it’s not practical to scan an entire DynamoDB table to train a model. In this post, I demonstrate how you can use DynamoDB table data copied to Amazon S3 by AWS Data Pipeline to predict customer behavior. I also demonstrate how you can use this data to provide personalized recommendations for customers using Amazon SageMaker. You can also run ad hoc queries against the data using Amazon Athena. DynamoDB recently released on-demand backups, which create full table backups with no performance impact. However, that feature isn’t suitable for our purposes in this post, so I chose AWS Data Pipeline instead to create managed backups that are accessible from other services.

To do this, I describe how to read the DynamoDB backup file format that Data Pipeline produces. I also describe how to convert the objects in S3 to a CSV format that Amazon SageMaker can read. In addition, I show how to schedule regular exports and transformations using Data Pipeline. The sample data used in this post is from the UCI Bank Marketing Data Set.

The solution that I describe provides the following benefits:

  • Separates analytical queries from production traffic on your DynamoDB table, preserving your DynamoDB read capacity units (RCUs) for important production requests
  • Automatically updates your model to get real-time predictions
  • Optimizes for performance (so it doesn’t compete with DynamoDB RCUs after the export) and for cost (using data you already have)
  • Makes it easier for developers of all skill levels to use Amazon SageMaker

All the code and the data set used in this post are available in this .zip file.

Solution architecture

The following diagram shows the overall architecture of the solution.

The steps that data follows through the architecture are as follows:

  1. Data Pipeline regularly copies the full contents of a DynamoDB table as JSON into an S3 bucket.
  2. Exported JSON files are converted to comma-separated value (CSV) format to use as a data source for Amazon SageMaker.
  3. Amazon SageMaker renews the model artifact and updates the endpoint.
  4. The converted CSV is available for ad hoc queries with Amazon Athena.
  5. Data Pipeline controls this flow and repeats the cycle based on the schedule defined by customer requirements.

Building the auto-updating model

This section discusses details about how to read the DynamoDB exported data in Data Pipeline and build automated workflows for real-time prediction with a regularly updated model.

Download sample scripts and data

Before you begin, take the following steps:

  1. Download sample scripts in this .zip file.
  2. Unzip the src.zip file.
  3. Find the automation_script.sh file and edit it for your environment. For example, you need to replace 's3://<your bucket>/<datasource path>/' with your own S3 path to the data source for Amazon SageMaker. In the script, replace the text enclosed by angle brackets (< and >) with your own values.
  4. Upload the json-serde-1.3.6-SNAPSHOT-jar-with-dependencies.jar file to your S3 path so that the ADD jar command in Apache Hive can refer to it.

For this solution, the banking.csv file should be imported into a DynamoDB table.

Export a DynamoDB table

To export the DynamoDB table to S3, open the Data Pipeline console and choose the Export DynamoDB table to S3 template. In this template, Data Pipeline creates an Amazon EMR cluster and performs an export in the EMRActivity activity. Set proper intervals for backups according to your business requirements.

One core node (m3.xlarge) provides the default capacity for the EMR cluster and should be suitable for the solution in this post. Leave the option to resize the cluster before running enabled in the TableBackupActivity activity so that Data Pipeline scales the cluster to match the table size. The process of converting to CSV format and renewing models happens in this EMR cluster.

For a more in-depth look at how to export data from DynamoDB, see Export Data from DynamoDB in the Data Pipeline documentation.

Add the script to an existing pipeline

After you export your DynamoDB table, you add an additional EMR step to EMRActivity by following these steps:

  1. Open the Data Pipeline console and choose the ID for the pipeline that you want to add the script to.
  2. For Actions, choose Edit.
  3. In the editing console, choose the Activities category and add an EMR step using the custom script downloaded in the previous section, as shown below.

Paste the following command into the new step after the data upload step:

s3://#{myDDBRegion}.elasticmapreduce/libs/script-runner/script-runner.jar,s3://<your bucket name>/automation_script.sh,#{output.directoryPath},#{myDDBRegion}

The element #{output.directoryPath} references the S3 path where the data pipeline exports DynamoDB data as JSON. The path should be passed to the script as an argument.
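Inside automation_script.sh, these two values arrive as ordinary positional parameters. A minimal sketch of the argument handling (variable names here are illustrative, not taken from the original script):

#!/bin/bash
EXPORT_PATH=$1   # S3 path of the exported DynamoDB JSON, passed as #{output.directoryPath}
REGION=$2        # AWS Region, passed as #{myDDBRegion}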

The bash script has two goals: converting the data format and renewing the Amazon SageMaker model. Subsequent sections discuss the contents of the automation script.

Automation script: Convert JSON data to CSV with Hive

We use Apache Hive to transform the data into a new format. The Hive QL script to create an external table and transform the data is included in the custom script that you added to the Data Pipeline definition.

When you run the Hive scripts, do so with the -e option. Also, define the Hive table with the 'org.openx.data.jsonserde.JsonSerDe' row format to parse and read JSON format. The SQL creates a Hive EXTERNAL table, and it reads the DynamoDB backup data on the S3 path passed to it by Data Pipeline.

Note: You should create the table with the “EXTERNAL” keyword to avoid the backup data being accidentally deleted from S3 if you drop the table.
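For context, each line of the exported backup is a JSON object in which every attribute value is wrapped in a DynamoDB type descriptor ('s' for strings, 'n' for numbers). That is why the Hive columns are declared as map<string,string> and read with ['s'] and ['n'] in the SELECT clause. A trimmed, purely illustrative record might look like this (the values are made up):

{"customer_id":{"s":"12345"},"age":{"n":"34"},"job":{"s":"admin."},"y":{"n":"0"}}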

The full automation script for converting follows. Add your own bucket name and data source path in the highlighted areas.

#!/bin/bash
hive -e "
ADD jar s3://<your bucket name>/json-serde-1.3.6-SNAPSHOT-jar-with-dependencies.jar ; 
DROP TABLE IF EXISTS blog_backup_data ;
CREATE EXTERNAL TABLE blog_backup_data (
 customer_id map<string,string>,
 age map<string,string>, job map<string,string>, 
 marital map<string,string>,education map<string,string>, 
 default map<string,string>, housing map<string,string>,
 loan map<string,string>, contact map<string,string>, 
 month map<string,string>, day_of_week map<string,string>, 
 duration map<string,string>, campaign map<string,string>,
 pdays map<string,string>, previous map<string,string>, 
 poutcome map<string,string>, emp_var_rate map<string,string>, 
 cons_price_idx map<string,string>, cons_conf_idx map<string,string>,
 euribor3m map<string,string>, nr_employed map<string,string>, 
 y map<string,string> ) 
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe' 
LOCATION '$1/';

INSERT OVERWRITE DIRECTORY 's3://<your bucket name>/<datasource path>/' 
SELECT concat( customer_id['s'],',', 
 age['n'],',', job['s'],',', 
 marital['s'],',', education['s'],',', default['s'],',', 
 housing['s'],',', loan['s'],',', contact['s'],',', 
 month['s'],',', day_of_week['s'],',', duration['n'],',', 
 campaign['n'],',',pdays['n'],',',previous['n'],',', 
 poutcome['s'],',', emp_var_rate['n'],',', cons_price_idx['n'],',',
 cons_conf_idx['n'],',', euribor3m['n'],',', nr_employed['n'],',', y['n'] ) 
FROM blog_backup_data
WHERE customer_id['s'] > 0 ;
"

After creating the external table, you read the data and use the INSERT OVERWRITE DIRECTORY ... SELECT statement to write it as CSV to the S3 path that you designated as the data source for Amazon SageMaker.

Depending on your requirements, you can drop or transform columns in the SELECT clause in this step to optimize the data for analysis. For example, you might remove columns that have unpredictable correlations with the target value, because keeping the wrong columns might expose your model to overfitting during training. In this post, the customer_id column is removed. Overfitting weakens your model’s predictions on new data. More information about overfitting can be found in the topic Model Fit: Underfitting vs. Overfitting in the Amazon ML documentation.

Automation script: Renew the Amazon SageMaker model

After the CSV data is replaced and ready to use, create a new model artifact for Amazon SageMaker with the updated dataset on S3. To renew the model artifact, you must create a new training job. Training jobs can be run using the AWS SDK (for example, boto3 for Amazon SageMaker), the Amazon SageMaker Python SDK (installed with the pip install sagemaker command), or the AWS CLI for Amazon SageMaker, which is what this post uses.
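For reference, here is a minimal boto3 sketch that is roughly equivalent to the CLI call in the script that follows. The angle-bracket placeholders are values you supply; this is an illustration, not the script used in this post.

import boto3

sm = boto3.client('sagemaker', region_name='<your region>')
sm.create_training_job(
    TrainingJobName='TRAIN-2018-03-13-00-00-00',
    AlgorithmSpecification={
        'TrainingImage': '<linear-learner image for your region>',
        'TrainingInputMode': 'File'
    },
    RoleArn='<your AmazonSageMaker-ExecutionRole ARN>',
    InputDataConfig=[{
        'ChannelName': 'train',
        'DataSource': {'S3DataSource': {
            'S3DataType': 'S3Prefix',
            'S3Uri': 's3://<your bucket name>/<datasource path>/',
            'S3DataDistributionType': 'FullyReplicated'}},
        'ContentType': 'text/csv',
        'CompressionType': 'None'
    }],
    OutputDataConfig={'S3OutputPath': 's3://<your bucket name>/model/'},
    ResourceConfig={'InstanceType': 'ml.m4.xlarge', 'InstanceCount': 1, 'VolumeSizeInGB': 5},
    StoppingCondition={'MaxRuntimeInSeconds': 120},
    HyperParameters={'feature_dim': '20', 'predictor_type': 'binary_classifier'}
)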

In addition, consider how to renew your existing model smoothly and without service impact, because the model is called by applications in real time. To do this, you first create a new endpoint configuration and then update the existing endpoint with it.

#!/bin/bash
## Define variable 
REGION=$2
DTTIME=`date +%Y-%m-%d-%H-%M-%S`
ROLE="<your AmazonSageMaker-ExecutionRole>" 


# Select container image based on region.
case "$REGION" in
"us-west-2" )
    IMAGE="174872318107.dkr.ecr.us-west-2.amazonaws.com/linear-learner:latest"
    ;;
"us-east-1" )
    IMAGE="382416733822.dkr.ecr.us-east-1.amazonaws.com/linear-learner:latest" 
    ;;
"us-east-2" )
    IMAGE="404615174143.dkr.ecr.us-east-2.amazonaws.com/linear-learner:latest" 
    ;;
"eu-west-1" )
    IMAGE="438346466558.dkr.ecr.eu-west-1.amazonaws.com/linear-learner:latest" 
    ;;
 *)
    echo "Invalid Region Name"
    exit 1 ;  
esac

# Start training job and creating model artifact 
TRAINING_JOB_NAME=TRAIN-${DTTIME} 
S3OUTPUT="s3://<your bucket name>/model/" 
INSTANCETYPE="ml.m4.xlarge"
INSTANCECOUNT=1
VOLUMESIZE=5 
aws sagemaker create-training-job --training-job-name ${TRAINING_JOB_NAME} --region ${REGION}  --algorithm-specification TrainingImage=${IMAGE},TrainingInputMode=File --role-arn ${ROLE}  --input-data-config '[{ "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://<your bucket name>/<datasource path>/", "S3DataDistributionType": "FullyReplicated" } }, "ContentType": "text/csv", "CompressionType": "None" , "RecordWrapperType": "None"  }]'  --output-data-config S3OutputPath=${S3OUTPUT} --resource-config  InstanceType=${INSTANCETYPE},InstanceCount=${INSTANCECOUNT},VolumeSizeInGB=${VOLUMESIZE} --stopping-condition MaxRuntimeInSeconds=120 --hyper-parameters feature_dim=20,predictor_type=binary_classifier  

# Wait until job completed 
aws sagemaker wait training-job-completed-or-stopped --training-job-name ${TRAINING_JOB_NAME}  --region ${REGION}

# Get newly created model artifact and create model
MODELARTIFACT=`aws sagemaker describe-training-job --training-job-name ${TRAINING_JOB_NAME} --region ${REGION}  --query 'ModelArtifacts.S3ModelArtifacts' --output text `
MODELNAME=MODEL-${DTTIME}
aws sagemaker create-model --region ${REGION} --model-name ${MODELNAME}  --primary-container Image=${IMAGE},ModelDataUrl=${MODELARTIFACT}  --execution-role-arn ${ROLE}

# create a new endpoint configuration 
CONFIGNAME=CONFIG-${DTTIME}
aws sagemaker  create-endpoint-config --region ${REGION} --endpoint-config-name ${CONFIGNAME}  --production-variants  VariantName=Users,ModelName=${MODELNAME},InitialInstanceCount=1,InstanceType=ml.m4.xlarge

# create or update the endpoint
STATUS=`aws sagemaker describe-endpoint --endpoint-name  ServiceEndpoint --query 'EndpointStatus' --output text --region ${REGION} `
if [[ "$STATUS" != "InService" ]] ;
then
    aws sagemaker  create-endpoint --endpoint-name  ServiceEndpoint  --endpoint-config-name ${CONFIGNAME} --region ${REGION}    
else
    aws sagemaker  update-endpoint --endpoint-name  ServiceEndpoint  --endpoint-config-name ${CONFIGNAME} --region ${REGION}
fi

Grant permission

Before you execute the script, you must grant the proper permissions to Data Pipeline. Data Pipeline uses the DataPipelineDefaultResourceRole role by default. I added the following policy to DataPipelineDefaultResourceRole to allow Data Pipeline to create and update the Amazon SageMaker training jobs, models, endpoint configurations, and endpoints used in the script.

{
 "Version": "2012-10-17",
 "Statement": [
 {
 "Effect": "Allow",
 "Action": [
 "sagemaker:CreateTrainingJob",
 "sagemaker:DescribeTrainingJob",
 "sagemaker:CreateModel",
 "sagemaker:CreateEndpointConfig",
 "sagemaker:DescribeEndpoint",
 "sagemaker:CreateEndpoint",
 "sagemaker:UpdateEndpoint",
 "iam:PassRole"
 ],
 "Resource": "*"
 }
 ]
}

Use real-time prediction

After you deploy a model into production using Amazon SageMaker hosting services, your client applications use the InvokeEndpoint API to get inferences from the model hosted at the specified endpoint. This approach is useful for interactive web, mobile, or desktop applications.

Following is a simple Python example that queries the Amazon SageMaker endpoint by its name ("ServiceEndpoint") and uses the response for real-time prediction.

=== Python sample for real-time prediction ===

#!/usr/bin/env python
import boto3
import json 

client = boto3.client('sagemaker-runtime', region_name ='<your region>' )
new_customer_info = '34,10,2,4,1,2,1,1,6,3,190,1,3,4,3,-1.7,94.055,-39.8,0.715,4991.6'
response = client.invoke_endpoint(
    EndpointName='ServiceEndpoint',
    Body=new_customer_info, 
    ContentType='text/csv'
)
result = json.loads(response['Body'].read().decode())
print(result)
--- output(response) ---
{u'predictions': [{u'score': 0.7528127431869507, u'predicted_label': 1.0}]}

Solution summary

The solution takes the following steps:

  1. Data Pipeline exports DynamoDB table data into S3. The original JSON data should be kept to recover the table in the rare event that this is needed. Data Pipeline then converts the JSON to CSV so that Amazon SageMaker can read the data. Note: You should select only meaningful attributes when you convert to CSV. For example, if you judge that the “campaign” attribute is not correlated, you can eliminate this attribute from the CSV.
  2. Train the Amazon SageMaker model with the new data source.
  3. When a new customer comes to your site, you can judge how likely it is for this customer to subscribe to your new product based on the prediction score provided by Amazon SageMaker.
  4. If the new user subscribes to your new product, your application must update the attribute “y” to the value 1 (for yes), as sketched after this list. This updated data is provided for the next model renewal as a new data source. It serves to improve the accuracy of your prediction. With each new entry, your application can become smarter and deliver better predictions.
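A minimal sketch of that update with boto3, assuming the table’s partition key is customer_id and the table name is a placeholder you replace:

import boto3

dynamodb = boto3.resource('dynamodb', region_name='<your region>')
table = dynamodb.Table('<your table name>')

# Mark the customer as subscribed (y = 1) so the next export reflects it.
# '#y' is used in case the attribute name collides with a reserved word.
table.update_item(
    Key={'customer_id': '12345'},
    UpdateExpression='SET #y = :subscribed',
    ExpressionAttributeNames={'#y': 'y'},
    ExpressionAttributeValues={':subscribed': 1}
)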

Running ad hoc queries using Amazon Athena

Amazon Athena is a serverless query service that makes it easy to analyze large amounts of data stored in Amazon S3 using standard SQL. Athena is useful for examining data and collecting statistics or informative summaries about data. You can also use the powerful analytic functions of Presto, as described in the topic Aggregate Functions of Presto in the Presto documentation.

With the Data Pipeline scheduled activity, recent CSV data is always located in S3 so that you can run ad hoc queries against the data using Amazon Athena. I show this with example SQL statements following. For an in-depth description of this process, see the post Interactive SQL Queries for Data in Amazon S3 on the AWS News Blog. 

Creating an Amazon Athena table and running queries

You can simply create an EXTERNAL table for the CSV data on S3 in the Amazon Athena console.

=== Table Creation ===
CREATE EXTERNAL TABLE datasource (
 age int, 
 job string, 
 marital string , 
 education string, 
 default string, 
 housing string, 
 loan string, 
 contact string, 
 month string, 
 day_of_week string, 
 duration int, 
 campaign int, 
 pdays int , 
 previous int , 
 poutcome string, 
 emp_var_rate double, 
 cons_price_idx double,
 cons_conf_idx double, 
 euribor3m double, 
 nr_employed double, 
 y int 
)
ROW FORMAT DELIMITED 
FIELDS TERMINATED BY ',' ESCAPED BY '\\' LINES TERMINATED BY '\n' 
LOCATION 's3://<your bucket name>/<datasource path>/';

The following query calculates the correlation coefficient between the target attribute and other attributes using Amazon Athena.

=== Sample Query ===

SELECT corr(age,y) AS correlation_age_and_target, 
 corr(duration,y) AS correlation_duration_and_target, 
 corr(campaign,y) AS correlation_campaign_and_target,
 corr(contact,y) AS correlation_contact_and_target
FROM ( SELECT age , duration , campaign , y , 
 CASE WHEN contact = 'telephone' THEN 1 ELSE 0 END AS contact 
 FROM datasource 
 ) datasource ;

Conclusion

In this post, I introduce an example of how to analyze DynamoDB data by using a copy of the table data in Amazon S3, which preserves DynamoDB table read capacity. You can then use this data as the data source to train an Amazon SageMaker model for real-time prediction. In addition, you can run ad hoc queries against the data on S3 using Amazon Athena. I also present how to automate these procedures by using Data Pipeline.

You can adapt this example to your specific use case, and hopefully this post helps you accelerate your development. You can find more examples and use cases for Amazon SageMaker in the video AWS re:Invent 2017: Introducing Amazon SageMaker on the AWS website.

 


Additional Reading

If you found this post useful, be sure to check out Serving Real-Time Machine Learning Predictions on Amazon EMR and Analyzing Data in S3 using Amazon Athena.

 


About the Author

Yong Seong Lee is a Cloud Support Engineer for AWS Big Data Services. He is interested in every technology related to data/databases and helping customers who have difficulties in using AWS services. His motto is “Enjoy life, be curious and have maximum experience.”

 

 

Raspberry Pi 3 Model B+ on sale now at $35

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/raspberry-pi-3-model-bplus-sale-now-35/

Here’s a long post. We think you’ll find it interesting. If you don’t have time to read it all, we recommend you watch this video, which will fill you in with everything you need, and then head straight to the product page to fill yer boots. (We recommend the video anyway, even if you do have time for a long read. ‘Cos it’s fab.)

A BRAND-NEW PI FOR π DAY


If you’ve been a Raspberry Pi watcher for a while now, you’ll have a bit of a feel for how we update our products. Just over two years ago, we released Raspberry Pi 3 Model B. This was our first 64-bit product, and our first product to feature integrated wireless connectivity. Since then, we’ve sold over nine million Raspberry Pi 3 units (we’ve sold 19 million Raspberry Pis in total), which have been put to work in schools, homes, offices and factories all over the globe.

Those Raspberry Pi watchers will know that we have a history of releasing improved versions of our products a couple of years into their lives. The first example was Raspberry Pi 1 Model B+, which added two additional USB ports, introduced our current form factor, and rolled up a variety of other feedback from the community. Raspberry Pi 2 didn’t get this treatment, of course, as it was superseded after only one year; but it feels like it’s high time that Raspberry Pi 3 received the “plus” treatment.

So, without further ado, Raspberry Pi 3 Model B+ is now on sale for $35 (the same price as the existing Raspberry Pi 3 Model B), featuring:

  • A 1.4GHz 64-bit quad-core ARM Cortex-A53 CPU
  • Dual-band 802.11ac wireless LAN and Bluetooth 4.2
  • Faster Ethernet (Gigabit Ethernet over USB 2.0)
  • Power-over-Ethernet support (with separate PoE HAT)
  • Improved PXE network and USB mass-storage booting
  • Improved thermal management

Alongside a 200MHz increase in peak CPU clock frequency, we have roughly three times the wired and wireless network throughput, and the ability to sustain high performance for much longer periods.

Behold the shiny

Raspberry Pi 3B+ is available to buy today from our network of Approved Resellers.

New features, new chips

Roger Thornton did the design work on this revision of the Raspberry Pi. Here, he and I have a chat about what’s new.

Introducing the Raspberry Pi 3 Model B+


The new product is built around BCM2837B0, an updated version of the 64-bit Broadcom application processor used in Raspberry Pi 3B, which incorporates power integrity optimisations, and a heat spreader (that’s the shiny metal bit you can see in the photos). Together these allow us to reach higher clock frequencies (or to run at lower voltages to reduce power consumption), and to more accurately monitor and control the temperature of the chip.

Dual-band wireless LAN and Bluetooth are provided by the Cypress CYW43455 “combo” chip, connected to a Proant PCB antenna similar to the one used on Raspberry Pi Zero W. Compared to its predecessor, Raspberry Pi 3B+ delivers somewhat better performance in the 2.4GHz band, and far better performance in the 5GHz band, as demonstrated by these iperf results from LibreELEC developer Milhouse.

                             Tx bandwidth (Mb/s)   Rx bandwidth (Mb/s)
Raspberry Pi 3B                     35.7                  35.6
Raspberry Pi 3B+ (2.4GHz)           46.7                  46.3
Raspberry Pi 3B+ (5GHz)             102                   102

The wireless circuitry is encapsulated under a metal shield, rather fetchingly embossed with our logo. This has allowed us to certify the entire board as a radio module under FCC rules, which in turn will significantly reduce the cost of conformance testing Raspberry Pi-based products.

We’ll be teaching metalwork next.

Previous Raspberry Pi devices have used the LAN951x family of chips, which combine a USB hub and 10/100 Ethernet controller. For Raspberry Pi 3B+, Microchip have supported us with an upgraded version, LAN7515, which supports Gigabit Ethernet. While the USB 2.0 connection to the application processor limits the available bandwidth, we still see roughly a threefold increase in throughput compared to Raspberry Pi 3B. Again, here are some typical iperf results.

                             Tx bandwidth (Mb/s)   Rx bandwidth (Mb/s)
Raspberry Pi 3B                     94.1                  95.5
Raspberry Pi 3B+                    315                   315

We use a magjack that supports Power over Ethernet (PoE), and bring the relevant signals to a new 4-pin header. We will shortly launch a PoE HAT which can generate the 5V necessary to power the Raspberry Pi from the 48V PoE supply.

There… are… four… pins!

Coming soon to a Raspberry Pi 3B+ near you

Raspberry Pi 3B was our first product to support PXE Ethernet boot. Testing it in the wild shook out a number of compatibility issues with particular switches and traffic environments. Gordon has rolled up fixes for all known issues into the BCM2837B0 boot ROM, and PXE boot is now enabled by default.

Clocking, voltages and thermals

The improved power integrity of the BCM2837B0 package, and the improved regulation accuracy of our new MaxLinear MxL7704 power management IC, have allowed us to tune our clocking and voltage rules for both better peak performance and longer-duration sustained performance.

Below 70°C, we use the improvements to increase the core frequency to 1.4GHz. Above 70°C, we drop to 1.2GHz, and use the improvements to decrease the core voltage, increasing the period of time before we reach our 80°C thermal throttle; the reduction in power consumption is such that many use cases will never reach the throttle. Like a modern smartphone, we treat the thermal mass of the device as a resource, to be spent carefully with the goal of optimising user experience.

This graph, courtesy of Gareth Halfacree, demonstrates that Raspberry Pi 3B+ runs faster and at a lower temperature for the duration of an eight‑minute quad‑core Sysbench CPU test.

Note that Raspberry Pi 3B+ does consume substantially more power than its predecessor. We strongly encourage you to use a high-quality 2.5A power supply, such as the official Raspberry Pi Universal Power Supply.

FAQs

We’ll keep updating this list over the next couple of days, but here are a few to get you started.

Are you discontinuing earlier Raspberry Pi models?

No. We have a lot of industrial customers who will want to stick with the existing products for the time being. We’ll keep building these models for as long as there’s demand. Raspberry Pi 1B+, Raspberry Pi 2B, and Raspberry Pi 3B will continue to sell for $25, $35, and $35 respectively.

What about Model A+?

Raspberry Pi 1A+ continues to be the $20 entry-level “big” Raspberry Pi for the time being. We are considering the possibility of producing a Raspberry Pi 3A+ in due course.

What about the Compute Module?

CM1, CM3 and CM3L will continue to be available. We may offer versions of CM3 and CM3L with BCM2837B0 in due course, depending on customer demand.

Are you still using VideoCore?

Yes. VideoCore IV 3D is the only publicly-documented 3D graphics core for ARM‑based SoCs, and we want to make Raspberry Pi more open over time, not less.

Credits

A project like this requires a vast amount of focused work from a large team over an extended period. Particular credit is due to Roger Thornton, who designed the board and ran the exhaustive (and exhausting) RF compliance campaign, and to the team at the Sony UK Technology Centre in Pencoed, South Wales. A partial list of others who made major direct contributions to the BCM2837B0 chip program, CYW43455 integration, LAN7515 and MxL7704 developments, and Raspberry Pi 3B+ itself follows:

James Adams, David Armour, Jonathan Bell, Maria Blazquez, Jamie Brogan-Shaw, Mike Buffham, Rob Campling, Cindy Cao, Victor Carmon, KK Chan, Nick Chase, Nigel Cheetham, Scott Clark, Nigel Clift, Dominic Cobley, Peter Coyle, John Cronk, Di Dai, Kurt Dennis, David Doyle, Andrew Edwards, Phil Elwell, John Ferdinand, Doug Freegard, Ian Furlong, Shawn Guo, Philip Harrison, Jason Hicks, Stefan Ho, Andrew Hoare, Gordon Hollingworth, Tuomas Hollman, EikPei Hu, James Hughes, Andy Hulbert, Anand Jain, David John, Prasanna Kerekoppa, Shaik Labeeb, Trevor Latham, Steve Le, David Lee, David Lewsey, Sherman Li, Xizhe Li, Simon Long, Fu Luo Larson, Juan Martinez, Sandhya Menon, Ben Mercer, James Mills, Max Passell, Mark Perry, Eric Phiri, Ashwin Rao, Justin Rees, James Reilly, Matt Rowley, Akshaye Sama, Ian Saturley, Serge Schneider, Manuel Sedlmair, Shawn Shadburn, Veeresh Shivashimper, Graham Smith, Ben Stephens, Mike Stimson, Yuree Tchong, Stuart Thomson, John Wadsworth, Ian Watch, Sarah Williams, Jason Zhu.

If you’re not on this list and think you should be, please let me know, and accept my apologies.

The post Raspberry Pi 3 Model B+ on sale now at $35 appeared first on Raspberry Pi.

Preparing for AWS Certificate Manager (ACM) Support of Certificate Transparency

Post Syndicated from Jonathan Kozolchyk original https://aws.amazon.com/blogs/security/how-to-get-ready-for-certificate-transparency/

 

Update from March 27, 2018: On March 27, 2018, we updated ACM APIs so that you can disable Certificate Transparency logging on a per-certificate basis.


Starting April 30, 2018, Google Chrome will require all publicly trusted certificates issued after this date to be logged in at least two Certificate Transparency logs. This means that any certificate that is issued but not logged will result in an error message in Google Chrome. Beginning April 24, 2018, Amazon will log all new and renewed certificates in at least two public logs unless you disable Certificate Transparency logging.

Without Certificate Transparency, it can be difficult for a domain owner to know if an unexpected certificate was issued for their domain. Under the current system, no record is kept of certificates being issued, and domain owners do not have a reliable way to identify rogue certificates.

To address this situation, Certificate Transparency creates a cryptographically secure log of each certificate issued. Domain owners can search the log to identify unexpected certificates, whether issued by mistake or malice. Domain owners can also identify Certificate Authorities (CAs) that are improperly issuing certificates. In this blog post, I explain more about Certificate Transparency and tell you how to prepare for it.

How does Certificate Transparency work?

When a CA issues a publicly trusted certificate, the CA must submit the certificate to one or more Certificate Transparency log servers. The Certificate Transparency log server responds with a signed certificate timestamp (SCT) that confirms the log server will add the certificate to the list of known certificates. The SCT is then embedded in the certificate and delivered automatically to a browser. The SCT is like a receipt that proves the certificate was published into the Certificate Transparency log. Starting April 30, Google Chrome will require an SCT as proof that the certificate was published to a Certificate Transparency log in order to trust the certificate without displaying an error message.
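If you’re curious, you can see the SCTs embedded in a server’s certificate with OpenSSL. A quick sketch (example.com is a placeholder; recent OpenSSL versions display the extension as “CT Precertificate SCTs”):

echo | openssl s_client -connect example.com:443 2>/dev/null \
  | openssl x509 -noout -text | grep -A 10 "CT Precertificate SCTs"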

What is Amazon doing to support Certificate Transparency?

Certificate Transparency is a good practice. It enables AWS customers to be more confident that an unauthorized certificate hasn’t been issued by a CA. Beginning on April 24, 2018, Amazon will log all new and renewed certificates in at least two Certificate Transparency logs unless you disable Certificate Transparency logging.

We recognize that there can be times when our customers do not want to log certificates. For example, if you are building a website for an unreleased product and have registered the subdomain, newproduct.example.com, requesting a logged certificate for your domain will make it publicly known that the new product is coming. Certificate Transparency logging also can expose server hostnames that you want to keep private. Hostnames such as payments.example.com can reveal the purpose of a server and provide attackers with information about your private network. These logs do not contain the private key for your certificate. For these reasons, on March 27, 2018 we updated ACM APIs so that you can disable Certificate Transparency logging on a per-certificate basis using the ACM APIs or with the AWS CLI. Doing so will lead to errors in Google Chrome, which may be preferable to exposing the information.

Please refer to ACM documentation for specifics on how to opt out of Certificate Transparency logging.
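For example, with the AWS CLI, disabling logging for a new or an existing certificate looks roughly like the following (the domain and ARN are placeholders; see the ACM documentation for the authoritative syntax):

aws acm request-certificate --domain-name example.com --validation-method DNS \
  --options CertificateTransparencyLoggingPreference=DISABLED

aws acm update-certificate-options --certificate-arn <your certificate ARN> \
  --options CertificateTransparencyLoggingPreference=DISABLED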

Conclusion

Beginning April 24, 2018, ACM will begin logging all new and renewed certificates by default. If you don’t want a certificate to be logged, you’ll be able to opt out using the AWS API or CLI. However, for Google Chrome to trust the certificate, all issued or imported certificates must have the SCT information embedded in them by April 30, 2018.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions, start a new thread in the ACM forum.

– Jonathan

Interested in AWS Security news? Follow the AWS Security Blog on Twitter.

Zero WH: pre-soldered headers and what to do with them

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/zero-wh/

If you head over to the website of your favourite Raspberry Pi Approved Reseller today, you may find the new Zero WH available to purchase. But what is it? Why is it different, and what can you do with it?

Raspberry Pi Zero WH

“If you like pre-soldered headers, and getting caught in the rain…”

Raspberry Pi Zero WH

Imagine a Raspberry Pi Zero W. Now add a professionally soldered header. Boom, that’s the Raspberry Pi Zero WH! It’s your same great-tasting Pi, with a brand-new…crust? It’s perfect for everyone who doesn’t own a soldering iron or who wants the soldering legwork done for them.

What you can do with the Zero WH

What can’t you do? Am I right?! The small size of the Zero W makes it perfect for projects with minimal wiggle-room. In such projects, some people have no need for GPIO pins — they simply solder directly to the board. However, there are many instances where you do want a header on your Zero W, for example in order to easily take advantage of the GPIO expander tool for Debian Stretch on a PC or Mac.

GPIO expander in clubs and classrooms

As Ben Nuttall explains in his blog post on the topic:

[The GPIO expander tool] is a real game-changer for Raspberry Jams, Code Clubs, CoderDojos, and schools. You can live boot the Raspberry Pi Desktop OS from a USB stick, use Linux PCs, or even install [the Pi OS] on old computers. Then you have really simple access to physical computing without full Raspberry Pi setups, and with no SD cards to configure.

Using the GPIO expander with the Raspberry Pi Zero WH decreases the setup cost for anyone interested in trying out physical computing in the classroom or at home. (And once you’ve stuck your toes in, you’ll obviously fall in love and will soon find yourself with multiple Raspberry Pi models, HATs aplenty, and an area in your home dedicated to your new adventure in Raspberry Pi. Don’t say I didn’t warn you.)

Other uses for a Zero W with a header

The GPIO expander setup is just one of a multitude of uses for a Raspberry Pi Zero W with a header. You may want the header for prototyping before you commit to soldering wires directly to a board. Or you may have a temporary build in mind for your Zero W, in which case you won’t want to commit to soldering wires to the board at all.

Raspberry Pi Zero WH

Your use case may be something else entirely — tell us in the comments below how you’d utilise a pre-soldered Raspberry Pi Zero WH in your project. The best project idea will receive ten imaginary house points of absolutely no practical use, but immense emotional value. Decide amongst yourselves who you believe should win them — I’m going to go waste a few more hours playing SLUG!

The post Zero WH: pre-soldered headers and what to do with them appeared first on Raspberry Pi.

Implementing Dynamic ETL Pipelines Using AWS Step Functions

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/implementing-dynamic-etl-pipelines-using-aws-step-functions/

This post contributed by:
Wangechi Dole, AWS Solutions Architect
Milan Krasnansky, ING, Digital Solutions Developer, SGK
Rian Mookencherry, Director – Product Innovation, SGK

Data processing and transformation is a common use case you see in our customer case studies and success stories. Often, customers deal with complex data from a variety of sources that needs to be transformed and customized through a series of steps to make it useful to different systems and stakeholders. This can be difficult due to the ever-increasing volume, velocity, and variety of data. Today, data management challenges cannot be solved with traditional databases.

Workflow automation helps you build solutions that are repeatable, scalable, and reliable. You can use AWS Step Functions for this. A great example is how SGK used Step Functions to automate the ETL processes for their client. With Step Functions, SGK has been able to automate changes within the data management system, substantially reducing the time required for data processing.

In this post, SGK shares the details of how they used Step Functions to build a robust data processing system based on highly configurable business transformation rules for ETL processes.

SGK: Building dynamic ETL pipelines

SGK is a subsidiary of Matthews International Corporation, a diversified organization focusing on brand solutions and industrial technologies. SGK’s Global Content Creation Studio network creates compelling content and solutions that connect brands and products to consumers through multiple assets including photography, video, and copywriting.

We were recently contracted to build a sophisticated and scalable data management system for one of our clients. We chose to build the solution on AWS to leverage advanced, managed services that help to improve the speed and agility of development.

The data management system served two main functions:

  1. Ingesting a large amount of complex data to facilitate both reporting and product funding decisions for the client’s global marketing and supply chain organizations.
  2. Processing the data through normalization and applying complex algorithms and data transformations. The system’s goal was to provide information in the relevant context, such as strategic marketing, supply chain, and product planning, to the end consumer through automated data feeds or updates to existing ETL systems.

We were faced with several challenges:

  • Output data that needed to be refreshed at least twice a day to provide fresh datasets to both local and global markets. That constant data refresh posed several challenges, especially around data management and replication across multiple databases.
  • The complexity of reporting business rules that needed to be updated on a constant basis.
  • Data that could not be processed as contiguous blocks of typical time-series data. The measurements were taken across seasons (that is, combinations of dates), which often resulted in up to three overlapping seasons at any given time.
  • Input data that came from 10+ different data sources. Each data source ranged from 1–20K rows with as many as 85 columns per input source.

These challenges meant that our small Dev team heavily invested time in frequent configuration changes to the system and data integrity verification to make sure that everything was operating properly. Maintaining this system proved to be a daunting task and that’s when we turned to Step Functions—along with other AWS services—to automate our ETL processes.

Solution overview

Our solution included the following AWS services:

  • AWS Step Functions: Before Step Functions was available, we were using multiple Lambda functions for this use case and running into memory limit issues. With Step Functions, we can execute steps in parallel, in a cost-efficient manner, without running into memory limitations.
  • AWS Lambda: The Step Functions state machine uses Lambda functions to implement the Task states. Our Lambda functions are implemented in Java 8.
  • Amazon DynamoDB provides us with an easy and flexible way to manage business rules. We specify our rules as Keys. These are key-value pairs stored in a DynamoDB table.
  • Amazon RDS: Our ETL pipelines consume source data from our RDS MySQL database.
  • Amazon Redshift: We use Amazon Redshift for reporting purposes because it integrates with our BI tools. Currently we are using Tableau for reporting which integrates well with Amazon Redshift.
  • Amazon S3: We store our raw input files and intermediate results in S3 buckets.
  • Amazon CloudWatch Events: Our users expect results at a specific time. We use CloudWatch Events to trigger Step Functions on an automated schedule.

Solution architecture

This solution uses a declarative approach to defining business transformation rules that are applied by the underlying Step Functions state machine as data moves from RDS to Amazon Redshift. An S3 bucket is used to store intermediate results. A CloudWatch Event rule triggers the Step Functions state machine on a schedule. The following diagram illustrates our architecture:

Here are more details for the above diagram:

  1. A rule in CloudWatch Events triggers the state machine execution on an automated schedule.
  2. The state machine invokes the first Lambda function.
  3. The Lambda function deletes all existing records in Amazon Redshift. Depending on the dataset, the Lambda function can create a new table in Amazon Redshift to hold the data.
  4. The same Lambda function then retrieves Keys from a DynamoDB table. Keys represent specific marketing campaigns or seasons and map to specific records in RDS.
  5. The state machine executes the second Lambda function using the Keys from DynamoDB.
  6. The second Lambda function retrieves the referenced dataset from RDS. The records retrieved represent the entire dataset needed for a specific marketing campaign.
  7. The second Lambda function executes in parallel for each Key retrieved from DynamoDB and stores the output in CSV format temporarily in S3.
  8. Finally, the Lambda function uploads the data into Amazon Redshift.

To understand the above data processing workflow, take a closer look at the Step Functions state machine for this example.

We walk you through the state machine in more detail in the following sections.

Walkthrough

To get started, you need to:

  • Create a schedule in CloudWatch Events
  • Specify conditions for RDS data extracts
  • Create Amazon Redshift input files
  • Load data into Amazon Redshift

Step 1: Create a schedule in CloudWatch Events
Create rules in CloudWatch Events to trigger the Step Functions state machine on an automated schedule. The following is an example cron expression to automate your schedule:

In this example, the cron expression invokes the Step Functions state machine at 3:00am and 2:00pm (UTC) every day.
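A CloudWatch Events schedule expression matching that description would look like the following (shown as an illustration; adjust the fields for your own schedule):

cron(0 3,14 * * ? *)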

Step 2: Specify conditions for RDS data extracts
We use DynamoDB to store Keys that determine which rows of data to extract from our RDS MySQL database. An example Key is MCS2017, which stands for Marketing Campaign Spring 2017. Each campaign has a specific start and end date, and the corresponding dataset is stored in RDS MySQL. A record in RDS contains about 600 columns, and each Key can represent up to 20K records.

A given day can have multiple campaigns with different start and end dates running simultaneously. In the following example DynamoDB item, three campaigns are specified for the given date.
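A purely hypothetical item for such a date might look like the following; the attribute names and values here are assumptions for illustration, not the actual table schema:

{
  "date": "2017-05-01",
  "campaign1": "MCS2017",
  "campaign2": "MCF2017",
  "campaign3": "MCW2017"
}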

The state machine example shown above uses Keys 31, 32, and 33 in the first ChoiceState and Keys 21 and 22 in the second ChoiceState. These keys represent marketing campaigns for a given day. For example, on Monday, there are only two campaigns requested. The ChoiceState with Keys 21 and 22 is executed. If three campaigns are requested on Tuesday, for example, then ChoiceState with Keys 31, 32, and 33 is executed. MCS2017 can be represented by Key 21 and Key 33 on Monday and Tuesday, respectively. This approach gives us the flexibility to add or remove campaigns dynamically.

Step 3: Create Amazon Redshift input files
When the state machine begins execution, the first Lambda function is invoked as the resource for FirstState, represented in the Step Functions state machine as follows:

"Comment": ” AWS Amazon States Language.", 
  "StartAt": "FirstState",
 
"States": { 
  "FirstState": {
   
"Type": "Task",
   
"Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Start",
    "Next": "ChoiceState" 
  } 

As described in the solution architecture, the purpose of this Lambda function is to delete existing data in Amazon Redshift and retrieve keys from DynamoDB. In our use case, we found that deleting existing records was more efficient and less time-consuming than finding the delta and updating existing records. On average, an Amazon Redshift table can contain about 36 million cells, which translates to roughly 65K records. The following is the code snippet for the first Lambda function in Java 8:

public class LambdaFunctionHandler implements RequestHandler<Map<String,Object>, Map<String,String>> {
    Map<String,String> keys = new HashMap<>();

    public Map<String, String> handleRequest(Map<String, Object> input, Context context) {
        Properties config = getConfig();
        // 1. Cleaning the Redshift database
        new RedshiftDataService(config).cleaningTable();
        // 2. Reading data from DynamoDB
        List<String> keyList = new DynamoDBDataService(config).getCurrentKeys();
        for (int i = 0; i < keyList.size(); i++) {
            keys.put("key" + (i + 1), keyList.get(i));
        }
        keys.put("keyT", String.valueOf(keyList.size()));
        // 3. Returning the key values and the key count from the "for" loop
        return keys;
    }
}

The following JSON represents ChoiceState.

"ChoiceState": {
   "Type" : "Choice",
   "Choices": [ 
   {

      "Variable": "$.keyT",
     "StringEquals": "3",
     "Next": "CurrentThreeKeys" 
   }, 
   {

     "Variable": "$.keyT",
    "StringEquals": "2",
    "Next": "CurrentTwooKeys" 
   } 
 ], 
 "Default": "DefaultState"
}

The variable $.keyT represents the number of keys retrieved from DynamoDB. This variable determines which of the parallel branches should be executed. At the time of publication, Step Functions does not support dynamic parallel state. Therefore, choices under ChoiceState are manually created and assigned hardcoded StringEquals values. These values represent the number of parallel executions for the second Lambda function.

For example, if $.keyT equals 3, the second Lambda function is executed three times in parallel with the keys $.key1, $.key2, and $.key3 retrieved from DynamoDB. Similarly, if $.keyT equals 2, the second Lambda function is executed twice in parallel. The following JSON represents this parallel execution:

"CurrentThreeKeys": { 
  "Type": "Parallel",
  "Next": "NextState",
  "Branches": [ 
  {

     "StartAt": “key31",
    "States": { 
       “key31": {

          "Type": "Task",
        "InputPath": "$.key1",
        "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
        "End": true 
       } 
    } 
  }, 
  {

     "StartAt": “key32",
    "States": { 
     “key32": {

        "Type": "Task",
       "InputPath": "$.key2",
         "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
       "End": true 
      } 
     } 
   }, 
   {

      "StartAt": “key33",
       "States": { 
          “key33": {

                "Type": "Task",
             "InputPath": "$.key3",
             "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
           "End": true 
       } 
     } 
    } 
  ] 
} 

Step 4: Load data into Amazon Redshift
The second Lambda function in the state machine extracts records from RDS associated with the keys retrieved from DynamoDB. It processes the data and then loads it into an Amazon Redshift table. The following is a code snippet for the second Lambda function in Java 8.

public class LambdaFunctionHandler implements RequestHandler<String, String> {
    public static String key = null;

    public String handleRequest(String input, Context context) {
        key = input;
        // 1. Getting basic configuration for the next classes and an S3 client
        Properties config = getConfig();
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // 2. Export query results from RDS into the S3 bucket
        new RdsDataService(config).exportDataToS3(s3, key);
        // 3. Import query results from the S3 bucket into Redshift
        new RedshiftDataService(config).importDataFromS3(s3, key);
        System.out.println(input);
        return "SUCCESS";
    }
}
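Under the hood, the importDataFromS3 step presumably issues an Amazon Redshift COPY command against the CSV files written to S3. A sketch of such a command, in which the table name, bucket path, and IAM role are assumptions:

COPY campaign_dataset
FROM 's3://<your bucket name>/temp/MCS2017.csv'
IAM_ROLE 'arn:aws:iam::<account id>:role/<redshift copy role>'
FORMAT AS CSV;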

After the data is loaded into Amazon Redshift, end users can visualize it using their preferred business intelligence tools.

Lessons learned

  • At the time of publication, the 1.5-GB memory hard limit for Lambda functions was inadequate for processing our complex workload. Step Functions gave us the flexibility to chunk our large datasets and process them in parallel, saving on costs and time.
  • In our previous implementation, we assigned each key a dedicated Lambda function along with CloudWatch rules for schedule automation. This approach proved to be inefficient and quickly became an operational burden. Previously, we processed each key sequentially, with each key adding about five minutes to the overall processing time. For example, processing three keys meant that the total processing time was three times longer. With Step Functions, the entire state machine executes in about five minutes.
  • Using DynamoDB with Step Functions gave us the flexibility to manage keys efficiently. In our previous implementations, keys were hardcoded in Lambda functions, which became difficult to manage due to frequent updates. DynamoDB is a great way to store dynamic data that changes frequently, and it works perfectly with our serverless architectures.

Conclusion

With Step Functions, we were able to fully automate the frequent configuration updates to our dataset, resulting in significant cost savings, reduced risk of data errors caused by system downtime, and more time for us to focus on new product development rather than support-related issues. We hope that you have found the information useful and that it can serve as a jump-start to building your own ETL processes on AWS with managed AWS services.

For more information about how Step Functions makes it easy to coordinate the components of distributed applications and microservices in any workflow, see the use case examples and then build your first state machine in under five minutes in the Step Functions console.

If you have questions or suggestions, please comment below.

How to Compete with Giants

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-compete-with-giants/

How to Compete with Giants

This post by Backblaze’s CEO and co-founder Gleb Budman is the sixth in a series about entrepreneurship. You can choose posts in the series from the list below:

  1. How Backblaze got Started: The Problem, The Solution, and the Stuff In-Between
  2. Building a Competitive Moat: Turning Challenges Into Advantages
  3. From Idea to Launch: Getting Your First Customers
  4. How to Get Your First 1,000 Customers
  5. Surviving Your First Year
  6. How to Compete with Giants

Use the Join button above to receive notification of new posts in this series.

Perhaps your business is competing in a brand new space free from established competitors. Most of us, though, start companies that compete with existing offerings from large, established companies. You need to come up with a better mousetrap — not the first mousetrap.

That’s the challenge Backblaze faced. In this post, I’d like to share some of the lessons I learned from that experience.

Backblaze vs. Giants

Competing with established companies that are orders of magnitude larger can be daunting. How can you succeed?

I’ll set the stage by offering a few sets of giants we compete with:

  • When we started Backblaze, we offered online backup in a market where companies had been offering “online backup” for at least a decade, and even the newer entrants had raised tens of millions of dollars.
  • When we built our storage servers, the alternatives were EMC, NetApp, and Dell — each of which had a market cap of over $10 billion.
  • When we introduced our cloud storage offering, B2, our direct competitors were Amazon, Google, and Microsoft. You might have heard of them.

What did we learn by competing with these giants on a bootstrapped budget? Let’s take a look.

Determine What Success Means

For a long time Apple considered Apple TV to be a hobby, not a real product worth focusing on, because it did not generate a billion in revenue. For a $10 billion per year revenue company, a new business that generates $50 million won’t move the needle and often isn’t worth putting focus on. However, for a startup, getting to $50 million in revenue can be the start of a wildly successful business.

Lesson Learned: Don’t let the giants set your success metrics.

The Advantages Startups Have

The giants have a lot of advantages: more money, people, scale, resources, access, etc. Following their playbook and attacking head-on means you’re simply outgunned. Common paths to failure are trying to build more features, enter more markets, outspend on marketing, and other similar approaches where scale and resources are the primary determinants of success.

But being a startup affords many advantages most giants would salivate over. As a nimble startup, you can leverage those to succeed. Let’s break down nine competitive advantages we’ve used that you can too.

1. Drive Focus

It’s hard to build a $10 billion revenue business doing just one thing, and most giants have a broad portfolio of businesses, numerous products for each, and targeting a variety of customer segments in multiple markets. That adds complexity and distributes management attention.

Startups get the benefit of having everyone in the company be extremely focused, often on a singular mission, product, customer segment, and market. While our competitors sell everything from advertising to Zantac, and are investing in groceries and shipping, Backblaze has focused exclusively on cloud storage. This means all of our best people (i.e., everyone) are focused on our cloud storage business. Where is all of your focus going?

Lesson Learned: Align everyone in your company to a singular focus to dramatically out-perform larger teams.

2. Use Lack-of-Scale as an Advantage

You may have heard Paul Graham say “Do things that don’t scale.” There are a host of things you can do specifically because you don’t have the same scale as the giants. Use that as an advantage.

When we look for data center space, we have more options than our largest competitors because there are simply more spaces available with room for 100 cabinets than for 1,000 cabinets. With some searching, we can find data center space that is better/cheaper.

When a flood in Thailand destroyed factories, causing the world’s supply of hard drives to plummet and prices to triple, we started drive farming. The giants certainly couldn’t. It was a bit crazy, but it let us keep prices unchanged for our customers.

Our Chief Cloud Officer, Tim, used to work at Adobe. Because of their size, any new product needed to always launch in a multitude of languages and in global markets. Once launched, they had scale. But getting any new product launched was incredibly challenging.

Lesson Learned: Use lack-of-scale to exploit opportunities that are closed to giants.

3. Build a Better Product

This one is probably obvious. If you’re going to provide the same product, at the same price, to the same customers — why do it? Remember that better does not always mean more features. Here’s one way we built a better product that didn’t require being a bigger company.

All online backup services required customers to choose what to include in their backup. We found that this was complicated for users since they often didn’t know what needed to be backed up. We flipped the model to back up everything and allow users to exclude if they wanted to, but it was not required. This reduced the number of features/options, while making it easier and better for the user.

This didn’t require the resources of a huge company; it just required understanding customers a bit deeper and thinking about the solution differently. Building a better product is the most classic startup competitive advantage.

Lesson Learned: Dig deep with your customers to understand and deliver a better mousetrap.

4. Provide Better Service

How can you provide better service? Use your advantages. Escalations from your customer care folks to engineering can go through fewer hoops. Fixing an issue and shipping can be quicker. Access to real answers on Twitter or Facebook can be more effective.

A strategic decision we made was to have all customer support people as full-time employees in our headquarters. This ensures they are in close contact with the whole company, so feedback can quickly flow both ways.

Having a smaller team and fewer layers enables faster internal communication, which increases customer happiness. And the option to do things that don’t scale — such as help a customer in a unique situation — can go a long way in building customer loyalty.

Lesson Learned: Service your customers better by establishing clear internal communications.

5. Remove The Unnecessary

After determining that the industry standard EMC/NetApp/Dell storage servers would be too expensive to build our own cloud storage upon, we decided to build our own infrastructure. Many said we were crazy to compete with these multi-billion dollar companies and that it would be impossible to build a lower cost storage server. However, not only did it prove to not be impossible — it wasn’t even that hard.

One key trick? Remove the unnecessary. While EMC and others built servers to sell to other companies for a wide variety of use cases, Backblaze needed servers that only Backblaze would run, and for a single use case. As a result we could tailor the servers for our needs by removing redundancy from each server (since we would run redundant servers), and using lower-performance components (since we would get high-performance by running parallel servers).

What do your customers and use cases not need? This can trim costs and complexity while often improving the product for your use case.

Lesson Learned: Don’t think “what can we add” to what the giants offer — think “what can we remove.”

6. Be Easy

How many times have you visited a large company website, particularly one that’s not consumer-focused, only to leave saying, “Huh? I don’t understand what you do.” Keeping your website clear, and your product and pricing simple, will dramatically increase conversion and customer satisfaction. If you’re able to make it 2x easier and thereby increase your conversion by 2x, you’ve just allowed yourself to spend half as much acquiring each customer: twice the sign-ups from the same marketing spend cuts your cost per customer in half.

Providing unlimited data backup wasn’t specifically about providing more storage — it was about making it easier. Since users didn’t know how much data they needed to back up, charging per gigabyte meant they wouldn’t know the cost. Providing unlimited data backup meant they could just relax.

Customers love easy — and being smaller makes easy easier to deliver. Use that as an advantage in your website, marketing materials, pricing, product, and in every other customer interaction.

Lesson Learned: Ease-of-use isn’t a slogan; it’s a competitive advantage. Treat it as seriously as any other feature of your product.

7. Don’t Be Afraid of Risk

Obviously unnecessary risks are unnecessary, and some risks aren’t worth taking. However, large companies that have given guidance to Wall Street with a $0.01 range on their earnings-per-share are inherently going to be very risk-averse. Use risk-tolerance to open up opportunities, and adjust your tolerance level as you scale. In your first year, there are likely an infinite number of ways your business may vaporize; don’t be too worried about taking a risk that might have a 20% downside when the upside is hockey-stick growth.

Using consumer-grade hard drives in our servers may have caused pain and suffering for us years down-the-line, but they were priced at approximately 50% of enterprise drives. Giants wouldn’t have considered the option. Turns out, the consumer drives performed great for us.

Lesson Learned: Use calculated risks as an advantage.

8. Be Open

The larger a company grows, the more it wants to hide information. Some of this is driven by regulatory requirements as a public company. But most of it is cultural: sharing something might cause a problem, so let’s not. All external communication is treated as a critical press release, with rounds and rounds of editing and approvals by multiple teams. However, customers are often desperate for information. Moreover, sharing information builds trust, understanding, and advocates.

I started blogging at Backblaze before we launched. When we blogged about our Storage Pod and open-sourced the design, many thought we were crazy to share this information. But it was transformative for us, establishing Backblaze as a tech thought leader in storage and giving people a sense of how we were able to provide our service at such a low cost.

Over the years we’ve developed a culture of being open internally and externally, on our blog and with the press, and in communities such as Hacker News and Reddit. Often we’ve been asked, “why would you share that!?” — but it’s the continual openness that builds trust. And that culture of openness is incredibly challenging for the giants.

Lesson Learned: Overshare to build trust and brand where giants won’t.

9. Be Human

As companies scale, typically a smaller percentage of founders and executives interact with customers. The people who build the company become more hidden, the language feels “corporate,” and customers start to feel they’re interacting with the cliché “faceless, nameless corporation.” Use your humanity to your advantage. From day one, the Backblaze About page has listed all the founders and my email address. While contacting us shouldn’t be the first path for a customer support question, I wanted it to be clear that we stand behind the service we offer; if we’re doing something wrong, I want to know it.

To scale it’s important to have processes and procedures, but sometimes a situation falls outside of a well-established process. While we want our employees to follow processes, they’re still encouraged to be human and “try to do the right thing.” How do you strike this balance? Simon Sinek gives a good talk about it: make your employees feel safe. If employees feel safe, they’ll be human.

If your customer is a consumer, they’ll appreciate being treated as a human. Even if your customer is a corporation, the purchasing decision-makers are still people.

Lesson Learned: Being human is the ultimate antithesis to the faceless corporation.

Build Culture to Sustain Your Advantages at Scale

Presumably the goal is not to always be competing with giants, but to one day become a giant. Does this mean you’ll lose all of these advantages? Some, yes — but not all. Some of these advantages are cultural, and if you build these into the culture from the beginning, and fight to keep them as you scale, you can keep them as you become a giant.

Tesla still comes across as human, with Elon Musk frequently interacting with people on Twitter. Apple continues to provide great service through their Genius Bar. And, worst case, if you lose these at scale, you’ll still have the other advantages of being a giant such as money, people, scale, resources, and access.

Of course, some new startup will be gunning for you with grand ambitions, so just be sure not to get complacent. 😉

The post How to Compete with Giants appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Say Hello to the New Atlassian

Post Syndicated from Chris De Santis original https://www.anchor.com.au/blog/2017/09/hello-new-atlassian/

Who is Atlassian?

Atlassian is an Australian IT company that develops enterprise software, with its best-known products being its issue-tracking app, Jira, and team collaboration and wiki product, Confluence.

In December 2015, Atlassian went public, making its initial public offering (IPO) under the symbol TEAM at a valuation of $4.37 billion. In summary, they big.

What happened?

A facelift

It’s a nice sunny day in Sydney in mid-September of 2017, and Atlassian, after 15 years of consistency, has rebranded, changing their look and feel for a brighter and funner one compared to the dreary previous look.

New Atlassian Branding Video

It’s a hell of a lot simpler and, as they show in the above video, it’s going to be used with a lot more creativity and flair in mind. It’s flexible in the sense that they can use it in a lot more ways, and with a lot more colours, than before.

Atlassian Logo Comparison

The blues they’re using now work super-well with the logos on a white background, whereas the white logos on their new champion brand colour, blue, can go both ways: some will see it as a bold, daring step that’s quite attractive, while others will find it off-putting and not very user-friendly.

New Atlassian Logo Versions

What’s it all mean?

Symbolism

In his announcement blog, Atlassian Co-Founder & Co-CEO Mike Cannon-Brookes mentions that the branding change reflects their newly shifted focus on the concept of teamwork. He goes on to explain that their previous logo depicted the sky-holding Greek titan Atlas and symbolised legendary service and support. But, while that reputation has become renowned, they’re shifting their focus to teamwork: why keep focusing on something you’ve already got right, right?

Atlassian Logo Evolution

The new logo contains more symbolism than meets the eye, as it can be interpreted as:

  • Two people high-fiving
  • A mountain to scale
  • The letter “A” (seen as two pillars reinforcing each other)

Product logos

Atlassian has created and acquired many products in their adventure so far, and they all seemed to have a similar art style, but something always felt off about their consistency. Well, needless to say, this was addressed with Atlassian’s very own “identity system”, which is a pretty cool term for a consistent logo-look for 14+ products, to fit them under one brand.

New Atlassian Product Logos

The result is a set of unique marks that "still feel very related to each other". Then again, I also see a new set of "unknown" Pokémon.

Typeface

New Atlassian Typeface

To add a cherry on top, Atlassian will be using their own custom-made typeface called Charlie Sans, specifically designed to balance legibility with personality (that’s probably the best way to describe it). Otherwise, purely as constructive criticism, I’d say there isn’t much difference between it and any of the other staple fonts, e.g. Arial, Verdana, etc. Then again, I’m not a professional designer.

It doesn’t look as distinct as their previous typeface, but, to be fair, it does look very slick next to the new product logos.

Well…

What do you think about it all?

 

Image credits: Atlassian

The post Say Hello to the New Atlassian appeared first on AWS Managed Services by Anchor.

AWS Hot Startups – August 2017

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-august-2017/

There’s no doubt about it – Artificial Intelligence is changing the world and how it operates. Across industries, organizations from startups to Fortune 500s are embracing AI to develop new products, services, and opportunities that are more efficient and accessible for their consumers. From driverless cars to better preventative healthcare to smart home devices, AI is driving innovation at a fast rate and will continue to play a more important role in our everyday lives.

This month we’d like to highlight startups using AI solutions to help companies grow. We are pleased to feature:

  • SignalBox – a simple and accessible deep learning platform to help businesses get started with AI.
  • Valossa – an AI video recognition platform for the media and entertainment industry.
  • Kaliber – innovative applications for businesses using facial recognition, deep learning, and big data.

SignalBox (UK)

In 2016, SignalBox founder Alain Richardt was hearing the same comments being made by developers, data scientists, and business leaders. They wanted to get into deep learning but didn’t know where to start. Alain saw an opportunity to commodify and apply deep learning by providing a platform that does the heavy lifting with an easy-to-use web interface, blueprints for common tasks, and a single click to productize models. With SignalBox, companies can start building deep learning models with no coding at all – they just select a data set, choose a network architecture, and go. SignalBox also offers step-by-step tutorials, tips and tricks from industry experts, and consulting services for customers that want an end-to-end AI solution.

SignalBox offers a variety of solutions that are being used across many industries for energy modeling, fraud detection, customer segmentation, insurance risk modeling, inventory prediction, real estate prediction, and more. Existing data science teams are using SignalBox to accelerate their innovation cycle. One innovative UK startup, Energi Mine, recently worked with SignalBox to develop deep networks that predict anomalous energy consumption patterns and do time series predictions on energy usage for businesses with hundreds of sites.

SignalBox uses a variety of AWS services including Amazon EC2, Amazon VPC, Amazon Elastic Block Store, and Amazon S3. The ability to rapidly provision EC2 GPU instances has been a critical factor in their success – both in terms of keeping their operational expenses low, as well as speed to market. The Amazon API Gateway has allowed for operational automation, giving SignalBox the ability to control its infrastructure.

To learn more about SignalBox, visit here.

Valossa (Finland)

As students at the University of Oulu in Finland, the Valossa founders spent years doing research in the computer science and AI labs. During that time, the team witnessed how the world was moving beyond text, with video playing a greater role in day-to-day communication. This spawned an idea to use technology to automatically understand what an audience is viewing and share that information with a global network of content producers. Since 2015, Valossa has been building next generation AI applications to benefit the media and entertainment industry and is moving beyond the capabilities of traditional visual recognition systems.

Valossa’s AI is capable of analyzing any video stream. The AI studies a vast array of data within videos and converts that information into descriptive tags, categories, and overviews automatically. Basically, it sees, hears, and understands videos like a human does. The Valossa AI can detect people, visual and auditory concepts, and key speech elements, and it labels explicit content to make moderating and filtering simpler. Valossa’s solutions are designed to provide value for the content production workflow, from media asset management to end-user applications for content discovery. AI-annotated content allows online viewers to jump directly to their favorite scenes or search specific topics and actors within a video.

Valossa leverages AWS to deliver the industry’s first complete AI video recognition platform. Using Amazon EC2 GPU instances, Valossa can easily scale their computation capacity based on customer activity. High-volume video processing with GPU instances provides the necessary speed for time-sensitive workflows. The geo-located Availability Zones in EC2 allow Valossa to bring resources close to their customers to minimize network delays. Valossa also uses Amazon S3 for video ingestion and to provide end-user video analytics, which makes managing and accessing media data easy and highly scalable.

To see how Valossa works, check out www.WhatIsMyMovie.com or enable the Alexa Skill, Valossa Movie Finder. To try the Valossa AI, sign up for free at www.valossa.com.

Kaliber (San Francisco, CA)

Serial entrepreneurs Ray Rahman and Risto Haukioja founded Kaliber in 2016. The pair had previously worked in startups building smart cities and online privacy tools, and teamed up to bring AI to the workplace and change the hospitality industry. Our world is designed to appeal to our senses – stores and warehouses have clearly marked aisles, products are colorfully packaged, and we use these designs to differentiate one thing from another. We tell each other apart by our faces, and previously that was something only humans could measure or act upon. Kaliber is using facial recognition, deep learning, and big data to create solutions for business use. Markets and companies that aren’t typically associated with cutting-edge technology will be able to use their existing camera infrastructure in a whole new way, making them more efficient and better able to serve their customers.

Computer video processing is rapidly expanding, and Kaliber believes that video recognition will extend to far more than security cameras and robots. Using the clients’ network of in-house cameras, Kaliber’s platform extracts key data points and maps them to actionable insights using their machine learning (ML) algorithm. Dashboards connect users to the client’s BI tools via the Kaliber enterprise APIs, and managers can view these analytics to improve their real-world processes, taking immediate corrective action with real-time alerts. Kaliber’s Real Metrics are aimed at combining the power of image recognition with ML to ultimately provide a more meaningful experience for all.

Kaliber uses many AWS services, including Amazon Rekognition, Amazon Kinesis, AWS Lambda, Amazon EC2 GPU instances, and Amazon S3. These services have been instrumental in helping Kaliber meet the needs of enterprise customers in record time.

Learn more about Kaliber here.

Thanks for reading and we’ll see you next month!

-Tina

 

Pimoroni is 5 now!

Post Syndicated from guru original https://www.raspberrypi.org/blog/pimoroni-is-5-now/

Long read written by Pimoroni’s Paul Beech, best enjoyed over a cup o’ grog.

Every couple of years, I’ve done a “State of the Fleet” update here on the Raspberry Pi blog to tell everyone how the Sheffield Pirates are doing. Half a decade has gone by in a blink, but reading back over the previous posts shows that a lot has happened in that time!

TL;DR We’re an increasingly medium-sized design/manufacturing/e-commerce business with workshops in Sheffield, UK, and Essen, Germany, and we employ almost 40 people. We’re totally lovely. Thanks for supporting us!

 

We’ve come a long way, baby

I’m sitting looking out the window at Sheffield-on-Sea and feeling pretty lucky about how things are going. In the morning, I’ll be flying east for Maker Faire Tokyo with Niko (more on him later), and to say hi to some amazing people in Shenzhen (and to visit Huaqiangbei, of course). This is after I’ve already visited this year’s Maker Faires in New York, San Francisco, and Berlin.

Pimoroni started out small, but we’ve grown like weeds, and we’re steadily sauntering towards becoming a medium-sized business. That’s thanks to fantastic support from the people who buy our stuff and spread the word. In return, we try to be nice, friendly, and human in everything we do, and to make exciting things, ideally with our own hands here in Sheffield.

Pimoroni soldering

Handmade with love

We’ve made it onto a few ‘fastest-growing’ lists, and we’re in the top 500 of the Inc. 5000 Europe list. Adafruit did it first a few years back, and we’ve never gone wrong when we’ve followed in their footsteps.

The slightly weird nature of Pimoroni means we get listed as either a manufacturing or e-commerce business. In reality, we’re about four or five companies in one shell, which is very much against the conventions of “how business is done”. However, having seen what Adafruit, SparkFun, and Seeed do, we’re more than happy to design, manufacture, and sell our stuff in-house, as well as stocking the best stuff from across the maker community.

Pimoroni stocks

Product and process

The whole process of expansion has not been without its growing pains. We’re just under 40 people strong now, and have an outpost in Germany (also hilariously far from the sea for piratical activities). This means we’ve had to change things quickly to improve and automate processes, so that the wheels won’t fall off as things get bigger. Process optimization is incredibly interesting to a geek, especially making sure that things are done well, that mistakes are easy to spot and fix, and that nothing is missed.

At the end of 2015, we had a step change in how busy we were, and our post room and support started to suffer. As a consequence, we implemented measures to become more efficient, including small but important things like checking in parcels with a barcode scanner attached to a Raspberry Pi. That Pi has been happily running on the same SD card for a couple of years now without problems 😀
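
Pimoroni haven’t shared the code behind that parcel check-in Pi, but a setup like this can be tiny. Here’s a minimal sketch of the idea, assuming a scanner that behaves as a USB keyboard wedge (most inexpensive ones do) and “types” each barcode followed by Enter; the filename and CSV layout are our own choices, not Pimoroni’s:

```python
#!/usr/bin/env python3
"""Log parcel barcodes from a USB keyboard-wedge scanner to a CSV file.

Hypothetical sketch -- the real Pimoroni check-in system is not public.
"""
import csv
import datetime
import sys

LOGFILE = "parcels.csv"  # assumed location; adjust to taste


def main():
    print("Scan a parcel barcode (Ctrl-D to quit)...")
    with open(LOGFILE, "a", newline="") as f:
        writer = csv.writer(f)
        for line in sys.stdin:            # each scan arrives as one line
            code = line.strip()
            if not code:
                continue
            now = datetime.datetime.now().isoformat(timespec="seconds")
            writer.writerow([now, code])  # timestamp + barcode
            f.flush()                     # survive a sudden power-off
            print(f"Checked in {code} at {now}")


if __name__ == "__main__":
    main()
```

Started at boot (from a systemd unit or similar), a script along these lines turns the Pi into a dedicated check-in terminal that can quietly run for years, as described above.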

Pimoroni post room

Going postal?

We also hired a full-time support ninja, Matt, to keep the experience of getting stuff from us light and breezy and to ensure that any problems are sorted. He’s had a hugely positive impact already by making the emails and replies you see more friendly. Of course, he’s also started using the laser cutters for tinkering projects. It’d be a shame to work at Pimoroni and not get to use all the wonderful toys, right?

Employing all the people

You can see some of the motley crew we employ here and there on the Pimoroni website. And if you drop by at the Raspberry Pi Birthday Party, Pi Wars, Maker Faires, Deer Shed Festival, or New Scientist Live in September, you’ll be seeing new Pimoroni faces as we start to engage with people more about what we do. On top of that, we’re starting to make proper videos (like Sandy’s soldering guide), as opposed to the 101 episodes of Bilge Tank we recorded in a rather off-the-cuff and haphazard fashion. Although that’s the beauty of Bilge Tank, right?

Pimoroni soldering

Such soldering setup

As Emma, Sandy, Lydia, and Tanya gel as a super creative team, we’re starting to create more formal educational resources, and to make kits that are suitable for a wider audience. Things like our Pi Zero W kits are products of their talents.

Emma is our new Head of Marketing. She’s really ‘The Only Marketing Person Who Would Ever Fit In At Pimoroni’, having been a core part of the Sheffield maker scene since we hung around with one Ben Nuttall, in the dark days before Raspberry Pi was a thing.

Through a series of fortunate coincidences, Niko and his equally talented wife Mena were there when we cut the first Pibow in 2012. They immediately pitched in to help us buy our second laser cutter so we could keep up with demand. They have been supporting Pimoroni with sourcing in East Asia, and now Niko has become a member of the Pirates’ Council and the Head of Engineering as we’re increasing the sophistication and scale of the things we do. The Unicorn HAT HD is one of his masterpieces.

Pimoroni devices

ALL the HATs!

We see ourselves as a wonderful island of misfit toys, and it feels good to have the best toy shop ever, and to support so many lovely people. Business is about more than just profits.

Where do we go to, me hearties?

So what are our plans? At the moment we’re still working absolutely flat-out as demand from wholesalers, retailers, and customers increases. We thought Raspberry Pi was big, but it turns out it’s just getting started. Near the end of 2016, it seemed to reach a whole new level of popularity – and still we continue to meet people to whom we have to explain what a Pi is. It’s a good problem to have.

We need a bigger space, but it’s been hard to find somewhere suitable in Sheffield that won’t mean we’re stuck on an industrial estate miles from civilisation. That would be bad for the crew – we like having world-class burritos on our doorstep.

The good news is, it looks like our search is at an end! Just in time for the arrival of our ‘Super-Turbo-Death-Star’ new production line, which will enable us to make devices in a bigger, better, faster, more ‘Now now now!’ fashion \o/

Pimoroni warehouse

Spacious, but not spacious enough!

We’ve got lots of treasure in the pipeline, but we want to pick up the pace of development even more and create many new HATs, pHATs, and SHIMs, e.g. for environmental sensing and audio applications. Picade will also be getting some love to make it slicker and more hackable.

We’re also starting to flirt with adding more engineering and production capabilities in-house. The plan is to try our hand at anodising, powder-coating, and maybe even injection-moulding if we get the space and find the right machine. Learning how to do things is amazing, and we love having an idea and being able to bring it to life in almost no time at all.

Pimoroni production

This is where the magic happens

Fanks!

There are so many people involved in supporting our success, and some people we love for just existing and doing wonderful things that make us want to do better. The biggest shout-outs go to Liz, Eben, Gordon, James, all the Raspberry Pi crew, and Limor and pt from Adafruit, for being the most supportive guiding lights a young maker company could ever need.

A note from us

It is amazing for us to witness the growth of businesses within the Raspberry Pi ecosystem. Pimoroni is a wonderful example of an organisation that is creating opportunities for makers within its local community, and the company is helping to reinvigorate Sheffield as the heart of making in the UK.

If you’d like to take advantage of the great products built by the Pirates, Monkeys, Robots, and Ninjas of Sheffield, you should do it soon: Pimoroni are giving everyone 20% off their homemade tech until 6 August.

Pimoroni, from all of us here at Pi Towers (both in the UK and USA), have a wonderful birthday, and many a grog on us!

The post Pimoroni is 5 now! appeared first on Raspberry Pi.

A History of Removable Computer Storage

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/history-removable-computer-storage/

A History of Removable Storage

Almost from the start we’ve had a problem with computers: They create and consume more data than we can economically store. Hundreds of companies have been created around the need for more computer storage. These days if we need space we can turn to cloud services like our own B2 Cloud Storage, but it hasn’t always been that way. The history of removable computer storage is like the history of hard drives: A fascinating look into the ever-evolving technology of data storage.

The Birth of Removable Storage

Punch Cards

punch card

Before electronic computers existed, there were electrical, mechanical computing devices. Herman Hollerith, a U.S. census worker interested in simplifying the laborious process of tabulating census data, made a device that read information from rectangular cards with holes punched in particular locations to indicate information like marital status and age.

Hollerith’s cards long outlasted him and his machine. With the advent of electronic computers in the 1950s, punch cards became the de facto method of data input. The conventions introduced with punch cards, such as an 80 column width, affected everything from the way we’d make computer monitors to the format of text files for decades.

Open-Reel Tapes and Magnetic Cartridges

IBM 100 tape drive

Magnetic tape drives were standard issue for the mainframes and minicomputers used by businesses and other organizations from the advent of the computer industry in the 1950s up until the 1980s.

Tape drives started out on 10 1/2-inch reels. A thin strip of magnetically coated tape recorded the data. Watch any television program of this era and the scene with a computer will show you a device like this. The nine-track tapes developed by IBM for its computers could store up to 175 MB per tape. At the time, that was a tremendous amount of data, suitable for archiving days’ or weeks’ worth of work. These days 175 MB might be enough to store a few dozen photos from your smartphone. Times have changed!

Eventually the big reel to reel systems would be replaced with much more portable, easier-to-use, and higher density magnetic tape cartridges. Mag tapes for data backup found their way into PCs in the 80s and 90s, though they, too, would be replaced by other removable media systems like CD-R burners.

Linear Tape-Open (LTO) made its debut in the late 1990s. These digital tape cartridges could store 100 GB each, making them ideal for backing up servers and archiving big projects. Since then capacity has improved to 6.0 TB per tape. There’s still a demand for LTO data archival systems today. However, tape drives are nearing the end of their usefulness as better cloud options take over the backup and archival markets. Our own B2 Cloud Storage is rapidly making LTO a thing of the past.

Burning LTO

Winchester Drives

IBM 3340 Winchester drive

Spinning hard disk drives started out as huge refrigerator-sized boxes attached to mainframe computers. As more businesses found uses for computers, the need for storage increased, but allowable floor space did not. IBM’s solution for this problem came in the early 1970s: the IBM 3340, popularly known as a Winchester.

The 3340 sported removable data modules that contained hard drive platters which could store up to 70 MB. Instead of having to buy a whole new cabinet, companies leasing equipment from IBM could buy additional data modules to increase their storage capabilities.

From the start, the 3340 was a smashing success (okay, maybe smashing isn’t the best adjective to use when describing a hard drive, but you get the point). You could find these and their descendants connected to mainframes and minicomputers in corporate data centers throughout the 1970s and into the 1980s.

The Birth of the PC Brings New Storage Solutions

Cassette Recorder

TRS-80 w cassette drive

The 1970s saw another massive evolution of computers with the introduction of first generation personal computers. The first PCs lacked any built-in permanent storage. Hard disk drives were still very expensive. Even floppy disk drives were rare at the time. When you turned the computer off, you’d lose your data, unless you had something to store it on.

The solution that the first PC makers came up with was to use a cassette recorder. Cassettes had exploded in the consumer electronics market as a convenient and inexpensive way for people to record and listen to music and to do voice dictation. At a time when long-distance phone calls were an expensive luxury, it was the original FaceTime for some of us, too: I remember as a preschooler, recording and playing cassettes to stay in touch with my grandparents on the other side of the country.

So using a cassette recorder to store computer data made sense. The devices were already commonplace and relatively inexpensive. Type in a save command, and the computer played tones through a cable connected to the tape drive to differentiate binary 0s and 1s. Type in a load command, and you could play back the tape to read the program into memory. It was very slow. But it was better than nothing.
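
For a feel of how simple (and slow) these schemes were, here’s an illustrative sketch that writes a WAV file using one audio tone for 0 bits and another for 1 bits. It’s loosely inspired by formats like the Kansas City Standard, but the exact frequencies, bit timing, and lack of start/stop framing here are our own simplifications rather than any real machine’s format:

```python
#!/usr/bin/env python3
"""Toy 'save to cassette' encoder: one tone per bit, written to a WAV file.

Illustrative only -- real cassette formats used fixed cycle counts per bit,
start/stop framing, and leader tones.
"""
import math
import struct
import wave

SAMPLE_RATE = 44100
BIT_SECONDS = 0.005          # 5 ms per bit: a leisurely ~200 bits per second
FREQ = {0: 1200, 1: 2400}    # Hz; roughly the tone pair KCS used


def tone(freq, seconds):
    """Return 16-bit samples of a sine wave at the given frequency."""
    n = int(SAMPLE_RATE * seconds)
    return [int(32767 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
            for i in range(n)]


def encode(data: bytes, path: str):
    """Encode each bit of `data` as a burst of tone and save it as a WAV file."""
    samples = []
    for byte in data:
        for bit in range(8):                       # least significant bit first
            samples += tone(FREQ[(byte >> bit) & 1], BIT_SECONDS)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                          # 16-bit mono
        w.setframerate(SAMPLE_RATE)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))


if __name__ == "__main__":
    encode(b'10 PRINT "HELLO"\n', "program.wav")
```

Even at these generous timings you’re looking at roughly 25 bytes per second, which is why loading anything substantial from tape took minutes.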

Floppy Disk

Commodore 1541

The 1970s saw the rise of the floppy disk, the portable storage format that ultimately reigned supreme for decades. The earliest models of floppy disks were eight inches in diameter and could hold about 80 KB. Eight-inch drives were more common in corporate computing, but when floppies came to personal computers, the smaller 5 1/4-inch design caught on like wildfire.

Floppy disks became commonplace alongside the Apples and Commodores of the day. You could squeeze about 120 KB onto one of those puppies. Doesn’t sound like a lot, but it was plenty of space for Apple DOS and Lode Runner.

Apple popularized the 3 1/2-inch size when it introduced the Macintosh in 1984. By the late 1980s the smaller floppy disk size – which would ultimately store 1.44 MB per disk – was the dominant removable storage medium of the day. And so it would remain for decades.

The Bernoulli Box

Bernoulli Box

In the early 1980s, a new product called the Bernoulli Box offered the convenience of removable cartridges, like Winchester drives, but in a much smaller, more portable format. It was an important removable storage device for businesses that had transitioned from expensive mainframes and minicomputers to desktops.

Bernoulli cartridges worked on the same principle as floppies but were larger and in a much more shielded enclosure. The cartridges sported larger capacities than floppy disks, too. You could store 10 MB or 20 MB instead of the 1.44 MB limit on a floppy disk. Capacities would increase over time to 230 MB. Bernoulli Boxes and the cartridges were expensive, which kept them in the realm of business storage. Iomega, the Bernoulli Box’s creator, turned its attention to an enormously popular removable storage system you’ll read about later: the Zip drive.

SyQuest Disks

SyQuest drive

In the 1990s another removable storage device made its mark in the computer industry. SyQuest developed a removable storage system that used 44 MB (and later 88 MB) hard disk platters. SyQuest drives were mainstays of creative digital markets – I saw them next to almost any Mac doing graphic design, desktop publishing, music, or video work.

SyQuest would be a footnote by the late 90s as Zip disks, recordable CDs and other storage media overtook them. Speaking of Zip disks…

The Click of Death

Zip Drive

The 1990s were a transitional period for personal computing (well, when isn’t it, really?). Information density was increasing rapidly. We were still years away from USB thumb drives and ubiquitous high-speed Wi-Fi, so “sneakernet” – physically transporting information from one computer to another – was still the preferred way to get big projects back and forth. Floppy disks were too small, hard disks weren’t portable, and rewritable CDs were expensive.

Iomega came along with the Zip drive, a removable storage system that used disks shaped like heavier-duty floppies, each capable of storing up to 100 MB. A high-density floppy could store 1.4 MB or so, so a Zip disk held roughly 70 times as much portable storage. Zip disks quickly became popular, but Iomega eventually redesigned them to lower the cost of manufacturing. The redesign came with a price: the drives failed more frequently and could damage the disk in the process.

The phenomenon became known as the Click of Death: The sound the actuator (the part with the read/write head) would make as it reset after hitting a damaged sector on the disk. Iomega would eventually settle a class-action lawsuit over the issue, but consumers were already moving away from the format.

Iomega developed a successor to the Zip drive: The Jaz drive. When it first came out, it could store 1 GB on a removable cartridge. Inside the cartridge was a spinning hard disk mechanism; it wasn’t unlike the SyQuest drives that had been popular a few years before, but in a smaller size you could easily fit into a jacket pocket. Unfortunately, the Jaz drive developed reliability problems of its own – disks would get jammed in the drives, drives overheated, and some had vibration problems.

Recordable CDs and DVDs

Apple SuperDrive

As a storage medium, Compact Discs had been around since the 1980s, mainly popular as a music listening format. CD burners existed from the beginning, but they were ridiculously huge and expensive: the size of a washing machine and costing tens of thousands of dollars. By the late 1990s the technology had improved, prices had dropped, and recordable CD burners – CD-Rs – became commonplace.

With our ever-increasing need for more storage, we moved on to DVD-R and DVD-RW systems within a few years, upping the total you could store per disc to 4.7 GB (eventually about 8.5 GB per disc once dual-layer media and burners were introduced).

Blu-Ray Disc offers even greater storage capacity and is popular for its use in the home entertainment market, so some PCs have added recordable Blu-Ray drives. Blu-ray sports capacities from 25 to 128 GB per disc depending on format. Increasingly, even optical drives have become optional accessories as we’ve slimmed down our laptop computers to improve portability.

Magneto-Optical

Magneto-optical disk

Another optical format, Magneto-Optical (MO), was used on some computer systems in the 80s and 90s. It would also find its way into consumer products. The cartridges could store 650 MB. Initial systems were only able to write once to a disc, but later ones were rewritable.

NeXT, the other computer maker founded by Steve Jobs besides Apple, shipped the earliest desktop system to feature an MO drive as standard issue. Magneto-optical drives were available in 5 1/4-inch and 3 1/2-inch physical sizes, with capacities up to 9 GB per disc. The most popular consumer incarnation of magneto-optical is Sony’s MiniDisc.

Removable Storage Moves Beyond Computers

SD Cards

SD Cards

The most recent removable media format to see widespread adoption on personal computers is the Secure Digital (SD) card. SD cards have become the industry standard for many smartphones, still cameras, and video cameras. They can also serve up data securely, thanks to password protection, the smartSD protocol, and the Near Field Communication (NFC) support available in some variations.

With no moving parts and non-volatile flash memory inside, SD cards are reliable, quiet and relatively fast methods of transporting and archiving data. What’s more, they come in different physical sizes to suit different device applications – everything from postage stamp-sized cards found in digital cameras to fingernail-sized micro cards found in phones.

Even compared to optical media like Blu-ray Discs, SD card capacities are remarkable. 128 GB and 256 GB cards are commonplace now. What’s more, the SDXC spec maxes out at 2 TB, with bus speeds fast enough to support 8K video. So there’s some headroom for both performance and capacity.

The More Things Change

As computer hardware continues to improve and as we continue to demand higher performance and greater portability and convenience, portable media will change. But as we’ve found ourselves with ubiquitous, high-speed Internet connectivity, the very need for removable local storage has diminished. Now instead of archiving data on an external cartridge, disc or card, we can just upload it to the cloud and access it anywhere.

That doesn’t obviate the need for a good backup strategy, of course. It’s vital to keep your important files safe with a local archive or backup. For that, removable media like SD cards and rewritable DVDs and even external hard drives can continue to fill an important role. Remember to store your info offsite too, preferably with a continuous, secure and reliable backup method like Backblaze Cloud Backup: Unlimited, unthrottled and easy to use.

The post A History of Removable Computer Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

TEN BUCKS! TEN FREAKIN’ BUCKS! Zero W aftermath

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/ten-freakin-bucks-zero-w-aftermath/

Tuesday saw the launch of our brand-new $10 Raspberry Pi Zero W, the next step in the evolution of our tiniest computer, now equipped with wireless LAN and Bluetooth.

Steve Anderson 🇪🇺 on Twitter

looks around house* “I’ve got too many SBCs. Really must get rid of some…” *new @Raspberry_Pi Zero W released* “SHUT UP AND TAKE MY MONEY!

As we hoped, the Zero W was very well received, with units flying off the virtual shelves of our official distributors.

The Pi Hut on Twitter

Over 4000 #PiZeroW in first batch of parcels for the postie.

By close of business on launch day, Zero Ws were winging their way to tens of thousands of excited makers, all eager to retrofit their existing Zero projects, or find new ways to build with the updated tech.

Facebook Raspberry Pi Zero W

We wanted to highlight some of the best responses we’ve received over the last few days: a mix of tweets, status updates and videos that made us smile.

Andy definitely wins the prize for most excitable launch day video. His enthusiasm is infectious!

Andy’s Pick: Pi Zero W

Today, Raspberry Pi launched the Pi Zero W, an upgrade to their $10 Pi Zero, adding Wi-Fi and Bluetooth to the tiny computer. For the full episode, visit twit.tv/mbw/548

Pi Borg wasted no time in fitting the Zero W into one of their Pololu kits. We’re looking forward to seeing it in action at the Big Birthday Weekend on Saturday.

Raspberry Pi Zero W robot!

We’ve built a robot using the new Raspberry Pi Zero W, a Pololu kit hacked to fit some bigger motors and our secret new motor controller being revealed on Friday… stay tuned! http://www.piborg.org

Raspberry Pi Foundation CEO Philip Colligan took the Zero W along with him yesterday when he joined the Secretary of State for Culture, Media and Sport to help launch the UK Government’s Digital Strategy.

STEAM Co. on Twitter

CreativityIsGREAT DEFINED. @philipcolligan on @Raspberry_Pi launched with #UKDigitalStrategy @dcms @beisgovuk @MattHancockMP @BBCRoryCJ https://t.co/6s2Loetqwj

And there’s always an eruption of excitement from the Comms team when Wil jumps on board!

Wil Wheaton on Twitter

Oh boy!! @Raspberry_Pi zero with WiFi on-board is available, and @pimoroni has some really neat kits!! https://t.co/dqQzE5KHyD

We also saw some brilliant launch videos from members of our community.

NEW Raspberry Pi Zero Wireless – $10 with WiFi + Bluetooth!

On the 5th anniversary of the launch of the original Raspberry Pi in 2012, the Foundation have decided to treat the community with a brand new product. A fork of the Pi Zero, but with added WiFi and Bluetooth, say hello to the Raspberry Pi Zero Wireless!

Pi Zero W with wifi, bluetooth and a brand new official case

Raspberry Pi Zero W newly launched today sports WiFi and Bluetooth and costs $10 + shipping and taxes. More information here http://raspi.tv/?p=9964 Also a brand new case.

We even became a Twitter Moment which, for many of us avid Tweeters, was kind of a big deal. Plus, well… pizza.

This tiny device has wireless LAN and HDMI and costs less than a pizza

The Raspberry Pi has sold more than 12 million devices around the world in various forms. The latest – the Pi Zero W – solves a key problem with the original by adding built-in wireless LAN and bluetooth functionality.

All in all, a great fifth birthday launch day was had by all.

James @raspjamberlin on Twitter

I would love to take a moment to wish @Raspberry_Pi a very happy 5th birthday! Congratulations to everyone that works so hard to give us Pi

If you ordered a Pi Zero W, make sure you share your projects with us across all social media or in the comments below. We can’t wait to see what you get up to with our newborn bundle of joy!

 

The post TEN BUCKS! TEN FREAKIN’ BUCKS! Zero W aftermath appeared first on Raspberry Pi.

New product! Raspberry Pi Zero W joins the family

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/raspberry-pi-zero-w-joins-family/

Today is Raspberry Pi’s fifth birthday: it’s five years since we launched the original Raspberry Pi, selling a hundred thousand units in the first day, and setting us on the road to a lifetime total (so far) of over twelve million units. To celebrate, we’re announcing a new product: meet Raspberry Pi Zero W, a new variant of Raspberry Pi Zero with wireless LAN and Bluetooth, priced at only $10.

Multum in parvo

So what’s the story?

In November 2015, we launched Raspberry Pi Zero, the diminutive $5 entry-level Raspberry Pi. This represented a fivefold reduction in cost over the original Model A: it was cheap enough that we could even stick it on the front cover of The MagPi, risking civil insurrection in newsagents throughout the land.

MagPi issue 40: causing trouble for WHSmith (credit: Adam Nicholls)

Over the ensuing fifteen months, Zero grew a camera connector and found its way into everything from miniature arcade cabinets to electric skateboards. Many of these use cases need wireless connectivity. The homebrew “People in Space” indicator in the lobby at Pi Towers is a typical example, with an official wireless dongle hanging off the single USB port: users often end up adding a USB hub to allow them to connect a keyboard, a mouse and a network adapter, and this hub can easily cost more than the Zero itself.

People in SPAAAAAACE

Zero W fixes this problem by integrating more functionality into the core product. It uses the same Cypress CYW43438 wireless chip as Raspberry Pi 3 Model B to provide 802.11n wireless LAN and Bluetooth 4.0 connectivity.

Pi Zero Announcement Video

Music: Orqestruh by SAFAKASH – https://soundcloud.com/safakash

To recap, here’s the full feature list for Zero W:

  • 1GHz, single-core CPU
  • 512MB RAM
  • Mini-HDMI port
  • Micro-USB On-The-Go port
  • Micro-USB power
  • HAT-compatible 40-pin header
  • Composite video and reset headers
  • CSI camera connector
  • 802.11n wireless LAN
  • Bluetooth 4.0

We imagine you’ll find all sorts of uses for Zero W. It makes a better general-purpose computer because you’re less likely to need a hub: if you’re using Bluetooth peripherals you might well end up with nothing at all plugged into the USB port. And of course it’s a great platform for experimenting with IoT applications.
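
To give a flavour of what a hub-free, headless Zero W project might look like, here’s a minimal sketch of a tiny IoT-style endpoint that reports the SoC temperature over the built-in wireless LAN using only the Python standard library. It’s our own illustrative example rather than an official one, and it assumes the usual Linux sysfs thermal path found on Raspbian:

```python
#!/usr/bin/env python3
"""Tiny IoT-style endpoint for a headless Pi Zero W: report the SoC temperature.

Illustrative sketch; assumes the standard Linux sysfs thermal path.
"""
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

THERMAL = "/sys/class/thermal/thermal_zone0/temp"  # value in millidegrees C


def soc_temp_c():
    with open(THERMAL) as f:
        return int(f.read().strip()) / 1000.0


class TempHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"soc_temp_c": soc_temp_c()}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Visit http://<zero-w-address>:8000/ from any device on the same network.
    HTTPServer(("0.0.0.0", 8000), TempHandler).serve_forever()
```

With wireless on board, nothing needs to be plugged into the USB port at all for a project like this.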

Official case

To accompany Raspberry Pi Zero W, we’ve been working with our friends at Kinneir Dufort and T-Zero to create an official injection-moulded case. This shares the same design language as the official case for the Raspberry Pi 3, and features three interchangeable lids:

  • A blank one
  • One with an aperture to let you access the GPIOs
  • One with an aperture and mounting point for a camera

Three cases for the price of one

The case set also includes a short camera adapter flexi, and a set of rubber feet to make sure your cased Zero or Zero W doesn’t slide off the desk.

New distributors

You may have noticed that we’ve added several new Zero distributors recently: ModMyPi in the UK, pi3g in Germany, Samm Teknoloji in Turkey, Kubii in France, Spain, Italy and Portugal, and Kiwi Electronics in the Netherlands, Belgium and Luxembourg.

Raspberry Pi Zero W is available from all Zero distributors today, with the exception of Micro Center, who should have stock in stores by the end of this week. Check the icons below to find the stockist that’s best for you!

UK, Ireland

Pimoroni The Pi Hut

United States

Adafruit Canakit Microcenter

Canada

Canakit

Germany, Austria, Switzerland

France, Spain, Italy, Portugal

Netherlands, Belgium, Luxembourg

Turkey

Global

Pimoroni The Pi Hut Adafruit
Canakit

The post New product! Raspberry Pi Zero W joins the family appeared first on Raspberry Pi.

“I recieved a free or discounted product in return for an honest review”

Post Syndicated from Matthew Garrett original https://mjg59.dreamwidth.org/44006.html

My experiences with Amazon reviewing have been somewhat unusual. A review of a smart switch I wrote received enough attention that the vendor pulled the product from Amazon. At the time of writing, I’m ranked as around the 2750th best reviewer on Amazon despite having a total of 18 reviews. But the world of Amazon reviews is even stranger than that, and the past couple of weeks have given me some insight into it.

Amazon’s success is fairly phenomenal. It’s estimated that there are over 50 million people in the US paying $100 a year to get free shipping on Amazon purchases, and combined with Amazon’s surprisingly customer-friendly service, there are a lot of people with a very strong preference for choosing Amazon over any other retailer. If you’re not on Amazon, you’re hurting your sales.

And if you’re an established brand, this works pretty well. Some people will search for your product directly and buy it, leaving reviews. Well reviewed products appear higher up in search results, so people searching for an item type rather than a brand will still see your product appear early in the search results, in turn driving sales. Some proportion of those customers will leave reviews, which helps keep your product high up in the results. As long as your products aren’t utterly dreadful, you’ll probably maintain that position.

But if you’re a brand nobody’s ever heard of, things are more difficult. People are unlikely to search for your product directly, so you’re relying on turning up in the results for more generic terms. But if you’re selling a more generic kind of item (say, a Bluetooth smart bulb) then there’s probably a number of other brands nobody’s ever heard of selling almost identical objects. If there’s no reason for anybody to choose your product then you’re probably not going to get any reviews and you’re not going to move up the search rankings. Even if your product is better than the competition, a small number of sales means a tiny number of reviews. By the time that number’s large enough to matter, you’re probably onto a new product cycle.

In summary: if nobody’s ever heard of you, you need reviews but you’re probably not getting any.

The old way of doing this was to send review samples to journalists, but nobody’s going to run a comprehensive review of 3000 different USB cables and even if they did almost nobody would read it before making a decision on Amazon. You need Amazon reviews, but you’re not getting any. The obvious solution is to send review samples to people who will leave Amazon reviews. This is where things start getting more dubious.

Amazon run a program called Vine which is intended to solve this problem. Send samples to Amazon and they’ll distribute them to a subset of trusted reviewers. These reviewers write a review as normal, and Amazon tag the review with a “Vine Voice” badge which indicates to readers that the reviewer received the product for free. But participation in Vine is apparently expensive, and so there’s a proliferation of sites like Snagshout or AMZ Review Trader that use a different model. There’s no requirement that you be an existing trusted reviewer and the product probably isn’t free. You sign up, choose a product, receive a discount code and buy it from Amazon. You then have a couple of weeks to leave a review, and if you fail to do so you’ll lose access to the service. This is completely acceptable under Amazon’s rules, which state “If you receive a free or discounted product in exchange for your review, you must clearly and conspicuously disclose that fact”. So far, so reasonable.

In reality it’s worse than that, with several opportunities to game the system. AMZ Review Trader makes it clear to sellers that they can choose reviewers based on past reviews, giving customers an incentive to leave good reviews in order to keep receiving discounted products. Some customers take full advantage of this, leaving a giant number of 5 star reviews for products they clearly haven’t tested and then (presumably) reselling them. What’s surprising is that this kind of cynicism works both ways. Some sellers provide two listings for the same product, the second being significantly more expensive than the first. They then offer an attractive discount for the more expensive listing in return for a review, taking it down to approximately the same price as the original item. Once the reviews are in, they can remove the first listing and drop the price of the second to the original price point.

The end result is a bunch of reviews that are nominally honest but are tied to perverse incentives. In effect, the overall star rating tells you almost nothing – you still need to actually read the reviews to gain any insight into whether the customer actually used the product. And when you do write an honest review that the seller doesn’t like, they may engage in heavy handed tactics in an attempt to make the review go away.

It’s hard to avoid the conclusion that Amazon’s review model is broken, but it’s not obvious how to fix it. When search ranking is tied to reviews, companies have a strong incentive to do whatever it takes to obtain positive reviews. What we’re left with for now is having to laboriously click through a number of products to see whether their rankings come from thoughtful and detailed reviews or are just a mass of 5 star one liners.
