Tag Archives: resources

Cloud Babble: The Jargon of Cloud Storage

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/what-is-cloud-computing/

Cloud Babble

One of the things we in the technology business are good at is coming up with names, phrases, euphemisms, and acronyms for the stuff that we create. The Cloud Storage market is no different, and we’d like to help by illuminating some of the cloud storage related terms that you might come across. We know this is just a start, so please feel free to add in your favorites in the comments section below and we’ll update this post accordingly.

Clouds

The cloud is really just a collection of purpose-built servers. In a public cloud, the servers are shared between multiple unrelated tenants. In a private cloud, the servers are dedicated to a single tenant or sometimes a group of related tenants. A public cloud is off-site, while a private cloud can be on-site or off-site – or on-prem or off-prem, if you prefer.

Both Sides Now: Hybrid Clouds

Speaking of on-prem and off-prem, there are Hybrid Clouds or Hybrid Data Clouds depending on what you need. Both are based on the idea that you extend your local resources (typically on-prem) to the cloud (typically off-prem) as needed. This extension is controlled by software that decides, based on rules you define, what needs to be done where.

A Hybrid Data Cloud is specific to data. For example, you can set up a rule that says all accounting files that have not been touched in the last year are automatically moved off-prem to cloud storage. The files are still available; they are just no longer stored on your local systems. The rules can be defined to fit an organization’s workflow and data retention policies.
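
To make the rule concrete, here is a minimal Go sketch of the decision the hybrid-cloud software would make on your behalf; the /shares/accounting path and the moveToCloud function are hypothetical stand-ins, not a real product API.

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "time"
)

// moveToCloud stands in for whatever call your hybrid-cloud software exposes
// for migrating a file off-prem; it is hypothetical, not a real product API.
func moveToCloud(path string) error {
    fmt.Println("tiering to cloud storage:", path)
    return nil
}

func main() {
    cutoff := time.Now().AddDate(-1, 0, 0) // untouched for a year

    // Walk the (hypothetical) accounting share and tier off any file whose
    // last modification time is older than the cutoff.
    err := filepath.Walk("/shares/accounting", func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        if !info.IsDir() && info.ModTime().Before(cutoff) {
            return moveToCloud(path)
        }
        return nil
    })
    if err != nil {
        fmt.Fprintln(os.Stderr, "walk failed:", err)
    }
}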

A Hybrid Cloud is similar to a Hybrid Data Cloud except it also extends compute. For example, at the end of the quarter, you can spin up order processing application instances off-prem as needed to add to your on-prem capacity. Of course, determining where the transactional data used and created by these applications resides can be an interesting systems design challenge.

Clouds in my Coffee: Fog

Typically, public and private clouds live in large buildings called data centers. Full of servers, networking equipment, and clean air, data centers need lots of power, lots of networking bandwidth, and lots of space. This often limits where data centers are located. The further away you are from a data center, the longer it generally takes to get your data to and from there. This is known as latency. That’s where “Fog” comes in.

Fog is often referred to as clouds close to the ground. Fog, in our cloud world, is basically having a “little” data center near you. This can make data storage and even cloud-based processing faster for everyone nearby. Data, and less so processing, can be transferred to/from the Fog to the Cloud when time is less of a factor. Data could also be aggregated in the Fog and sent to the Cloud. For example, your electric meter could report its minute-by-minute status to the Fog for diagnostic purposes. Then once a day the aggregated data could be sent to the power company’s Cloud for billing purposes.
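
As a rough illustration of that aggregation pattern, the following Go sketch rolls a day of per-minute meter readings up into one total per meter before anything is forwarded to the cloud; the types and numbers are invented for the example.

package main

import "fmt"

// reading is one minute-by-minute sample a meter reports to its local fog node.
type reading struct {
    MeterID string
    KWh     float64
}

// aggregate rolls a day of per-minute readings up into one total per meter,
// so the fog node sends a single record per meter to the cloud each day.
func aggregate(day []reading) map[string]float64 {
    totals := make(map[string]float64)
    for _, r := range day {
        totals[r.MeterID] += r.KWh
    }
    return totals
}

func main() {
    day := []reading{
        {"meter-42", 0.012}, {"meter-42", 0.015}, {"meter-7", 0.009},
    }
    for id, kwh := range aggregate(day) {
        fmt.Printf("send to cloud: %s used %.3f kWh today\n", id, kwh)
    }
}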

Another term used in place of Fog is Edge, as in computing at the Edge. In either case, a given cloud (data center) usually has multiple Edges (little data centers) connected to it. The connection between the Edge and the Cloud is sometimes known as the middle-mile. The network in the middle-mile can be less robust than that required to support a stand-alone data center. For example, the middle-mile can use 1 Gbps lines, versus a data center, which would require multiple 10 Gbps lines.

Heavy Clouds No Rain: Data

We’re all aware that we are creating, processing, and storing data faster than ever before. All of this data is stored in either a structured or more likely an unstructured way. Databases and data warehouses are structured ways to store data, but a vast amount of data is unstructured – meaning the schema and data access requirements are not known until the data is queried. A large pool of unstructured data in a flat architecture can be referred to as a Data Lake.

A Data Lake is often created so we can perform some type of “big data” analysis. In an oversimplified example, let’s extend the lake metaphor a bit and ask the question: “How many fish are in our lake?” To get an answer, we take a sufficient sample of our lake’s water (data), count the number of fish we find, and extrapolate based on the size of the lake to get an answer within a given confidence interval.
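
For the arithmetic-minded, here is that sample-and-extrapolate estimate as a tiny Go sketch; the confidence interval is left out, and the numbers are invented.

package main

import "fmt"

func main() {
    // Invented numbers, purely to illustrate the extrapolation step.
    const lakeVolume = 5000000.0 // litres in the whole lake
    const sampleVolume = 1000.0  // litres we actually sampled
    const fishInSample = 3.0     // fish counted in the sample

    estimate := fishInSample * (lakeVolume / sampleVolume)
    fmt.Printf("Estimated fish in the lake: about %.0f\n", estimate)
}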

A Data Lake is usually found in the cloud, an excellent place to store large amounts of non-transactional data. Watch out as this can lead to our data having too much Data Gravity or being locked in the Hotel California. This could also create a Data Silo, thereby making a potential data Lift-and-Shift impossible. Let me explain:

  • Data Gravity — Generally, the more data you collect in one spot, the harder it is to move. When you store data in a public cloud, you have to pay egress and/or network charges to download the data to another public cloud or even to your own on-premises systems. Some public cloud vendors charge a lot more than others, meaning that depending on your public cloud provider, your data could financially have a lot more gravity than you expected (a rough cost sketch follows this list).
  • Hotel California — This is like Data Gravity, but on a smaller scale. Your data is in the Hotel California if, to paraphrase, “your data can check out any time you want, but it can never leave.” If the cost of downloading your data is limiting the things you want to do with that data, then your data is in the Hotel California. Data is generally most valuable when used, and with cloud storage that can include archived data. This assumes, of course, that the archived data is readily available, and affordable, to download. When considering a cloud storage project, always figure in the cost of using your own data.
  • Data Silo — Over the years, businesses have suffered from organizational silos as information is not shared between different groups, but instead needs to travel up to the top of the silo before it can be transferred to another silo. If your data is “trapped” in a given cloud by the cost it takes to share such data, then you may have a Data Silo, and that’s exactly the opposite of what the cloud should do.
  • Lift-and-Shift — This term is used to define the movement of data or applications from one data center to another or from on-prem to off-prem systems. The move generally occurs all at once and once everything is moved, systems are operational and data is available at the new location with few, if any, changes. If your data has too much gravity or is locked in a hotel, a data lift-and-shift may break the bank.
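
As a rough cost sketch of the gravity problem described above, the following back-of-the-envelope calculation shows how quickly egress charges add up; the per-GB price is purely illustrative and varies by provider.

package main

import "fmt"

func main() {
    // Illustrative numbers only: egress pricing varies widely by provider,
    // which is exactly the "gravity" being described.
    const storedTB = 100.0
    const egressPerGB = 0.09 // $/GB, hypothetical

    costToLeave := storedTB * 1024 * egressPerGB
    fmt.Printf("Moving %.0f TB out at $%.2f/GB costs roughly $%.0f\n",
        storedTB, egressPerGB, costToLeave)
}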

I Can See Clearly Now

Hopefully, the cloudy terms we’ve covered are, well, less cloudy. As we mentioned in the beginning, our compilation is just a start, so please feel free to add your favorite cloud term in the comments section below and we’ll update this post with your contributions. Keep your entries “clean,” and please no words or phrases that are really adverts for your company. Thanks.

The post Cloud Babble: The Jargon of Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

New AWS Auto Scaling – Unified Scaling For Your Cloud Applications

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-auto-scaling-unified-scaling-for-your-cloud-applications/

I’ve been talking about scalability for servers and other cloud resources for a very long time! Back in 2006, I wrote “This is the new world of scalable, on-demand web services. Pay for what you need and use, and not a byte more.” Shortly after we launched Amazon Elastic Compute Cloud (EC2), we made it easy for you to do this with the simultaneous launch of Elastic Load Balancing, EC2 Auto Scaling, and Amazon CloudWatch. Since then we have added Auto Scaling to other AWS services including ECS, Spot Fleets, DynamoDB, Aurora, AppStream 2.0, and EMR. We have also added features such as target tracking to make it easier for you to scale based on the metric that is most appropriate for your application.

Introducing AWS Auto Scaling
Today we are making it easier for you to use the Auto Scaling features of multiple AWS services from a single user interface with the introduction of AWS Auto Scaling. This new service unifies and builds on our existing, service-specific, scaling features. It operates on any desired EC2 Auto Scaling groups, EC2 Spot Fleets, ECS tasks, DynamoDB tables, DynamoDB Global Secondary Indexes, and Aurora Replicas that are part of your application, as described by an AWS CloudFormation stack or in AWS Elastic Beanstalk (we’re also exploring some other ways to flag a set of resources as an application for use with AWS Auto Scaling).

You no longer need to set up alarms and scaling actions for each resource and each service. Instead, you simply point AWS Auto Scaling at your application and select the services and resources of interest. Then you select the desired scaling option for each one, and AWS Auto Scaling will do the rest, helping you to discover the scalable resources and then creating a scaling plan that addresses the resources of interest.

If you have tried to use any of our Auto Scaling options in the past, you undoubtedly understand the trade-offs involved in choosing scaling thresholds. AWS Auto Scaling gives you a variety of scaling options: You can optimize for availability, keeping plenty of resources in reserve in order to meet sudden spikes in demand. You can optimize for costs, running close to the line and accepting the possibility that you will tax your resources if that spike arrives. Alternatively, you can aim for the middle, with a generous but not excessive level of spare capacity. In addition to optimizing for availability, cost, or a blend of both, you can also set a custom scaling threshold. In each case, AWS Auto Scaling will create scaling policies on your behalf, including appropriate upper and lower bounds for each resource.
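
Under the hood, the unified console creates service-specific policies like these. As an illustration of the kind of policy it generates, here is a sketch that creates a target-tracking policy directly against the EC2 Auto Scaling API with the AWS SDK for Go; the group name and target value are hypothetical, and this is the underlying mechanism rather than the new AWS Auto Scaling console itself.

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
    sess := session.Must(session.NewSession())
    svc := autoscaling.New(sess)

    // Target tracking: keep the group's average CPU near 50% and let the
    // service manage the CloudWatch alarms and scaling bounds.
    out, err := svc.PutScalingPolicy(&autoscaling.PutScalingPolicyInput{
        AutoScalingGroupName: aws.String("my-asg"), // hypothetical group name
        PolicyName:           aws.String("cpu-target-50"),
        PolicyType:           aws.String("TargetTrackingScaling"),
        TargetTrackingConfiguration: &autoscaling.TargetTrackingConfiguration{
            PredefinedMetricSpecification: &autoscaling.PredefinedMetricSpecification{
                PredefinedMetricType: aws.String("ASGAverageCPUUtilization"),
            },
            TargetValue: aws.Float64(50.0),
        },
    })
    if err != nil {
        fmt.Println("PutScalingPolicy failed:", err)
        return
    }
    fmt.Println("created policy:", aws.StringValue(out.PolicyARN))
}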

AWS Auto Scaling in Action
I will use AWS Auto Scaling on a simple CloudFormation stack consisting of an Auto Scaling group of EC2 instances and a pair of DynamoDB tables. I start by removing the existing Scaling Policies from my Auto Scaling group:

Then I open up the new Auto Scaling Console and select the stack:

Behind the scenes, Elastic Beanstalk applications are always launched via a CloudFormation stack. In the screen shot above, awseb-e-sdwttqizbp-stack is an Elastic Beanstalk application that I launched.

I can click on any stack to learn more about it before proceeding:

I select the desired stack and click on Next to proceed. Then I enter a name for my scaling plan and choose the resources that I’d like it to include:

I choose the scaling strategy for each type of resource:

After I have selected the desired strategies, I click Next to proceed. Then I review the proposed scaling plan, and click Create scaling plan to move ahead:

The scaling plan is created and in effect within a few minutes:

I can click on the plan to learn more:

I can also inspect each scaling policy:

I tested my new policy by applying a load to the initial EC2 instance, and watched the scale out activity take place:

I also took a look at the CloudWatch metrics for the EC2 Auto Scaling group:

Available Now
We are launching AWS Auto Scaling today in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Singapore) Regions, with more to follow. There’s no charge for AWS Auto Scaling; you pay only for the CloudWatch Alarms that it creates and any AWS resources that you consume.

As is often the case with our new services, this is just the first step on what we hope to be a long and interesting journey! We have a long roadmap, and we’ll be adding new features and options throughout 2018 in response to your feedback.

Jeff;

Analyzing the Linux boot process (opensource.com)

Post Syndicated from corbet original https://lwn.net/Articles/744528/rss

Alison Chaiken looks in detail at how the kernel boots on opensource.com. “Besides starting buggy spyware, what function does early boot firmware serve? The job of a bootloader is to make available to a newly powered processor the resources it needs to run a general-purpose operating system like Linux. At power-on, there not only is no virtual memory, but no DRAM until its controller is brought up.”

Hello World Issue 4: Professional Development

Post Syndicated from Carrie Anne Philbin original https://www.raspberrypi.org/blog/hello-world-issue-4/

Another new year brings with it thoughts of setting goals and targets. Thankfully, there is a new issue of Hello World packed with practical advice to set you on the road to success.

Hello World is our magazine about computing and digital making for educators, and it’s a collaboration between the Raspberry Pi Foundation and Computing At School, which is part of the British Computer Society.

Hello World 4 Professional Development Raspberry Pi CAS

In issue 4, our international panel of educators and experts recommends approaches to continuing professional development in computer science education.

Approaches to professional development, and much more

With recommendations for more professional development in the Royal Society’s report, and government funding to support this, our cover feature explores some successful approaches. In addition, the issue is packed with other great resources, guides, features, and lesson plans to support educators.


Highlights include:

  • The Royal Society: After the Reboot — learn about the latest report and its findings about computing education
  • The Cyber Games — a new programme looking for the next generation of security experts
  • Engaging Students with Drones
  • Digital Literacy: Lost in Translation?
  • Object-oriented Coding with Python

Get your copy of Hello World 4

Hello World is available as a free Creative Commons download for anyone around the world who is interested in computer science and digital making education. You can get the latest issue as a PDF file straight from the Hello World website.

Thanks to the very generous sponsorship of BT, we are able to offer free print copies of the magazine to serving educators in the UK. It’s for teachers, Code Club volunteers, teaching assistants, teacher trainers, and others who help children and young people learn about computing and digital making. So remember to subscribe to have your free print magazine posted directly to your home — 6000 educators have already signed up to receive theirs!

Could you write for Hello World?

By sharing your knowledge and experience of working with young people to learn about computing, computer science, and digital making in Hello World, you will help inspire others to get involved. You will also help bring the power of digital making to more and more educators and learners.

The computing education community is full of people who lend their experience to help colleagues. Contributing to Hello World is a great way to take an active part in this supportive community, and you’ll be adding to a body of free, open-source learning resources that are available for anyone to use, adapt, and share. It’s also a tremendous platform to broadcast your work: Hello World digital versions alone have been downloaded more than 50000 times!

Wherever you are in the world, get in touch with us by emailing our editorial team about your article idea.

The post Hello World Issue 4: Professional Development appeared first on Raspberry Pi.

Raspberry Pi-newood Derby

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pinewood-derby/

Andre Miron’s Pinewood Derby Instant Replay System (sorry, not sorry for the pun in the title) uses a Raspberry Pi to monitor the finishing line and play back a slow-motion instant replay, putting an end to “No, I won!” squabbles once and for all.

Raspberry Pi Based Pinewood Derby Instant Replay Demo

This is the same system I demo in this video (https://youtu.be/-QyMxKfBaAE), but on our actual track with real pinewood derby cars. Glad to report that it works great!

Pinewood Derby

For those unfamiliar with the term, the Pinewood Derby is a racing event for Cub Scouts in the USA. Cub Scouts, often with the help of a guardian, build race cars out of wood according to rules regarding weight, size, materials, etc.

Pinewood derby race car

The Cubs then race their cars in heats, with the winners advancing to district and council races.

Who won?

Andre’s Instant Replay System registers the race cars as they cross the finishing line, and it plays back slow-motion video of the crossing on a monitor. As he explains on YouTube:

The Pi is recording a constant stream of video, and when the replay is triggered, it records another half-second of video, then takes the last second and a half and saves it in slow motion (recording is done at 90 fps), before replaying.

The build also uses an attached Arduino, connected to GPIO pin 5, to trigger the recording and playback as it registers the passing cars via a voltage splitter. Additionally, the system announces the finishing places on a rather attractive-looking display above the finishing line.

Pinewood derby race car Raspberry Pi

The result? No more debate about whose car crossed the line first in neck-and-neck races.

Build your own

Andre takes us through the physical setup of the build in the video below, and you’ll find the complete code pasted in the description of the video here. Thanks, Andre!

Raspberry Pi based Pinewood Derby Instant Replay System

See the system on our actual track here: https://youtu.be/B3lcQHWGq88 Raspberry Pi based instant replay system, triggered by Arduino Pinewood Derby Timer. The Pi uses GPIO pin 5 attached to a voltage splitter on Arduino output 11 (and ground-ground) to detect when a car crosses the finish line, which triggers the replay.

Digital making in your club

If you’re a member of an after-school association such as the Scouts or Guides, then using the Raspberry Pi and our free project resources, or visiting a Code Club or CoderDojo, is an excellent way to work towards various badges and awards. So talk to your club leader to discover all the ways in which you can incorporate digital making into your club!

The post Raspberry Pi-newood Derby appeared first on Raspberry Pi.

Announcing our new beta for the AWS Certified Security – Specialty exam

Post Syndicated from Janna Pellegrino original https://aws.amazon.com/blogs/architecture/announcing-our-new-beta-for-the-aws-certified-security-specialty-exam/

Take the AWS Certified Security – Specialty beta exam for the chance to be among the first to hold this new AWS Certification. This beta exam allows experienced cloud security professionals to demonstrate and validate their expertise. Register today – this beta exam will only be available from January 15 to March 2!

About the exam

This beta exam validates that the successful candidate can effectively demonstrate knowledge of how to secure the AWS platform. The exam covers incident response, logging and monitoring, infrastructure security, identity and access management, and data protection.

The exam validates:

  • Familiarity with regional- and country-specific security and compliance regulations and meta issues that these regulations embody.
  • An understanding of specialized data classifications and AWS data protection mechanisms.
  • An understanding of data encryption methods and AWS mechanisms to implement them.
  • An understanding of secure Internet protocols and AWS mechanisms to implement them.
  • A working knowledge of AWS security services and features of services to provide a secure production environment.
  • Competency gained from two or more years of production deployment experience using AWS security services and features.
  • Ability to make tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements.
  • An understanding of security operations and risk.

Learn more and register >>

Who is eligible

The beta is open to anyone who currently holds an Associate or Cloud Practitioner certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

How to prepare

We have training and other resources to help you prepare for the beta exam:

AWS Security Fundamentals | Digital | 3 Hours
This course introduces you to fundamental cloud computing and AWS security concepts, including AWS access control and management, governance, logging, and encryption methods. It also covers security-related compliance protocols and risk management strategies, as well as procedures related to auditing your AWS security infrastructure.

Security Operations on AWS | Classroom | 3 Days
This course demonstrates how to efficiently use AWS security services to stay secure and compliant in the AWS Cloud. The course focuses on the AWS-recommended security best practices that you can implement to enhance the security of your data and systems in the cloud. The course highlights the security features of AWS key services including compute, storage, networking, and database services.

Online resources for Cloud Security and Compliance

Review documentation, whitepapers, and articles & tutorials related to cloud security and compliance.

Learn more and register >>

Please contact us if you have questions about exam registration.

Good luck!

Continuous Deployment to Kubernetes using AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, Amazon ECR and AWS Lambda

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/devops/continuous-deployment-to-kubernetes-using-aws-codepipeline-aws-codecommit-aws-codebuild-amazon-ecr-and-aws-lambda/

Thank you to my colleague Omar Lari for this blog on how to create a continuous deployment pipeline for Kubernetes!


You can use Kubernetes and AWS together to create a fully managed, continuous deployment pipeline for container-based applications. This approach takes advantage of Kubernetes’ open-source system to manage your containerized applications, and the AWS developer tools to manage your source code, builds, and pipelines.

This post describes how to create a continuous deployment architecture for containerized applications. It uses AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, and AWS Lambda to deploy containerized applications into a Kubernetes cluster. In this environment, developers can remain focused on developing code without worrying about how it will be deployed, and development managers can be satisfied that the latest changes are always deployed.

What is Continuous Deployment?

There are many articles, posts, and even conferences dedicated to the practice of continuous deployment. For the purposes of this post, I will summarize continuous deployment into the following points:

  • Code is more frequently released into production environments
  • More frequent releases allow for smaller, incremental changes, reducing risk and enabling simplified rollbacks if needed
  • Deployment is automated and requires minimal user intervention

For more information, see “Practicing Continuous Integration and Continuous Delivery on AWS”.

How can you use continuous deployment with AWS and Kubernetes?

You can leverage AWS services that support continuous deployment to automatically take your code from a source code repository to production in a Kubernetes cluster with minimal user intervention. To do this, you can create a pipeline that will build and deploy committed code changes as long as they meet the requirements of each stage of the pipeline.

To create the pipeline, you will use the following services:

  • AWS CodePipeline. AWS CodePipeline is a continuous delivery service that models, visualizes, and automates the steps required to release software. You define stages in a pipeline to retrieve code from a source code repository, build that source code into a releasable artifact, test the artifact, and deploy it to production. Only code that successfully passes through all these stages will be deployed. In addition, you can optionally add other requirements to your pipeline, such as manual approvals, to help ensure that only approved changes are deployed to production.
  • AWS CodeCommit. AWS CodeCommit is a secure, scalable, and managed source control service that hosts private Git repositories. You can privately store and manage assets such as your source code in the cloud and configure your pipeline to automatically retrieve and process changes committed to your repository.
  • AWS CodeBuild. AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces artifacts that are ready to deploy. You can use AWS CodeBuild to both build your artifacts, and to test those artifacts before they are deployed.
  • AWS Lambda. AWS Lambda is a compute service that lets you run code without provisioning or managing servers. You can invoke a Lambda function in your pipeline to prepare the built and tested artifact for deployment by Kubernetes to the Kubernetes cluster.
  • Kubernetes. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It provides a platform for running, deploying, and managing containers at scale.

An Example of Continuous Deployment to Kubernetes:

The following example illustrates leveraging AWS developer tools to continuously deploy to a Kubernetes cluster:

  1. Developers commit code to an AWS CodeCommit repository and create pull requests to review proposed changes to the production code. When the pull request is merged into the master branch in the AWS CodeCommit repository, AWS CodePipeline automatically detects the changes to the branch and starts processing the code changes through the pipeline.
  2. AWS CodeBuild packages the code changes as well as any dependencies and builds a Docker image. Optionally, another pipeline stage tests the code and the package, also using AWS CodeBuild.
  3. The Docker image is pushed to Amazon ECR after a successful build and/or test stage.
  4. AWS CodePipeline invokes an AWS Lambda function that includes the Kubernetes Python client as part of the function’s resources. The Lambda function performs a string replacement on the Docker image tag in the Kubernetes deployment file so that it references the image just pushed to Amazon ECR (a minimal sketch of this substitution follows this list).
  5. After the deployment manifest update is completed, AWS Lambda invokes the Kubernetes API to update the image in the Kubernetes application deployment.
  6. Kubernetes performs a rolling update of the pods in the application deployment to match the Docker image specified in Amazon ECR.

The pipeline is now live and responds to changes to the master branch of the CodeCommit repository. The pipeline is also fully extensible: you can add steps for further testing, or a step to deploy into a staging environment before the code ships to the production cluster.
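
The substitution in step 4 is simple string replacement; here is a minimal sketch of the idea in Go, with a hypothetical CONTAINER_IMAGE placeholder standing in for whatever token your manifest template uses (the example pipeline itself does this inside a Lambda function with the Kubernetes Python client).

package main

import (
    "fmt"
    "strings"
)

// renderManifest substitutes the freshly built image URI into a deployment
// template. The CONTAINER_IMAGE placeholder is an assumption made for this
// sketch; the pipeline above does the equivalent inside a Lambda function.
func renderManifest(template, imageURI string) string {
    return strings.Replace(template, "CONTAINER_IMAGE", imageURI, -1)
}

func main() {
    template := `apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook
spec:
  template:
    spec:
      containers:
      - name: guestbook
        image: CONTAINER_IMAGE
`
    // Hypothetical ECR image URI tagged with the build's commit ID.
    image := "123456789012.dkr.ecr.us-west-2.amazonaws.com/guestbook:abc1234"
    fmt.Println(renderManifest(template, image))
}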

An example pipeline in AWS CodePipeline that supports this architecture can be seen below:

Conclusion

We are excited to see how you leverage this pipeline to help ease your developer experience as you develop applications in Kubernetes.

You’ll find an AWS CloudFormation template with everything necessary to spin up your own continuous deployment pipeline at the CodeSuite – Continuous Deployment Reference Architecture for Kubernetes repo on GitHub. The repository details exactly how the pipeline is provisioned and how you can use it to deploy your own applications. If you have any questions, feedback, or suggestions, please let us know!

Validate Your IT Security Expertise with the New AWS Certified Security – Specialty Beta Exam

Post Syndicated from Sara Snedeker original https://aws.amazon.com/blogs/security/validate-your-it-security-expertise-with-the-new-aws-certified-security-specialty-beta-exam/

AWS Training and Certification image

If you are an experienced cloud security professional, you can demonstrate and validate your expertise with the new AWS Certified Security – Specialty beta exam. This exam allows you to demonstrate your knowledge of incident response, logging and monitoring, infrastructure security, identity and access management, and data protection. Register today – this beta exam will be available only from January 15 to March 2, 2018.

By taking this exam, you can validate your:

  • Familiarity with region-specific and country-specific security and compliance regulations and meta issues that these regulations include.
  • Understanding of data encryption methods and secure internet protocols, and the AWS mechanisms to implement them.
  • Working knowledge of AWS security services to provide a secure production environment.
  • Ability to make trade-off decisions with regard to cost, security, and deployment complexity when given a set of application requirements.

See the full list of security knowledge you can validate by taking this beta exam.

Who is eligible?

The beta exam is open to anyone who currently holds an AWS Associate or Cloud Practitioner certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

How to prepare

You can take the following courses and use AWS cloud security resources and compliance resources to prepare for this exam.

AWS Security Fundamentals (digital, 3 hours)
This digital course introduces you to fundamental cloud computing and AWS security concepts, including AWS access control and management, governance, logging, and encryption methods. It also covers security-related compliance protocols and risk management strategies, as well as procedures related to auditing your AWS security infrastructure.

Security Operations on AWS (classroom, 3 days)
This instructor-led course demonstrates how to efficiently use AWS security services to help stay secure and compliant in the AWS Cloud. The course focuses on the AWS-recommended security best practices that you can implement to enhance the security of your AWS resources. The course highlights the security features of AWS compute, storage, networking, and database services.

If you have questions about this new beta exam, contact us.

Good luck with the exam!

– Sara

AWS IoT, Greengrass, and Machine Learning for Connected Vehicles at CES

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-iot-greengrass-and-machine-learning-for-connected-vehicles-at-ces/

Last week I attended a talk given by Bryan Mistele, president of Seattle-based INRIX. Bryan’s talk provided a glimpse into the future of transportation, centering around four principal attributes, often abbreviated as ACES:

Autonomous – Cars and trucks are gaining the ability to scan and to make sense of their environments and to navigate without human input.

Connected – Vehicles of all types have the ability to take advantage of bidirectional connections (either full-time or intermittent) to other cars and to cloud-based resources. They can upload road and performance data, communicate with each other to run in packs, and take advantage of traffic and weather data.

Electric – Continued development of battery and motor technology will make electric vehicles more convenient, cost-effective, and environmentally friendly.

Shared – Ride-sharing services will change usage from an ownership model to an as-a-service model (sound familiar?).

Individually and in combination, these emerging attributes mean that the cars and trucks we will see and use in the decade to come will be markedly different than those of the past.

On the Road with AWS
AWS customers are already using our AWS IoT, edge computing, Amazon Machine Learning, and Alexa products to bring this future to life – vehicle manufacturers, their tier 1 suppliers, and AutoTech startups all use AWS for their ACES initiatives. AWS Greengrass is playing an important role here, attracting design wins and helping our customers to add processing power and machine learning inferencing at the edge.

AWS customer Aptiv (formerly Delphi) talked about their Automated Mobility on Demand (AMoD) smart vehicle architecture in an AWS re:Invent session. Aptiv’s AMoD platform will use Greengrass and microservices to drive the onboard user experience, along with edge processing, monitoring, and control. Here’s an overview:

Another customer, Denso of Japan (one of the world’s largest suppliers of auto components and software) is using Greengrass and AWS IoT to support their vision of Mobility as a Service (MaaS). Here’s a video:

AWS at CES
The AWS team will be out in force at CES in Las Vegas and would love to talk to you. They’ll be running demos that show how AWS can help to bring innovation and personalization to connected and autonomous vehicles.

Personalized In-Vehicle Experience – This demo shows how AWS AI and Machine Learning can be used to create a highly personalized and branded in-vehicle experience. It makes use of Amazon Lex, Polly, and Amazon Rekognition, but the design is flexible and can be used with other services as well. The demo encompasses driver registration, login and startup (including facial recognition), voice assistance for contextual guidance, personalized e-commerce, and vehicle control. Here’s the architecture for the voice assistance:

Connected Vehicle Solution – This demo shows how a connected vehicle can combine local and cloud intelligence, using edge computing and machine learning at the edge. It handles intermittent connections and uses AWS DeepLens to train a model that responds to distracted drivers. Here’s the overall architecture, as described in our Connected Vehicle Solution:

Digital Content Delivery – This demo will show how a customer uses a web-based 3D configurator to build and personalize their vehicle. It will also show a high-resolution (4K) 3D image and an optional immersive AR/VR experience, both designed for use within a dealership.

Autonomous Driving – This demo will showcase the AWS services that can be used to build autonomous vehicles. There’s a 1/16th scale model vehicle powered and driven by Greengrass and an overview of a new AWS Autonomous Toolkit. As part of the demo, attendees drive the car, training a model via Amazon SageMaker for subsequent on-board inferencing, powered by Greengrass ML Inferencing.

To speak to one of my colleagues or to set up a time to see the demos, check out the Visit AWS at CES 2018 page.

Some Resources
If you are interested in this topic and want to learn more, the AWS for Automotive page is a great starting point, with discussions on connected vehicles & mobility, autonomous vehicle development, and digital customer engagement.

When you are ready to start building a connected vehicle, the AWS Connected Vehicle Solution contains a reference architecture that combines local computing, sophisticated event rules, and cloud-based data processing and storage. You can use this solution to accelerate your own connected vehicle projects.

Jeff;

Tech Companies Meet EC to Discuss Removal of Pirate & Illegal Content

Post Syndicated from Andy original https://torrentfreak.com/tech-companies-meet-ec-to-discuss-removal-of-pirate-illegal-content-180109/

Thousands, perhaps millions, of pieces of illegal content flood onto the Internet every single day, a problem that’s only increasing with each passing year.

In the early days of the Internet, very little was done to combat the problem, but with the rise of social media and millions of citizens using it to publish whatever they like – not least terrorist propaganda and racist speech – governments around the world are beginning to take notice.

Of course, running parallel is the multi-billion dollar issue of intellectual property infringement. Eighteen years on from the first wave of mass online piracy and the majority of popular movies, TV shows, games, software and books are still available to download.

Over the past couple of years and increasingly in recent months, there have been clear signs that the EU in particular wishes to collectively mitigate the spread of all illegal content – from ISIS videos to pirated Hollywood movies – with assistance from major tech companies.

Google, YouTube, Facebook and Twitter are all expected to do their part, with the looming stick of legislation behind the collaborative carrots, should they fail to come up with a solution.

To that end, five EU Commissioners – Dimitris Avramopoulos, Elżbieta Bieńkowska, Věra Jourová, Julian King and Mariya Gabriel – will meet today in Brussels with representatives of several online platforms to discuss progress made in dealing with the spread of the aforementioned material.

In a joint statement together with EC Vice-President Andrus Ansip, the Commissioners describe all illegal content as a threat to security, safety, and fundamental rights, demanding a “collective response – from all actors, including the internet industry.”

They note that online platforms have committed significant resources towards removing violent and extremist content, including via automated removal, but more needs to be done to tackle the issue.

“This is starting to achieve results. However, even if tens of thousands of pieces of illegal content have been taken down, there are still hundreds of thousands more out there,” the Commissioners write.

“And removal needs to be speedy: the longer illegal material stays online, the greater its reach, the more it can spread and grow. Building on the current voluntary approach, more efforts and progress have to be made.”

The Commission says it is relying on online platforms such as Google and Facebook to “step up and speed up their efforts to tackle these threats quickly and comprehensively.” This should include closer cooperation with law enforcement, sharing of information with other online players, plus action to ensure that once taken down, illegal content does not simply reappear.

While it’s clear that the EC would prefer to work collaboratively with the platforms to find a solution to the illegal content problem, as expected there’s the veiled threat of them being compelled by law to do so, should they fall short of their responsibilities.

“We will continue to promote cooperation with social media companies to detect and remove terrorist and other illegal content online, and if necessary, propose legislation to complement the existing regulatory framework,” the EC warns.

Today’s discussions run both in parallel and in tandem with others specifically targeted at intellectual property abuses. Late November the EC presented a set of new measures to ensure that copyright holders are well protected both online and in the physical realm.

A key aim is to focus on large-scale facilitators, such as pirate site operators, while cutting their revenue streams.

“The Commission seeks to deprive commercial-scale IP infringers of the revenue flows that make their criminal activity lucrative – this is the so-called ‘follow the money’ approach which focuses on the ‘big fish’ rather than individuals,” the Commission explained.

This presentation followed on the heels of a proposal last September which had the EC advocating the take-down-stay-down principle: pirate content is taken down, automated filters ensure infringement can be tackled proactively, and measures are taken against repeat infringers.

Again, the EC warned that should cooperation with Internet platforms fail to come up with results, future legislation cannot be ruled out.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

[$] Future directions for PGP

Post Syndicated from jake original https://lwn.net/Articles/742542/rss

Back in October, LWN reported on a talk about the state of the GNU Privacy Guard (GnuPG) project, an asymmetric public-key encryption and signing tool that had been almost abandoned by its lead developer due to lack of resources before receiving a significant infusion of funding and community attention. GnuPG 2 has brought about a number of changes and improvements but, at the same time, several efforts are underway to significantly change the way GnuPG and OpenPGP are used. This article will look at the current state of GnuPG and the OpenPGP web of trust, as compared to new implementations of the OpenPGP standard and other trust systems.

A hedgehog cam or two

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/a-hedgehog-cam-or-two/

Here we are, hauling ourselves out of the Christmas and New Year holidays and into January proper. It’s dawning on me that I have to go back to work, even though it’s still very cold and gloomy in northern Europe, and even though my duvet is lovely and warm. I found myself envying beings that hibernate, and thinking about beings that hibernate, and searching for things to do with hedgehogs. And, well, the long and the short of it is, today’s blog post is a short meditation on the hedgehog cam.

A hedgehog in a garden, photographed in infrared light by a hedgehog cam

Success! It’s a hedgehog!
Photo by Andrew Wedgbury

Hedgehog watching

Someone called Barker has installed a Raspberry Pi–based hedgehog cam in a location with a distant view of a famous Alp, and as well as providing live views by visible and infrared light for the dedicated and the insomniac, they also make a sped-up version of the previous night’s activity available. With hedgehogs usually being in hibernation during January, you mightn’t see them in any current feed — but don’t worry! You’re guaranteed a few hedgehogs on Barker’s website, because they have also thrown in some lovely GIFs of hoggy (and foxy) divas that their camera captured in the past.

A Hedgehog eating from a bowl on a patio, captured by a hedgehog cam

Nom nom nom!
GIF by Barker’s Site

Build your own hedgehog cam

For pointers on how to replicate this kind of setup, you could do worse than turn to Andrew Wedgbury’s hedgehog cam write-up. Andrew’s Twitter feed reveals that he’s a Cambridge local, and there are hints that he was behind RealVNC’s hoggy mascot for Pi Wars 2017.

RealVNC on Twitter

Another day at the office: testing our #PiWars mascot using a @Raspberry_Pi 3, #VNC Connect and @4tronix_uk Picon Zero. Name suggestions? https://t.co/iYY3xAX9Bk

Our infrared bird box and time-lapse camera resources will also set you well on the way towards your own custom wildlife camera. For a kit that wraps everything up in a weatherproof enclosure made with love, time, and serious amounts of design and testing, take a look at Naturebytes’ wildlife cam kit.

Or, if you’re thinking that a robot mascot is more dependable than real animals for the fluffiness you need in order to start your January with something like productivity and with your soul intact, you might like to put your own spin on our robot buggy.

Happy 2018

While we’re on the subject of getting to grips with the new year, do take a look at yesterday’s blog post, in which we suggest a New Year’s project that’s different from the usual resolutions. However you tackle 2018, we wish you an excellent year of creative computing.

The post A hedgehog cam or two appeared first on Raspberry Pi.

Could you write for Hello World magazine?

Post Syndicated from Dan Fisher original https://www.raspberrypi.org/blog/could-you-write-for-hello-world-magazine/

Thinking about New Year’s resolutions? Ditch the gym and tone up your author muscles instead, by writing an article for Hello World magazine. We’ll help you, you’ll expand your knowledge of a topic you care about, and you’ll be contributing something of real value to the computing education community.

Join our pool of Hello World writers in 2018

The computing and digital making magazine for educators

Hello World is our free computing magazine for educators, published in partnership with Computing At School and kindly supported by BT. We launched at the Bett Show in January 2017, and over the past twelve months, we’ve grown to a readership of 15000 subscribers. You can get your own free copy here.

Our work is sustained by wonderful educational content from around the world in every issue. We’re hugely grateful to our current pool of authors – keep it up, veterans of 2017! – and we want to provide opportunities for new voices in the community to join them. You might be a classroom teacher sharing your scheme of work, a volunteer reflecting on running an after-school club, an industry professional sharing your STEM expertise, or an academic providing insights into new research – we’d love contributions from all kinds of people in all sorts of roles.

Your article doesn’t have to be finished and complete: if you send us an outline, we will work with you to develop it into a full piece.

Like my desk, but tidier

Five reasons to write for Hello World

Here are five reasons why writing for Hello World is a great way to start 2018:

1. You’ll learn something new

Researching an article is one of the best ways to broaden your knowledge about something that interests you.

2. You’ll think more clearly

Notes in hand, you sit at your desk and wonder how to craft all this information into a coherent piece of writing. It’s a situation we’re all familiar with. Writing an article makes you examine and clarify what you really think about a subject.

Share your expertise and make more interesting projects along the way

3. You’ll make cool projects

Testing a project for a Hello World resource is a perfect opportunity to build something amazing that’s hitherto been locked away inside your brain.

4. You’ll be doing something that matters

Sharing your knowledge and experience in Hello World helps others to teach and learn computing. It helps bring the power of digital making to more and more educators and learners.

5. You’ll share with an open and supportive community

The computing education community is full of people who lend their experience to help colleagues. Contributing to Hello World is a great way to take an active part in this supportive community, and you’ll be adding to a body of free, open source learning resources that are available for everyone to use, adapt, and share. It’s also a tremendous platform to broadcast your work: the digital version alone of Hello World has been downloaded over 50000 times.

Yes! What do I do next?

Feeling inspired? Email our editorial team with your idea.

Issue 4 of Hello World is out this month! Subscribe for free today to have it delivered to your inbox or your home.

The post Could you write for Hello World magazine? appeared first on Raspberry Pi.

Supporting Conservancy Makes a Difference

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2017/12/31/donate-conservancy.html

Earlier this year, in February, I wrote a blog post encouraging people to donate to where I work, Software Freedom Conservancy. I’ve not otherwise blogged too much this year. It’s been a rough year for many reasons, and while I personally and Conservancy in general have accomplished some very important work this year, I’m reminded as always that more resources do make things easier.

I understand the urge, given how bad the larger political crises have gotten, to want to give to charities other than those related to software freedom. There are important causes out there that have become more urgent this year. Here are three issues which have become shockingly more acute this year:

  • making sure the USA keeps its commitment to immigrants to allow them to make a new life here just like my own ancestors did,
  • assuring that the great national nature reserves are maintained and left pristine for generations to come,
  • assuring that we have zero tolerance for abusive behavior, particularly by those in power against people who come to them for help and job opportunities.

These are just three of the many issues this year that I’ve seen get worse, not better. I am glad that I know and support people who work on these issues, and I urge everyone to work on these issues, too.

Nevertheless, as I plan my primary donations this year, I’m again, as I always do, giving to the FSF and my own employer, Software Freedom Conservancy. The reason is simple: software freedom is still an essential cause and it is frankly one that most people don’t understand (yet). I wrote almost two years ago about the phenomenon I dubbed Kuhn’s Paradox. Simply put: it keeps getting more and more difficult to avoid proprietary software in a normal day’s tasks, even while the number of lines of code licensed freely gets larger every day.

As long as that paradox remains true, I see software freedom as urgent. I know that we’re losing ground on so many other causes, too. But those of you who read my blog are some of the few people in the world who understand that software freedom is under threat and needs the urgent work that the very few software-freedom-related organizations, like the FSF and Software Freedom Conservancy, are doing. I hope you’ll donate now to both of them. For my part, I gave $120 myself to FSF as part of the monthly Associate Membership program, and in a few minutes, I’m going to give $400 to Conservancy. I’ll be frank: if you work in technology in an industrialized country, I’m quite sure you can afford that level of money, and I suspect those amounts are less than most of you spent on technology equipment and/or network connectivity charges this year. Make a difference for us and give to the cause of software freedom at least as much as you’re giving to large technology companies.

Finally, a good reason to give to smaller charities like FSF and Conservancy is that your donation makes a bigger difference. I do think bigger organizations, such as (to pick an example of an organization I used to give to) my local NPR station, do important work. However, I was listening this week to my local NPR station, and they said their goal for that day was to raise $50,000. For Conservancy, that’s closer to the goal we have for an entire fundraising season, which for this year was $75,000. The thing is: NPR is an important part of USA society, but it’s one that nearly everyone understands. So few people understand the threats looming from proprietary software, and they may not understand at all until it’s too late: when all their devices are locked down, DRM is fully ubiquitous, and no one is allowed to tinker with the software on their devices and learn the wonderful art of computer programming. We are at real risk of reaching that dystopia before 90% of the world’s population understands the threat!

Thus, giving to organizations in the area of software freedom is just going to have a bigger and more immediate impact than more general causes that more easily connect with people. You’re giving to prevent a future that not everyone understands yet, and making an impact on our work to help explain the dangers to the larger population.

Instrumenting Web Apps Using AWS X-Ray

Post Syndicated from Bharath Kumar original https://aws.amazon.com/blogs/devops/instrumenting-web-apps-using-aws-x-ray/

This post was written by James Bowman, Software Development Engineer, AWS X-Ray

AWS X-Ray helps developers analyze and debug distributed applications and underlying services in production. You can identify and analyze root-causes of performance issues and errors, understand customer impact, and extract statistical aggregations (such as histograms) for optimization.

In this blog post, I will provide a step-by-step walkthrough for enabling X-Ray tracing in the Go programming language. You can use these steps to add X-Ray tracing to any distributed application.

Revel: A web framework for the Go language

This section will assist you with designing a guestbook application. Skip to the “Integrating with AWS X-Ray” section below if you already have a Go language application.

Revel is a web framework for the Go language. It facilitates the rapid development of web applications by providing a predefined framework for controllers, views, routes, filters, and more.

To get started with Revel, run revel new github.com/jamesdbowman/guestbook. A project base is then copied to $GOPATH/src/github.com/jamesdbowman/guestbook.

$ tree -L 2
.
├── README.md
├── app
│   ├── controllers
│   ├── init.go
│   ├── routes
│   ├── tmp
│   └── views
├── conf
│   ├── app.conf
│   └── routes
├── messages
│   └── sample.en
├── public
│   ├── css
│   ├── fonts
│   ├── img
│   └── js
└── tests
    └── apptest.go

Writing a guestbook application

A basic guestbook application can consist of just two routes: one to sign the guestbook and another to list all entries.
Let’s set up these routes by adding a Book controller, which can be routed to by modifying ./conf/routes.

./app/controllers/book.go:
package controllers

import (
    "math/rand"
    "time"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/endpoints"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/dynamodb"
    "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
    "github.com/revel/revel"
)

const TABLE_NAME = "guestbook"
const SUCCESS = "Success.\n"
const DAY = 86400

var letters = []rune("ABCDEFGHIJKLMNOPQRSTUVWXYZ")

func init() {
    rand.Seed(time.Now().UnixNano())
}

// randString returns a random string of len n, used for DynamoDB Hash key.
func randString(n int) string {
    b := make([]rune, n)
    for i := range b {
        b[i] = letters[rand.Intn(len(letters))]
    }
    return string(b)
}

// Book controls interactions with the guestbook.
type Book struct {
    *revel.Controller
    ddbClient *dynamodb.DynamoDB
}

// Signature represents a user's signature.
type Signature struct {
    Message string
    Epoch   int64
    ID      string
}

// ddb returns the controller's DynamoDB client, instantiating a new client if necessary.
func (c Book) ddb() *dynamodb.DynamoDB {
    if c.ddbClient == nil {
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String(endpoints.UsWest2RegionID),
        }))
        c.ddbClient = dynamodb.New(sess)
    }
    return c.ddbClient
}

// Sign allows users to sign the book.
// The message is to be passed as application/json typed content, listed under the "message" top level key.
func (c Book) Sign() revel.Result {
    var s Signature

    err := c.Params.BindJSON(&s)
    if err != nil {
        return c.RenderError(err)
    }
    now := time.Now()
    s.Epoch = now.Unix()
    s.ID = randString(20)

    item, err := dynamodbattribute.MarshalMap(s)
    if err != nil {
        return c.RenderError(err)
    }

    putItemInput := &dynamodb.PutItemInput{
        TableName: aws.String(TABLE_NAME),
        Item:      item,
    }
    _, err = c.ddb().PutItem(putItemInput)
    if err != nil {
        return c.RenderError(err)
    }

    return c.RenderText(SUCCESS)
}

// List allows users to list all signatures in the book.
func (c Book) List() revel.Result {
    scanInput := &dynamodb.ScanInput{
        TableName: aws.String(TABLE_NAME),
        Limit:     aws.Int64(100),
    }
    res, err := c.ddb().Scan(scanInput)
    if err != nil {
        return c.RenderError(err)
    }

    messages := make([]string, 0)
    for _, v := range res.Items {
        messages = append(messages, *(v["Message"].S))
    }
    return c.RenderJSON(messages)
}

./conf/routes:
POST /sign Book.Sign
GET /list Book.List

Creating the resources and testing

For the purposes of this blog post, the application will be run and tested locally. We will store and retrieve messages from an Amazon DynamoDB table. Use the following AWS CLI command to create the guestbook table:

aws dynamodb create-table --region us-west-2 --table-name "guestbook" --attribute-definitions AttributeName=ID,AttributeType=S AttributeName=Epoch,AttributeType=N --key-schema AttributeName=ID,KeyType=HASH AttributeName=Epoch,KeyType=RANGE --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5

Now, let’s test our sign and list routes. If everything is working correctly, the following result appears:

$ curl -d '{"message":"Hello from cURL!"}' -H "Content-Type: application/json" http://localhost:9000/book/sign
Success.
$ curl http://localhost:9000/book/list
[
  "Hello from cURL!"
]

Integrating with AWS X-Ray

Download and run the AWS X-Ray daemon

The AWS X-Ray SDKs emit trace segments over UDP on port 2000. (This port can be configured.) For the trace segments to reach the X-Ray service, the daemon must listen on this port and batch the segments into calls to the PutTraceSegments API.
For information about downloading and running the X-Ray daemon, see the AWS X-Ray Developer Guide.
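
For local testing, one way to run the daemon (a sketch based on the daemon's documented command-line flags) is to start the downloaded binary in local mode, pointing it at the same region the application uses:

./xray -o -n us-west-2

The -o flag keeps the daemon from looking for EC2 instance metadata, and -n sets the region to which segments are sent.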

Installing the AWS X-Ray SDK for Go

To download the SDK from GitHub, run go get -u github.com/aws/aws-xray-sdk-go/... and the SDK will appear in your $GOPATH.

Enabling the incoming request filter

The first step in instrumenting an application with AWS X-Ray is to enable the generation of trace segments on incoming requests. The SDK conveniently provides an implementation of http.Handler that does exactly that. To ensure incoming web requests travel through this handler, we can modify app/init.go, adding a custom function to run on application start.

import (
    "github.com/aws/aws-xray-sdk-go/xray"
    "github.com/revel/revel"
)

...

func init() {
  ...
    revel.OnAppStart(installXRayHandler)
}

func installXRayHandler() {
    revel.Server.Handler = xray.Handler(xray.NewFixedSegmentNamer("GuestbookApp"), revel.Server.Handler)
}

The application will now emit a segment for each incoming web request, and a node for the application appears in the X-Ray service graph.

You can customize the name of the segment to make it more descriptive by providing an alternate implementation of SegmentNamer to xray.Handler. For example, you can use xray.NewDynamicSegmentNamer(fallback, pattern) in place of the fixed namer. This namer will use the host name from the incoming web request (if it matches pattern) as the segment name. This is often useful when you are trying to separate different instances of the same application.
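
As a sketch, the handler installation above could swap in the dynamic namer as follows; the *.example.com pattern is only an illustrative placeholder, and requests whose host does not match it fall back to the GuestbookApp name:

func installXRayHandler() {
    // Name segments after the request's Host header when it matches the pattern,
    // falling back to "GuestbookApp" otherwise.
    namer := xray.NewDynamicSegmentNamer("GuestbookApp", "*.example.com")
    revel.Server.Handler = xray.Handler(namer, revel.Server.Handler)
}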

In addition, HTTP-centric information such as method and URL is collected in the segment’s http subsection:

"http": {
    "request": {
        "url": "/book/list",
        "method": "GET",
        "user_agent": "curl/7.54.0",
        "client_ip": "::1"
    },
    "response": {
        "status": 200
    }
},

Instrumenting outbound calls

To provide detailed performance metrics for distributed applications, the AWS X-Ray SDK needs to measure the time it takes to make outbound requests. Trace context is passed to downstream services using the X-Amzn-Trace-Id header. To draw a detailed and accurate representation of a distributed application, outbound call instrumentation is required.

AWS SDK calls

The AWS X-Ray SDK for Go provides a one-line AWS client wrapper that enables the collection of detailed per-call metrics for any AWS client. We can modify the DynamoDB client instantiation to include this line:

// ddb returns the controller's DynamoDB client, instantiating a new client if necessary.
func (c Book) ddb() *dynamodb.DynamoDB {
    if c.ddbClient == nil {
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String(endpoints.UsWest2RegionID),
        }))
        c.ddbClient = dynamodb.New(sess)
        xray.AWS(c.ddbClient.Client) // add subsegment-generating X-Ray handlers to this client
    }
    return c.ddbClient
}

We also need to ensure that the segment generated by our xray.Handler is passed to these AWS calls so that the X-Ray SDK knows to which segment these generated subsegments belong. In Go, the context.Context object is passed throughout the call path to achieve this goal. (In most other languages, some variant of ThreadLocal is used.) AWS clients provide a *WithContext method variant for each AWS operation, which we need to switch to:

    _, err = c.ddb().PutItemWithContext(c.Request.Context(), putItemInput)
    res, err := c.ddb().ScanWithContext(c.Request.Context(), scanInput)

The Timeline view of a trace now shows much more detail for the sign and list operations.

We can use this detail to help diagnose throttling on our DynamoDB table: purple in the DynamoDB service graph node indicates that the table is underprovisioned, and red in the GuestbookApp node indicates that the application is throwing faults because of that throttling.

HTTP calls

Although the guestbook application does not make any non-AWS outbound HTTP calls in its current state, there is a similar one-liner to wrap HTTP clients that make outbound requests. xray.Client(c *http.Client) wraps an existing http.Client (or nil if you want to use a default HTTP client). For example:

resp, err := ctxhttp.Get(ctx, xray.Client(nil), "https://aws.amazon.com/")
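
If the guestbook later needed to call an external service from one of its handlers, a minimal sketch might look like the following; fetchStatus and the URL it is given are hypothetical and not part of this application (the file would also need "context" and "net/http" in its imports):

// fetchStatus issues an instrumented GET request. Passing the request context
// through ensures the generated subsegment attaches to the current segment.
func fetchStatus(ctx context.Context, url string) (int, error) {
    client := xray.Client(nil) // nil wraps a default HTTP client with X-Ray handlers
    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        return 0, err
    }
    resp, err := client.Do(req.WithContext(ctx))
    if err != nil {
        return 0, err
    }
    defer resp.Body.Close()
    return resp.StatusCode, nil
}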

Instrumenting local operations

X-Ray can also assist in measuring the performance of local compute operations. To see this in action, let’s create a custom subsegment inside the randString method:


// randString returns a random string of len n, used for DynamoDB Hash key.
func randString(ctx context.Context, n int) string {
    var s string
    // xray.Capture wraps the passed-in function in a custom subsegment.
    // Note that "context" must be added to this file's imports.
    xray.Capture(ctx, "randString", func(innerCtx context.Context) error {
        b := make([]rune, n)
        for i := range b {
            b[i] = letters[rand.Intn(len(letters))]
        }
        s = string(b)
        return nil
    })
    return s
}

// we'll also need to change the callsite

s.ID = randString(c.Request.Context(), 20)

Summary

You now know how to instrument your Go applications with X-Ray. Instrumenting your applications with X-Ray is an easy way to analyze and debug performance issues and to understand customer impact. Please feel free to leave feedback or comments below.

For more information about advanced configuration of the AWS X-Ray SDK for Go, see the AWS X-Ray SDK for Go in the AWS X-Ray Developer Guide and the aws/aws-xray-sdk-go GitHub repository.

For more information about some of the advanced X-Ray features such as histograms, annotations, and filter expressions, see the Analyzing Performance for Amazon Rekognition Apps Written on AWS Lambda Using AWS X-Ray blog post.

Thank you for my new Raspberry Pi, Santa! What next?

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/thank-you-for-my-new-raspberry-pi-santa-what-next/

Note: the Pi Towers team have peeled away from their desks to spend time with their families over the festive season, and this blog will be quiet for a while as a result. We’ll be back in the New Year with a bushel of amazing projects, awesome resources, and much merriment and fun times. Happy holidays to all!

Now back to the matter at hand. Your brand new Christmas Raspberry Pi.

Your new Raspberry Pi

Did you wake up this morning to find a new Raspberry Pi under the tree? Congratulations, and welcome to the Raspberry Pi community! You’re one of us now, and we’re happy to have you on board.

But what if you’ve never seen a Raspberry Pi before? What are you supposed to do with it? What’s all the fuss about, and why does your new computer look so naked?

Setting up your Raspberry Pi

Are you comfy? Good. Then let us begin.

Download our free operating system

First of all, you need to make sure you have an operating system on your micro SD card: we suggest Raspbian, the Raspberry Pi Foundation’s official supported operating system. If your Pi is part of a starter kit, you might find that it comes with a micro SD card that already has Raspbian preinstalled. If not, you can download Raspbian for free from our website.

An easy way to get Raspbian onto your SD card is to use a free tool called Etcher. Watch The MagPi’s Lucy Hattersley show you what you need to do. You can also use NOOBS to install Raspbian on your SD card, and our Getting Started guide explains how to do that.

Plug it in and turn it on

Your new Raspberry Pi 3 comes with four USB ports and an HDMI port. These allow you to plug in a keyboard, a mouse, and a television or monitor. If you have a Raspberry Pi Zero, you may need adapters to connect your devices to its micro USB and mini HDMI ports. Both the Raspberry Pi 3 and the Raspberry Pi Zero W have onboard wireless LAN, so you can connect to your home network, and you can also plug an Ethernet cable into the Pi 3.

Make sure to plug the power cable in last. There’s no ‘on’ switch, so your Pi will turn on as soon as you connect the power. Raspberry Pi uses a micro USB power supply, so you can use a phone charger if you didn’t receive one as part of a kit.

Learn with our free projects

If you’ve never used a Raspberry Pi before, or you’re new to the world of coding, the best place to start is our projects site. It’s packed with free projects that will guide you through the basics of coding and digital making. You can create projects right on your screen using Scratch and Python, connect a speaker to make music with Sonic Pi, and upgrade your skills to physical making using items from around your house.

Here’s James to show you how to build a whoopee cushion using a Raspberry Pi, paper plates, tin foil and a sponge:

Whoopee cushion PRANK with a Raspberry Pi: HOW-TO


Diving deeper

You’ve plundered our projects, you’ve successfully rigged every chair in the house to make rude noises, and now you want to dive deeper into digital making. Good! While you’re digesting your Christmas dinner, take a moment to skim through the Raspberry Pi blog for inspiration. You’ll find projects from across our worldwide community, with everything from home automation projects and retrofit upgrades, to robots, gaming systems, and cameras.

You’ll also find bucketloads of ideas in The MagPi magazine, the official monthly Raspberry Pi publication, available in both print and digital format. You can download every issue for free. If you subscribe, you’ll get a Raspberry Pi Zero W to add to your new collection. HackSpace magazine is another fantastic place to turn for Raspberry Pi projects, along with other maker projects and tutorials.

And, of course, simply typing “Raspberry Pi projects” into your preferred search engine will find thousands of ideas. Sites like Hackster, Hackaday, Instructables, Pimoroni, and Adafruit all have plenty of fab Raspberry Pi tutorials that they’ve devised themselves and that community members like you have created.

And finally

If you make something marvellous with your new Raspberry Pi – and we know you will – don’t forget to share it with us! Our Twitter, Facebook, Instagram and Google+ accounts are brimming with chatter, projects, and events. And our forums are a great place to visit if you have questions about your Raspberry Pi or if you need some help.

It’s good to get together with like-minded folks, so check out the growing Raspberry Jam movement. Raspberry Jams are community-run events where makers and enthusiasts can meet other makers, show off their projects, and join in with workshops and discussions. Find your nearest Jam here.

Have a great festive holiday and welcome to the community. We’ll see you in 2018!


Set Up a Continuous Delivery Pipeline for Containers Using AWS CodePipeline and Amazon ECS

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/set-up-a-continuous-delivery-pipeline-for-containers-using-aws-codepipeline-and-amazon-ecs/

This post was contributed by Abby Fuller, AWS Senior Technical Evangelist.

Last week, AWS announced support for Amazon Elastic Container Service (ECS) targets (including AWS Fargate) in AWS CodePipeline. This support makes it easier to create a continuous delivery pipeline for container-based applications and microservices.

Building and deploying containerized services manually is slow and prone to errors. Continuous delivery with automated build and test mechanisms helps detect errors early, saves time, and reduces failures, making this a popular model for application deployments. Previously, to automate your container workflows with ECS, you had to build your own solution using AWS CloudFormation. Now, you can integrate CodePipeline and CodeBuild with ECS to automate your workflows in just a few steps.

A typical continuous delivery workflow with CodePipeline, CodeBuild, and ECS might look something like the following:

  • Choosing your source
  • Building your project
  • Deploying your code

We also have a continuous deployment reference architecture on GitHub for this workflow.

Getting Started

First, create a new project with CodePipeline and give the project a name, such as “demo”.

Next, choose a source location where the code is stored. This could be AWS CodeCommit, GitHub, or Amazon S3. For this example, enter GitHub and then give CodePipeline access to the repository.

Next, add a build step. You can import an existing build, such as a Jenkins server URL or CodeBuild project, or create a new step with CodeBuild. If you don’t have an existing build project in CodeBuild, create one from within CodePipeline:

  • Build provider: AWS CodeBuild
  • Configure your project: Create a new build project
  • Environment image: Use an image managed by AWS CodeBuild
  • Operating system: Ubuntu
  • Runtime: Docker
  • Version: aws/codebuild/docker:1.12.1
  • Build specification: Use the buildspec.yml in the source code root directory

Now that you’ve created the CodeBuild step, you can use it as an existing project in CodePipeline.

Next, add a deployment provider. This is where your built code is placed. It can be a number of different options, such as AWS CodeDeploy, AWS Elastic Beanstalk, AWS CloudFormation, or Amazon ECS. For this example, connect to Amazon ECS.

For CodePipeline to deploy your built image to ECS, you must create an image definition JSON file. You produce this file by adding some instructions to the pre-build, build, and post-build phases of the CodeBuild build process in your buildspec.yml file. For help with creating the image definition file, see Step 1 of the Tutorial: Continuous Deployment with AWS CodePipeline.
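
As a rough sketch only (the ECR repository URI and region below are placeholders, and your actual buildspec.yml will differ), the relevant phases might look something like this, with the container name "web" matching the ECS service configured in the next step:

version: 0.2
phases:
  pre_build:
    commands:
      - $(aws ecr get-login --no-include-email --region us-west-2)
  build:
    commands:
      - docker build -t <account-id>.dkr.ecr.us-west-2.amazonaws.com/demo:latest .
  post_build:
    commands:
      - docker push <account-id>.dkr.ecr.us-west-2.amazonaws.com/demo:latest
      - printf '[{"name":"web","imageUri":"<account-id>.dkr.ecr.us-west-2.amazonaws.com/demo:latest"}]' > web.json
artifacts:
  files:
    - web.json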

  • Deployment provider: Amazon ECS
  • Cluster name: enter your project name from the build step
  • Service name: web
  • Image filename: enter your image definition filename (“web.json”).

You are almost done!

You can now choose an existing IAM service role that CodePipeline can use to access resources in your account, or let CodePipeline create one. For this example, use the wizard, and go with the role that it creates (AWS-CodePipeline-Service).

Finally, review all of your changes, and choose Create pipeline.

After the pipeline is created, you’ll have a model of your entire pipeline where you can view your executions, add different tests, add manual approvals, or release a change.

You can learn more in the AWS CodePipeline User Guide.

Happy automating!