
One night in Beijing

Post Syndicated from Chris Chua original https://blog.cloudflare.com/one-night-in-beijing/


As the old saying goes, good things come in pairs, 好事成双! The month of May marks a double celebration in China for our customers, partners and Cloudflare.

First and Foremost

A Beijing Customer Appreciation Cocktail was held in the heart of Beijing, at the Yintai Centre Xiu Rooftop Garden Bar, on 10 May 2019. The RSVP event was graced by our supportive group of partners and customers.

We have been blessed with almost 10 years of strong growth at Cloudflare, sharing our belief in providing access to Internet security and performance to customers of all sizes and industries. This success has been the result of collaboration between our developers and our product team, represented at the event by our special guest Jen Taylor, our Global Head of Product, alongside business leaders Xavier Cai, Head of our China business; Aliza Knox, Head of our APAC business; and James Ball, our Head of Solutions Engineering for APAC. Most importantly, it rests on the trust and faith that our partners, such as Baidu, and our customers have placed in us.


Double Happiness, 双喜


That same week, we embarked on another exciting journey in China with the grand opening of our office at WeWork. The Beijing team spans functions from Customer Development to Solutions Engineering and Customer Success, led by Xavier, Head of our China business. The team has doubled in size since it was formed last year.

We continue to invest in China, growing our customer base and, importantly, our methods for supporting our customers here as well. Those of us who came from other parts of the world are also looking to learn from the wisdom and experience of our customers in this market. To that end, we look forward to many more years of openness, trust, and mutual success.

Thank you to all the customers and partners who took the time to join us at our Beijing cocktail reception. We are grateful for your strong support of the event and for the lively conversations!


Join Cloudflare & Yandex at our Moscow meetup!

Post Syndicated from Andrew Fitch original https://blog.cloudflare.com/moscow-developers-join-cloudflare-yandex-at-our-meetup/


Are you based in Moscow? Cloudflare is partnering with Yandex to produce a meetup this month at Yandex's Moscow headquarters. We would love for you to join us and learn about the newest developments in the Internet industry. You'll be joined by Cloudflare users, stakeholders from the tech community, and engineers and product managers from both Cloudflare and Yandex.

Cloudflare Moscow Meetup

Tuesday, May 30, 2019: 18:00 – 22:00

Location: Yandex – Ulitsa L’va Tolstogo, 16, Moskva, Russia, 119021

Talks will include “Performance and scalability at Cloudflare”, “Security at Yandex Cloud”, and “Edge computing”.

Speakers will include Evgeny Sidorov, Information Security Engineer at Yandex, Ivan Babrou, Performance Engineer at Cloudflare, Alex Cruz Farmer, Product Manager for Firewall at Cloudflare, and Olga Skobeleva, Solutions Engineer at Cloudflare.

Agenda:

18:00 – 19:00 – Registration and welcome cocktail

19:00 – 19:10 – Cloudflare overview

19:10 – 19:40 – Performance and scalability at Cloudflare

19:40 – 20:10 – Security at Yandex Cloud

20:10 – 20:40 – Cloudflare security solutions and industry security trends

20:40 – 21:10 – Edge computing

Q&A

The talks will be followed by food, drinks, and networking.

View Event Details & Register Here »

We hope to meet you soon.

Developers, join Cloudflare and Yandex at our upcoming meetup in Moscow!

Cloudflare is partnering with Yandex to organize an event this month at Yandex headquarters. We invite you to join a meetup dedicated to the latest developments in the Internet industry. The event will bring together Cloudflare customers, professionals from the tech community, and engineers from both Cloudflare and Yandex.

Tuesday, May 30: 18:00 – 22:00

Location: Yandex, Ulitsa L'va Tolstogo, 16, Moscow, Russia, 119021

Talks will cover "Cloudflare security solutions and industry security trends", "Security at Yandex Cloud", "Performance and scalability at Cloudflare", and "Edge computing", with speakers from Cloudflare and Yandex.

Speakers will include Evgeny Sidorov, Deputy Head of the Service Security Group at Yandex; Ivan Babrou, Performance Engineer at Cloudflare; Alex Cruz Farmer, Product Manager for Firewall at Cloudflare; and Olga Skobeleva, Solutions Engineer at Cloudflare.

Agenda:

18:00 – 19:00 – Registration, drinks, and networking

19:00 – 19:10 – Cloudflare overview

19:10 – 19:40 – Performance and scalability at Cloudflare

19:40 – 20:10 – Security solutions at Yandex

20:10 – 20:40 – Cloudflare security solutions and industry security trends

20:40 – 21:10 – Examples of serverless security solutions

Q&A

The talks will be followed by networking, food, and drinks.

View event details and register here »

We look forward to seeing you!

AWS Security Profiles: Tracy Pierce, Senior Consultant, Security Specialty, Remote Consulting Services

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-tracy-pierce-senior-consultant-security-specialty-rcs/


In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


You’ve worn a lot of hats at AWS. What do you do in your current role, and how is it different from previous roles?

I joined AWS as a Customer Support Engineer. Currently, I’m a Senior Consultant, Security Specialty, for Remote Consulting Services, which is part of the AWS Professional Services (ProServe) team.

In my current role, I work with ProServe agents and Solution Architects who might be out with customers onsite and who need stuff built. “Stuff” could be automation, like AWS Lambda functions or AWS CloudFormation templates, or even security best practices documentation… you name it. When they need it built, they come to my team. Right now, I’m working on an AWS Lambda function to pull AWS CloudTrail logs so that you can see if anyone is making policy changes to any of your AWS resources—and if so, have it written to an Amazon Aurora database. You can then check to see if it matches the security requirements that you have set up. It’s fun! It’s new. I’m developing new skills along the way.
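The detection half of a tool like that can be sketched in a few lines of Python. The event names below are real CloudTrail event names, but the function and its scope are invented for illustration; the actual tool would cover many more APIs and write matches to Aurora rather than returning them:

```python
# Sketch: flag CloudTrail records whose eventName indicates a policy change.
# The event-name list is illustrative, not exhaustive.
POLICY_CHANGE_EVENTS = {
    "PutRolePolicy", "DeleteRolePolicy", "AttachRolePolicy",
    "DetachRolePolicy", "PutBucketPolicy", "CreatePolicyVersion",
}

def find_policy_changes(records):
    """Return the CloudTrail records that represent policy changes."""
    return [r for r in records if r.get("eventName") in POLICY_CHANGE_EVENTS]

sample = [
    {"eventName": "PutRolePolicy", "userIdentity": {"type": "IAMUser"}},
    {"eventName": "DescribeInstances"},
]
print(len(find_policy_changes(sample)))  # 1
```

In the real pipeline, `records` would come from CloudTrail log files delivered to S3 or from the CloudTrail `LookupEvents` API, and each match would be inserted into the Aurora table for the compliance check described above.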

In my previous Support role, my work involved putting out fires, walking customers through initial setup, and showing them how to best use resources within their existing environment and architecture. My position as a Senior Consultant is a little different—I get to work with the customer from the beginning of a project rather than engaging much later in the process.

What's your favorite part of your job?

Talking with customers! I love explaining how to use AWS services. A lot of people understand our individual services but don’t always understand how to use multiple services together. We launch so many features and services that it’s understandably hard to keep up. Getting to help someone understand, “Hey, this cool new service will do exactly what I want!” or showing them how it can be combined in a really cool way with this other new service—that’s the fun part.

What’s the most challenging part of your job?

Right now? Learning to code. I don’t have a programming background, so I’m learning Python on the fly with the help of some teammates. I’m a very graphic-oriented, visual learner, so writing lines of code is challenging. But I’m getting there.

What career advice would you offer to someone just starting out at AWS?

Find a thing that you’re passionate about, and go for it. When I first started, I was on the Support team in the Linux profile, but I loved figuring out permissions and firewall rules and encryption. I think AWS had about ten services at the time, and I kept pushing myself to learn as much as I could about AWS Identity and Access Management (IAM). I asked enough questions to turn myself into an expert in that field. So, my best advice is to find a passion and don’t let anything hold you back.

What inspires you about security? Why is it something you’re passionate about?

It’s a puzzle, and I love puzzles. We’re always trying to stay one step ahead, which means there’s always something new to learn. Every day, there are new developments. Working in Security means trying to figure out how this ever-growing set of puzzles and pieces can fit together—if one piece could potentially open a back door, how can you find a different piece that will close it? Figuring out how to solve these challenges, often while others in the security field are also working on them, is a lot of fun.

In your opinion, what’s the biggest challenge facing cloud security right now?

There aren’t enough people focusing on cybersecurity. We’re in an era where people are migrating from on-prem to cloud, and it requires a huge shift in mindset to go from working with on-prem hardware to systems that you can no longer physically put your hands on. People are used to putting in physical security restraints, like making sure doors locks and badges are required for entry. When you move to the cloud, you have to start thinking not just about security group rules—like who’s allowed access to your data—but about all the other functions, features, and permissions that are a part of your cloud environment. How do you restrict those permissions? How do you restrict them for a certain team versus certain projects? How can you best separate job functions, projects, and teams in your organization? There’s so much more to cybersecurity than the stories of “hackers” you see on TV.

What’s the most common misperception you encounter about cloud security?

That it’s a one-and-done thing. I meet a lot of people who think, “Oh, I set it up” but who haven’t touched their environment in four years. The cloud is ever-changing, so your production environment and workloads are ever-changing. They’re going to grow; they’ll need to be audited in some fashion. It’s important to keep on top of that. You need to audit permissions, audit who’s accessing which systems, and make sure the systems are doing what they’re supposed to. You can’t just set it up and be finished.

How do you help educate customers about these types of misperceptions?

I go to AWS Pop-up Lofts periodically, plus conferences like re:Inforce and re:Invent, where I spend a lot of time helping people understand that security is a continuous thing. Writing blog posts also really helps, since it’s a way to show customers new ways of securing their environment using methods that they might not have considered. I can take edge cases that we might hear about from one or two customers, but which probably affect hundreds of other organizations, and reach out to them with some different setups.

You’re leading a re:Inforce builders session called “Automating password and secrets, and disaster recovery.” What’s a builders session?

Builders sessions are basically labs: There will be a very short introduction to the session, where you’re introduced to the concepts and services used in the lab. In this case, I’ll talk a little about how you can make sure your databases and resources are resilient and that you’ve set up disaster recovery scenarios.

After that, I walk around while people try out the services hands-on for themselves, and I see if anyone has questions. A lot of people learn better if they actually get a chance to play with things instead of just reading about them. If people run into issues, like "Why does the code say this?" or "Why does it create this folder over here in a different region?", I can answer those questions in the moment.

How did you arrive at your topic?

It's based on a blog post that I wrote, called "How to automate replication of secrets in AWS Secrets Manager across AWS Regions." It was a highly requested feature from customers who were already dealing with RDS databases. I actually wrote two posts: the second focused on Windows passwords, and it demonstrated how you can have a secure password for Windows without having to share an SSH key across multiple entities in an organization. These two posts gave me the idea for the builders session topic: I want to show customers that you can use Secrets Manager to store sensitive information without a human ever needing to read it in plain text.

A lot of customers are used to an on-premises access model, where everything is physical and things are written in a manual—but then you have to worry about safeguarding the manual so that only the appropriate people can read it. With the approach I’m sharing, you can have two or three people out of your entire organization who are in charge of creating the security aspects, like password policy, creation, rotation, and function. And then all other users can log in: The system pulls the passwords for them, inputs the passwords into the application, and the users do not see them in plain text. And because users have to be authenticated to access resources like the password, this approach prevents people from outside your organization from going to a webpage and trying to pull that secret and log in. They’re not going to have permissions to access it. It’s one more way for customers to lock down their sensitive data.
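The retrieval flow can be sketched in a few lines of Python. The secret name is made up for the example, and the client is passed in so the function can be exercised with a stub; in production it would be `boto3.client("secretsmanager")`, whose `get_secret_value` call is shown here:

```python
import json

def get_db_password(secrets_client, secret_id="prod/app/db-password"):
    """Fetch a secret and hand it straight to the application.

    No human reads the value: the caller passes the return value directly
    into a connection call. The secret name is hypothetical.
    """
    resp = secrets_client.get_secret_value(SecretId=secret_id)
    return json.loads(resp["SecretString"])["password"]

# A stub shows the shape of the Secrets Manager response without touching AWS.
class StubSecrets:
    def get_secret_value(self, SecretId):
        return {"SecretString": json.dumps({"password": "s3cret"})}

print(get_db_password(StubSecrets()))  # s3cret
```

Because the caller must already be authenticated and authorized for `secretsmanager:GetSecretValue`, this is the programmatic version of the access model described above: only the IAM policy, not a person, decides who can pull the password.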

What are you hoping that your audience will do differently as a result of this session?

I hope they’ll begin migrating their sensitive data—whether that’s the keys they’re using to encrypt their client-side databases, or their passwords for Windows—because their data is safer in the cloud. I want people to realize that they have all of these different options available, and to start figuring ways to utilize these solutions in their own environment.

I also hope that people will think about the processes that they have in their own workflow, even if those processes don’t extend to the greater organization and it’s something that only affects their job. For example, how can they make changes so that someone can’t just walk into their office on any given day and see their password? Those are the kinds of things I hope people will start thinking about.

Is there anything else you want people to know about your session?

Security is changing so much and so quickly that nobody is 100% caught up, so don’t be afraid to ask for help. It can feel intimidating to have to figure out new security methods, so I like to remind people that they shouldn’t be afraid to reach out or ask questions. That’s how we all learn.

You love otters. What’s so great about them?

I’m obsessed with them—and otters and security actually go together! When otters are with their family group, they’re very dedicated to keeping outsiders away and not letting anybody or anything get into their den, into their home, or into their family. There are large Amazon river otters that will actually take on Cayman alligators, as a family group, to make sure the alligators don’t get anywhere near the nest and attack the pups. Otters also try to work smarter, not harder, which I’ve found to be a good motto. If you can accomplish your goal through a small task, and it’s efficient, and it works, and it’s secure, then go for it. That’s what otters do.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tracy Pierce

Tracy Pierce is a Senior Consultant, Security Specialty, for Remote Consulting Services. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

Meet us at Maker Faire Bay Area 2019

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/maker-faire-bay-area-2019/

We’ll be attending Maker Faire Bay Area this month and we’d love to see as many of you there as we can, so be sure to swing by the Raspberry Pi stand and say hi!

Our North America team will be on-hand and hands-on all weekend to show you the wonders of the Raspberry Pi, with some great tech experiments for you to try. Do you like outer space? Of course, why wouldn’t you? So come try out the Sense HAT, our multi-sensor add-on board that we created especially for our two Astro Pi units aboard the International Space Station!

We’ll also have stickers, leaflets, and a vast array of information to share about the Raspberry Pi, our clubs and programmes, and how you can get more involved in the Raspberry Pi community.

And that’s not all!

Onstage talks!

Matt Richardson, Executive Director of the Raspberry Pi Foundation North America and all-round incredible person, will be making an appearance on the Make: Electronics by Digi-Key stage at 3pm Saturday 18 May to talk about Making Art with Raspberry Pi.


And I’m presenting too! On the Sunday, I’ll be on the DIY Content Creators Stage at 12:30pm with special guests Joel “3D Printing Nerd” Telling and Estefannie Explains it All for a live recording of my podcast to discuss the importance of community for makers and brands.

There will also be a whole host of incredible creations by makers from across the globe, and a wide variety of talks and presentations throughout the weekend. So if you’re a fan of creative contraptions and beastly builds, you’ll be blown away at this year’s Maker Faire.

Showcasing your projects

If you’re planning to attend Maker Faire to showcase your project, we want to hear from you. Leave a comment below with information on your build so we can come and find you on the day. Our trusty videographer Fiacre and I will be scouting for our next favourite Raspberry Pi make, and we’ll also have Andrew with us, who is eager to fill the pages of HackSpace magazine with any cool, creative wonders we find — Pi-related or otherwise!

Discounted tickets!

Maker Faire Bay Area 2019 will be running at the San Mateo County Event Center from Friday 17 to Sunday 19 May.

If you're in the area and would like to attend Maker Faire Bay Area, make use of our 15% community discount on tickets. Wooh!

For more information on Maker Faire, check out the Maker Faire website, or follow Maker Faire on Twitter.

See you there!

The post Meet us at Maker Faire Bay Area 2019 appeared first on Raspberry Pi.

We want to host your technical meetup at Cloudflare London

Post Syndicated from Andrew Fitch original https://blog.cloudflare.com/we-want-to-host-your-technical-meetup-at-cloudflare-london/


Cloudflare recently moved to County Hall, the building just behind the London Eye. We have a very large event space which we would love to open up to the developer community. If you organize a technical meetup, we’d love to host you. If you attend technical meetups, please share this post with the meetup organizers.

We're on the upper floor of County Hall

About the space

Our event space can hold up to 280 attendees, but it works just as well for a small group. There is a large entryway for people coming into our 6th floor lobby, where check-in can be managed. Once inside the event space, you will see a large, open kitchen area which can be used to set up event food and beverages. Beyond that is Cloudflare's all-hands space, which may be used for your events.

We have several gender-neutral toilets for your guests’ use as well.

Lobby

You may welcome your guests here. The event space is just to the left of this spot.


Event space

This space may be used for talks, workshops, or large panels. We can rearrange seating, based on the format of your meetup.


Food & beverages

Cloudflare will gladly provide light snacks and beverages from our kitchen area, including beer, wine or cider, and sodas or juices. If your attendees would like something more substantial, you are welcome to order additional food. If your meetup is eligible, we may even be able to sponsor those orders. Check out our pizza reimbursement rules for more details.

Our kitchen area is attached to the event space

How to book the space

If this all sounds good to you and you’re interested in hosting your technical meetup at Cloudflare London, please fill out this form with all the details of your event. If you’d like a tour of the space before booking it, I will gladly show you around and go through date options with you.

Host at Cloudflare »

You may also email me directly with any questions you have.

I hope to meet and host you soon!


Want to host an event at Cloudflare’s San Francisco office?

We also warmly welcome meetups in our San Francisco all-hands space. Please read and submit this form if your meetup is Bay Area-based.

Enriching Event-Driven Architectures with AWS Event Fork Pipelines

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/enriching-event-driven-architectures-with-aws-event-fork-pipelines/

This post is courtesy of Otavio Ferreira, Manager, Amazon SNS, and James Hood, Senior Software Development Engineer

Many customers are choosing to build event-driven applications in which subscriber services automatically perform work in response to events triggered by publisher services. This architectural pattern can make services more reusable, interoperable, and scalable.

Customers often fork event processing into pipelines that address common event handling requirements, such as event storage, backup, search, analytics, or replay. To help you build event-driven applications even faster, AWS introduces Event Fork Pipelines, a collection of open-source event handling pipelines that you can subscribe to Amazon SNS topics in your AWS account.

Event Fork Pipelines is a suite of open-source nested applications, based on the AWS Serverless Application Model (AWS SAM). You can deploy it directly from the AWS Serverless Application Repository into your AWS account.

Event Fork Pipelines is built on top of serverless services, including Amazon SNS, Amazon SQS, and AWS Lambda. These services provide serverless building blocks that help you build fully managed, highly available, and scalable event-driven platforms. Lambda enables you to build event-driven microservices as serverless functions. SNS and SQS provide serverless topics and queues for integrating these microservices and other distributed systems in your architecture. These building blocks are at the core of modern application development best practices.

Surfacing the event fork pattern

At AWS, we’ve worked closely with customers across market segments and geographies on event-driven architectures. For example:

  • Financial platforms that handle events related to bank transactions and stock ticks
  • Retail platforms that trigger checkout and fulfillment events

At scale, event-driven architectures often require a set of supporting services to address common requirements such as system auditability, data discoverability, compliance, business insights, and disaster recovery. Translated to AWS, customers often connect event-driven applications to services such as Amazon S3 for event storage and backup, and to Amazon Elasticsearch Service for event search and analytics. Also, customers often implement an event replay mechanism to recover from failure modes in their applications.

AWS created Event Fork Pipelines to encapsulate these common requirements, reducing the amount of effort required for you to connect your event-driven architectures to these supporting AWS services.

AWS then started sharing this pattern more broadly, so more customers could benefit. At the 2018 AWS re:Invent conference in Las Vegas, Amazon CTO Werner Vogels announced the launch of nested applications in his keynote. Werner shared the Event Fork Pipelines pattern with the audience as an example of common application logic that had been encapsulated as a set of nested applications.

The following reference architecture diagram shows an application supplemented by three nested applications:

Each pipeline is subscribed to the same SNS topic, and can process events in parallel as these events are published to the topic. Each pipeline is independent and can set its own subscription filter policy. That way, it processes only the subset of events that it’s interested in, rather than all events published to the topic.


Figure 1 – Reference architecture using Event Fork Pipelines

The three event fork pipelines are placed alongside your regular event processing pipelines, which are potentially already subscribed to your SNS topic. Therefore, you don’t have to change any portion of your current message publisher to take advantage of Event Fork Pipelines in your existing workloads. The following sections describe these pipelines and how to deploy them in your system architecture.
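The filtered fan-out described above can be sketched in plain Python. The subscription names are illustrative, and the exact-match policy check is a simplified stand-in for SNS's full filter-policy language:

```python
# Simulate SNS fan-out: every subscription sees each published message,
# but only receives it if the message attributes satisfy its filter policy.
def matches(policy, attributes):
    """Exact-match subset of SNS filter policies: every policy key must
    appear in the attributes with one of the allowed values."""
    return all(attributes.get(k) in allowed for k, allowed in policy.items())

def publish(subscriptions, attributes):
    """Return the names of the subscriptions that receive the message."""
    return [name for name, policy in subscriptions.items()
            if matches(policy, attributes)]

subscriptions = {
    "storage-backup": {},                       # empty policy: gets everything
    "search-analytics": {"tier": ["premium"]},  # only premium events
}
print(publish(subscriptions, {"tier": "standard"}))  # ['storage-backup']
```

The key property mirrored here is independence: adding or removing a pipeline, or changing its policy, never requires touching the publisher or the other subscriptions.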

Understanding the catalog of event fork pipelines

In the abstract, Event Fork Pipelines is a serverless design pattern. Concretely, Event Fork Pipelines is also a suite of nested serverless applications, based on AWS SAM. You deploy the nested applications directly from the AWS Serverless Application Repository to your AWS account, to enrich your event-driven platforms. You can deploy them individually in your architecture, as needed.

Here’s more information about each nested application in the Event Fork Pipelines suite.

Event Storage & Backup pipeline


Figure 2 – Event Fork Pipeline for Event Storage & Backup

The preceding diagram shows the Event Storage & Backup pipeline. You can subscribe this pipeline to your SNS topic to automatically back up the events flowing through your system. This pipeline is composed of the following resources:

  • An SQS queue that buffers the events delivered by the SNS topic
  • A Lambda function that automatically polls for these events in the queue and pushes them into an Amazon Kinesis Data Firehose delivery stream
  • An S3 bucket that durably backs up the events loaded by the stream

You can configure this pipeline to fine-tune the behavior of your delivery stream. For example, you can configure your pipeline so that the underlying delivery stream buffers, transforms, and compresses your events before loading them into the bucket. As events are loaded, you can use Amazon Athena to query the bucket using standard SQL queries. Also, you can configure the pipeline to either reuse an existing S3 bucket or create a new one for you.

Event Search & Analytics pipeline


Figure 3 – Event Fork Pipeline for Event Search & Analytics

The preceding diagram shows the Event Search & Analytics pipeline. You can subscribe this pipeline to your SNS topic to index in a search domain the events flowing through your system, and then run analytics on them. This pipeline is composed of the following resources:

  • An SQS queue that buffers the events delivered by the SNS topic
  • A Lambda function that polls events from the queue and pushes them into a Data Firehose delivery stream
  • An Amazon ES domain that indexes the events loaded by the delivery stream
  • An S3 bucket that stores the dead-letter events that couldn’t be indexed in the search domain

You can configure this pipeline to fine-tune your delivery stream in terms of event buffering, transformation and compression. You can also decide whether the pipeline should reuse an existing Amazon ES domain in your AWS account or create a new one for you. As events are indexed in the search domain, you can use Kibana to run analytics on your events and update visual dashboards in real time.

Event Replay pipeline


Figure 4 – Event Fork Pipeline for Event Replay

The preceding diagram shows the Event Replay pipeline. You can subscribe this pipeline to your SNS topic to record the events that have been processed by your system for up to 14 days. You can then reprocess them in case your platform is recovering from a failure or a disaster. This pipeline is composed of the following resources:

  • An SQS queue that buffers the events delivered by the SNS topic
  • A Lambda function that polls events from the queue and redrives them into your regular event processing pipeline, which is also subscribed to your topic

By default, the replay function is disabled, which means it isn’t redriving your events. If the events need to be reprocessed, your operators must enable the replay function.
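The redrive step can be sketched as follows. The function shape and names are illustrative (the real pipeline wires the flag and queue through the nested app's parameters), but `send_message(QueueUrl=…, MessageBody=…)` matches the actual SQS API:

```python
def replay_handler(records, sqs_client, queue_url, enabled=False):
    """Redrive recorded events into the regular processing queue.

    Disabled by default, mirroring the pipeline's behavior: operators
    flip `enabled` only when recovering from a failure or disaster.
    """
    if not enabled:
        return 0
    for record in records:
        sqs_client.send_message(QueueUrl=queue_url, MessageBody=record["body"])
    return len(records)

# Stub client, so the sketch runs without AWS; in production this would
# be boto3.client("sqs").
class StubSqs:
    def __init__(self):
        self.sent = []
    def send_message(self, QueueUrl, MessageBody):
        self.sent.append(MessageBody)

stub = StubSqs()
print(replay_handler([{"body": "order-1"}], stub, "https://queue.example", enabled=True))  # 1
```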

Applying event fork pipelines in a use case

This is how everything comes together. The following scenario describes an event-driven, serverless ecommerce application that uses the Event Fork Pipelines pattern. This example ecommerce application is available in the AWS Serverless Application Repository. You can deploy it to your AWS account using the Lambda console, test it, and look at its source code in GitHub.


Figure 5 – Example ecommerce application using Event Fork Pipelines

The ecommerce application takes orders from buyers through a RESTful API hosted by Amazon API Gateway and backed by a Lambda function named CheckoutFunction. This function publishes all orders received to an SNS topic named CheckoutEventsTopic, which in turn fans out the orders to four different pipelines. The first pipeline is the regular checkout-processing pipeline designed and implemented by you as the ecommerce application owner. This pipeline has the following resources:

  • An SQS queue named CheckoutQueue that buffers all orders received
  • A Lambda function named CheckoutFunction that polls the queue to process these orders
  • An Amazon DynamoDB table named CheckoutTable that securely saves all orders as they’re placed

The components of the system described thus far handle what you might think of as the core business logic. But in addition, you should address the set of elements necessary for making the system resilient, compliant, and searchable:

  • Backing up all orders securely. Compressed backups must be encrypted at rest, with sensitive payment details removed for security and compliance purposes.
  • Searching and running analytics on orders, if the amount is $100 or more. Analytics are needed for key ecommerce metrics, such as average ticket size, average shipping time, most popular products, and preferred payment options.
  • Replaying recent orders. If the fulfillment process is disrupted at any point, you should be able to replay orders from up to the past two weeks. This is a key requirement that guarantees the continuity of the ecommerce business.

Rather than implementing all the event processing logic yourself, you can choose to subscribe Event Fork Pipelines to your existing SNS topic CheckoutEventsTopic. The pipelines are configured as follows:

  • The Event Storage & Backup pipeline is configured to transform data as follows:
    • Remove credit card details
    • Buffer data for 60 seconds
    • Compress data using GZIP
    • Encrypt data using the default customer master key (CMK) for S3

This CMK is managed by AWS and powered by AWS Key Management Service (AWS KMS). For more information, see Choosing Amazon S3 for Your Destination, Data Transformation, and Configuration Settings in the Amazon Kinesis Data Firehose Developer Guide.

  • The Event Search & Analytics pipeline is configured with:
    • An index retry duration of 30 seconds
    • A bucket for storing orders that failed to be indexed in the search domain
    • A filter policy to restrict the set of orders that are indexed

For more information, see Choosing Amazon ES for Your Destination, in the Amazon Kinesis Data Firehose Developer Guide.

  • The Event Replay pipeline is configured with the SQS queue name that is part of the regular checkout processing pipeline. For more information, see Queue Name and URL in the Amazon SQS Developer Guide.

The filter policy, shown in JSON format, is set in the configuration for the Event Search & Analytics pipeline. This filter policy matches only incoming orders in which the total amount is $100 or more. For more information, see Message Filtering in the Amazon SNS Developer Guide.


{
    "amount": [
        { "numeric": [ ">=", 100 ] }
    ]
}
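For reference, the same policy could be attached to a subscription with boto3's SNS client. This is a minimal sketch with hypothetical ARNs; the Event Fork Pipelines apps configure the subscription and filter policy for you when deployed:

```python
import json

# Hypothetical ARNs for illustration; the deployed pipeline wires these up for you.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:CheckoutEventsTopic"
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:AnalyticsQueue"

# Deliver only orders whose total amount is $100 or more.
filter_policy = {"amount": [{"numeric": [">=", 100]}]}

def subscribe_with_filter(sns_client):
    """Subscribe the queue to the topic with the filter policy attached."""
    return sns_client.subscribe(
        TopicArn=TOPIC_ARN,
        Protocol="sqs",
        Endpoint=QUEUE_ARN,
        Attributes={"FilterPolicy": json.dumps(filter_policy)},
    )
```

Messages whose `amount` attribute does not match the policy are simply never delivered to that subscription, so the filtering costs you no Lambda invocations.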

By using the Event Fork Pipelines pattern, you avoid the development overhead associated with coding undifferentiated logic for handling events.

Event Fork Pipelines can be deployed directly from AWS Serverless Application Repository into your AWS account.

Deploying event fork pipelines

Event Fork Pipelines is available as a set of public apps in the AWS Serverless Application Repository (to find the apps, select the ‘Show apps that create custom IAM roles or resource policies’ check box under the search bar). It can be deployed and tested manually via the Lambda console. In a production scenario, we recommend embedding fork pipelines within the AWS SAM template of your overall application. The nested applications feature enables you to do this by adding an AWS::Serverless::Application resource to your AWS SAM template. The resource references the ApplicationId and SemanticVersion values of the application to nest.

For example, you can include the Event Storage & Backup pipeline as a nested application by adding the following YAML snippet to the Resources section of your AWS SAM template:


Backup:
  Type: AWS::Serverless::Application
  Properties:
    Location:
      ApplicationId: arn:aws:serverlessrepo:us-east-1:012345678901:applications/fork-event-storage-backup-pipeline
      SemanticVersion: 1.0.0
    Parameters:
      # SNS topic ARN whose messages should be backed up to the S3 bucket.
      TopicArn: !Ref MySNSTopic

When specifying parameter values, you can use AWS CloudFormation intrinsic functions to reference other resources in your template. In the preceding example, the TopicArn parameter is filled in by referencing an AWS::SNS::Topic called MySNSTopic, defined elsewhere in the AWS SAM template. For more information, see Intrinsic Function Reference in the AWS CloudFormation User Guide.

To copy the YAML required for nesting, in the Lambda console page for an AWS Serverless Application Repository application, choose Copy as SAM Resource.

Authoring new event fork pipelines

We invite you to fork the Event Fork Pipelines repository in GitHub and submit pull requests contributing new pipelines. In addition to event storage and backup, event search and analytics, and event replay, what other common event handling requirements have you seen?

We look forward to seeing what you’ll come up with for extending the Event Fork Pipelines suite.

Summary

Event Fork Pipelines is a serverless design pattern and a suite of open-source nested serverless applications, based on AWS SAM. You can deploy it directly from AWS Serverless Application Repository to enrich your event-driven system architecture. Event Fork Pipelines lets you store, back up, replay, search, and run analytics on the events flowing through your system. There’s no need to write code, manually stitch resources together, or set up infrastructure.

You can deploy Event Fork Pipelines in any AWS Region that supports the underlying AWS services used in the pipelines. There are no additional costs associated with Event Fork Pipelines itself, and you pay only for using the AWS resources inside each nested application.

Get started today by deploying the example ecommerce application or searching for Event Fork Pipelines in AWS Serverless Application Repository.

Celebrate with us this weekend!

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/celebrate-with-us-this-weekend/

The Raspberry Jam Big Birthday Weekend is almost here! In celebration of our seventh birthday, we’re coordinating with over 130 community‑led Raspberry Jams in 40 countries across six continents this weekend, 2-3 March 2019.

Raspberry Jams come in all shapes and sizes. They range from small pub gatherings fueled by local beer and amiable nerdy chatter to vast multi-room events with a varied programme of project displays, workshops, and talks.

To find your nearest Raspberry Jam, check out our interactive Jam map.

And if you can’t get to a Jam location this time, follow #PiParty on Twitter, where people around the world are already getting excited about their Big Birthday Weekend plans. Over the weekend you’ll see Raspberry Jams happening from the UK to the US, from Africa to – we hope – Antarctica, and everywhere in between.

Coolest Projects UK

The first of this year’s Coolest Projects events is also taking place this weekend in Manchester, UK. Coolest Projects is the world’s leading technology fair for young people, showcasing some of the very best creations by young makers across the country (and beyond), and it’s open for members of the public to attend.

Tickets are still available from the Coolest Projects website, and you can follow the action on #CoolestProjects on Twitter.

CBeebies’ Maddie Moate and the BBC’s Greg Foot will be taking over Raspberry Pi’s Instagram story on the day, so be sure to follow @RaspberryPiFoundation on Instagram.

The post Celebrate with us this weekend! appeared first on Raspberry Pi.

We’re hosting the UK’s first-ever Scratch Conference Europe

Post Syndicated from Helen Drury original https://www.raspberrypi.org/blog/announcing-scratch-conference-europe-2019/

We are excited to announce that we will host the first-ever Scratch Conference Europe in the UK this summer: from Friday 23 to Sunday 25 August at Churchill College, Cambridge!

A graphic highlighting the Scratch Conference Europe 2019 - taking place at Friday 23 to Sunday 25 August at Churchill College, Cambridge

Scratch Conference is a participatory event that gives hundreds of educators the chance to explore the creative ways in which people are programming and learning with Scratch. In even-numbered years, the conference is held at the MIT Media Lab, the birthplace of Scratch; in odd-numbered years, it takes place in other places around the globe.

Another graphic highlighting the Scratch Conference Europe 2019

Since 2019 is also the launch year of Scratch 3, we think it’s a fantastic opportunity for us to bring Scratch Conference Europe to the UK for the first time.

What you can look forward to

  • Hands-on, easy-to-follow workshops across a range of topics, including the new Scratch 3
  • Interactive projects to play with
  • Thought-provoking talks and keynotes
  • Plenty of informal chats, meetups, and opportunities for you to connect with other educators

Join us to become part of a growing community, discover how the Raspberry Pi Foundation can support you further, and develop your skills with Scratch as a creative tool for helping your students learn to code.

Contribute to Scratch Conference Europe

Would you like to contribute your own content at the event? We are looking for members of the community to share or host:

  • Project demos
  • Posters
  • Workshops
  • Discussion sessions
  • Presentations
  • Ignite talks

We warmly welcome young people under 18 as content contributors; they must be supported by an adult. All content contributors will be able to attend the whole event for free.

An overview of two people taking electronics pieces out of a box in order to try their hand at digital making using a Raspberry Pi and Scratch.

Find more details and apply to participate in this short online form.

Attend the conference

Tickets for Scratch Conference Europe will go on sale in April.

For updates, subscribe to Raspberry Pi LEARN, our monthly newsletter for educators, and keep an eye on @Raspberry_Pi on Twitter!

An update on Raspberry Fields

Since we’re hosting Scratch Conference Europe this year, our digital making festival Raspberry Fields will be back in 2020, even bigger and more packed with interactive family fun!

A young girl tries out a digital project at the Raspberry Pi event, Raspberry Fields 2018

Scratch is a project of the Lifelong Kindergarten group at the MIT Media Lab. It is available for free at scratch.mit.edu.

The post We’re hosting the UK’s first-ever Scratch Conference Europe appeared first on Raspberry Pi.

Amazon SageMaker Neo – Train Your Machine Learning Models Once, Run Them Anywhere

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/amazon-sagemaker-neo-train-your-machine-learning-models-once-run-them-anywhere/

Machine learning (ML) is split into two distinct phases: training and inference. Training deals with building the model, i.e. running an ML algorithm on a dataset in order to identify meaningful patterns. This often requires large amounts of storage and computing power, making the cloud a natural place to train ML jobs with services such as Amazon SageMaker and the AWS Deep Learning AMIs.

Inference deals with using the model, i.e. predicting results for data samples that the model has never seen. Here, the requirements are different: developers are typically concerned with optimizing latency (how long does a single prediction take?) and throughput (how many predictions can I run in parallel?). Of course, the hardware architecture of your prediction environment has a very significant impact on such metrics, especially if you’re dealing with resource-constrained devices: as a Raspberry Pi enthusiast, I often wish the little fellow packed a little more punch to speed up my inference code.

Tuning a model for a specific hardware architecture is possible, but the lack of tooling makes this an error-prone and time-consuming process. Minor changes to the ML framework or the model itself usually require the user to start all over again. Unfortunately, this forces most ML developers to deploy the same model everywhere regardless of the underlying hardware, thus missing out on significant performance gains.

Well, no more. Today, I’m very happy to announce Amazon SageMaker Neo, a new capability of Amazon SageMaker that enables machine learning models to train once and run anywhere in the cloud and at the edge with optimal performance.

Introducing Amazon SageMaker Neo

Without any manual intervention, Amazon SageMaker Neo optimizes models deployed on Amazon EC2 instances, Amazon SageMaker endpoints and devices managed by AWS Greengrass.

Here are the supported configurations:

  • Frameworks and algorithms: TensorFlow, Apache MXNet, PyTorch, ONNX, and XGBoost.
  • Hardware architectures: ARM, Intel, and NVIDIA starting today, with support for Cadence, Qualcomm, and Xilinx hardware coming soon. In addition, Amazon SageMaker Neo is released as open source code under the Apache Software License, enabling hardware vendors to customize it for their processors and devices.

The Amazon SageMaker Neo compiler converts models into an efficient common format, which is executed on the device by a compact runtime that uses less than one-hundredth of the resources that a generic framework would traditionally consume. The Amazon SageMaker Neo runtime is optimized for the underlying hardware, using specific instruction sets that help speed up ML inference.

This has three main benefits:

  • Converted models perform at up to twice the speed, with no loss of accuracy.
  • Sophisticated models can now run on virtually any resource-limited device, unlocking innovative use cases like autonomous vehicles, automated video security, and anomaly detection in manufacturing.
  • Developers can run models on the target hardware without dependencies on the framework.

Under the hood

Most machine learning frameworks represent a model as a computational graph: a vertex represents an operation on data arrays (tensors) and an edge represents data dependencies between operations. The Amazon SageMaker Neo compiler exploits patterns in the computational graph to apply high-level optimizations including operator fusion, which fuses multiple small operations together; constant-folding, which statically pre-computes portions of the graph to save execution costs; a static memory planning pass, which pre-allocates memory to hold each intermediate tensor; and data layout transformations, which transform internal data layouts into hardware-friendly forms. The compiler then produces efficient code for each operator.
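To make one of these optimizations concrete, here is a toy constant-folding pass over a miniature expression graph. This is an illustration of the concept only, not SageMaker Neo's actual compiler:

```python
# Toy constant-folding pass over a tiny expression graph.
# Interior nodes are ('op', left, right) tuples; leaves are constants or variable names.
def fold(node):
    """Recursively replace all-constant subtrees with their computed value."""
    if isinstance(node, tuple):
        op, left, right = node
        left, right = fold(left), fold(right)
        if isinstance(left, (int, float)) and isinstance(right, (int, float)):
            return {'add': left + right, 'mul': left * right}[op]
        return (op, left, right)
    return node  # leaf: a constant or a variable name

graph = ('mul', ('add', 2, 3), 'x')  # represents (2 + 3) * x
print(fold(graph))                   # ('mul', 5, 'x') -- the addition was folded away
```

A real compiler performs this on tensor operations rather than scalars, so a folded subtree can eliminate entire precomputable layers from the graph before the model ever runs on the device.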

Once a model has been compiled, it can be run by the Amazon SageMaker Neo runtime. This runtime takes about 1MB of disk space, compared to the 500MB-1GB required by popular deep learning libraries. An application invokes a model by first loading the runtime, which then loads the model definition, model parameters, and precompiled operations.

I can’t wait to try this on my Raspberry Pi. Let’s get to work.

Downloading a pre-trained model

Plenty of pre-trained models are available in the Apache MXNet, Gluon CV or TensorFlow model zoos: here, I’m using a 50-layer model based on the ResNet architecture, pre-trained with Apache MXNet on the ImageNet dataset.

First, I’m downloading the 227MB model as well as the JSON file defining its different layers. This file is particularly important: it tells me that the input symbol is called ‘data’ and that its shape is [1, 3, 224, 224], i.e. 1 image, 3 channels (red, green and blue), 224×224 pixels. I’ll need to make sure that images passed to the model have this exact shape. The output shape is [1, 1000], i.e. a vector containing the probability for each one of the 1,000 classes present in the ImageNet dataset.
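To make the expected layout concrete, here is a NumPy sketch of arranging a decoded 224×224 RGB image into that shape; the decode and resize steps (e.g. with Pillow) are assumed to have happened already:

```python
import numpy as np

hwc = np.zeros((224, 224, 3), dtype=np.uint8)    # stand-in for a decoded 224x224 RGB image
chw = hwc.transpose(2, 0, 1).astype(np.float32)  # HWC -> CHW (channels first)
batch = np.expand_dims(chw, axis=0)              # add the batch dimension
print(batch.shape)                               # (1, 3, 224, 224)
```

Feeding the model an array with any other shape will fail, so this reshaping belongs at the start of any inference script.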

To define a performance baseline, I use this model and a vanilla unoptimized version of Apache MXNet 1.2 to predict a few images: on average, inference takes about 6.5 seconds and requires about 306 MB of RAM.

That’s pretty slow: let’s compile the model and see how fast it gets.

Compiling the model for the Raspberry Pi

First, let’s store both model files in a compressed TAR archive and upload it to an Amazon S3 bucket.

$ tar cvfz model.tar.gz resnet50_v1-symbol.json resnet50_v1-0000.params
a resnet50_v1-symbol.json
a resnet50_v1-0000.params
$ aws s3 cp model.tar.gz s3://jsimon-neo/
upload: ./model.tar.gz to s3://jsimon-neo/model.tar.gz

Then, I just have to write a simple configuration file for my compilation job. If you’re curious about other frameworks and hardware targets, ‘aws sagemaker create-compilation-job help’ will give you the exact syntax to use.

{
    "CompilationJobName": "resnet50-mxnet-raspberrypi",
    "RoleArn": $SAGEMAKER_ROLE_ARN,
    "InputConfig": {
        "S3Uri": "s3://jsimon-neo/model.tar.gz",
        "DataInputConfig": "{\"data\": [1, 3, 224, 224]}",
        "Framework": "MXNET"
    },
    "OutputConfig": {
        "S3OutputLocation": "s3://jsimon-neo/",
        "TargetDevice": "rasp3b"
    },
    "StoppingCondition": {
        "MaxRuntimeInSeconds": 300
    }
}

Launching the compilation process takes a single command.

$ aws sagemaker create-compilation-job --cli-input-json file://job.json

Compilation is complete in seconds. Let’s figure out the name of the compilation artifact, fetch it from Amazon S3, and extract it locally.

$ aws sagemaker describe-compilation-job \
--compilation-job-name resnet50-mxnet-raspberrypi \
--query "ModelArtifacts"
{
"S3ModelArtifacts": "s3://jsimon-neo/model-rasp3b.tar.gz"
}
$ aws s3 cp s3://jsimon-neo/model-rasp3b.tar.gz .
$ tar xvfz model-rasp3b.tar.gz
x compiled.params
x compiled_model.json
x compiled.so

As you can see, the artifact contains:

  • The original model and symbol files.
  • A shared object file storing compiled, hardware-optimized, operators used by the model.

For convenience, let’s rename them to ‘model.params’, ‘model.json’ and ‘model.so’, and then copy them to the Raspberry Pi in a ‘resnet50’ directory.

$ mkdir resnet50
$ mv compiled.params resnet50/model.params
$ mv compiled_model.json resnet50/model.json
$ mv compiled.so resnet50/model.so
$ scp -r resnet50 [email protected]:~

Setting up the inference environment on the Raspberry Pi

Before I can predict images with the model, I need to install the appropriate runtime on my Raspberry Pi. Pre-built packages are available: I just have to download the one for ‘armv7l’ architectures and install it on my Pi with the provided script. Please note that I don’t need to install any additional deep learning framework (Apache MXNet in this case), saving up to 1GB of persistent storage.

$ scp -r dlr-1.0-py2.py3-armv7l [email protected]:~
<ssh to the Pi>
$ cd dlr-1.0-py2.py3-armv7l
$ sh ./install-py3.sh

We’re all set. Time to predict images!

Using the Amazon SageMaker Neo runtime

On the Pi, the runtime is available as a Python package named ‘dlr’ (deep learning runtime). Using it to predict images is what you would expect:

  • Load the model, defining its input and output symbols.
  • Load an image.
  • Predict!

Here’s the corresponding Python code.

import os
import numpy as np
from dlr import DLRModel

# Load the compiled model
model_path = 'resnet50'                  # Directory holding model.json, model.params and model.so
input_shape = {'data': [1, 3, 224, 224]} # A single RGB 224x224 image
output_shape = [1, 1000]                 # The probability for each one of the 1,000 classes
device = 'cpu'                           # Go, Raspberry Pi, go!
model = DLRModel(model_path, input_shape, output_shape, device)

# Load names for ImageNet classes
synset_path = os.path.join(model_path, 'synset.txt')
with open(synset_path, 'r') as f:
    synset = eval(f.read())

# Load an image stored as a numpy array
image = np.load('dog.npy').astype(np.float32)
print(image.shape)
input_data = {'data': image}

# Predict 
out = model.run(input_data)
top1 = np.argmax(out[0])
prob = np.max(out)
print("Class: %s, probability: %f" % (synset[top1], prob))

Let’s give it a try on this image. Aren’t chihuahuas and Raspberry Pis made for one another?



(1, 3, 224, 224)
Class: Chihuahua, probability: 0.901803

The prediction is correct, but what about speed and memory consumption? Well, this prediction takes about 0.85 seconds and requires about 260MB of RAM: with Amazon SageMaker Neo, it’s now more than 7 times faster and 15% more RAM-efficient than with a vanilla model.

This impressive performance gain didn’t require any complex and time-consuming work: all we had to do was to compile the model. Of course, your mileage will vary depending on models and hardware architectures, but you should see significant improvements across the board, including on Amazon EC2 instances such as the C5 or P3 families.

Now available

I hope this post was informative. Compiling models with Amazon SageMaker Neo is free of charge; you will only pay for the underlying resources using the model (Amazon EC2 instances, Amazon SageMaker instances, and devices managed by AWS Greengrass).

The service is generally available today in US-East (N. Virginia), US-West (Oregon) and Europe (Ireland). Please start exploring and let us know what you think. We can’t wait to see what you will build!

Julien;

NEW – Machine Learning algorithms and model packages now available in AWS Marketplace

Post Syndicated from Shaun Ray original https://aws.amazon.com/blogs/aws/new-machine-learning-algorithms-and-model-packages-now-available-in-aws-marketplace/

At AWS, our mission is to put machine learning in the hands of every developer. That’s why in 2017 we launched Amazon SageMaker. Since then it has become one of the fastest growing services in AWS history, used by thousands of customers globally. Customers using Amazon SageMaker can use the optimized algorithms offered in Amazon SageMaker, run fully managed MXNet, TensorFlow, PyTorch, and Chainer algorithms, or bring their own algorithms and models. When it comes to building their own machine learning model, many customers spend significant time developing algorithms and models that are solutions to problems that have already been solved.


Introducing Machine Learning in AWS Marketplace

I am pleased to announce the new Machine Learning category of products offered by AWS Marketplace, which includes more than 150 algorithms and model packages, with more coming every day. AWS Marketplace offers a tailored selection for vertical industries like retail (35 products), media (19 products), manufacturing (17 products), HCLS (15 products), and more. Customers can find solutions to critical use cases like breast cancer prediction, lymphoma classifications, hospital readmissions, loan risk prediction, vehicle recognition, retail localizer, botnet attack detection, automotive telematics, motion detection, demand forecasting, and speech recognition.

Customers can search and browse a list of algorithms and model packages in AWS Marketplace. Once customers have subscribed to a machine learning solution, they can deploy it directly from the SageMaker console, a Jupyter notebook, the SageMaker SDK, or the AWS CLI. Amazon SageMaker protects buyers’ data by employing security measures such as static scans, network isolation, and runtime monitoring.

The intellectual property of sellers on the AWS Marketplace is protected by encrypting the algorithms and model package artifacts in transit and at rest, using secure (SSL) connections for communications, and ensuring role based access for deployment of artifacts. AWS provides a secure way for the sellers to monetize their work with a frictionless self-service process to publish their algorithms and model packages.


Machine Learning category in Action

Having tried to build my own models in the past, I sure am excited about this feature. After browsing through the available algorithms and model packages from AWS Marketplace, I’ve decided to try the Deep Vision vehicle recognition model, published by Deep Vision AI. This model will allow us to identify the make, model and type of car from a set of uploaded images. You could use this model for insurance claims, online car sales, and vehicle identification in your business.

I continue to subscribe and accept the default options of recommended instance type and region. I read and accept the subscription contract, and I am ready to get started with our model.

My subscription is listed in the Amazon SageMaker console and is ready to use. Deploying the model with Amazon SageMaker is the same as any other model package, I complete the steps in this guide to create and deploy our endpoint.

With our endpoint deployed I can start asking the model questions. In this case I will be using a single image of a car; the model is trained to detect the model, maker, and year information from any angle. First, I will start off with a Volvo XC70 and see what results I get:

Results:

{'result': [{'mmy': {'make': 'Volvo', 'score': 0.97, 'model': 'Xc70', 'year': '2016-2016'}, 'bbox': {'top': 146, 'left': 50, 'right': 1596, 'bottom': 813}, 'View': 'Front Left View'}]}
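Since the endpoint returns plain JSON, extracting the top detection takes only the standard library. A small sketch using the Volvo response above; the actual endpoint invocation (boto3's sagemaker-runtime invoke_endpoint) is omitted here:

```python
import json

def top_vehicle(response_body):
    """Return (make, model, year, score) for the highest-scoring detection."""
    detections = json.loads(response_body)["result"]
    best = max(detections, key=lambda d: d["mmy"]["score"])
    mmy = best["mmy"]
    return mmy["make"], mmy["model"], mmy["year"], mmy["score"]

# The JSON body returned for the Volvo image above:
body = ('{"result": [{"mmy": {"make": "Volvo", "score": 0.97, "model": "Xc70", '
        '"year": "2016-2016"}, "bbox": {"top": 146, "left": 50, "right": 1596, '
        '"bottom": 813}, "View": "Front Left View"}]}')
print(top_vehicle(body))  # ('Volvo', 'Xc70', '2016-2016', 0.97)
```

The `bbox` field gives pixel coordinates of the detected vehicle, which you could use to crop or annotate the source image.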

My model has detected the make, model and year correctly for the supplied image. I was recently on holiday in the UK and stayed with a relative who had a McLaren 570S supercar. The thought that crossed my mind as the gull-wing doors opened for the first time and I was about to be sitting in the car was how much the insurance excess would cost if things went wrong! Quite apt for our use case today.

Results:

{'result': [{'mmy': {'make': 'Mclaren', 'score': 0.95, 'model': '570S', 'year': '2016-2017'}, 'bbox': {'top': 195, 'left': 126, 'right': 757, 'bottom': 494}, 'View': 'Front Right View'}]}

The score (0.95) measures how confident the model is that the result is right. The range of the score is 0.0 to 1.0. The prediction is extremely accurate for the McLaren, with the make, model, and year all correct. Impressive results for a relatively rare type of car on the road. I test a few more cars given to me by the launch team, who are excitedly looking over my shoulder, and now it’s time to wrap up.

Within ten minutes, I have been able to choose a model package, deploy an endpoint, and accurately detect the make, model, and year of vehicles, with no data scientists, no expensive GPUs for training, and no code to write. You can be sure I will be subscribing to a whole lot more of these models from AWS Marketplace throughout re:Invent week and trying to solve other use cases in less than 15 minutes!

Access for the machine learning category in AWS Marketplace can be achieved through the Amazon SageMaker console, or directly through AWS Marketplace itself. Once an algorithm or model has been successfully subscribed to, it is accessible via the console, SDK, and AWS CLI. Algorithms and models from the AWS Marketplace can be deployed just like any other model or algorithm, by selecting the AWS Marketplace option as your package source. Once you have chosen an algorithm or model, you can deploy it to Amazon SageMaker by following this guide.


Availability & Pricing

Customers pay a subscription fee for the use of an algorithm or model package and the AWS resource fee. AWS Marketplace provides a consolidated monthly bill for all purchased subscriptions.

At launch, AWS Marketplace for Machine Learning includes algorithms and models from Deep Vision AI Inc, Knowledgent, RocketML, Sensifai, Cloudwick Technologies, Persistent Systems, Modjoul, H2Oai Inc, Figure Eight [Crowdflower], Intel Corporation, AWS Gluon Model Zoos, and more with new sellers being added regularly. If you are interested in selling machine learning algorithms and model packages, please reach out to [email protected]


NEW – AWS Marketplace makes it easier to govern software procurement with Private Marketplace

Post Syndicated from Shaun Ray original https://aws.amazon.com/blogs/aws/new-aws-marketplace-makes-it-easier-to-govern-software-procurement-with-private-marketplace/

Over six years ago, we launched AWS Marketplace with the ambitious goal of providing users of the cloud with the software applications and infrastructure they needed to run their business. Today, more than 200,000 active AWS customers are using software from AWS Marketplace in categories such as security, data and analytics, log analysis, and machine learning. Those customers use over 650 million hours a month of Amazon EC2 for products in AWS Marketplace and have more than 950,000 active software subscriptions. AWS Marketplace offers 35 categories and more than 4,500 software listings from more than 1,400 Independent Software Vendors (ISVs) to help you on your cloud journey, no matter what stage of adoption you are at.

Customers have told us that they love the flexibility and myriad of options that AWS Marketplace provides. Today, I am excited to announce we are offering even more flexibility for AWS Marketplace with the launch of Private Marketplace from AWS Marketplace.

Private Marketplace is a new feature that enables you to create a custom digital catalog of pre-approved products from AWS Marketplace. As an administrator, you can select products that meet your procurement policies and make them available for your users. You can also further customize Private Marketplace with company branding, such as logo, messaging, and color scheme. All controls for Private Marketplace apply across your entire AWS Organizations entity, and you can define fine-grained controls using Identity and Access Management for roles such as: administrator, subscription manager and end user.

Once you enable Private Marketplace, users within your AWS Organization are redirected to Private Marketplace when they sign in to AWS Marketplace. Now, your users can quickly find, buy, and deploy products knowing they are pre-approved.


Private Marketplace in Action

To get started we need to be using a master account; if you have a single account, it will automatically be classified as a master account. If you are a member of an AWS Organizations managed account, the master account will need to enable Private Marketplace access. Once done, you can add subscription managers and administrators through AWS Identity and Access Management (IAM) policies.


1- My account meets the requirement of being a master, I can proceed to create a Private Marketplace. I click “Create Private Marketplace” and am redirected to the admin page where I can whitelist products from AWS Marketplace. To grant other users access to approve products for listing, I can use AWS Organizations policies to grant the AWSMarketplaceManageSubscriptions role.

2- I select some popular software and operating systems from the list and add them to Private Marketplace. Once selected we can now see our whitelisted products.

3- One thing that I appreciate, and I am sure that the administrators of their organization’s Private Marketplace will, is some customization to bring the style and branding inline with the company. In this case, we can choose the name, logo, color, and description of our Private Marketplace.

4- After a couple of minutes we have our freshly minted Private Marketplace ready to go, there is an explicit step that we need to complete to push our Private Marketplace live. This allows us to create and edit without enabling access to users.


5- For the next part, we will switch to a member account and see what our Private Marketplace looks like.

6- We can see the five pieces of software I whitelisted and our customizations to our Private Marketplace. We can also see that these products are “Approved for Procurement” and can be subscribed to by our end users. Other products are still discoverable by our users, but cannot be subscribed to until an administrator whitelists the product.


Conclusion

Users in a Private Marketplace can launch products knowing that all products in their Private Marketplace comply with their company’s procurement policies. When users search for products in Private Marketplace, they can see which products are labeled as “Approved for Procurement” and quickly filter between their company’s catalog and the full catalog of software products in AWS Marketplace.


Pricing and Availability

Subscription costs remain the same as all products in AWS Marketplace once consumed. Private Marketplace from AWS Marketplace is available in all commercial regions today.


New – Amazon Route 53 Resolver for Hybrid Clouds

Post Syndicated from Shaun Ray original https://aws.amazon.com/blogs/aws/new-amazon-route-53-resolver-for-hybrid-clouds/

I distinctly remember the excitement I felt when I created my first Virtual Private Cloud (VPC) as a customer. I had just spent months building a similar environment on-premises and had been frustrated at the complicated setup. One of the immediate benefits that the VPC provided was a magical address at 10.0.0.2 where our EC2 instances sent Domain Name Service (DNS) queries. It was reliable, scaled with our workloads, and resolved both public and private domains without any input from us.

 

Like a lot of customers, we connected our on-premises environment with our AWS one via Direct Connect (DX), leading to cases where DNS names required resolution across the connection. Back then we needed to build DNS servers and provide forwarders to achieve this. That’s why today I am very excited to announce Amazon Route 53 Resolver for Hybrid Clouds. It’s a set of features that enable bi-directional querying between on-premises and AWS over private connections.

 

Before I dive into the new functionality, I would like to provide a shout out to our old faithful .2 resolver. As part of our announcement today I would like to let you know that we have officially named the .2 DNS resolver – Route 53 Resolver, in honor of the trillions of queries the service has resolved on behalf of our customers. Route 53 Resolver continues to provide DNS query capability for your VPC, free of charge. To support DNS queries across hybrid environments, we are providing two new capabilities: Route 53 Resolver Endpoints for inbound queries and Conditional Forwarding Rules for outbound queries.

 

Route 53 Resolver Endpoints

Inbound query capability is provided by Route 53 Resolver Endpoints, allowing DNS queries that originate on-premises to resolve AWS hosted domains. Connectivity needs to be established between your on-premises DNS infrastructure and AWS through a Direct Connect (DX) or a Virtual Private Network (VPN). Endpoints are configured through IP address assignment in each subnet for which you would like to provide a resolver.

 

Conditional Forwarding Rules

Outbound DNS queries are enabled through the use of Conditional Forwarding Rules. Domains hosted within your on-premises DNS infrastructure can be configured as forwarding rules in Route 53 Resolver. Rules trigger when a query is made to one of those domains, forwarding the DNS request to the DNS servers you configured along with the rules. Like inbound queries, this requires a private connection over DX or VPN.
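Conceptually, rule selection behaves like longest-suffix matching on the query name: the most specific configured domain wins, and queries matching no rule resolve publicly as usual. The sketch below is a simplified model of that decision, not the service’s actual implementation:

```python
def match_forwarding_rule(query_name, rules):
    """Pick the most specific rule whose domain matches the query.
    `rules` maps a domain suffix to a list of on-premises DNS server IPs.
    Returns the target IPs, or None if the query should resolve publicly."""
    query = query_name.rstrip(".").lower()
    best = None
    for domain, targets in rules.items():
        d = domain.rstrip(".").lower()
        if query == d or query.endswith("." + d):
            if best is None or len(d) > len(best[0]):
                best = (d, targets)  # keep the longest (most specific) match
    return best[1] if best else None

rules = {
    "corp.example.com": ["10.1.0.10", "10.1.0.11"],
    "example.com": ["10.9.0.53"],
}
print(match_forwarding_rule("db.corp.example.com", rules))  # ['10.1.0.10', '10.1.0.11']
print(match_forwarding_rule("www.amazon.com", rules))       # None
```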

 

When combined, these two capabilities allow for recursive DNS lookup for your hybrid workloads. This saves you from the overhead of managing, operating, and maintaining additional DNS infrastructure while operating both environments.

 

Route 53 Resolver in Action

1. Route 53 Resolver for Hybrid Clouds is region-specific, so our first step is to choose the region in which we would like to configure our hybrid workloads. Once we have selected a region, we choose the query direction: outbound, inbound, or both.

 

2. We have selected both inbound and outbound traffic for this workload. First up is our inbound query configuration. We enter a name and choose a VPC. We assign one or more subnets from within the VPC (in this case we choose two for availability). From these subnets we can assign specific IP addresses to use as our endpoints, or let Route 53 Resolver assign them automatically.

3. We create a rule for our on-premises domain so that workloads inside the VPC can route DNS queries to our DNS infrastructure. We enter one or more IP addresses for our on-premises DNS servers and create our rule.

4. Everything is created and our VPC is associated with our inbound and outbound rules and can start routing traffic. Conditional Forwarding Rules can be shared across multiple accounts using AWS Resource Access Manager.
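For readers who prefer the API to the console, the same setup can be expressed through the `route53resolver` API (for example via boto3’s `create_resolver_endpoint`, `create_resolver_rule`, and `associate_resolver_rule` calls). The payloads below are a sketch only: all IDs, subnets, security groups, and IP addresses are placeholders, so check the API reference before adapting them.

```python
# Sketch of the API calls behind the console walkthrough above, shown as
# request payloads. With real values you would pass each dict to a boto3
# route53resolver client, e.g. client.create_resolver_endpoint(**inbound_endpoint).

inbound_endpoint = {
    "CreatorRequestId": "inbound-2018-11-19",
    "Name": "hybrid-inbound",
    "SecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder
    "Direction": "INBOUND",
    # Two subnets in different AZs for availability (step 2).
    "IpAddresses": [
        {"SubnetId": "subnet-aaaa1111"},
        {"SubnetId": "subnet-bbbb2222"},
    ],
}

forwarding_rule = {
    "CreatorRequestId": "onprem-rule-2018-11-19",
    "Name": "forward-corp-domain",
    "RuleType": "FORWARD",
    "DomainName": "corp.example.com",
    # On-premises DNS servers that receive the forwarded queries (step 3).
    "TargetIps": [{"Ip": "10.1.0.10", "Port": 53}],
    "ResolverEndpointId": "rslvr-out-0123456789abcdef0",  # placeholder
}

rule_association = {
    "ResolverRuleId": "rslvr-rr-0123456789abcdef0",  # placeholder
    "VPCId": "vpc-0123456789abcdef0",  # placeholder
    "Name": "corp-domain-assoc",
}
```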

Availability and Pricing

Route 53 Resolver remains free for DNS queries served within your VPC. Resolver Endpoints use Elastic Network Interfaces (ENIs) costing $0.125 per hour. DNS queries that are resolved by a Conditional Forwarding Rule or a Resolver Endpoint cost $0.40 per million queries up to the first billion and $0.20 per million after that. Route 53 Resolver for Hybrid Clouds is available today in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore), with other commercial regions to follow.
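To see how that tiered pricing works out, here is a back-of-the-envelope estimator using the prices quoted above. It is a sketch only; real bills depend on actual ENI hours, query volume, and region.

```python
def monthly_resolver_cost(endpoint_enis, queries, hours=730):
    """Rough monthly estimate in USD from the prices quoted above.
    endpoint_enis: total ENIs across your Resolver Endpoints.
    queries: DNS queries handled by endpoints/forwarding rules that month."""
    eni_cost = endpoint_enis * 0.125 * hours
    first_tier = min(queries, 1_000_000_000)       # $0.40 per million
    remainder = max(queries - 1_000_000_000, 0)    # $0.20 per million after that
    query_cost = first_tier / 1_000_000 * 0.40 + remainder / 1_000_000 * 0.20
    return eni_cost + query_cost

# Example: 2 ENIs and 1.5 billion queries in a 730-hour month:
# 2 * $0.125 * 730 = $182.50, plus $400 + $100 in queries = $682.50.
print(monthly_resolver_cost(2, 1_500_000_000))  # 682.5
```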

 

-Shaun

AWS Quest 2: Reaching Las Vegas

Post Syndicated from Greg Bilsland original https://aws.amazon.com/blogs/aws/aws-quest-2-reaching-las-vegas/

Hey AWS Questers and puzzlehunters! We’ve reached the last day of AWS Quest: The Road to re:Invent! Ozz has made it from Seattle to Las Vegas—after taking the long way via Sydney, Tokyo, Beijing, Seoul, Singapore, Mumbai, Stockholm, Cape Town, Paris, London, São Paulo, New York City, Toronto, and Mexico City. Now in Vegas, Ozz plans to meet up with a new robotic friend for re:Invent 2018! This is a very special guest for re:Invent, and you might even have a chance to meet that friend at the conference if you’re attending. But first, you need to wrap up this hunt by finding a final answer. This answer is a little different from the previous ones: It’s an instruction to Ozz on how to find this new friend.

To uncover the final solution, you’ll need the answers to the puzzles so far. Here’s what we’ve done to date: Ozz started this journey in a coffee shop in Seattle. Ozz then met a bouncy animal friend in Sydney, sampled sushi in Tokyo, and got control of the nozzles on the Banpo Bridge in Seoul. The little robot then found many terra cotta warriors in Beijing, met a wordy merlion in Singapore, and tasted the spices of Mumbai.

After a quick shopping trip in Stockholm, Ozz investigated the music of Cape Town, did a whole lot more shopping in Paris, and received a letter from a clockwork friend in London. Then, it was off across the Atlantic to São Paulo where Ozz engaged in some healthy capoeira. Our robot hero went to New York City and toured the skyscrapers, had a puck-shaped hockey treat in Toronto, and rode the roller coasters at Chapultepec Park in Mexico City. Along the way, with your help, we decoded the puzzles and got 15 different postcards of Ozz with special souvenirs from each city!

After reaching Las Vegas, Ozz has passed on this message to this new robotic friend:

“Boop boop boop beeeep beeeep. Beeeep beeeep boop boop boop. Beeeep boop boop boop boop. Beeeep boop boop boop boop. Boop boop boop beeeep beeeep. Beeeep beeeep beeeep beeeep boop. Beeeep beeeep beeeep boop boop. Boop boop boop beeeep beeeep. Boop boop boop boop boop. Boop boop boop beeeep beeeep. Boop beeeep beeeep beeeep beeeep boop boop beeeep beeeep beeeep. Beeeep beeeep boop boop boop. Boop boop boop boop boop. Beeeep beeeep boop boop boop. Boop beeeep beeeep beeeep beeeep!”

Well, that didn’t make much sense. But it’s likely just another puzzling example of our little robot’s sense of humor. Join the AWS Slack community in solving the puzzle and then type the solution into the submission page. If correct, you’ll see the final postcard from Ozz.

If you’ve been playing along and managed to solve the final puzzle, be sure to tweet at me and Jeff if you’ll be at re:Invent. We have Ozz pins to give out as well as a few other treats. You can also get an Ozz pin by visiting the Swag Booth at the Venetian and letting them know you’re an AWS blog reader.

Thanks for playing AWS Quest! For more puzzling fun, visit the Camp re:Invent Trivia Challenge with Jeff Barr at 7 PM on November 28th in the Venetian Theatre.

New Podcast: Preview the security track at re:Invent, learn what’s new and maximize your time

Post Syndicated from Katie Doptis original https://aws.amazon.com/blogs/security/previewing-the-security-track-at-reinvent-learn-whats-new-and-maximize-your-time/

There are about 60 security-focused sessions and talks at re:Invent this year. That’s in addition to more than 2,000 other sessions, activities, chalk talks, and demos planned throughout the week. We want to help you get the most out of the event and maximize your time. That’s why we’re previewing the security track and highlighting what’s new in the latest AWS Security & Compliance podcast.

Staffers developing security track content offer their advice for navigating the learning conference that is expected to draw 50,000 people from around the world. Listen to the podcast and learn about the newest hands-on session, which was designed to give you deep technical insight within a small-group setting. Plus, find out about the event change that is meant to make it easier to attend more of the talks that interest you.

Announcing Coolest Projects 2019

Post Syndicated from Philip Colligan original https://www.raspberrypi.org/blog/announcing-coolest-projects-2019/

Coolest Projects is the world’s leading technology fair for young people. It’s the science fair for the digital age, where thousands of young people showcase amazing projects that they’ve built using digital technologies. If you want to meet the innovators of the future, this is the place to be, so today we’re really excited to announce three Coolest Projects events in 2019.

Will you be attending Coolest Projects 2019?

Dates are now live for Coolest Projects 2019. Will you be joining us in the UK, Republic of Ireland, or North America?

I’ll never forget my first Coolest Projects

My first experience was in Dublin in 2016. I had been told Coolest Projects was impressive, but I was blown away by the creativity, innovation, and sheer effort that everyone had put in. Every bit as impressive as the technology was the sense of community, particularly among the young people. Girls and boys, with different backgrounds and levels of skill, travelled from all over the world to show off what they’d made and to be inspired by each other.

Igniting imaginations

Coolest Projects began in 2012, the work of CoderDojo volunteers Noel King and Ben Chapman. The first event was held in Dublin, and this city remains the location of the annual Coolest Projects International event. Since then, it has sparked off events all over the world, organised by the community and engaging thousands more young people.

This year, the baton passed to the Raspberry Pi Foundation. We’ve just completed our first season managing the Coolest Projects events and brand, including the first-ever UK event, which took place in April, and a US event that we held at Discovery Cube in Orange County on 23 September. We’ve had a lot of fun!

We’ve seen revolutionary ideas, including a robot guide dog for blind people and a bot detector that could disrupt the games industry. We’ve seen kids’ grit and determination in overcoming heinous obstacles such as their projects breaking in transit and having to rebuild everything from scratch on the morning of the event.

We’ve also seen hundreds of young people who are levelling up, being inspired to learn more, and bringing more ambitious and challenging projects to every new event.

Coolest Projects 2019

We want to expand Coolest Projects and provide a space for even more young people to showcase their digital makes. Today we’re announcing the dates for three Coolest Projects events that are taking place in 2019:

  • Coolest Projects UK, Saturday 2 March, The Sharp Project, Manchester
  • Coolest Projects USA, Saturday 23 March, Discovery Cube Orange County, California
  • Coolest Projects International, Sunday 5 May, RDS, Dublin, Ireland

These are the events that we’ll be running directly, and there will also be community-led events happening in Milan, the Netherlands, Belgium, and Bulgaria.

Project registration for all three events we’re leading opens in January 2019, so you’ve got plenty of time to plan for your next big idea.

If you need some inspiration, there are plenty of places to start. You could check out our How to make a project worksheets, or try out one of our online projects before you plan your own.



Head to coolestprojects.org to find out about the 2019 events and how you can get involved!

The post Announcing Coolest Projects 2019 appeared first on Raspberry Pi.

Hang out with Raspberry Pi this month in California, New York, and Boston

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/raspberry-pi-california-new-york-boston/

This month sees two wonderful events where you can meet the Raspberry Pi team, both taking place on the weekend of September 22 and 23 in the USA.

And for more impromptu fun, you can also hang out with our Social Media Editor and fellow Pi enthusiasts on the East Coast on September 24–28.

Coolest Projects North America

In the Discovery Cube Orange County in Santa Ana, California, team members of the Raspberry Pi Foundation North America, CoderDojo, and Code Club will be celebrating the next generation of young makers at Coolest Projects North America.

Coolest Projects is a world-leading showcase that empowers and inspires the next generation of digital creators, innovators, changemakers, and entrepreneurs. This year, for the first time, we are bringing Coolest Projects to North America for a spectacular event!

While project submissions for the event are now closed, you can still get the last FREE tickets to attend this showcase on Sunday, September 23.

To get your free tickets, click here. And for more information on the event, visit the Coolest Projects North America homepage.

World Maker Faire New York

For those on the east side of the continent at World Maker Faire New York, we’ll have representation in the form of Alex, our Social Media Editor.

The East Coast’s largest celebration of invention, creativity, and curiosity showcases the very best of the global Maker Movement. Get immersed in hundreds of projects and multiple stages focused on making for social good, health, technology, electronics, 3D printing & fabrication, food, robotics, art and more!

Alex will be adorned in Raspberry Pi stickers while exploring the cornucopia of incredible projects on show. She’ll be joined by Raspberry Pi’s videographer Brian, and they’ll gather footage of Raspberry Pis being used across the event for videos like this one from last year’s World Maker Faire:

Raspberry Pi Coffee Robot || Mugsy || Maker Faire NY ’17

Labelled ‘the world’s first hackable, customisable, dead simple, robotic coffee maker’, and powered by a Raspberry Pi, Mugsy allows you to take control of every aspect of the coffee-making process: from grind size and water temperature, to brew and bloom time.

So if you’re planning to attend World Maker Faire, either as a registered exhibitor or an attendee showing off your most recent project, we want to know! Share your project in the comments so we can find you at the event.

A week of New York and Boston meetups

Lastly, since she’ll be in New York, Alex will be out and about after MFNY, meeting up with members of the Raspberry Pi community. If you’d be game for a Raspberry Pi-cnic in Central Park, Coffee and Pi in a cafe, or any other semi-impromptu meetup in the city, let us know the best days for you between Monday, September 24 to Thursday, September 27! Alex will organise some fun gatherings in the Big Apple.

You can also join her in Boston, Massachusetts, on Friday, September 28, where Alex will again be looking to meet up with makers and Pi enthusiasts — let us know if you’re game!

This is weird

Does anyone else think it’s weird that I’ve been referring to myself in the third person throughout this post?

The post Hang out with Raspberry Pi this month in California, New York, and Boston appeared first on Raspberry Pi.

Amazon sponsors R00tz at DEF CON 2018

Post Syndicated from Patrick McCanna original https://aws.amazon.com/blogs/security/amazon-sponsors-rootz-at-def-con-2018/

It’s early August, and we’re quickly approaching Hacker summer camp (AKA DEF CON). The Black Hat Briefings start August 8, DEF CON starts August 9, and many people will be closely following the latest security presentations at both conferences. But there’s another, exclusive conference happening at DEF CON that Amazon is excited to be a part of: we’re sponsoring R00tz Asylum. R00tz is a conference dedicated to teaching kids ages 8-18 how to become white-hat hackers.

Kids who attend will learn how hackers break into computers and networks and how cybersecurity experts defend against hackers. They’ll get hands-on experience soldering and disassembling computers. They’ll learn about lock-picking and how to use 3D printers, and they’ll even get to compete in a security-oriented “Capture the Flag” event where competitors are rewarded for discovering secrets embedded in servers and encrypted files. And throughout the conference, they’ll hear talks from top-notch presenters on topics that include hacking web apps, hacking elections, and hacking conference badges covered in blinky LEDs.

Many of Amazon’s best security experts started young, with limited access to mentors. We learned how to keep hackers from breaking into computers and networks before cybersecurity became the industry it is today. Amazon’s support for R00tz is our chance to give back to the next generation of cyber-security professionals. Kids who are interested in learning about security will get a safe environment and access to mentors. If you’re in Las Vegas for DEF CON, head over to the R00tz FAQs to learn more about the event, and be sure to check out the conference schedule.

Raspberry Fields 2018: ice cream, robots, and coding

Post Syndicated from Tom Evans original https://www.raspberrypi.org/blog/relive-raspberry-fields-2018/

Umbrella trees, giant mushrooms, and tiny museums. A light-up Lovelace, LED cubes, LED eyelashes, and LED coding (we have a bit of a thing for LEDs). Magic cocktails, melted ice creams, and the coolest hot dog around. Face paint masterpieces, swag bags, and bingo. More stickers than a laptop can cope with, a flock of amazing volunteers, and it all ending with an exploding microwave! This can only mean one thing: Raspberry Fields 2018.

The #RaspberryFields digital making festival 2018


Raspberry Fields forever

On 30 June and 1 July, our community of makers, vendors, speakers, volunteers, and drop-in activity leaders impressed over 1300 visitors who braved the heat to visit our festival of digital making at Cambridge Junction.

Raspberry Pi event Raspberry Fields 2018

Our mini festival was both a thank you to our wonderful community and a demonstration of the sheer scale of support and ideas we offer to people looking to get involved in digital making for the first time.

Projects and talks galore

Our community of makers came out in force at Raspberry Fields, with shops, hands-on activities, installations, and show-and-tells demonstrating some of the coolest stuff you can do with a Raspberry Pi and with digital making in general.

Raspberry Pi event Raspberry Fields 2018

Many visitors we spoke to couldn’t believe some of the incredible creations and projects our community members had brought along for them to learn about and play with.

Raspberry Pi event Raspberry Fields 2018

Over the weekend, we had 29 talks on two stages, with our community speakers coming from all over the UK, as well as France, Germany, Korea, Japan, and Australia! Their talks covered a fascinating range of topics such as volunteering with our coding clubs, digital inclusion, drones, wildlife conservation, and so much more! If you missed any of the speakers, don’t worry: we will be uploading talks to our YouTube channel for everyone to see.

Spectacular live shows

We rounded off the two days with three smashing performances: on Saturday, the fantastic Neil Monteiro showed off some of the awesome things you can do with an Astro Pi at home. He was followed by the outstanding Ada.Ada.Ada., in which Ada Lovelace, kitted out in an epic tech-covered dress, taught people all about her programming legacy.

Raspberry Pi event Raspberry Fields 2018

Sunday’s finale brought the mischief of Brainiac Live! to Raspberry Fields: the Brainiacs showed us just how much they laugh in the face of science, before providing us with the explosive finish every good festival needs!

Outstanding volunteers

A whopping 60 community members came and helped us out, many of whom had never volunteered at a Raspberry Pi event before! Our festival of digital making would not have happened without these lovely people willing to give up some of their precious weekend to ensure that everything went off without a hitch.

Raspberry Pi event Raspberry Fields 2018

The volunteers were doing everything from greeting and registering guests as they arrived, handing out swag bags, and stamping bingo cards, to giving directions, helping out with activities, and managing our two stages. They were absolutely fantastic, and we hope to see them all again at future events!

Join our community today

Raspberry Fields was just a taster of what is going on around the world every day within the marvellous Raspberry Pi community at Raspberry Jams, Code Clubs, CoderDojos, Coolest Projects events, or at home, where people use our products and free resources to create their own projects. If our festival has made you curious, then dive in and join the amazing people that have made it possible!

Till next time!

The whole Raspberry Pi team is hugely grateful to all our community members who helped out in some way with Raspberry Fields, as well as to all the staff at Cambridge Junction, who were so open and friendly, and happy to let us take over the whole venue for a weekend. We would like to say a massive thank you for making the event so much fun for everyone involved, and for being so welcoming to everyone who took part!

Raspberry Pi event Raspberry Fields 2018

We look forward to seeing all of you at upcoming events!

The post Raspberry Fields 2018: ice cream, robots, and coding appeared first on Raspberry Pi.

Moonhack 2018: reaching for the stars!

Post Syndicated from Katherine Leadbetter original https://www.raspberrypi.org/blog/moonhack-2018/

Last year, Code Club Australia set a new world record during their Moonhack event for the most young people coding within 24 hours. This year, they’re hoping to get 50000 kids involved — here’s how you can take part in this interstellar record attempt!

Moonhack 2018 Code Club Raspberry Pi

Celebrating the Apollo 11 moon landing

Nearly 50 years ago, humankind took one giant leap and landed on the moon for the first time. The endeavour involved an incredible amount of technological innovation that, amongst other things, helped set the stage for modern coding.

Apollo 11 moon landing

To celebrate this amazing feat, Code Club Australia are hosting Moonhack, an annual world record attempt to get as many young people as possible coding space-themed projects over 24 hours. This year, Moonhack is even bigger and better, and we want you to take part!

Moonhack past and present

The first Moonhack took place in 2016 in Sydney, Australia, and has since spread across the globe. More than 28000 young people from 56 countries took part last year, from Syria to South Korea and Croatia to Guatemala.

This year, the aim is to break that world record with 50000 young people — the equivalent of the population of a small town — coding over 24 hours!

Moonhack 2018

Get involved

Taking part in Moonhack is super simple: code a space-themed project and submit it on 20 July, the anniversary of the moon landing. Young people from 8 to 18 can take part, and Moonhack is open to everyone, wherever you are in the world.

The event is perfect for Code Clubs, CoderDojos, and Raspberry Jams looking for a new challenge, but you can also take part at home with your family. Or, if you have access to a great venue, you could also host a Moonhackathon event and invite young people from your community to get involved — the Moonhack team is offering online resources to help you do this.

On the Moonhack website, you’ll find four simple, astro-themed projects to choose from, one each for Scratch, Python, micro:bit, and Gamefroot. If your young coders are feeling adventurous, they can also create their own space-themed projects: last year we saw some amazing creations, from a ‘dogs vs aliens’ game to lunar football!

Moonhack 2018

For many young people, Moonhack falls in the last week of term, so it’s a perfect activity to celebrate the end of the academic year. If you’re in a part of the world that’s already on break from school, you can hold a Moonhack coding party, which is a great way to keep coding over the holidays!

To register to take part in Moonhack, head over to moonhack.com and fill in your details. If you’re interested in hosting a Moonhackathon, you can also download an information pack here.

The post Moonhack 2018: reaching for the stars! appeared first on Raspberry Pi.

Look who’s coming to Raspberry Fields 2018!

Post Syndicated from Helen Drury original https://www.raspberrypi.org/blog/raspberry-fields-2018-highlights/

For those that don’t yet know, Raspberry Fields is the all-new community festival of digital making we’re hosting in Cambridge, UK on 30 June and 1 July 2018!

Raspberry Pi two-day digital making event Raspberry Fields

It will be a chance for people of all ages and skill levels to have a go at getting creative with tech! Raspberry Fields is a celebration of all that our digital makers have already learnt and achieved, whether through taking part in Code Clubs, CoderDojos, or Raspberry Jams, or through trying our resources at home.

We have a packed festival programme of exciting activities, talks, and shows for you to experience! So clear the weekend of 30 June and 1 July, because you won’t want to miss a thing.

Saturday

On Saturday, we’ll be welcoming two very special acts to the Raspberry Fields stage.

Neil Monteiro

Neil Monteiro - Raspberry Fields

Originally trained as a physicist, Neil is famous for his live shows exploring the power of scientific thinking and how it helps us tell the difference between the real and the impossible.

Ada.Ada.Ada

AdaAdaAda - Raspberry Fields

The spellbinding interactive show about computing pioneer Ada Lovelace — catch a sneak peek here!

Sunday

On Sunday, “Science Museum meets Top Gear” as Brainiac Live! takes to the stage to close Raspberry Fields in style.

Brainiac Live!

Brainiac Live! - Raspberry Fields

Strap on your safety goggles — due to popular demand, science’s greatest and most volatile live show arrives with a vengeance. The West End and international touring favourite is coming to Raspberry Fields!

More mischievous than ever before, Brainiac Live! will take you on a breathless ride through the wild world of the weird and wonderful. Watch from the safety of your seat as the Brainiacs fearlessly delve into the mysteries of science and do all those things on stage that you’re too scared to do at home!

Weekend highlights

And that’s not all — we’ll also be welcoming some very special guests who will display their projects throughout the weekend. These include:

The Cauldron

The Cauldron - Raspberry Fields

Brew potions with molecular mixology and responsive magic wands using science and technology, and bring the magic from fantasy books to life in this immersive, interactive experience! Learn more about The Cauldron here.

The mechanical Umbrella Tree

The Umbrella Tree - Raspberry Fields

The Umbrella Tree is a botanical, mechanical contraption designed to bemuse, baffle, delight, and amuse all ages. Audiences discover it in the landscape singing to itself and dancing its strange mechanical ballet. The four-metre high structure weaves a creaky choreography of mechanically operated umbrellas, lights, and smoke.

Museum in a Box

Artefacts in the classroom with Museum in a Box || Raspberry Pi Stories

Museum in a Box bridges the gap between museums and schools by creating a more hands-on approach to conservation education through 3D printing and digital making.

Museum in a Box puts museum collections and expert knowledge into your hands, wherever you are in the world. It’s an intriguing and interactive mix of replica objects and contextual content from museum curators and educators, directly at the tips of your fingers!

And there’s still more to discover

Alongside these exciting and explosive performances and displays, we’ll be hosting loads of amazing projects and hands-on activities built by our awesome community of young people and enthusiasts, as well as licensed resellers for you to get all the latest kit and gadgets!

If you’re wondering about bringing along young children or less technologically minded family members or friends, there’ll be plenty for them to enjoy — with lots of festival-themed activities such as face painting, fun performances, free giveaways, and delicious food, Raspberry Fields will have something for everyone!

Tickets!

Tickets are selling fast, so don’t miss out — buy your tickets here today!

Fancy helping out? Find out about our volunteering opportunities.

The post Look who’s coming to Raspberry Fields 2018! appeared first on Raspberry Pi.