Tag Archives: asia

Cloudflare is redefining employee well-being in Japan

Post Syndicated from Tomonari Sato original https://blog.cloudflare.com/cloudflare-is-redefining-employee-well-being-in-japan/

This post is also available in 日本語

“You can accomplish anything if you do it. Nothing will be accomplished unless you do it. If something is not accomplished, that’s because no one did it.”
— Yozan Uesugi

Long hours and hard work. If you ask anyone in Japan what our work culture is like, chances are these are the words that will come to mind. Every country has its own culture, with its own work habits and its own approach to work-life balance. The pandemic brought everyone (companies and their people) a new reality, new lessons, and new habits. Here at Cloudflare, our thinking around where and how we do our best work has evolved over the course of the pandemic. We care about addressing the diverse needs of our workforce, and our policies and benefits are designed to give people the flexibility they need. To that end, Cloudflare Japan is making a few important changes to our employee benefits:

  • “take what you need” time off for all our employees
  • 16-week gender-neutral paid parental leave
  • flexible working hours

First, let’s try to understand a bit of the Japanese work culture. Under Japan’s labor laws, employees are expected to work a maximum of 8 hours a day, or 40 hours per week. But ask any employed person in Japan and you will soon discover that people work much longer hours than that. A 2015 study by the Organisation for Economic Co-operation and Development (OECD) found that about 22% of Japanese employees work 50 hours or more each week on average, well above 11% in the U.S. and 6% in Spain. On top of that, people are also less likely to take personal time off. While existing labor laws provide every employed person with at least 10 days of annual leave (+1 day for every year of service, usually capped at 20 days), the 2017 General Survey on Working Conditions published by the Ministry of Health, Labour and Welfare found that, on average, people actually took only 8.8 days of annual leave per year.

Then came the COVID-19 pandemic and things started to change. With restrictions put in place, a lot of us had no choice but to work from home, a concept that’s completely foreign to the Japanese work culture. And now two years into the pandemic, there has been a shift in the Japanese way of working. In a recent Zero Trust survey that Cloudflare conducted in Japan, 74% of IT and cybersecurity decision makers said their organization will be implementing a combination of return-to-office and work-from-home. This means that the future of work in Japan is flexible.

While we encourage our teams to always get their work across the finish line, we also appreciate the value and importance of having personal time to be able to spend with loved ones, take up a hobby, or simply for rest and relaxation. We believe that time away from work helps you be better at work. Our time away from work policies are designed for that and reflect the reality that technology has enabled us to be more mobile and flexible in the 21st century.

On parental leave, we strongly believe that parents should have equal opportunity to bond with their new family member, and don’t believe in forcing a parent to designate themselves as a “primary” or “secondary” caregiver. We believe these designations create a false dichotomy that does not reflect the modern family, nor reflect our values of diversity and equality; especially when we know that these designations typically disadvantage the careers of women more than men in the workplace.

Lastly, we remain committed to providing great physical spaces for our employees to work, collaborate, and celebrate in while they’re in the office. While remote work is currently still the norm, it will be up to teams and individuals to decide what works best for them for the task at hand. People may wish to come into our offices to meet with their colleagues, socialize, or join on-site workshops, but then choose to do their quiet, focused work from home. As such, we recently redesigned and renovated our offices in San Francisco and London, starting with these offices with experimentation in mind and with the purpose of reimagining our other global offices. Our way of working has changed, and our spaces should support this shift, becoming places where teams can come together and collaborate most effectively.

Cloudflare in Japan: 12 years in and a 100% increase in blocked attacks

Cloudflare has had a longstanding presence in Japan, expanding our network into Tokyo in 2010, just months after launching. Today, we have seven points of presence across four cities, and we also announced our Tokyo office in 2020.

Also, it’s important to mention that in Q4 2021, Cloudflare blocked an average of 1.9 billion attacks per day in Japan. That number has grown to 3.8 billion attacks per day blocked by Cloudflare in Q1 2022, an increase of 100% since the previous quarter.

My goal when I joined Cloudflare almost six months ago remains the same — to help customers in Japan accelerate their digital transformation, that will in turn help improve Japan’s competitiveness in the world. In order to do this, we need to continue to provide a great work environment and build a great team. And we’re just getting started!

We are actively recruiting in Japan and have many open roles across different functions. If you’d like to join us in our mission to help build a better Internet, come talk to us!

AAE-1 & SMW5 cable cuts impact millions of users across multiple countries

Post Syndicated from David Belson original https://blog.cloudflare.com/aae-1-smw5-cable-cuts/

Just after 1200 UTC on Tuesday, June 7, the Africa-Asia-Europe-1 (AAE-1) and SEA-ME-WE-5 (SMW-5) submarine cables suffered cable cuts. The damage reportedly occurred in Egypt, and impacted Internet connectivity for millions of Internet users across multiple countries in the Middle East and Africa, as well as thousands of miles away in Asia. In addition, Google Cloud Platform and OVHcloud reported connectivity issues due to these cable cuts.

The impact

Data from Cloudflare Radar showed significant drops in traffic across the impacted countries as the cable damage occurred, recovering approximately four hours later as the cables were repaired.
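The kind of drop visible in those graphs can also be spotted programmatically. Below is a minimal, illustrative sketch (not Cloudflare Radar's actual methodology) that flags points in a traffic time series falling well below a trailing baseline; the traffic numbers are hypothetical:

```python
# Illustrative sketch: flag sudden traffic drops against a trailing baseline.
# This is NOT Cloudflare Radar's methodology, just a toy example of the idea.

def find_drops(series, window=6, threshold=0.5):
    """Return indices where traffic falls below `threshold` times the
    mean of the preceding `window` samples."""
    drops = []
    for i in range(window, len(series)):
        baseline = sum(series[i - window:i]) / window
        if baseline > 0 and series[i] < threshold * baseline:
            drops.append(i)
    return drops

# Hourly request counts (hypothetical): steady traffic, then an outage.
traffic = [100, 102, 98, 101, 99, 100, 40, 38, 42, 41, 97, 100]
print(find_drops(traffic))  # → [6, 7]
```

Note that once the outage samples enter the trailing window they depress the baseline, so only the onset of the drop is flagged; a production system would use a longer baseline or exclude anomalous samples from it.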

[Cloudflare Radar traffic graphs for the impacted countries]

It appears that Saudi Arabia may have also been affected by the cable cut(s), but the impact was much less significant, and traffic recovered almost immediately.

[Cloudflare Radar traffic graph for Saudi Arabia]

In the graphs above, we show that Ethiopia was one of the impacted countries. However, as it is landlocked, there are obviously no submarine cable landing points within the country. The Afterfibre map from the Network Startup Resource Center (NSRC) shows that fiber in Ethiopia connects to fiber in Somalia, which experienced an impact. In addition, Ethio Telecom also routes traffic through network providers in Kenya and Djibouti. Djibouti Telecom, one of these providers, in turn peers with larger global providers like Telecom Italia (TI) Sparkle, which is one of the owners of SMW5.

In addition to disrupting end-user connectivity in the affected countries, the cable cuts also reportedly impacted cloud providers including Google Cloud Platform and OVHcloud. In their incident report, Google Cloud noted “Google Cloud Networking experienced increased packet loss for egress traffic from Google to the Middle East, and elevated latency between our Europe and Asia Regions as a result, for 3 hours and 12 minutes, affecting several related products including Cloud NAT, Hybrid Connectivity and Virtual Private Cloud (VPC). From preliminary analysis, the root cause of the issue was a capacity shortage following two simultaneous fiber-cuts.” OVHcloud noted that “Backbone links between Marseille and Singapore are currently down” and that “Upon further investigation, our Network OPERATION teams advised that the fault was related to our partner fiber cuts.”

When concurrent disruptions like those highlighted above are observed across multiple countries in one or more geographic areas, the culprit is often a submarine cable that connects the impacted countries to the global Internet. The impact of such cable cuts will vary across countries, largely due to the levels of redundancy that they may have in place. That is, are these countries solely dependent on an impacted cable for global Internet connectivity, or do they have redundant connectivity across other submarine or terrestrial cables? Additionally, the location of the country relative to the cable cut will also impact how connectivity in a given country may be affected. Due to these factors, we didn’t see a similar impact across all of the countries connected to the AAE-1 and SMW5 cables.

What happened?

Specific details are sparse, but as noted above, the cable damage reportedly occurred in Egypt – both of the impacted cables land in Abu Talat and Zafarana, which also serve as landing points for a number of other submarine cables. According to a 2021 article in Middle East Eye, “There are 10 cable landing stations on Egypt’s Mediterranean and Red Sea coastlines, and some 15 terrestrial crossing routes across the country.” Alan Mauldin, research director at telecommunications research firm TeleGeography, notes that routing cables between Europe and the Middle East to India is done via Egypt, because there is the least amount of land to cross. This places the country in a unique position as a choke point for international Internet connectivity, with damage to infrastructure locally impacting the ability of millions of people thousands of miles away to access websites and applications, as well as impacting connectivity for leading cloud platform providers.

As the graphs above show, traffic returned to normal levels within a matter of hours, with tweets from telecommunications authorities in Pakistan and Oman also noting that Internet services had returned to their countries. Such rapid repairs to submarine cable infrastructure are unusual, as repair timeframes are often measured in days or weeks, as we saw with the cables damaged by the volcanic eruption in Tonga earlier this year. This is due to the need to locate the fault, send repair ships to the appropriate location, and then retrieve the cable and repair it. Given this, the damage to these cables likely occurred on land, after they came ashore.

Keeping content available

By deploying in data centers close to end users, Cloudflare helps to keep traffic local, which can mitigate the impact of catastrophic events like cable cuts, while improving performance, availability, and security. Being able to deliver content from our network generally requires first retrieving it from an origin, and with end users around the world, Cloudflare needs to be able to reach origins from multiple points around the world at the same time. However, a customer origin may be reachable from some networks but not from others, due to a cable cut or some other network disruption.

In September 2021, Cloudflare announced Orpheus, which provides reachability benefits for customers by finding unreachable paths on the Internet in real time, and guiding traffic away from those paths, ensuring that Cloudflare will always be able to reach an origin no matter what is happening on the Internet.
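The general pattern behind this kind of system can be illustrated with a small sketch. This is not Orpheus's actual implementation, and the path names are hypothetical; it simply shows the idea of probing candidate paths to an origin and selecting one that is currently reachable:

```python
# Toy sketch of "route around failures": try candidate paths in order of
# preference and return the first one whose health probe succeeds.
# Hypothetical names; not Cloudflare's actual Orpheus implementation.

def pick_path(paths, probe):
    """`paths` is a preference-ordered list of path identifiers;
    `probe` returns True if a path can currently reach the origin."""
    for path in paths:
        if probe(path):
            return path
    return None  # origin unreachable on every known path

# Example: the preferred transit paths are down (as during a cable cut),
# but a longer alternate path still reaches the origin.
status = {"transit-a": False, "transit-b": False, "alternate-c": True}
print(pick_path(["transit-a", "transit-b", "alternate-c"], status.get))
# → alternate-c
```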

Conclusion

Because the Internet is an interconnected network of networks, an event such as a cable cut can have a ripple effect across the whole Internet, impacting connectivity for users thousands of miles away from where the incident occurred. Users may be unable to access content or applications, or the content/applications may suffer from reduced performance. Additionally, the providers of those applications may experience problems within their own network infrastructure due to such an event.

For network providers, the impact of such events can be mitigated through the use of multiple upstream providers/peers, and diverse physical paths for critical infrastructure like submarine cables. Cloudflare’s globally deployed network can help content and application providers ensure that their content and applications remain available and performant in the face of network disruptions.

Wendy Komadina: No one excited me more than Cloudflare, so I joined.

Post Syndicated from Wendy Komadina original https://blog.cloudflare.com/wendy-komadina-no-one-excited-me-more-than-cloudflare-so-i-joined/

I joined Cloudflare in March to lead Partnerships & Alliances for Asia Pacific, Japan, and China (APJC). In the last month I’ve been asked many times: “Why Cloudflare?” I’ll be honest, I’ve had opportunities to join other technology companies, but no other organization excited me more than Cloudflare. So I jumped. And I couldn’t be more thrilled for the opportunity to build a strong partner ecosystem for APJC.

When I was considering joining Cloudflare, I recall consistently seeing the message about “helping to build a better Internet”. At first those words didn’t connect with me, but they sounded like an important mission.

I did my research and read analyst reports to learn about Cloudflare’s market position, and then it dawned on me: Cloudflare is leading a transformation, taking traditional on-premises networking and security hardware and replacing it with a cloud-based solution, so customers don’t need to worry about which company supplied their kit. I was excited to learn that Cloudflare customers can simply access the vast global network that has been designed to make everything they connect to on the Internet secure, private, fast, and reliable. So hasn’t this been done before? For compute and storage, that transformation is almost a commodity now, but for networking and security, Cloudflare is leading the way, and I want to be part of that.

As I continued to learn more about Cloudflare, I connected with the mission of Project Galileo, Cloudflare’s response to cyber attacks launched against important yet vulnerable groups such as social activists, humanitarian organizations, minority groups, and the voices of political dissent, who are repeatedly flooded with malicious cyber attacks in an attempt to take them offline. I was inspired that Cloudflare was part of something beyond a technology transformation. Vulnerable groups and communities who are part of Project Galileo have access to Cloudflare security services at no cost.

So now that I’m on the inside, I shouldn’t be surprised that I continue to find reasons why Cloudflare is the place to work. Female leadership is well represented, including our President, COO, and co-founder, Michelle Zatlyn, who took the time to meet me during the interview process, and Jen Taylor, our Chief Product Officer, who gave me a warm welcome when we met while she was in Sydney meeting customers and partners.

In my third week in the company, I met a new colleague at a team gathering. We immediately hit it off, chatting and getting to know each other. She had built a career in the sports industry that was ripped out from under her during the pandemic, when she was one of the many who lost their jobs. What inspired me about her story was how Cloudflare embraced this as an opportunity to bring diverse talent into the company. They opened their virtual arms and doors to offer her an opportunity to build a career. Cloudflare crafted a path that led her into a Business Development role and now into an Associate Solutions Engineer role. Who does that? Cloudflare does, and I’m working with inspiring leaders who are committed to making that happen.

Finally, early in my career I learned the importance of working with partners. It is important to commit to joint goals, build trust, celebrate success, and carry each other through the trenches when things get tough. As a freshly anointed Cloudflare employee, my top priority is to build a strong culture of partnering. Partners are an important extension of our team, and through partners we can provide customers with deeper engagement and expert knowledge of Cloudflare products and services. My initial focus will be on building Zero Trust partner practices, supporting the significant number of APJC businesses that are planning a Zero Trust strategy driven by an increase in cyber attacks. This year, we are rolling out sales and technical enablement, in addition to marketing funding, to accelerate the ramp-up of our Zero Trust partners.

In addition, the team will lean into partners who offer professional services and consulting practices that can support customer implementations. Our partners are critical to our joint success, and together we can support customers in their journey through network and security transformation. Finally, I’m excited to share that our co-founders Matthew Prince and Michelle Zatlyn will be in Sydney in September for Cloudflare Connect. I look forward to leveraging that platform to share more detail on the APJC Partnerships strategy and to launch the APJC Partner Advisory Board.

Tomonari Sato: Why I joined Cloudflare and why I’m helping Cloudflare grow in Japan

Post Syndicated from Tomonari Sato original https://blog.cloudflare.com/tomonari-sato-why-i-joined-cloudflare-and-why-im-helping-cloudflare-grow-in-japan/

This post is also available in 日本語.

I’m excited to announce that I recently joined Cloudflare in Japan as Vice-President and Managing Director, to help build and expand our customer and partner base and our presence in Japan. Cloudflare expanded its network into Japan in 2010, just months after launching. Now, 12 years later, Cloudflare is continuing its mission to help build a better Internet in Japan and across the globe, and I’m looking forward to contributing to that mission!

A little about me

In my 35-year career in the IT industry, I have been fortunate enough to work with some of the biggest technology companies in the world, in various roles on both the sales and technical sides of the business. I consider this one of my biggest strengths. In addition, working in the IT industry has allowed me to acquire knowledge across a number of different solution areas, such as custom development, packaged systems (ERP, CRM), MS Office products, and cloud solutions.

Most recently, I was director of the Enterprise Business Group for Japan at AWS, where I was responsible for all commercial industries such as Manufacturing, Process, Distribution, Retail, Telecommunications, Utility, Media, Service, Pharmaceuticals, among others. Prior to AWS, I was Microsoft’s Managing Executive Officer in charge of the Public Sector. In this role, I managed business and strategic relationships with the central government and local government, as well as the healthcare, pharmaceutical, and education industries to help customers accelerate their digital transformation, especially when it comes to their shift to the cloud. In 2005, I joined SAP Japan and spent eight years establishing the partner ecosystem, managing about 250 partners. My last role in SAP was to drive business as a sales leader for three industries (public, utility, and telecommunication). In 1999, I joined IBM to be an initial member of the ERP business unit. At IBM, I got the opportunity to manage large ERP implementations as a Senior Project Manager.

If I look back on my career, I have experienced so many things from many dimensions. I started my career as an engineer after I graduated from university; it was the first time I learned what a computer was. I enjoyed my first job as a programmer. I remember what a great time it was to learn new things every day, since technology was rapidly changing even in those days, many, many years ago. I am proud that I have always kept the engineering spirit, even after I moved into sales and management positions. After two years as a programmer, I moved to Digital Equipment Corporation (DEC) and spent 12 years as a Systems Engineer. At that time, DEC decided to establish a new manufacturing facility in Japan to provide better quality for Japanese customers. My mission was to design, develop, and maintain all the application systems required to ensure a smooth and seamless manufacturing process, including master production scheduling, manufacturing resource planning, inventory, purchasing, work orders, shop floor control, and finally an automated warehousing system. My last job at DEC was to implement SAP R/3 as the Japan implementation manager. The Japan implementation team was part of the global SAP implementation project, giving me the opportunity to work in a multinational environment. I really enjoyed working at DEC. It was a truly excellent experience for me.

Why Cloudflare

As I look back on my career, one of the things I consider my strength is that throughout those years I got to experience working with technology and computers from every side: as a customer, as a partner, and as a salesperson. Now, 35 years later, I’m convinced that my role in a global IT company is to contribute to the digital transformation of our customers, as well as of society as a whole in Japan, by sharing global best practices. I decided to join Cloudflare to help accelerate the digital transformation that will help improve Japan’s competitiveness in the world. I believe we have a lot of opportunities to help companies in Japan in this transformation. I remember the feeling I had when I started my first-ever job: a thrill and great motivation. I have the same feeling now, with this excellent opportunity to launch my new journey with Cloudflare.

Growth opportunities in Japan

It’s often been said that Japan has been slow to adopt digital models, compared to the United States, Europe, and even some countries in Asia. In order to accelerate this digital transformation, the Japanese Government launched a new policy called “Cloud By Default” and subsequently established a Digital Agency in September 2021. There is so much to do, and we are behind. The shift to the cloud has just begun. Businesses are starting to move from on-premise to the cloud, and many organizations are selecting a multi-cloud environment as the next generation platform. Cloudflare has the right solutions, the right people and the right strategy to help Japanese organizations make that shift.

Cloudflare is in a unique position to transform the way we do business by providing security, enhancing the performance of business-critical applications, and eliminating the cost and complexity of managing individual hardware, all within a global cloud platform. Cloudflare’s vast global network, which is one of the fastest on the planet, is trusted by millions of web properties. With direct connections to nearly every service and cloud provider, the Cloudflare network can reach 95% of the world’s population within 50 ms. Cloudflare already has 250 data centers, including two sites in Japan: Tokyo and Osaka.

Cloudflare is ready to help customers in Japan accelerate their digital transformation and be a trusted solution provider for the Japanese market. I am very much looking forward to contributing to the growth of the business, and the acceleration of the digital transformation for businesses in Japan.

Satyen Desai: Why I joined Cloudflare and why I am helping Cloudflare grow in Southeast Asia and Korea

Post Syndicated from Satyen Desai original https://blog.cloudflare.com/satyen-desai-why-i-joined-cloudflare-and-why-i-am-helping-cloudflare-grow-in-southeast-asia-and-korea/

I am excited to announce that I have joined Cloudflare as the Head of the Southeast Asia and Korea (SEAK) region, to help build a better Internet and to expand Cloudflare’s growing customer base, partner base, and local teams across all the countries in SEAK. Cloudflare is at an early phase in this region, with immense growth potential, and this is just the beginning. Cloudflare has had a lot of success globally, and our charter is to build on that success and momentum to grow our presence locally to address the demands in Singapore, Malaysia, Thailand, Indonesia, the Philippines, Indochina, and Korea. Customer engagement in each of the SEAK countries is unique, rich, and fulfilling, each with its own intricacies.

A little about me

I was born in India (Surat, Gujarat), and at the age of four our family moved to Bahrain where we lived for eight years. We then moved to New Zealand, which is where I completed my senior years of high school and also my Bachelor’s Degree in Information Engineering at Massey University. After graduation, we moved to Melbourne, Australia which is our family home and where my career started.

I love meeting and working with diverse and interesting people who bring different views, thoughts, and perspectives. The experience of growing up and working in so many countries, with so many cultures and diverse teams, has made me a more dynamic leader. Diversity is what drives innovation and growth, and that is truer than ever in this exciting region.

I love my sports (cricket, squash, golf), traveling and spending time with family & friends.

My journey to Cloudflare

I joined IBM Australia as a graduate in 1997, gaining valuable experience across many roles, from delivery to sales, in a career there spanning 15 years. Over more than 27 years in the IT industry, my experiences at large global organisations like IBM, SAP, Cisco, NTT, and Oracle, and the amazing colleagues there (many of whom are friends), have provided me with the best set of tools and experiences to bring to Cloudflare to help drive the growth agenda.

Below are the main reasons I joined Cloudflare to embark on this amazing journey:

  1. Cloudflare’s growth potential: Cloudflare has immense growth potential in APJC, and subsequently in Southeast Asia & Korea. In our recently announced Q3 earnings, we reported a 51% year-over-year increase in revenue, with a record addition of 170 large customers.
  2. Cloudflare’s ever-growing portfolio: I was lucky enough to join during Birthday Week, Cloudflare’s 11th birthday. Many new products and solutions were announced during the week to further enhance our growing portfolio. I am amazed at the pace of innovation: Cloudflare continuously releases new products and features that are then instantly available at all our data centers globally for our clients to consume and adopt.
  3. Cloudflare people: During the interview process, I met with 11 Cloudflare colleagues, and each conversation felt more like a two-way dialogue, a chance for Cloudflare to get to know me better and for me to better understand Cloudflare. This emphasised in my mind the like-minded people I would be working with, where we all work collaboratively, leveraging the experiences we bring from our past to achieve greater outcomes.
  4. Cloudflare culture: Having now met so many of my colleagues at Cloudflare, the one thing that stands out for me is the humility with which everyone operates, from global and regional leaders to our local teams. The all-inclusive culture at Cloudflare, along with the three tenets of Curious, Transparent, and Principled, is very much aligned with my personal principles of honesty, integrity, and transparency.

It is an exciting time to be joining one of the fastest-growing cloud companies in the world, and I want to be part of the Cloudflare journey and contribute to the growth agenda.

We’re just getting started…

I am convinced that Cloudflare will become an even bigger global IT giant. Cloudflare’s mission is to help build a better Internet by working collaboratively with our customers to make them more secure and providing a high level of performance to support their business-critical applications, while reducing the cost and complexity of managing their network infrastructure.

The Southeast Asia and Korea region is a diverse, dynamic, and exciting region to be in, where the potential for growth is limitless. As many as 40 million people in six countries across the region — Singapore, Malaysia, Indonesia, the Philippines, Vietnam, and Thailand — came online for the first time in 2020. That pushed the total number of Internet users in Southeast Asia to 400 million, in a region with some of the biggest e-commerce markets in the world.

Similarly, Korea has one of the highest Internet penetration rates in the world, with 96% of its population online. On top of that, the government is investing heavily in its Digital New Deal program, which will focus on the development of technologies based on data, networks, and AI, as well as a digitization plan that will create job opportunities in a number of industries across the country.

Cloudflare is in a unique position to transform the way business is conducted in this region, with a global cloud platform that delivers a broad range of network and security services to businesses of all sizes across all geographies. From large enterprises, the public sector, and the mid-market to start-ups and individual developers, companies of all sizes across all industries are powered by Cloudflare’s security, performance, and reliability services.

If you are interested in joining Cloudflare and helping to build a more secure, fast, and reliable Internet, do explore our open roles. We are hiring talented people locally, building and strengthening our local teams across roles including Strategic / Account Executives, Channel Managers, Business Development Representatives, Strategic / Solution Engineers, Customer Success Managers, and more.

It is a great honour and a privilege for me to be part of the Cloudflare family to help build Cloudflare’s future in Southeast Asia and Korea. The potential opportunity is enormous, and we are just getting started.

Feel free to reach out to me at [email protected]

Join Cloudflare India Forum in Bangalore on 6 June 2019!

Post Syndicated from Tingting (Teresa) Huang original https://blog.cloudflare.com/cloudflare-india-forum-in-bangaglore/

Join Cloudflare India Forum in Bangalore on 6 June 2019!

Please join us for an exclusive gathering to discover the latest in cloud solutions for Internet Security and Performance.

Cloudflare Bangalore Meetup

Thursday, 6 June 2019: 15:30 – 20:00

Location: The Oberoi (37-39, MG Road, Yellappa Garden, Yellappa Chetty Layout, Sivanchetti Gardens, Bengaluru)

We will discuss the newest security trends and introduce serverless solutions.

We have invited renowned leaders from across industries, including big brands and some of the fastest-growing startups. You will learn insider strategies and tactics to help you protect your business, accelerate performance, and identify quick wins in a complex Internet environment.

Speakers:

  • Vaidik Kapoor, Head of Engineering, Grofers
  • Nithyanand Mehta, VP of Technical Services & GM India, Catchpoint
  • Viraj Patel, VP of Technology, Bookmyshow
  • Kailash Nadh, CTO, Zerodha
  • Trey Guinn, Global Head of Solution Engineering, Cloudflare

Agenda:

15:30 – 16:00  Registration and Refreshments

16:00 – 16:30  DDoS Landscapes and Security Trends

16:30 – 17:15  Workers Overview and Demo

17:15 – 18:00  Panel Discussion – Best Practices for a Successful Cyber Security and Performance Strategy

18:00 – 18:30  Keynote #1 – The Future of Edge Computing

18:30 – 19:00  Keynote #2 – Cyber Attacks Are Evolving, So Should You: How to Adopt a Quick-Win Security Policy

19:00 – 20:00  Happy Hour

View Event Details & Register Here »

We look forward to meeting you there!

Protecting coral reefs with Nemo-Pi, the underwater monitor

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/coral-reefs-nemo-pi/

The German charity Save Nemo works to protect coral reefs, and they are developing Nemo-Pi, an underwater “weather station” that monitors ocean conditions. Right now, you can vote for Save Nemo in the Google.org Impact Challenge.

Nemo-Pi — Save Nemo

Save Nemo

The organisation says there are two major threats to coral reefs: divers and climate change. To make diving safer for reefs, Save Nemo installs buoy anchor points where diving tour boats can anchor without damaging corals in the process.

reef damaged by anchor
boat anchored at buoy

In addition, they provide dos and don’ts for how to behave on a reef dive.

The Nemo-Pi

To monitor the effects of climate change, and to help divers decide whether conditions are right at a reef while they’re still on shore, Save Nemo is also in the process of perfecting Nemo-Pi.

Nemo-Pi schematic — Nemo-Pi — Save Nemo

This Raspberry Pi-powered device is made up of a buoy, a solar panel, a GPS device, a Pi, and an array of sensors. Nemo-Pi measures water conditions such as current, visibility, temperature, carbon dioxide and nitrogen oxide concentrations, and pH. It also uploads its readings live to a public webserver.
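In spirit, the upload side of such a buoy is a small loop that bundles sensor readings into a document and posts it to a server. Here is a minimal sketch of that idea — the station ID, field names, and endpoint URL are all illustrative assumptions, not the Save Nemo project's actual code:

```python
import json
import urllib.request

def build_payload(station_id, readings):
    """Bundle raw sensor readings into the JSON document to upload."""
    return {
        "station": station_id,
        "temperature_c": round(readings["temperature_c"], 2),
        "ph": round(readings["ph"], 2),
        "visibility_m": readings["visibility_m"],
    }

def upload(payload, url="https://example.org/nemo-pi/readings"):
    # Hypothetical endpoint; a real deployment would post to the project's server.
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

payload = build_payload("buoy-01", {"temperature_c": 28.456, "ph": 8.123, "visibility_m": 12})
```

On a solar-powered buoy, keeping the payload small and batching uploads matters for power consumption, which is one of the things the team says they are still tuning.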

Inside the Nemo-Pi device — Save Nemo
Inside the Nemo-Pi device — Save Nemo
Inside the Nemo-Pi device — Save Nemo

The Save Nemo team is currently doing long-term tests of Nemo-Pi off the coast of Thailand and Indonesia. They are also working on improving the device’s power consumption and durability, and testing prototypes with the Raspberry Pi Zero W.

web dashboard — Nemo-Pi — Save Nemo

The web dashboard showing live Nemo-Pi data

Long-term goals

Save Nemo aims to install a network of Nemo-Pis at shallow reefs (up to 60 metres deep) in South East Asia. Then diving tour companies can check the live data online and decide day-to-day whether tours are feasible. This will lower the impact of humans on reefs and help the local flora and fauna survive.

Coral reefs with fishes

A healthy coral reef

Nemo-Pi data may also be useful for groups lobbying for reef conservation, and for scientists and activists who want to shine a spotlight on the awful effects of climate change on sea life, such as coral bleaching caused by rising water temperatures.

Bleached coral

A bleached coral reef

Vote now for Save Nemo

If you want to help Save Nemo in their mission today, vote for them to win the Google.org Impact Challenge:

  1. Head to the voting web page
  2. Click “Abstimmen” in the footer of the page to vote
  3. Click “JA” in the footer to confirm

Voting is open until 6 June. You can also follow Save Nemo on Facebook or Twitter. We think this organisation is doing valuable work, and that their projects could be expanded to reefs across the globe. It’s fantastic to see the Raspberry Pi being used to help protect ocean life.

The post Protecting coral reefs with Nemo-Pi, the underwater monitor appeared first on Raspberry Pi.

Amazon SageMaker Updates – Tokyo Region, CloudFormation, Chainer, and GreenGrass ML

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/sagemaker-tokyo-summit-2018/

Today, at the AWS Summit in Tokyo we announced a number of updates and new features for Amazon SageMaker. Starting today, SageMaker is available in Asia Pacific (Tokyo)! SageMaker also now supports CloudFormation. A new machine learning framework, Chainer, is now available in the SageMaker Python SDK, in addition to MXNet and TensorFlow. Finally, support for running Chainer models on several devices was added to AWS Greengrass Machine Learning.

Amazon SageMaker Chainer Estimator


Chainer is a popular, flexible, and intuitive deep learning framework. Chainer networks work on a “Define-by-Run” scheme, where the network topology is defined dynamically via forward computation. This is in contrast to many other frameworks, which work on a “Define-and-Run” scheme where the topology of the network is defined separately from the data. A lot of developers enjoy the Chainer scheme since it allows them to write their networks with native Python constructs and tools.

Luckily, using Chainer with SageMaker is just as easy as using a TensorFlow or MXNet estimator. In fact, it might even be a bit easier, since you can likely take your existing scripts and use them to train on SageMaker with very few modifications. With TensorFlow or MXNet, users have to implement a train function with a particular signature. With Chainer, your scripts can be a little more portable, as you simply read from a few environment variables like SM_MODEL_DIR, SM_NUM_GPUS, and others. We can wrap an existing script in an `if __name__ == '__main__':` guard and invoke it locally or on SageMaker.


import argparse
import os

if __name__ == '__main__':

    parser = argparse.ArgumentParser()

    # hyperparameters sent by the client are passed as command-line arguments to the script.
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--batch-size', type=int, default=64)
    parser.add_argument('--learning-rate', type=float, default=0.05)

    # Data, model, and output directories
    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
    parser.add_argument('--test', type=str, default=os.environ['SM_CHANNEL_TEST'])

    args, _ = parser.parse_known_args()

    # ... load from args.train and args.test, train a model, write model to args.model_dir.

Then, we can run that script locally or use the SageMaker Python SDK to launch it on some GPU instances in SageMaker. The hyperparameters will be passed to the script as command-line arguments, and the environment variables above will be auto-populated. When we call fit, the input channels we pass will be exposed via the SM_CHANNEL_* environment variables.


from sagemaker.chainer.estimator import Chainer
# Create my estimator
chainer_estimator = Chainer(
    entry_point='example.py',
    train_instance_count=1,
    train_instance_type='ml.p3.2xlarge',
    hyperparameters={'epochs': 10, 'batch-size': 64}
)
# Train my estimator
chainer_estimator.fit({'train': train_input, 'test': test_input})

# Deploy my estimator to a SageMaker Endpoint and get a Predictor
predictor = chainer_estimator.deploy(
    instance_type="ml.m4.xlarge",
    initial_instance_count=1
)

Now, instead of bringing your own Docker container for training and hosting with Chainer, you can just maintain your script. You can see the full sagemaker-chainer-containers on GitHub. One of my favorite features of the new container is built-in ChainerMN for easy multi-node distribution of your Chainer training jobs.

There’s a lot more documentation and information available in both the README and the example notebooks.

AWS Greengrass ML with Chainer

AWS Greengrass ML now includes a pre-built Chainer package for all devices powered by Intel Atom, NVIDIA Jetson TX2, and Raspberry Pi. So Greengrass ML now provides pre-built packages for TensorFlow, Apache MXNet, and Chainer! You can train your models on SageMaker and then easily deploy them to any Greengrass-enabled device using Greengrass ML.

JAWS UG

I want to give a quick shout out to all of our wonderful and inspirational friends in the JAWS UG who attended the AWS Summit in Tokyo today. I’ve very much enjoyed seeing your pictures of the summit. Thanks for making Japan an amazing place for AWS developers! I can’t wait to visit again and meet with all of you.

Randall

New – Pay-per-Session Pricing for Amazon QuickSight, Another Region, and Lots More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-pay-per-session-pricing-for-amazon-quicksight-another-region-and-lots-more/

Amazon QuickSight is a fully managed cloud business intelligence system that gives you Fast & Easy to Use Business Analytics for Big Data. QuickSight makes business analytics available to organizations of all shapes and sizes, with the ability to access data that is stored in your Amazon Redshift data warehouse, your Amazon Relational Database Service (RDS) relational databases, flat files in S3, and (via connectors) data stored in on-premises MySQL, PostgreSQL, and SQL Server databases. QuickSight scales to accommodate tens, hundreds, or thousands of users per organization.

Today we are launching a new, session-based pricing option for QuickSight, along with additional region support and other important new features. Let’s take a look at each one:

Pay-per-Session Pricing
Our customers are making great use of QuickSight, taking full advantage of the power it gives them to connect to data sources, create reports, and explore visualizations.

However, not everyone in an organization needs or wants such powerful authoring capabilities. For many users, access to curated data in dashboards, with the ability to drill down, filter, or slice-and-dice, is more than adequate. Subscribing these casual users to a monthly or annual plan can be seen as an unwarranted expense, so many of them end up without access to interactive data or BI at all.

In order to allow customers to provide all of their users with interactive dashboards and reports, the Enterprise Edition of Amazon QuickSight now allows Reader access to dashboards on a Pay-per-Session basis. QuickSight users are now classified as Admins, Authors, or Readers, with distinct capabilities and prices:

Authors have access to the full power of QuickSight; they can establish database connections, upload new data, create ad hoc visualizations, and publish dashboards, all for $9 per month (Standard Edition) or $18 per month (Enterprise Edition).

Readers can view dashboards, slice and dice data using drill downs, filters and on-screen controls, and download data in CSV format, all within the secure QuickSight environment. Readers pay $0.30 for 30 minutes of access, with a monthly maximum of $5 per reader.

Admins have all authoring capabilities, and can manage users and purchase SPICE capacity in the account. The QuickSight admin now has the ability to set the desired option (Author or Reader) when they invite members of their organization to use QuickSight. They can extend Reader invites to their entire user base without incurring any up-front or monthly costs, paying only for the actual usage.
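The Reader math described above is easy to sketch. The snippet below is only an illustration of the per-session rate and monthly cap quoted in this post, not an official billing formula:

```python
def reader_monthly_cost(sessions, rate_per_session=0.30, monthly_cap=5.00):
    """Each 30-minute Reader session costs $0.30, capped at $5 per reader per month."""
    return min(sessions * rate_per_session, monthly_cap)

# A reader with 8 sessions in a month pays $2.40; a heavy user with 40 sessions
# would hit the $5 monthly cap rather than paying $12.
light = reader_monthly_cost(8)
heavy = reader_monthly_cost(40)
```

The cap is what makes inviting an entire user base as Readers low-risk: even a power user never costs more than $5 per month.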

To learn more, visit the QuickSight Pricing page.

A New Region
QuickSight is now available in the Asia Pacific (Tokyo) Region:

The UI is in English, with a localized version in the works.

Hourly Data Refresh
Enterprise Edition SPICE data sets can now be set to refresh as frequently as every hour. In the past, each data set could be refreshed up to 5 times a day. To learn more, read Refreshing Imported Data.

Access to Data in Private VPCs
This feature was launched in preview form late last year, and is now available in production form to users of the Enterprise Edition. As I noted at the time, you can use it to implement secure, private communication with data sources that do not have public connectivity, including on-premises data in Teradata or SQL Server, accessed over an AWS Direct Connect link. To learn more, read Working with AWS VPC.

Parameters with On-Screen Controls
QuickSight dashboards can now include parameters that are set using on-screen dropdown, text box, numeric slider or date picker controls. The default value for each parameter can be set based on the user name (QuickSight calls this a dynamic default). You could, for example, set an appropriate default based on each user’s office location, department, or sales territory. Here’s an example:

To learn more, read about Parameters in QuickSight.

URL Actions for Linked Dashboards
You can now connect your QuickSight dashboards to external applications by defining URL actions on visuals. The actions can include parameters, and become available in the Details menu for the visual. URL actions are defined like this:

You can use this feature to link QuickSight dashboards to third party applications (e.g. Salesforce) or to your own internal applications. Read Custom URL Actions to learn how to use this feature.

Dashboard Sharing
You can now share QuickSight dashboards across every user in an account.

Larger SPICE Tables
The per-data set limit for SPICE tables has been raised from 10 GB to 25 GB.

Upgrade to Enterprise Edition
The QuickSight administrator can now upgrade an account from Standard Edition to Enterprise Edition with a click. This enables provisioning of Readers with pay-per-session pricing, private VPC access, row-level security for dashboards and data sets, and hourly refresh of data sets. Enterprise Edition pricing applies after the upgrade.

Available Now
Everything I listed above is available now and you can start using it today!

You can try QuickSight for 60 days at no charge, and you can also attend our June 20th Webinar.

Jeff;

 

10 visualizations to try in Amazon QuickSight with sample data

Post Syndicated from Karthik Kumar Odapally original https://aws.amazon.com/blogs/big-data/10-visualizations-to-try-in-amazon-quicksight-with-sample-data/

If you’re not already familiar with building visualizations for quick access to business insights using Amazon QuickSight, consider this your introduction. In this post, we’ll walk through some common scenarios with sample datasets to provide an overview of how you can connect your data, perform advanced analysis, and access the results from any web browser or mobile device.

The following visualizations are built from the public datasets linked below. Before we jump in, let’s take a look at the supported data sources, file formats, and a typical QuickSight workflow for building a visualization.

Which data sources does Amazon QuickSight support?

At the time of publication, you can use the following data methods:

  • Connect to AWS data sources, including:
    • Amazon RDS
    • Amazon Aurora
    • Amazon Redshift
    • Amazon Athena
    • Amazon S3
  • Upload Excel spreadsheets or flat files (CSV, TSV, CLF, and ELF)
  • Connect to on-premises databases like Teradata, SQL Server, MySQL, and PostgreSQL
  • Import data from SaaS applications like Salesforce and Snowflake
  • Use big data processing engines like Spark and Presto

This list is constantly growing. For more information, see Supported Data Sources.

Answers in instants

SPICE is the Amazon QuickSight super-fast, parallel, in-memory calculation engine, designed specifically for ad hoc data visualization. SPICE stores your data in a system architected for high availability, where it is saved until you choose to delete it. Improve the performance of database datasets by importing the data into SPICE instead of using a direct database query. To calculate how much SPICE capacity your dataset needs, see Managing SPICE Capacity.

Typical Amazon QuickSight workflow

When you create an analysis, the typical workflow is as follows:

  1. Connect to a data source, and then create a new dataset or choose an existing dataset.
  2. (Optional) If you created a new dataset, prepare the data (for example, by changing field names or data types).
  3. Create a new analysis.
  4. Add a visual to the analysis by choosing the fields to visualize. Choose a specific visual type, or use AutoGraph and let Amazon QuickSight choose the most appropriate visual type, based on the number and data types of the fields that you select.
  5. (Optional) Modify the visual to meet your requirements (for example, by adding a filter or changing the visual type).
  6. (Optional) Add more visuals to the analysis.
  7. (Optional) Add scenes to the default story to provide a narrative about some aspect of the analysis data.
  8. (Optional) Publish the analysis as a dashboard to share insights with other users.

The following graphic illustrates a typical Amazon QuickSight workflow.

Visualizations created in Amazon QuickSight with sample datasets

Visualizations for a data analyst

Source:  https://data.worldbank.org/

Download and Resources:  https://datacatalog.worldbank.org/dataset/world-development-indicators

Data catalog:  The World Bank invests into multiple development projects at the national, regional, and global levels. It’s a great source of information for data analysts.

The following graph shows the percentage of the population that has access to electricity (rural and urban) during 2000 in Asia, Africa, the Middle East, and Latin America.

The following graph shows the share of healthcare costs that are paid out-of-pocket (private vs. public). You can also hover over the graph to get detailed statistics at a glance.

Visualizations for a trading analyst

Source:  Deutsche Börse Public Dataset (DBG PDS)

Download and resources:  https://aws.amazon.com/public-datasets/deutsche-boerse-pds/

Data catalog:  The DBG PDS project makes real-time data derived from Deutsche Börse’s trading market systems available to the public for free. This is the first time that such detailed financial market data has been shared freely and continually from the source provider.

The following graph shows the market trend of max trade volume for different EU banks. It builds on the data available on XETRA engines, which is made up of a variety of equities, funds, and derivative securities. This graph can be scrolled to visualize trade for a period of an hour or more.

The following graph shows the common stock beating the rest of the maximum trade volume over a period of time, grouped by security type.

Visualizations for a data scientist

Source:  https://catalog.data.gov/

Download and resources:  https://catalog.data.gov/dataset/road-weather-information-stations-788f8

Data catalog:  Data derived from different sensor stations placed on the city bridges and surface streets are a core information source. The road weather information station has a temperature sensor that measures the temperature of the street surface. It also has a sensor that measures the ambient air temperature at the station each second.

The following graph shows the present max air temperature in Seattle from different RWI station sensors.

The following graph shows the minimum temperature of the road surface at different times, which helps predict road conditions at a particular time of the year.

Visualizations for a data engineer

Source:  https://www.kaggle.com/

Download and resources:  https://www.kaggle.com/datasnaek/youtube-new/data

Data catalog:  Kaggle provides a platform where people can donate open datasets. Data engineers and other community members have open access to these datasets and can contribute to the open data movement. It hosts more than 350 datasets in total, with more than 200 featured datasets, including a few interesting datasets that aren’t available elsewhere, and it’s a good place to connect with other data enthusiasts.

The following graph shows the trending YouTube videos and presents the max likes for the top 20 channels. This is one of the most popular datasets for data engineers.

The following graph shows the YouTube daily statistics for the max views of video titles published during a specific time period.

Visualizations for a business user

Source:  New York Taxi Data

Download and resources:  https://data.cityofnewyork.us/Transportation/2016-Green-Taxi-Trip-Data/hvrh-b6nb

Data catalog: NYC Open Data hosts some very popular open datasets for all New Yorkers. The platform lets you dive deep into a dataset and pull out useful visualizations. The 2016 Green Taxi Trip dataset includes records from all trips completed in green taxis in NYC in 2016. Records include fields capturing pick-up and drop-off dates/times, pick-up and drop-off locations, trip distances, itemized fares, rate types, payment types, and driver-reported passenger counts.

The following graph presents the maximum fare amount grouped by passenger count over the course of a day. This could be expanded to track different days of the month, depending on the business need.

The following graph shows New York taxi data from January 2016, highlighting the dip in the number of taxi rides on January 23, 2016 across all types of taxis.

A quick search for that date and location shows you the following news report:

Summary

Using Amazon QuickSight, you can see patterns across time-series data by building visualizations, performing ad hoc analysis, and quickly generating insights. We hope you’ll give it a try today!

 


Additional Reading

If you found this post useful, be sure to check out Amazon QuickSight Adds Support for Combo Charts and Row-Level Security and Visualize AWS Cloudtrail Logs Using AWS Glue and Amazon QuickSight.


Karthik Odapally is a Sr. Solutions Architect at AWS. His passion is building cost-effective and highly scalable solutions on the cloud. In his spare time, he bakes cookies and cupcakes for family and friends here in the PNW. He loves vintage racing cars.

 

 

 

Pranabesh Mandal is a Solutions Architect at AWS with over a decade of IT experience. He is passionate about cloud technology and focuses on analytics. In his spare time, he likes to hike and explore the nature and wildlife of national parks around the United States with his wife.

 

 

 

 

Artefacts in the classroom with Museum in a Box

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/museum-in-a-box/

Museum in a Box bridges the gap between museums and schools by creating a more hands-on approach to conservation education through 3D printing and digital making.

Artefacts in the classroom with Museum in a Box || Raspberry Pi Stories


Fantastic collections and where to find them

Large, impressive statues are truly a sight to be seen. Take for example the 2.4m Hoa Hakananai’a at the British Museum. Its tall stature looms over you as you read its plaque to learn of the statue’s journey from Easter Island to the UK under the care of Captain Cook in 1774, and you can’t help but wonder at how it made it here in one piece.

Hoa Hakananai’a Captain Cook British Museum

But unless you live near a big city where museums are plentiful, you’re unlikely to see the likes of Hoa Hakananai’a in person. Instead, you have to content yourself with online photos or videos of world-famous artefacts.

And that only accounts for the objects that are on display: conservators estimate that only approximately 5 to 10% of museums’ overall collections are actually on show across the globe. The rest is boxed up in storage, inaccessible to the public due to risk of damage, or simply due to lack of space.

Museum in a Box

Museum in a Box aims to “put museum collections and expert knowledge into your hand, wherever you are in the world,” through modern maker practices such as 3D printing and digital making. With the help of the ‘Scan the World’ movement, an “ambitious initiative whose mission is to archive objects of cultural significance using 3D scanning technologies”, the Museum in a Box team has been able to print small, handheld replicas of some of the world’s most recognisable statues and sculptures.

Museum in a Box Raspberry Pi

Each 3D print gets NFC tags so it can initiate audio playback from a Raspberry Pi that sits snugly within the laser-cut housing of a ‘brain box’. Thus the print can talk directly to us through the magic of wireless technology, replacing the dense, dry text of a museum plaque with engaging speech.
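In spirit, the tag-to-audio lookup is simple. Here is a hedged sketch of that idea — the tag UIDs, file paths, and player invocation below are illustrative assumptions, not the Museum in a Box project's actual code:

```python
import subprocess

# Illustrative mapping from NFC tag UIDs to audio clips about each object.
TAG_AUDIO = {
    "04:a2:3b:11": "audio/hoa_hakananaia.mp3",
    "04:9f:c0:52": "audio/rosetta_stone.mp3",
}

def clip_for_tag(uid):
    """Return the audio file for a scanned tag, or None for unknown tags."""
    return TAG_AUDIO.get(uid.lower())

def play(uid):
    clip = clip_for_tag(uid)
    if clip is not None:
        # On a Raspberry Pi this might shell out to a player such as mpg123.
        subprocess.run(["mpg123", clip])
```

A mapping like this is also what makes the hacking angle work: students can re-record clips and re-point tag IDs at their own collections without touching the playback logic.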

Museum in a Box Raspberry Pi

The Museum in a Box team headed by CEO George Oates (featured in the video above) makes use of these 3D-printed figures alongside original artefacts, postcards, and more to bridge the gap between large, crowded, distant museums and local schools. Modeled after the museum handling collections that used to be sent to schools, Museum in a Box is a cheaper, more accessible alternative. Moreover, it not only allows for hands-on learning, but also encourages children to get directly involved by hacking its technology! With NFC technology readily available to the public, students can curate their own collections about their local area, record their own messages, and send their own box-sized museums on to schools in other towns or countries. In this way, Museum in a Box enables students to explore, and expand the reach of, their own histories.

Moving forward

With the technology perfected and interest in the project ever-growing, Museum in a Box has a busy year ahead. Supporting the new ‘Unstacked’ learning initiative, the team will soon be delivering ten boxes to the Smithsonian Libraries. The team has curated two collections specifically for this: an exploration into Asia-Pacific America experiences of migration to the USA throughout the 20th century, and a look into the history of science.

Smithsonian Library Museum in a Box Raspberry Pi

The team will also be making a box for the British Museum to support their Iraq Scheme initiative, and another box will be heading to the V&A to support their See Red programme. While primarily installed in the Lansbury Micro Museum, the box will also take to the road to visit the local Spotlight high school.

Museum in a Box at Raspberry Fields

Lastly, by far the most exciting thing the Museum in a Box team will be doing this year — in our opinion at least — is showcasing at Raspberry Fields! This is our brand-new festival of digital making that’s taking place on 30 June and 1 July 2018 here in Cambridge, UK. Find more information about it and get your ticket here.

The post Artefacts in the classroom with Museum in a Box appeared first on Raspberry Pi.

AWS Certificate Manager Launches Private Certificate Authority

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-certificate-manager-launches-private-certificate-authority/

Today we’re launching a new feature for AWS Certificate Manager (ACM), Private Certificate Authority (CA). This new service allows ACM to act as a private subordinate CA. Previously, if a customer wanted to use private certificates, they needed specialized infrastructure and security expertise that could be expensive to maintain and operate. ACM Private CA builds on ACM’s existing certificate capabilities to help you easily and securely manage the lifecycle of your private certificates with pay-as-you-go pricing. This enables developers to provision certificates in just a few simple API calls, while administrators have a central CA management console and fine-grained access control through granular IAM policies. ACM Private CA keys are stored securely in AWS managed hardware security modules (HSMs) that adhere to FIPS 140-2 Level 3 security standards. ACM Private CA automatically maintains certificate revocation lists (CRLs) in Amazon Simple Storage Service (S3) and lets administrators generate audit reports of certificate creation with the API or console. This service is packed full of features, so let’s jump in and provision a CA.
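As a sketch of what “a few simple API calls” can look like, the snippet below builds the request for the acm-pca IssueCertificate API as exposed by boto3. The ARN and CSR are placeholders, and the live calls are left in comments since they require an AWS account and an activated CA:

```python
def issue_certificate_params(ca_arn, csr_pem, days_valid=365):
    """Build the request for acm-pca's IssueCertificate API."""
    return {
        "CertificateAuthorityArn": ca_arn,
        "Csr": csr_pem.encode("utf-8"),
        "SigningAlgorithm": "SHA256WITHRSA",
        "Validity": {"Value": days_valid, "Type": "DAYS"},
    }

params = issue_certificate_params(
    "arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/placeholder",
    "-----BEGIN CERTIFICATE REQUEST-----\n...\n-----END CERTIFICATE REQUEST-----",
)
# With boto3 this would look roughly like:
#   client = boto3.client("acm-pca")
#   response = client.issue_certificate(**params)
#   client.get_certificate(CertificateAuthorityArn=params["CertificateAuthorityArn"],
#                          CertificateArn=response["CertificateArn"])
```

The console walkthrough that follows covers the same lifecycle interactively.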

Provisioning a Private Certificate Authority (CA)

First, I’ll navigate to the ACM console in my region and select the new Private CAs section in the sidebar. From there I’ll click Get Started to start the CA wizard. For now, I only have the option to provision a subordinate CA, so we’ll select that, use my super-secure desktop as the root CA, and click Next. This isn’t what I would do in a production setting, but it will work for testing out our private CA.

Now, I’ll configure the CA with some common details. The most important thing here is the Common Name which I’ll set as secure.internal to represent my internal domain.

Now I need to choose my key algorithm. You should choose the best algorithm for your needs, but know that ACM has a limitation today: it can only manage certificates that chain up to RSA CAs. For now, I’ll go with RSA 2048 bit and click Next.

In this next screen, I’m able to configure my certificate revocation list (CRL). CRLs are essential for notifying clients in the case that a certificate has been compromised before certificate expiration. ACM will maintain the revocation list for me, and I have the option of fronting my S3 bucket with a custom domain. In this case I’ll create a new S3 bucket to store my CRL and click Next.

Finally, I’ll review all the details to make sure I didn’t make any typos and click Confirm and create.

A few seconds later and I’m greeted with a fancy screen saying I successfully provisioned a certificate authority. Hooray! I’m not done yet though. I still need to activate my CA by creating a certificate signing request (CSR) and signing that with my root CA. I’ll click Get started to begin that process.

Now I’ll copy the CSR or download it to a server or desktop that has access to my root CA (or potentially another subordinate – so long as it chains to a trusted root for my clients).

Now I can use a tool like openssl to sign my cert and generate the certificate chain.


$openssl ca -config openssl_root.cnf -extensions v3_intermediate_ca -days 3650 -notext -md sha256 -in csr/CSR.pem -out certs/subordinate_cert.pem
Using configuration from openssl_root.cnf
Enter pass phrase for /Users/randhunt/dev/amzn/ca/private/root_private_key.pem:
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
stateOrProvinceName   :ASN.1 12:'Washington'
localityName          :ASN.1 12:'Seattle'
organizationName      :ASN.1 12:'Amazon'
organizationalUnitName:ASN.1 12:'Engineering'
commonName            :ASN.1 12:'secure.internal'
Certificate is to be certified until Mar 31 06:05:30 2028 GMT (3650 days)
Sign the certificate? [y/n]:y


1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated

After that I’ll copy my subordinate_cert.pem and certificate chain back into the console and click Next.

Finally, I’ll review all the information and click Confirm and import. I should see a screen like the one below that shows my CA has been activated successfully.

Now that I have a private CA, we can provision private certificates by hopping back to the ACM console and creating a new certificate. After clicking create a new certificate, I’ll select the Request a private certificate radio button, then click Request a certificate.

From there, the process is similar to provisioning a normal certificate in ACM.

Now I have a private certificate that I can bind to my ELBs, CloudFront Distributions, API Gateways, and more. I can also export the certificate for use on embedded devices or outside of ACM managed environments.
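For folks who prefer the SDK over the console, the same request can be sketched with boto3. This only builds the request parameters (the CA ARN and domain below are hypothetical placeholders); the commented call is what you’d actually run with AWS credentials configured:

```python
# Sketch: requesting a private certificate from ACM, backed by a private CA.
# The CA ARN and domain below are hypothetical placeholders.
def private_cert_request(domain, ca_arn):
    """Build the kwargs for acm.request_certificate for a private cert."""
    return {
        "DomainName": domain,
        "CertificateAuthorityArn": ca_arn,
    }

params = private_cert_request(
    "secure.internal",
    "arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/example-id",
)
# With credentials configured, this would be:
# boto3.client("acm").request_certificate(**params)
```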

Available Now
ACM Private CA is a service in and of itself, and it is packed full of features that won’t fit into a blog post. I strongly encourage interested readers to go through the developer guide and familiarize themselves with certificate-based security. ACM Private CA is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), and EU (Ireland). Each private CA costs $400 per month (prorated). You are not charged for certificates created and maintained in ACM, but you are charged for certificates where you have access to the private key (exported or created outside of ACM). The pricing per certificate is tiered, starting at $0.75 per certificate for the first 1,000 certificates and going down to $0.001 per certificate after 10,000 certificates.

I’m excited to see administrators and developers take advantage of this new service. As always please let us know what you think of this service on Twitter or in the comments below.

Randall

AWS Secrets Manager: Store, Distribute, and Rotate Credentials Securely

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-secrets-manager-store-distribute-and-rotate-credentials-securely/

Today we’re launching AWS Secrets Manager which makes it easy to store and retrieve your secrets via API or the AWS Command Line Interface (CLI) and rotate your credentials with built-in or custom AWS Lambda functions. Managing application secrets like database credentials, passwords, or API Keys is easy when you’re working locally with one machine and one application. As you grow and scale to many distributed microservices, it becomes a daunting task to securely store, distribute, rotate, and consume secrets. Previously, customers needed to provision and maintain additional infrastructure solely for secrets management which could incur costs and introduce unneeded complexity into systems.

AWS Secrets Manager

Imagine that I have an application that takes incoming tweets from Twitter and stores them in an Amazon Aurora database. Previously, I would have had to request a username and password from my database administrator and embed those credentials in environment variables or, in my race to production, even in the application itself. I would also need to have our social media manager create the Twitter API credentials and figure out how to store those. This is a fairly manual process, involving multiple people, that I have to restart every time I want to rotate these credentials. With Secrets Manager, my database administrator can provide the credentials in Secrets Manager once and subsequently rely on a Secrets Manager-provided Lambda function to automatically update and rotate those credentials. My social media manager can put the Twitter API keys in Secrets Manager, which I can then access with a simple API call, and I can even rotate these programmatically with a custom Lambda function calling out to the Twitter API. My secrets are encrypted with the KMS key of my choice, and each of these administrators can explicitly grant access to these secrets with granular IAM policies for individual roles or users.

Let’s take a look at how I would store a secret using the AWS Secrets Manager console. First, I’ll click Store a new secret to get to the new secrets wizard. For my RDS Aurora instance it’s straightforward to simply select the instance and provide the initial username and password to connect to the database.

Next, I’ll fill in a quick description and a name to access my secret by. You can use whatever naming scheme you want here.

Next, we’ll configure rotation to use the Secrets Manager-provided Lambda function to rotate our password every 10 days.

Finally, we’ll review all the details and check out our sample code for storing and retrieving our secret!

Once that’s done, I can review the secrets in the console.

Now, if I needed to access these secrets I’d simply call the API.

import json
import boto3

secrets = boto3.client("secretsmanager")
rds = json.loads(secrets.get_secret_value(SecretId="prod/TwitterApp/Database")["SecretString"])
print(rds)

Which would give me the following values:


{'engine': 'mysql',
 'host': 'twitterapp2.abcdefg.us-east-1.rds.amazonaws.com',
 'password': '-)Kw>THISISAFAKEPASSWORD:lg{&sad+Canr',
 'port': 3306,
 'username': 'ranman'}

More than passwords

AWS Secrets Manager works for more than just passwords. I can store OAuth credentials, binary data, and more. Let’s look at storing my Twitter OAuth application keys.
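As a rough sketch of what that might look like programmatically — the secret name and key fields here are hypothetical placeholders — the OAuth keys go in as a single JSON string:

```python
# Sketch: packing third-party OAuth keys into one JSON secret. The name and
# fields are made up for illustration; with AWS credentials configured, the
# commented boto3 call is what would actually store it.
import json

oauth_secret = {
    "Name": "prod/TwitterApp/OAuth",
    "SecretString": json.dumps({
        "consumer_key": "EXAMPLE_KEY",
        "consumer_secret": "EXAMPLE_SECRET",
    }),
}
# boto3.client("secretsmanager").create_secret(**oauth_secret)
```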

Now, I can define the rotation for these third-party OAuth credentials with a custom AWS Lambda function that can call out to Twitter whenever we need to rotate our credentials.

Custom Rotation

One of the niftiest features of AWS Secrets Manager is custom AWS Lambda functions for credential rotation. This allows you to define completely custom workflows for credentials. Secrets Manager will invoke your Lambda function with a payload that includes a Step, which specifies which step of the rotation you’re in; a SecretId, which specifies which secret the rotation is for; and, importantly, a ClientRequestToken, which is used to ensure idempotency in any changes to the underlying secret.

When you’re rotating secrets you go through a few different steps:

  1. createSecret
  2. setSecret
  3. testSecret
  4. finishSecret

The advantage of these steps is that you can add any kind of approval steps you want for each phase of the rotation. For more details on custom rotation check out the documentation.

Available Now
AWS Secrets Manager is available today in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), and South America (São Paulo). Secrets are priced at $0.40 per month per secret and $0.05 per 10,000 API calls. I’m looking forward to seeing more users adopt rotating credentials to secure their applications!

Randall

Raspberry Jam Big Birthday Weekend 2018 roundup

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/big-birthday-weekend-2018-roundup/

A couple of weekends ago, we celebrated our sixth birthday by coordinating more than 100 simultaneous Raspberry Jam events around the world. The Big Birthday Weekend was a huge success: our fantastic community organised Jams in 40 countries, covering six continents!

We sent the Jams special birthday kits to help them celebrate in style, and a video message featuring a thank you from Philip and Eben:

Raspberry Jam Big Birthday Weekend 2018

To celebrate the Raspberry Pi’s sixth birthday, we coordinated Raspberry Jams all over the world to take place over the Raspberry Jam Big Birthday Weekend, 3-4 March 2018. A massive thank you to everyone who ran an event and attended.

The Raspberry Jam photo booth

I put together code for a Pi-powered photo booth which overlaid the Big Birthday Weekend logo onto photos and (optionally) tweeted them. We included an arcade button in the Jam kits so they could build one — and it seemed to be quite popular. Some Jams put great effort into housing their photo booth:



Here are some of my favourite photo booth tweets:

RGVSA on Twitter

PiParty photo booth @RGVSA & @Nerdvana_io #Rjam

Denis Stretton on Twitter

The @SouthendRPIJams #PiParty photo booth

rpijamtokyo on Twitter

PiParty photo booth

Preston Raspberry Jam on Twitter

Preston Raspberry Jam Photobooth #RJam #PiParty

If you want to try out the photo booth software yourself, find the code on GitHub.

The great Raspberry Jam bake-off

Traditionally, in the UK, people have a cake on their birthday. And we had a few! We saw (and tasted) a great selection of Pi-themed cakes and other baked goods throughout the weekend:






Raspberry Jams everywhere

We always say that every Jam is different, but there’s a common and recognisable theme amongst them. It was great to see so many different venues around the world filling up with like-minded Pi enthusiasts, Raspberry Jam–branded banners, and Raspberry Pi balloons!

Europe

Sergio Martinez on Twitter

Thank you so much to all the attendees of the Ikana Jam in Krakow past Saturday! We shared fun experiences, some of them… also painful 😉 A big thank you to @Raspberry_Pi for these global celebrations! And a big thank you to @hubraum for their hospitality! #PiParty #rjam

NI Raspberry Jam on Twitter

We also had a super successful set of wearables workshops using @adafruit Circuit Playground Express boards and conductive thread at today’s @Raspberry_Pi Jam! Very popular! #PiParty

Suzystar on Twitter

My SenseHAT workshop, going well! @SouthendRPiJams #PiParty

Worksop College Raspberry Jam on Twitter

Learning how to scare the zombies in case of an apocalypse- it worked on our young learners #PiParty @worksopcollege @Raspberry_Pi https://t.co/pntEm57TJl

Africa

Rita on Twitter

Being one of the two places in Kenya where the #PiParty took place, it was an amazing time spending the day with this team and getting to learn and have fun. @TaitaTavetaUni and @Raspberry_Pi thank you for your support. @TTUTechlady @mictecttu ch

GABRIEL ONIFADE on Twitter

@TheMagP1

GABRIEL ONIFADE on Twitter

@GABONIAVERACITY #PiParty Lagos Raspberry Jam 2018 Special International Celebration – 6th Raspberry-Pi Big Birthday! Lagos Nigeria @Raspberry_Pi @ben_nuttall #RJam #RaspberryJam #raspberrypi #physicalcomputing #robotics #edtech #coding #programming #edTechAfrica #veracityhouse https://t.co/V7yLxaYGNx

North America

Heidi Baynes on Twitter

The Riverside Raspberry Jam @Vocademy is underway! #piparty

Brad Derstine on Twitter

The Philly & Pi #PiParty event with @Bresslergroup and @TechGirlzorg was awesome! The Scratch and Pi workshop was amazing! It was overall a great day of fun and tech!!! Thank you everyone who came out!

Houston Raspi on Twitter

Thanks everyone who came out to the @Raspberry_Pi Big Birthday Jam! Special thanks to @PBFerrell @estefanniegg @pcsforme @pandafulmanda @colnels @bquentin3 couldn’t’ve put on this amazing community event without you guys!

Merge Robotics 2706 on Twitter

We are back at @SciTechMuseum for the second day of @OttawaPiJam! Our robot Mergius loves playing catch with the kids! #pijam #piparty #omgrobots

South America

Javier Garzón on Twitter

And that’s how we wrapped up the #Raspberry Jam Big Birthday Weekend #Bogota 2018 #PiParty of #RaspberryJamBogota 2018 @Raspberry_Pi. See you on 7 March at #ArduinoDayBogota 2018 and #RaspberryJamBogota 2018

Asia

Fablab UP Cebu on Twitter

Happy 6th birthday, @Raspberry_Pi! Greetings all the way from CEBU,PH! #PiParty #IoTCebu Thanks @CebuXGeeks X Ramos for these awesome pics. #Fablab #UPCebu

福野泰介 on Twitter

The Raspberry Pi 6th birthday party is kicking off in Tokyo! At the PCN booth we have various exhibits and a mini kids’ IoT hackathon experience connecting https://t.co/L6E7KgyNHF with IchigoJam, near Kamata Station in Tokyo https://t.co/yHEuqXHvqe #piparty #pipartytokyo #rjam #opendataday

Ren Camp on Twitter

Happy birthday @Raspberry_Pi! #piparty #iotcebu @coolnumber9 https://t.co/2ESVjfRJ2d

Oceania

Glenunga Raspberry Pi Club on Twitter

PiParty photo booth

Personally, I managed to get to three Jams over the weekend: two run by the same people who put on the first two Jams to ever take place, and also one brand-new one! The Preston Raspberry Jam team, who usually run their event on a Monday evening, wanted to do something extra special for the birthday, so they came up with the idea of putting on a Raspberry Jam Sandwich — on the Friday and Monday around the weekend! This meant I was able to visit them on Friday, then attend the Manchester Raspberry Jam on Saturday, and finally drop by the new Jam at Worksop College on my way home on Sunday.

Ben Nuttall on Twitter

I’m at my first Raspberry Jam #PiParty event of the big birthday weekend! @PrestonRJam has been running for nearly 6 years and is a great place to start the celebrations!

Ben Nuttall on Twitter

Back at @McrRaspJam at @DigInnMMU for #PiParty

Ben Nuttall on Twitter

Great to see mine & @Frans_facts Balloon Pi-Tay popper project in action at @worksopjam #rjam #PiParty https://t.co/GswFm0UuPg

Various members of the Foundation team attended Jams around the UK and US, and James from the Code Club International team visited AmsterJam.

hackerfemo on Twitter

Thanks to everyone who came to our Jam and everyone who helped out. @phoenixtogether thanks for amazing cake & hosting. Ademir you’re so cool. It was awesome to meet Craig Morley from @Raspberry_Pi too. #PiParty

Stuart Fox on Twitter

Great #PiParty today at the @cotswoldjam with bloody delicious cake and lots of raspberry goodness. Great to see @ClareSutcliffe @martinohanlon playing on my new pi powered arcade build:-)

Clare Sutcliffe on Twitter

Happy 6th Birthday @Raspberry_Pi from everyone at the #PiParty at #cotswoldjam in Cheltenham!

Code Club on Twitter

It’s @Raspberry_Pi 6th birthday and we’re celebrating by taking part in @amsterjam__! Happy Birthday Raspberry Pi, we’re so happy to be a part of the family! #PiParty

For more Jammy birthday goodness, check out the PiParty hashtag on Twitter!

The Jam makers!

A lot of preparation went into each Jam, and we really appreciate all the hard work the Jam makers put in to making these events happen, on the Big Birthday Weekend and all year round. Thanks also to all the teams that sent us a group photo:

Lots of the Jams that took place were brand-new events, so we hope to see them continue throughout 2018 and beyond, growing the Raspberry Pi community around the world and giving more people, particularly youths, the opportunity to learn digital making skills.

Philip Colligan on Twitter

So many wonderful people in the @Raspberry_Pi community. Thanks to everyone at #PottonPiAndPints for a great afternoon and for everything you do to help young people learn digital making. #PiParty

Special thanks to ModMyPi for shipping the special Raspberry Jam kits all over the world!

Don’t forget to check out our Jam page to find an event near you! This is also where you can find free resources to help you get a new Jam started, and download free starter projects made especially for Jam activities. These projects are available in English, Français, Français Canadien, Nederlands, Deutsch, Italiano, and 日本語. If you’d like to help us translate more content into these and other languages, please get in touch!

PS Some of the UK Jams were postponed due to heavy snowfall, so you may find there’s a belated sixth-birthday Jam coming up where you live!

S Organ on Twitter

@TheMagP1 Ours was rescheduled until later in the Spring due to the snow but here is Babbage enjoying the snow!

The post Raspberry Jam Big Birthday Weekend 2018 roundup appeared first on Raspberry Pi.

Message Filtering Operators for Numeric Matching, Prefix Matching, and Blacklisting in Amazon SNS

Post Syndicated from Christie Gifrin original https://aws.amazon.com/blogs/compute/message-filtering-operators-for-numeric-matching-prefix-matching-and-blacklisting-in-amazon-sns/

This blog was contributed by Otavio Ferreira, Software Development Manager for Amazon SNS

Message filtering simplifies the overall pub/sub messaging architecture by offloading message filtering logic from subscribers, as well as message routing logic from publishers. The initial launch of message filtering provided a basic operator that was based on exact string comparison. For more information, see Simplify Your Pub/Sub Messaging with Amazon SNS Message Filtering.

Today, AWS is announcing an additional set of filtering operators that bring even more power and flexibility to your pub/sub messaging use cases.

Message filtering operators

Amazon SNS now supports both numeric and string matching. Specifically, string matching operators allow for exact, prefix, and “anything-but” comparisons, while numeric matching operators allow for exact and range comparisons, as outlined below. Numeric matching operators work for values between -10e9 and +10e9 inclusive, with five digits of accuracy right of the decimal point.

  • Exact matching on string values (Whitelisting): Subscription filter policy   {"sport": ["rugby"]} matches message attribute {"sport": "rugby"} only.
  • Anything-but matching on string values (Blacklisting): Subscription filter policy {"sport": [{"anything-but": "rugby"}]} matches message attributes such as {"sport": "baseball"} and {"sport": "basketball"} and {"sport": "football"} but not {"sport": "rugby"}
  • Prefix matching on string values: Subscription filter policy {"sport": [{"prefix": "bas"}]} matches message attributes such as {"sport": "baseball"} and {"sport": "basketball"}
  • Exact matching on numeric values: Subscription filter policy {"balance": [{"numeric": ["=", 301.5]}]} matches message attributes {"balance": 301.500} and {"balance": 3.015e2}
  • Range matching on numeric values: Subscription filter policy {"balance": [{"numeric": ["<", 0]}]} matches negative numbers only, and {"balance": [{"numeric": [">", 0, "<=", 150]}]} matches any positive number up to 150.

As usual, you may apply the “AND” logic by appending multiple keys in the subscription filter policy, and the “OR” logic by appending multiple values for the same key, as follows:

  • AND logic: Subscription filter policy {"sport": ["rugby"], "language": ["English"]} matches only messages that carry both attributes {"sport": "rugby"} and {"language": "English"}
  • OR logic: Subscription filter policy {"sport": ["rugby", "football"]} matches messages that carry either the attribute {"sport": "rugby"} or {"sport": "football"}
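To build intuition for these semantics, here is a small, self-contained Python model of the matching rules described above — a simplified sketch, not the actual Amazon SNS implementation:

```python
# A simplified, local model of the filter-policy matching rules above --
# for intuition only, not the real Amazon SNS matcher.
def matches(policy, attributes):
    """AND across policy keys; OR across the values listed under each key."""
    for key, conditions in policy.items():
        if key not in attributes:
            return False
        if not any(match_one(cond, attributes[key]) for cond in conditions):
            return False
    return True

def match_one(cond, value):
    if isinstance(cond, str):              # exact string match (whitelisting)
        return value == cond
    if "anything-but" in cond:             # blacklisting
        return value != cond["anything-but"]
    if "prefix" in cond:                   # prefix match
        return isinstance(value, str) and value.startswith(cond["prefix"])
    if "numeric" in cond:                  # e.g. [">", 0, "<=", 150]
        ops = {"=": lambda a, b: a == b, "<": lambda a, b: a < b,
               "<=": lambda a, b: a <= b, ">": lambda a, b: a > b,
               ">=": lambda a, b: a >= b}
        terms = cond["numeric"]
        return all(ops[terms[i]](value, terms[i + 1])
                   for i in range(0, len(terms), 2))
    return False

# AND across keys, OR within a key:
policy = {"sport": ["rugby", "football"], "language": ["English"]}
print(matches(policy, {"sport": "football", "language": "English"}))  # True
print(matches(policy, {"sport": "rugby"}))                            # False
```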

Message filtering operators in action

Here’s how this new set of filtering operators works. The following example is based on a pharmaceutical company that develops, produces, and markets a variety of prescription drugs, with research labs located in Asia Pacific and Europe. The company built an internal procurement system to manage the purchasing of lab supplies (for example, chemicals and utensils), office supplies (for example, paper, folders, and markers) and tech supplies (for example, laptops, monitors, and printers) from global suppliers.

This distributed system is composed of the four following subsystems:

  • A requisition system that presents the catalog of products from suppliers, and takes orders from buyers
  • An approval system for orders targeted to Asia Pacific labs
  • Another approval system for orders targeted to European labs
  • A fulfillment system that integrates with shipping partners

As shown in the following diagram, the company leverages AWS messaging services to integrate these distributed systems.

  • Firstly, an SNS topic named “Orders” was created to take all orders placed by buyers on the requisition system.
  • Secondly, two Amazon SQS queues, named “Lab-Orders-AP” and “Lab-Orders-EU” (for Asia Pacific and Europe respectively), were created to backlog orders that are up for review on the approval systems.
  • Lastly, an SQS queue named “Common-Orders” was created to backlog orders that aren’t related to lab supplies, which can already be picked up by shipping partners on the fulfillment system.

The company also uses AWS Lambda functions to automatically process lab supply orders that either don’t require approval or are invalid.

In this example, because different types of orders have been published to the SNS topic, the subscribing endpoints have had to set advanced filter policies on their SNS subscriptions, to have SNS automatically filter out orders they can’t deal with.

As depicted in the above diagram, the following five filter policies have been created:

  • The SNS subscription that points to the SQS queue “Lab-Orders-AP” sets a filter policy that matches lab supply orders, with a total value greater than $1,000, and that target Asia Pacific labs only. These more expensive transactions require an approver to review orders placed by buyers.
  • The SNS subscription that points to the SQS queue “Lab-Orders-EU” sets a filter policy that matches lab supply orders, also with a total value greater than $1,000, but that target European labs instead.
  • The SNS subscription that points to the Lambda function “Lab-Preapproved” sets a filter policy that only matches lab supply orders that aren’t as expensive, up to $1,000, regardless of their target lab location. These orders simply don’t require approval and can be automatically processed.
  • The SNS subscription that points to the Lambda function “Lab-Cancelled” sets a filter policy that only matches lab supply orders with a total value of $0 (zero), regardless of their target lab location. These orders carry no actual items, obviously need neither approval nor fulfillment, and as such can be automatically canceled.
  • The SNS subscription that points to the SQS queue “Common-Orders” sets a filter policy that blacklists lab supply orders. Hence, this policy matches only office and tech supply orders, which have a more streamlined fulfillment process, and require no approval, regardless of price or target location.
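The post doesn’t show the actual policy documents, so here is a hedged sketch of what the five filter policies above might look like. The attribute names (“category”, “value”, “location”) are illustrative assumptions, not the company’s real schema:

```python
# Hypothetical sketches of the five subscription filter policies described
# above. Attribute names are assumed for illustration.
lab_orders_ap = {
    "category": ["lab-supplies"],
    "value": [{"numeric": [">", 1000]}],
    "location": [{"prefix": "Asia-Pacific-"}],
}
lab_orders_eu = {
    "category": ["lab-supplies"],
    "value": [{"numeric": [">", 1000]}],
    "location": [{"prefix": "Europe-"}],
}
lab_preapproved = {
    "category": ["lab-supplies"],
    "value": [{"numeric": [">", 0, "<=", 1000]}],
}
lab_cancelled = {
    "category": ["lab-supplies"],
    "value": [{"numeric": ["=", 0]}],
}
common_orders = {
    "category": [{"anything-but": "lab-supplies"}],
}
```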

After the company finished building this advanced pub/sub architecture, they were then able to launch their internal procurement system and allow buyers to begin placing orders. The diagram above shows six example orders published to the SNS topic. Each order contains message attributes that describe the order, and cause them to be filtered in a different manner, as follows:

  • Message #1 is a lab supply order, with a total value of $15,700 and targeting a research lab in Singapore. Because the value is greater than $1,000, and the location “Asia-Pacific-Southeast” matches the prefix “Asia-Pacific-“, this message matches the first SNS subscription and is delivered to SQS queue “Lab-Orders-AP”.
  • Message #2 is a lab supply order, with a total value of $1,833 and targeting a research lab in Ireland. Because the value is greater than $1,000, and the location “Europe-West” matches the prefix “Europe-“, this message matches the second SNS subscription and is delivered to SQS queue “Lab-Orders-EU”.
  • Message #3 is a lab supply order, with a total value of $415. Because the value is greater than $0 and less than $1,000, this message matches the third SNS subscription and is delivered to Lambda function “Lab-Preapproved”.
  • Message #4 is a lab supply order, but with a total value of $0. Therefore, it only matches the fourth SNS subscription, and is delivered to Lambda function “Lab-Cancelled”.
  • Messages #5 and #6 actually aren’t lab supply orders; one is an office supply order, and the other is a tech supply order. Therefore, they only match the fifth SNS subscription, and are both delivered to SQS queue “Common-Orders”.

Although each message only matched a single subscription, each was tested against the filter policy of every subscription in the topic. Hence, depending on which attributes are set on the incoming message, the message might actually match multiple subscriptions, and multiple deliveries will take place. Also, it is important to bear in mind that subscriptions with no filter policies catch every single message published to the topic, as a blank filter policy equates to a catch-all behavior.

Summary

Amazon SNS allows for both string and numeric filtering operators. As explained in this post, string operators allow for exact, prefix, and “anything-but” comparisons, while numeric operators allow for exact and range comparisons. These advanced filtering operators bring even more power and flexibility to your pub/sub messaging functionality and also allow you to simplify your architecture further by removing even more logic from your subscribers.

Message filtering can be implemented easily with existing AWS SDKs by applying message and subscription attributes across all SNS supported protocols (Amazon SQS, AWS Lambda, HTTP, SMS, email, and mobile push). SNS filtering operators for numeric matching, prefix matching, and blacklisting are available now in all AWS Regions, for no extra charge.

To experiment with these new filtering operators yourself, and continue learning, try the 10-minute Tutorial Filter Messages Published to Topics. For more information, see Filtering Messages with Amazon SNS in the SNS documentation.

AWS Summit Season is Almost Here – Get Ready to Register!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-summit-season-is-almost-here-get-ready-to-register/

I’m writing this post from my hotel room in Tokyo while doing my best to fight jet lag! I’m here to speak at JAWS Days and Startup Day, and to meet with some local customers.

I do want to remind you that the AWS Global Summit series is just about to start! With events planned for North America, Latin America, Japan and the rest of Asia, Europe, the Middle East, Africa, and Greater China, odds are that there’s one not too far from you. You can register for the San Francisco Summit today and you can ask to be notified as soon as registration for the other 30+ cities opens up.

The Summits are offered at no charge and are an excellent way for you to learn more about AWS. You’ll get to hear from our leaders and tech teams, our partners, and from other customers. You can also participate in hands-on workshops, labs, and team challenges.

Because the events are multi-track, you may want to bring a colleague or two in order to make sure that you don’t miss something of interest to your organization.

Jeff;

PS – I keep meaning to share this cool video that my friend Mike Selinker took at AWS re:Invent. Check it out!