Tag Archives: Partners

Why I’m helping Cloudflare build its partnerships worldwide

Post Syndicated from Matthew Harrell original https://blog.cloudflare.com/helping-cloudflare-build-its-partnerships-worldwide/

Cloudflare has always had an audacious mission: to help build a better Internet. From its inception, the company realized that a mission this big couldn’t be taken on alone. Such an undertaking would require the help of an extraordinary group of partners. Early in the company’s history, Cloudflare built strong relationships with many hosting providers to protect and accelerate Internet traffic. And through the years, Cloudflare has continued to build some amazing Enterprise partnerships and strategic alliances.

As we continue to grow and foster our partner ecosystem, we are excited to announce Cloudflare’s next iteration of its Partner Program—to engage and enable an equally audacious set of partners that want to help build a better Internet, together.

I recently joined Cloudflare to run Global Channel Sales & Partnerships after spending over nine years at Google Cloud in various indirect and direct leadership roles. At Google, I witnessed the powerful impact that a strong partner ecosystem could have on solving complex organizational and societal problems. By combining innovative technologies provided by the manufacturer, with deep domain expertise provided by the partner, we delivered valuable industry solutions to our customers. And through this process, we helped our partners build valuable businesses, accelerate growth, and bring new innovation economies to all parts of the globe.

I joined Cloudflare because I strongly believe in its mission to help build a better Internet, and believe this mission, paired with its massive global network, will enable the company to continue to deliver incredibly innovative solutions to customers of all segments. Cloudflare has strong brand recognition, a market leading product portfolio, an ambitious vision, and a leadership team that is 100% committed to building out the channel and partner program.

I’m excited to connect with Cloudflare partners, and my first priority as the global channel leader is to provide our partners with the tools and programs which allow them to build a compelling business around our products. I’m eager to continue developing a world class program and organization that is:

  • Focused on helping partners build compelling businesses: Cloudflare has a history of democratizing Internet technologies that were once difficult to access, or complicated to use and even understand, such as free SSL, unmetered DDoS mitigation, and wholesale domain registration. We plan to take a similar market-shifting approach with our partners. We are redesigning our partner program with a vision of developing best-in-class revenue share models and value-added professional services and managed services that we scale through our partners.
  • Easy to do business with: Cloudflare has always prided itself on its ease of use, and we want the partner experience to be just as seamless. We have redesigned how our partners engage with us—from initial sign-up to ongoing engagement—to make it even easier for partners to do business with us. This includes simplifying the deal registration process, streamlining product training for partner reps, making deal tracking straightforward, and making it easier overall to profit from a relationship with Cloudflare.
  • Strategically focused: Cloudflare has always relied on valuable partnerships in its mission to help build a better Internet. We are expanding that commitment by diving deeper with those partners that are committed to building their businesses around Cloudflare. We plan to invest resources and design partner-first programs that reward partners for leaning in and investing in Cloudflare’s mission.

Today, you’ll see a few important announcements around the future of our program and how we continue to scale to support some of our most complex partnerships.

We look forward to helping you build your business with Cloudflare!

For those partners that will be in London, please join us at Cloudflare Connect // London, our second annual London gathering of distinguished businesses and technologists, including many Cloudflare customers, partners, and developers. This is Cloudflare’s marquee customer event, which means the content and experience are built for you. I plan to be there personally to formally announce our new partner program, and provide insights on what’s to come.

You can register here: CloudflareConnect.com



Cloudflare Partners: A New Program with New Partners

Post Syndicated from Dan Hollinger original https://blog.cloudflare.com/cloudflare-partners-a-new-program-with-new-partners/


Many overlook a critical portion of the language in Cloudflare’s mission: “to help build a better Internet.” From the beginning, we knew a mission this bold, an undertaking of this magnitude, couldn’t be done alone. We could only help. To ultimately build a better Internet, it would take a diverse and engaged ecosystem of technologies, customers, partners, and end-users. Fortunately, we’ve been able to work with amazing partners as we’ve grown, and we are eager to announce new, specific programs to grow our ecosystem with an increasingly diverse set of partners.

Today, we’re excited to announce the latest iteration of our partnership program for solutions partners. The program encompasses resellers and referral partners, OEM partners, and the new partner services program. Over the past few years, we’ve grown and learned from some amazing partnerships, and we want to bring those best practices to our latest partners at scale—to help them grow their business with Cloudflare’s global network.

Cloudflare Partner Tiers

Partner Program for Solution Partners

Every partner program out there has tiers, and Cloudflare’s program is no exception. However, our tiering was built to help our partners ramp up, accelerate, and move fast. As Matt Harrell highlighted, we want the process to be as seamless as possible, to help partners find the level of engagement that works best for them, with world-class enablement paths and best-in-class revenue share models—built in from the beginning.

World-Class Enablement

Cloudflare offers complimentary training and enablement to all partners. From self-serve paths to partner-focused webinars, instructor-led courses, and certification—we want to ensure our partners can develop deep Cloudflare product expertise, to make them as effective as possible when utilizing our massive global network.

Driving Business Value

We want our partners to grow and succeed. From self-serve resellers to our most custom integrations, we want to make it frictionless to build a profitable business on Cloudflare. From our tenant API system to dedicated account teams, we’re ready to help you go to market with solutions that help end customers. This includes opportunities to co-market, develop target accounts, and partner directly to succeed with strategic accounts.

Cloudflare recognizes that, in order to help build a better Internet, we need successful partners—and our latest program is built to help partners build and grow profitable business models around Cloudflare’s solutions.

Partner Services Program – SIs, MSSPs, PSOs

For the first time, we are expanding our program to also include service providers that want to develop and grow profitable services practices around Cloudflare’s solutions.

Our customers face some of the most complex challenges on the Internet. From those challenges, we’ve already seen some amazing opportunities for service providers to create value, grow their business, and make an impact. From customers migrating to public, hybrid, or multi-cloud environments for the first time, to customers entirely re-writing applications using Cloudflare Workers®, the need for expertise has increased across our entire customer base. In early pilots, we’ve seen four major categories for building a successful service practice around Cloudflare:

  • Network Digital Transformations – Help customers migrate and modernize their network solution. Cloudflare is the only cloud-native network to give Enterprise-grade control and visibility into on-prem, hybrid, and multi-cloud architectures.
  • Serverless Architecture Development – Provide serverless application development services, thought leadership, and technical consulting around leveraging Cloudflare Workers and Apps.
  • Managed Security & Insights – Enable CISOs and IT leaders to obtain single-pane-of-glass reporting and policy management across all Internet-facing applications with Cloudflare’s security solutions.
  • Managed Performance & Reliability – Keep customers’ high-availability applications running quickly and efficiently with Cloudflare’s Global Load Balancing, Smart Routing, and Anycast DNS, enabling performance consulting, traffic analysis, and application monitoring.

As we expand this program, we are looking for audacious service providers and system integrators that want to help us build a better Internet for our mutual customers. Cloudflare can be an essential linchpin in simplifying and accelerating digital transformations. We imagine a future where massive applications run within 10ms of 90% of the global population, and where a single-pane solution provides security, performance, and reliability for mission-critical applications—running across multiple clouds. Cloudflare needs amazing global integrators and service partners to help realize this future.

If you are interested in learning more about becoming a service partner and growing your business with Cloudflare, please reach out to [email protected] or explore cloudflare.com/partners/services.

Just Getting Started

Metcalfe’s law states that the value of a network grows with the square of the number of nodes in the network. And within the global Cloudflare network, we want as many partner nodes as possible—from agencies to systems integrators, managed security providers, Enterprise resellers, and OEMs. A diverse ecosystem of partners is essential to our mission of helping to build a better Internet, together. We are dedicated to the success of our partners, and we will continue to iterate and develop our programs to make sure our partners can grow and develop on Cloudflare’s global network. Our commitment moving forward is that Cloudflare will be the easiest and most valuable solution for channel partners to sell and support globally.



Announcing the New Cloudflare Partner Platform

Post Syndicated from Garrett Galow original https://blog.cloudflare.com/announcing-the-new-cloudflare-partner-platform/


When I first started at Cloudflare over two years ago, one of the first things I was tasked with was to help evolve our partner platform to support the changes in our service and the expanding needs of our partners and customers. Cloudflare’s existing partner platform was released in 2010. It is a testament to those who built it that it was, and still is, in use today—but it was also clear that the landscape had substantially changed.

Since the launch of the existing partner platform, we had built and expanded multi-user access, and launched many new products: Argo, Load Balancing, and Cloudflare Workers, to name a few. Retrofitting the existing offering was not practical. Cloudflare needed a new partner platform that could meet the needs of partners and their customers.

As the team started to develop a new solution, we needed to find a partner who could keep us on the right path. The number of hypotheticals was infinite, and we needed a first customer to ground ourselves. Lo and behold, not long after I had begun putting pen to paper, we found the perfect partner for the new platform.

The IBM Partnership

IBM was looking for a partner to bring various edge services to market quickly, and our suite of capabilities was what they were looking for. If you are not familiar with our partnership with IBM, you can learn a bit more about it in our blog post and on the IBM Cloud Internet Services landing page. We signed the contract in November 2017, and we had to be ready to launch by IBM Think the following February. Given that IBM’s engineering team needed time to integrate with us, we were on a tight timeline to deliver.

A number of team members and I jumped on a plane and flew to Austin, Texas (Hook ‘em!) to work with IBM and determine the minimum viable product (MVP). Over kolaches (for the Czech readers at home: Klobásník), IBM and Cloudflare nailed down the MVP requirements. Briefly, they were as follows:

  1. Full API integration to provision the building blocks of using Cloudflare.
    • This included:
      1. Accounts: The container of resources – typically zones
      2. Users: The way in which we partition access to accounts
  2. The ability to sell and provision Cloudflare’s paid services and package them in a way that made sense for IBM’s customers.
    • Our existing partner platform only supported zone plans and none of our newer offerings, such as Argo or load balancing.
    • IBM had specific requirements around how they could package and sell to customers, so our solution needed to be flexible enough to support that.
  3. Ensure that what we built was re-usable.
    • Cloudflare makes it a point to solve problems for scale. While we were focused on ensuring our first partner would be successful, we knew that long term we would need to be able to scale this solution to additional partners. Nothing we built could prevent us from doing that.

Over the next couple of months, many teams at Cloudflare came together to deliver this solution at breakneck speed. Given that the midpoint of this effort happened over the holiday season, I’m personally proud of our company for not sacrificing employees’ time with their friends and families in order to deliver. Even when it feels like a sprint, it is still a marathon.

During this time, the engineering team we were working with at IBM felt like another team at Cloudflare. Their ability to move quickly, integrate, and validate our work was critical to the success of the project. At THINK in February 2018, we were able to announce the Beta of IBM CIS (Cloud Internet Services) powered by Cloudflare!

Following the initial release, we continued to add functionality to further enrich the IBM CIS offering, while behind the scenes we continued our work to redefine Cloudflare’s partner platform.

The New Partner Platform

Over the past year we have expanded the capabilities and completed the necessary work to enable more partners to use what we initially built for the IBM partnership. Out of that work comes the new partner platform we are announcing today. The new partner platform allows partners of Cloudflare to sell and provision Cloudflare for their customers in a scalable fashion.

Our new partner platform is the combination of two systems designed to fulfill specific needs:

1. Tenants: an abstraction on top of our existing accounts and users for easier management
2. Subscriptions: a new way of packaging and provisioning services

Tenants

An absolute necessity for partners is the ability to provision accounts for each of their customers. Normally the only way to get a Cloudflare account is to sign up on the dashboard. We needed a way for partners to be able to create end customer accounts at their discretion to support their specific onboarding needs. This also ensures proper separation of ownership between customers and allows end customers to access the Cloudflare dashboard directly.
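As a rough sketch of what this looks like in practice (the credential placeholders match the walkthrough later in this post), a partner can list the accounts it has created using the existing accounts API:

curl -X GET \
    'https://api.cloudflare.com/client/v4/accounts?page=1&per_page=20' \
    -H 'Content-Type: application/json' \
    -H 'x-auth-email: <x-auth-email>' \
    -H 'x-auth-key: <x-auth-key>'

Each entry in the response is a separate customer account container, which can then be targeted by the member, zone, and subscription calls shown later in this post.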

With the introduction of tenants, our data model now looks like the following:

Cloudflare Resource Data Model

Tenants provide partners the ability to create and manage the accounts for their customers. Each account created is a separate container of resources (zones, workers, etc.) for each customer. Users can be invited to each account as necessary for self-service management, while the partner retains control of the capabilities enabled for each account. How a partner manages those capabilities brings us to the second major system that makes up the new partner platform.

Subscriptions

While not as obvious as the need for account provisioning, the ability to package and provision services is critical to providing differentiated offerings for partners of Cloudflare. One drawback of our old partner platform was the difficulty in ensuring new products and services were available to those partners. As Cloudflare grew, it reached the point where new paid services could not be added into the existing partner platform.

With subscriptions, this is no longer the case. What started as just a way to provision services for IBM has now grown into the standard for how all customer services are provisioned at Cloudflare. Whether you purchase services through IBM CIS or buy Cloudflare Workers in our dashboard, behind the scenes, Subscriptions is what ensures you get exactly the right services enabled.

Enough talk, let’s show things in action!

The Partner Platform in Action

The full details of using the new partner platform can be found in our Provisioning API docs, but here we provide a walkthrough of a typical use case.

Using the new partner platform involves 4 steps:

  1. Provisioning Customer Accounts
  2. Granting Customer Access
  3. Enabling Services
  4. Service Configuration

1) Provisioning Customer Accounts

When onboarding customers, you want each to have their own Cloudflare account. This ensures one customer cannot affect any resources belonging to another. By making a `POST /accounts` request, you can create an account for an individual customer.

Request:

curl -X POST \
    https://api.cloudflare.com/client/v4/accounts \
    -H 'Content-Type: application/json' \
    -H 'x-auth-email: <x-auth-email>' \
    -H 'x-auth-key: <x-auth-key>' \
    -d '{ "name": "Customer Account", 
          "type": "standard" 
        }'

Response:

{
    "result": {
        "id": "2bab6ace8c72ed3f09b9eca6db1396bb",
        "name": "Customer Account",
        "type": "standard",
        "settings": {
            "enforce_twofactor": false
        }
    },
    "success": true,
    "errors": [],
    "messages": []
}

This new account is owned by the partner. It can be managed by API, or in the UI by the partner or any additional administrators that are invited.

2) Granting Customer Access

Now that the customer’s account is created, let’s give them access to it. This step uses existing APIs and if you have shared access to a Cloudflare account before, then you have already done this.

Request:

curl -X POST \
    'https://api.cloudflare.com/client/v4/accounts/2bab6ace8c72ed3f09b9eca6db1396bb/members' \
    -H 'Content-Type: application/json' \
    -H 'x-auth-email: <x-auth-email>' \
    -H 'x-auth-key: <x-auth-key>' \
    -d '{ "email": "[email protected]",
          "roles": ["05784afa30c1afe1440e79d9351c7430"],
          "status": "accepted" 
        }'

Response:

{
    "result": {
        "id": "47bd8083af8516a20c410090d2f53655",
        "user": {
            "id": "fccad3c46f26dc2d6ba47ad19f639707",
            "first_name": null,
            "last_name": null,
            "email": "[email protected]",
            "two_factor_authentication_enabled": false
        },
        "status": "pending",
        "roles": [
            {
                "id": "05784afa30c1afe1440e79d9351c7430",
                "name": "Administrator",
                "description": "Can access the full account, except for membership management and billing.",
                "permissions": {
                    "organization": {
                        "read": true,
                        "edit": true
                    },
                    "zone": {
                        "read": true,
                        "edit": true
                    },
                    truncated...
                }
            }
        ]
    },
    "success": true,
    "errors": [],
    "messages": []
}

Alternatively, you can do this in the UI, from the Members section for the newly created account.

3) Enabling Services

Now the fun part! With the ability to provision subscriptions, you can enable paid services for your customers. Before we do that though, we will create a zone so we can attach a zone subscription to it.

Adding a zone as a partner is no different than adding a zone as a regular customer. It can also be done by the customer.

Request:

curl -X POST \
    https://api.cloudflare.com/client/v4/zones \
    -H 'Content-Type: application/json' \
    -H 'x-auth-email: <x-auth-email>' \
    -H 'x-auth-key: <x-auth-key>' \
    -d '{ "name": "theircompany.com",
            "account": { "id": "2bab6ace8c72ed3f09b9eca6db1396bb" }
        }'

Response:

{
    "result": {
        "id": "cae181e41197e2eb875d9bcb9396abe7",
        "name": "theircompany.com",
        "status": "pending",
        "paused": false,
        "type": "full",
        "development_mode": 0,
        "name_servers": [
            "lana.ns.cloudflare.com",
            "lynn.ns.cloudflare.com"
        ],
        "original_name_servers": null,
        "original_registrar": "cloudflare, inc.",
        "original_dnshost": null,
        "modified_on": "2019-05-30T17:51:08.510558Z",
        "created_on": "2019-05-30T17:51:08.510558Z",
        "activated_on": null,
        "meta": {
            "step": 4,
            "wildcard_proxiable": false,
            "custom_certificate_quota": 0,
            "page_rule_quota": 3,
            "phishing_detected": false,
            "multiple_railguns_allowed": false
        },
        "owner": {
            "id": null,
            "type": "user",
            "email": null
        },
        "account": {
            "id": "2bab6ace8c72ed3f09b9eca6db1396bb",
            "name": "Customer Account"
        },
        "permissions": [
            "#access:edit",
            "#access:read",
            ...truncated
        ],
        "plan": {
            "id": "0feeeeeeeeeeeeeeeeeeeeeeeeeeeeee",
            "name": "Free Website",
            "price": 0,
            "currency": "USD",
            "frequency": "",
            "is_subscribed": true,
            "can_subscribe": false,
            "legacy_id": "free",
            "legacy_discount": false,
            "externally_managed": false
        }
    },
    "success": true,
    "errors": [],
    "messages": []
}

For this customer we will provision a Pro plan for the newly created zone. If you are not familiar with our zone plans, then you can read about them here. For this, we make a call to the subscriptions service.

Request:

curl -X POST \
    https://api.cloudflare.com/client/v4/zones/cae181e41197e2eb875d9bcb9396abe7/subscription \
    -H 'Content-Type: application/json' \
    -H 'x-auth-email: <x-auth-email>' \
    -H 'x-auth-key: <x-auth-key>' \
    -d '{ "rate_plan": {
            "id": "PARTNERS_PRO" }
        }'

Response:

{
    "success": true,
    "result": {
        "id": "ff563a93e11c46e7b278be46f49cdd2f",
        "product": {
            "name": "partners_cloudflare_zones",
            "period": "",
            "billing": "",
            "public_name": "CloudFlare Services",
            "duration": 0
        },
        "rate_plan": {
            "id": "partners_pro",
            "public_name": "Partners Professional Plan",
            "currency": "USD",
            "scope": "zone",
            "externally_managed": false,
            "sets": [
                "zone",
                "partner"
            ],
            "is_contract": true
        },
        "component_values": [
            {
                "name": "dedicated_certificates",
                "value": 0,
                "price": 0
            },
            {
                "name": "dedicated_certificates_custom",
                "value": 0,
                "price": 0
            },
            {
                "name": "page_rules",
                "value": 20,
                "default": 20,
                "price": 0
            },
            {
                "name": "zones",
                "value": 1,
                "default": 1,
                "price": 0
            }
        ],
        "zone": {
            "id": "cae181e41197e2eb875d9bcb9396abe7",
            "name": "theircompany.com"
        },
        "frequency": "monthly",
        "currency": "USD",
        "app": {
            "install_id": null
        },
        "entitled": true
    },
    "messages": null,
    "api_version": "2.0.0"
}

Now that the customer is set up with an account, zone, and zone subscription, the only thing left is configuring the resources appropriately.

4) Service Configuration

Service configuration can be done by either you, the partner, or the end customer. Most commonly, DNS records need to be added, security settings verified and updated, and customizations made. These can all be done either through our Client v4 APIs or the Cloudflare Dashboard.
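For example, a common first configuration step is creating a proxied DNS record in the new zone. This uses the standard DNS records API rather than anything partner-specific; the hostname and IP address below are placeholders:

curl -X POST \
    https://api.cloudflare.com/client/v4/zones/cae181e41197e2eb875d9bcb9396abe7/dns_records \
    -H 'Content-Type: application/json' \
    -H 'x-auth-email: <x-auth-email>' \
    -H 'x-auth-key: <x-auth-key>' \
    -d '{ "type": "A",
          "name": "www.theircompany.com",
          "content": "203.0.113.10",
          "ttl": 1,
          "proxied": true
        }'

Setting "proxied": true sends the record’s traffic through Cloudflare’s network, so the plan provisioned above applies to it ("ttl": 1 means automatic).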

Once that is done, the customer is all set!

This is just the beginning

With our announcement today, partners can protect and accelerate their customers’ Internet services with Cloudflare’s partner platform. We have battle-tested the underlying systems over the last year and are excited to partner with others to help make a better Internet. We are not done yet, though. We will continue investing in the tenant and subscription services to expand their capabilities and simplify usage.

Some of the latest partners using the new partner platform

If you are interested in partnering with Cloudflare, then reach out to [email protected]. If building the future of how Cloudflare’s partners and customers use our service sounds interesting, then take a look at our careers page.



Join Cloudflare & Yandex at our Moscow meetup! Присоединяйтесь к митапу в Москве!

Post Syndicated from Andrew Fitch original https://blog.cloudflare.com/moscow-developers-join-cloudflare-yandex-at-our-meetup/

Photo by Serge Kutuzov / Unsplash


Are you based in Moscow? Cloudflare is partnering with Yandex to produce a meetup this month at Yandex’s Moscow headquarters. We would love to invite you to join us to learn about the newest developments in the Internet industry. You’ll join Cloudflare users, stakeholders from the tech community, and engineers and product managers from both Cloudflare and Yandex.

Cloudflare Moscow Meetup

Tuesday, May 30, 2019: 18:00 – 22:00

Location: Yandex – Ulitsa L’va Tolstogo, 16, Moskva, Russia, 119021

Talks will include “Performance and scalability at Cloudflare”, “Security at Yandex Cloud”, and “Edge computing”.

Speakers will include Evgeny Sidorov, Information Security Engineer at Yandex, Ivan Babrou, Performance Engineer at Cloudflare, Alex Cruz Farmer, Product Manager for Firewall at Cloudflare, and Olga Skobeleva, Solutions Engineer at Cloudflare.

Agenda:

18:00 – 19:00 – Registration and welcome cocktail

19:00 – 19:10 – Cloudflare overview

19:10 – 19:40 – Performance and scalability at Cloudflare

19:40 – 20:10 – Security at Yandex Cloud

20:10 – 20:40 – Cloudflare security solutions and industry security trends

20:40 – 21:10 – Edge computing

Q&A

The talks will be followed by food, drinks, and networking.

View Event Details & Register Here »

We hope to meet you soon.

Developers, join Cloudflare and Yandex at our upcoming Moscow meetup!

Cloudflare is partnering with Yandex to organize an event this month at Yandex’s headquarters. We invite you to join a meetup dedicated to the latest developments in the Internet industry. The event will bring together Cloudflare customers, professionals from the tech community, and engineers from both Cloudflare and Yandex.

Tuesday, May 30: 18:00 – 22:00

Location: Yandex – Ulitsa L’va Tolstogo, 16, Moscow, Russia, 119021

Talks will cover topics such as “Cloudflare security solutions and industry security trends”, “Security at Yandex Cloud”, “Performance and scalability at Cloudflare”, and “Edge computing”, with speakers from Cloudflare and Yandex.

Speakers will include Evgeny Sidorov, Deputy Head of the Services Security Group at Yandex; Ivan Babrou, Performance Engineer at Cloudflare; Alex Cruz Farmer, Product Manager for Firewall at Cloudflare; and Olga Skobeleva, Solutions Engineer at Cloudflare.

Agenda:

18:00 – 19:00 – Registration, drinks, and networking

19:00 – 19:10 – Cloudflare overview

19:10 – 19:40 – Performance and scalability at Cloudflare

19:40 – 20:10 – Security solutions at Yandex

20:10 – 20:40 – Cloudflare security solutions and industry security trends

20:40 – 21:10 – Examples of serverless security solutions

Q&A

The talks will be followed by food, drinks, and networking.

View event details and register here »

We look forward to seeing you!

How We Optimized Storage and Performance of Apache Cassandra at Backblaze

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/wide-partitions-in-apache-cassandra-3-11/

Guest post by Mick Semb Wever

Backblaze uses Apache Cassandra, a high-performance, scalable distributed database, to help manage hundreds of petabytes of data. We engaged the folks at The Last Pickle to use their extensive experience to optimize the capabilities and performance of our Cassandra 3.11 cluster, and now they want to share what they found with a wider audience. We agree; enjoy!

— Andy

Wide Partitions in Apache Cassandra 3.11

by Mick Semb Wever, Consultant, The Last Pickle

Wide partitions in Cassandra can put tremendous pressure on the Java heap and garbage collector, impact read latencies, and can cause issues ranging from load shedding and dropped messages to crashed and downed nodes.

While the theoretical limit on the number of cells per partition has always been two billion cells, the reality has been quite different, as the impacts of heap pressure show. To mitigate these problems, the community has offered a standard recommendation for Cassandra users to keep partitions under 400MB, and preferably under 100MB.
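To see where an existing table stands relative to that recommendation, the stock nodetool utilities report partition sizes; a quick sketch, with placeholder keyspace and table names:

# Percentile distribution of partition sizes and cell counts for one table
nodetool tablehistograms my_keyspace my_table

# Per-table statistics, including "Compacted partition maximum bytes"
nodetool tablestats my_keyspace.my_table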

However, in version 3 many improvements were made that affected how Cassandra handles wide partitions. Memtables, caches, and SSTable components were moved off-heap, the storage engine was rewritten in CASSANDRA-8099, and Robert Stupp made a number of other improvements listed under CASSANDRA-11206.

While working with Backblaze and operating a Cassandra version 3.11 cluster, we had the opportunity to test and validate how Cassandra actually handles partitions with this latest version. We will demonstrate that well designed data models can go beyond the existing 400MB recommendation without nodes crashing through heap pressure.

Below, we walk through how Cassandra writes partitions to disk in 3.11, look at how wide partitions impact read latencies, and then present our testing and verification of wide partition impacts on the cluster using the work we did with Backblaze.

The Art and Science of Writing Wide Partitions to Disk

First we need to understand what a partition is and how Cassandra writes partitions to disk in version 3.11.
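As a purely hypothetical illustration (this schema is not from the Backblaze cluster), a table whose partition key groups many rows together produces wide partitions; every row sharing a device_id below lands in the same partition:

cqlsh -e "
CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
CREATE TABLE IF NOT EXISTS demo.events (
    device_id  text,       -- partition key: all rows for a device share one partition
    event_time timestamp,  -- clustering column: orders rows within the partition
    payload    text,
    PRIMARY KEY (device_id, event_time)
);"

A chatty device grows its partition without bound, which is exactly the situation the 400MB recommendation above guards against.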

Each SSTable contains a set of files, and the (-Data.db) file contains numerous partitions.

The layout of a partition in the -Data.db file has three components: a header, followed by zero or one static row, followed by zero or more ordered Clusterable objects. A Clusterable object may be either a row or a RangeTombstone that deletes data, and a wide partition contains many Clusterable objects. For an excellent in-depth examination of this, see Aaron’s blog post Cassandra 3.x Storage Engine.

The -Index.db file stores offsets for the partitions, as well as the IndexInfo serialized objects for each partition. These indices facilitate locating the data on disk within the -Data.db file. Stored partition offsets are represented by a subclass of RowIndexEntry. This subclass is chosen by the ColumnIndex and depends on the size of the partition:

  • RowIndexEntry is used when there are no Clusterable objects in the partition, such as when there is only a static row. In this case there are no IndexInfo objects to store and so the parent RowIndexEntry class is used rather than a subclass.
  • The IndexedEntry subclass holds the IndexInfo objects in memory until the partition has finished writing to disk. It is used in partitions where the total serialized size of the IndexInfo objects is less than the column_index_cache_size_in_kb configuration setting (which defaults to 2KB).
  • The ShallowIndexedEntry subclass serializes IndexInfo objects to disk as they are created and references these objects using only their position in the file. This is used in partitions where the total serialized size of the IndexInfo objects is more than the column_index_cache_size_in_kb configuration setting.

These IndexInfo objects provide a sampling of positional offsets for rows within a partition, creating an index. Each object specifies the offset the page starts at, the first row, and the last row.

So, in general, the bigger the partition, the more IndexInfo objects need to be created when writing to disk — and if they are held in memory until the partition is fully written to disk they can cause memory pressure. This is why the column_index_cache_size_in_kb setting was added in Cassandra 3.6 and the objects are now serialized as they are created.
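Both sides of this mechanism can be inspected from the command line; a rough sketch, with paths and SSTable names that will differ per node:

# The threshold that picks IndexedEntry vs ShallowIndexedEntry (defaults to 2KB)
grep column_index_cache_size_in_kb /etc/cassandra/cassandra.yaml

# Dump a flushed SSTable as JSON to inspect its partitions and rows
sstabledump /var/lib/cassandra/data/my_keyspace/my_table-<table-id>/mc-1-big-Data.db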

The relationship between partition size and the number of objects was quantified by Robert Stupp in his presentation, Myths of Big Partitions.

IndexInfo numbers from Robert Stupp

How Wide Partitions Impact Read Latencies

Cassandra’s key cache is an optimization that is enabled by default and helps to improve the speed and efficiency of the read path by reducing the amount of disk activity per read.

Each key cache entry is identified by a combination of the keyspace, table name, SSTable, and the partition key. The value of the key cache is a RowIndexEntry or one of its subclasses — either IndexedEntry or the new ShallowIndexedEntry. The size of the key cache is limited by the key_cache_size_in_mb configuration setting.

When a read operation in the storage engine gets a cache hit it avoids having to access the -Summary.db and -Index.db SSTable components, which reduces that read request’s latency. Wide partitions, however, can decrease the efficiency of this key cache optimization because fewer hot partitions will fit into the allocated cache size.

Indeed, before the ShallowIndexedEntry was added in Cassandra version 3.6, a single wide partition could fill the key cache, reducing the hit rate efficiency. When applied to multiple partitions, this causes greater churn of additions and evictions of cache entries.

For example, if the IndexedEntry for a 512MB partition contains 100K+ IndexInfo objects, and these IndexInfo objects total 1.4MB, then the key cache would only be able to hold 140 entries (at 1.4MB per entry, 140 entries already consume roughly 200MB of cache).

The introduction of ShallowIndexedEntry objects changed how the key cache can hold data. The ShallowIndexedEntry contains a list of file pointers referencing the serialized IndexInfo objects and can binary search through this list, rather than having to deserialize the entire IndexInfo objects list. Thus, when the ShallowIndexedEntry is used, no IndexInfo objects exist within the key cache. This increases the storage efficiency of the key cache, allowing it to store more entries, but a cache hit does still require that the IndexInfo objects are binary searched and deserialized from the -Index.db file.

In short, on wide partitions a key cache miss still results in two additional disk reads, as it did before Cassandra 3.6, but now a key cache hit incurs a disk read to the -Index.db file where it did not before Cassandra 3.6.
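These cache dynamics are observable on a running node: nodetool reports key cache occupancy and hit rate, which are worth watching on clusters with wide partitions. The output line below is illustrative, not taken from the Backblaze cluster:

nodetool info | grep 'Key Cache'
# e.g. Key Cache : entries 140, size 196 MiB, capacity 200 MiB, 57114 hits, 99201 requests, 0.576 recent hit rate, 14400 save period in seconds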

Object Creation and Heap Behavior with Wide Partitions in 2.2.13 vs 3.11.3

Introducing the ShallowIndexedEntry into Cassandra version 3.6 created a measurable improvement in the performance of wide partitions. To test the effects of this and the other performance-enhancement features introduced in version 3, we compared how Cassandra 2.2.13 and 3.11.3 performed when one hundred thousand, one million, or ten million rows were each written to a single partition.

The results and accompanying screenshots help illustrate the impact of object creation and heap behavior when inserting rows into wide partitions. While version 2.2.13 crashed repeatedly during this test, 3.11.3 was able to write over 30 million rows to a single partition before crashing with an Out-of-Memory error. The test and results are reproduced below.

Both Cassandra versions were started as single-node clusters with default configurations, except for heap customization in cassandra-env.sh:

MAX_HEAP_SIZE="1G"
HEAP_NEWSIZE="600M"

In Cassandra, only the configured concurrency of memtable flushes and compactors determines how many partitions are processed by a node, and thus how much pressure is put on its heap, at any one time. Based on this known concurrency limitation, profiling can be done by inserting data into one partition against one Cassandra node with a small heap. These results extrapolate to production environments.

The tlp-stress tool inserted data in three separate profiling passes against both versions of Cassandra, creating wide partitions of one hundred thousand (100K), one million (1M), or ten million (10M) rows.

A tlp-stress profile for wide partitions was written, as no suitable profile existed. The read to write ratio used the default setting of 1:100.

The tlp-stress tool was then run with the following command lines:

# To write 100000 rows into one partition
tlp-stress run Wide --replication "{'class':'SimpleStrategy','replication_factor': 1}" -n 100K

# To write 1M rows into one partition
tlp-stress run Wide --replication "{'class':'SimpleStrategy','replication_factor': 1}" -n 1M

# To write 10M rows into one partition
tlp-stress run Wide --replication "{'class':'SimpleStrategy','replication_factor': 1}" -n 10M

Each time tlp-stress executed it was immediately followed by a command to ensure the full count of specified rows passed through the memtable flush and were written to disk:

nodetool flush

The graphs in the sections below, taken from the Apache NetBeans Profiler, illustrate how the ShallowIndexedEntry in Cassandra version 3.11 avoids keeping IndexInfo objects in memory.

Notably, the IndexInfo objects are instantiated far more often, but are referenced for much shorter periods of time. The Garbage Collector is more effective at removing short-lived objects, as illustrated by the GC pause times being barely present in the Cassandra 3.11 graphs compared to Cassandra 2.2 where GC pause times overwhelm the JVM.

Wide Partitions in Cassandra 2.2

Benchmarks were against Cassandra 2.2.13

One Partition with 100K Rows (2.2.13)

The following three screenshots show the number of IndexInfo objects instantiated during the write benchmark, the number instantiated during compaction, and a heap profile.

The partition grew to be ~40MB.

Objects created during tlp-stress


Objects created during subsequent major compaction


Heap profiled during tlp-stress and major compaction


The above diagrams do not have their x-axis expanded to the full width, but still encompass the startup, stress test, flush, and compaction periods of the benchmark.

When stress testing starts with tlp-stress, the CPU Time and Surviving Generations start to climb. During this time the heap also starts to increase and decrease more frequently as it fills up and the Garbage Collector cleans it out. In these diagrams the garbage collection intervals are easy to identify and isolate from one another.

One Partition with 1M Rows (2.2.13)

Here, the first two screenshots show the number of IndexInfo objects instantiated during the write benchmark and during the subsequent compaction process. The third screenshot shows the CPU & GC Pause Times and the heap profile from the time writes started through when the compaction was completed.

The partition grew to be ~400MB.

Already at this size, the Cassandra JVM is GC thrashing and has occasionally crashed with Out-of-Memory errors.

Objects created during tlp-stress


Objects created during subsequent major compaction


Heap profiled during tlp-stress and major compaction


The above diagrams display a longer running benchmark, with the quiet period during the startup barely noticeable on the very left-hand side of each diagram. The number of garbage collection intervals and the oscillations in heap size are far more frequent. The GC Pause Time during the stress testing period is now consistently higher and comparable to the CPU Time. It only dissipates when the benchmark performs the flush and compaction.

One Partition with 10M Rows (2.2.13)

In this final test of Cassandra version 2.2.13, the results were difficult to reproduce reliably, as more often than not the test crashed with an Out-of-Memory error from GC heap pressure.

The first two screenshots show the number of IndexInfo objects instantiated during the write benchmark and during the subsequent compaction process. The third screenshot shows the GC Pause Time and the heap profile from the time writes started until compaction was completed.

The partition grew to be ~4GB.

Objects created during tlp-stress

Objects created during subsequent major compaction

Heap profiled during tlp-stress and major compaction


The above diagrams display consistently very high GC Pause Time compared to CPU Time. Any Cassandra node under this much duress from garbage collection is not healthy. It suffers from high read latencies, can become blacklisted by other nodes due to its lack of responsiveness, and can even crash altogether from Out-of-Memory errors (as it often did during this benchmark).

Wide Partitions in Cassandra 3.11.3

Benchmarks were against Cassandra 3.11.3

In this series, the graphs demonstrate how IndexInfo objects are created either from memtable flushes or from deserialization off disk. The ShallowIndexedEntry is used in Cassandra 3.11.3 when deserializing the IndexInfo objects from the -Index.db file.

Neither form of IndexInfo object resides long in the heap, and thus the GC Pause Time is barely visible in comparison to Cassandra 2.2.13, despite the additional number of IndexInfo objects created via deserialization.

One Partition with 100K Rows (3.11.3)

As with the earlier version’s test of this size, the following two screenshots show the number of IndexInfo objects instantiated during the write benchmark and during the subsequent compaction process. The third screenshot shows the CPU & GC Pause Time and the heap profile from the time writes started through when the compaction was completed.

The partition grew to be ~40MB, the same as with Cassandra 2.2.13.

Objects created during tlp-stress


Objects created during subsequent major compaction


Heap profiled during tlp-stress and major compaction


The diagrams above are roughly comparable to the first diagrams presented under Cassandra 2.2.13, except here the x-axis is expanded to full width. Note there are significantly more instantiated IndexInfo objects, but barely any noticeable GC Pause Time.

One Partition with 1M Rows (3.11.3)

Again, the first two screenshots show the number of IndexInfo objects instantiated during the write benchmark and during the subsequent compaction process. The third screenshot shows the CPU & GC Pause Time and the heap profile over the time writes started until the compaction was completed.

The partition grew to be ~400MB, the same as with Cassandra 2.2.13.

Objects created during tlp-stress

Objects created during subsequent major compaction

Heap profiled during tlp-stress and major compaction

The above diagrams show a wildly oscillating heap as many IndexInfo objects are created, and many garbage collection intervals, yet the GC Pause Time remains low, if noticeable at all.

One Partition with 10M Rows (3.11.3)

Here again, the first two screenshots show the number of IndexInfo objects instantiated during the write benchmark and during the subsequent compaction process. The third screenshot shows the CPU & GC Pause Time and the heap profile over the time writes started until the compaction was completed.

The partition grew to be ~4GB, the same as with Cassandra 2.2.13.

Objects created during tlp-stress

Objects created during subsequent major compaction

Heap profiled during tlp-stress and major compaction

Unlike this profile in 2.2.13, the cluster remained stable, as it was when running 1M rows per partition. The above diagrams display an oscillating heap as IndexInfo objects are created, and many garbage collection intervals, yet the GC Pause Time remains low, if noticeable at all.

Maximum Rows in 1GB Heap (3.11.3)

In an attempt to push Cassandra 3.11.3 to the limit, we ran a test to see how much data could be written to a single partition before Cassandra crashed with an Out-of-Memory error.

The result was 30M+ rows, which is ~12GB of data on disk (roughly 400 bytes per row).

This is similar to the limit of 17GB of data written to a single partition as Robert Stupp found in CASSANDRA-9754 when using a 5GB Java heap.

screenshot of Cassandra 3.11.3 memory usage

What About Reads?

The following graph shows the benchmark rerun on Cassandra version 3.11.3 over a longer period of time with a read to write ratio of 10:1. It illustrates that reads of wide partitions do not create the heap pressure that writes do.

screenshot of Cassandra 3.11.3 read functions

Conclusion

While the 400MB community recommendation for partition size is clearly appropriate for version 2.2.13, version 3.11.3 shows that performance improvements have created a tremendous ability to handle wide partitions; they can easily be an order of magnitude larger than in earlier versions of Cassandra without nodes crashing through heap pressure.

The trade-off for better supporting wide partitions in Cassandra 3.11.3 is increased read latency, as row offsets now need to be read off disk. However, modern SSDs and kernel page caches take advantage of larger physical memory configurations, providing enough IO improvement to compensate for the read latency trade-off.

The improved stability, and the ability to fall back on better hardware to deal with the read latency issue, allows Cassandra operators to worry less about how to store massive amounts of data in different schemas and about unexpected data growth patterns on those schemas.

In the future, the custom B+ tree structures from CASSANDRA-9754 will be used to look up the deserialized row offsets more effectively and further avoid the deserialization and instantiation of short-lived, unused IndexInfo objects.


Mick Semb Wever designs, builds, and is an evangelist for distributed systems, from data-driven backends using Cassandra, Hadoop, and Spark, to enterprise microservices platforms.

The post How We Optimized Storage and Performance of Apache Cassandra at Backblaze appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Legal Blackmail: Zero Cases Brought Against Alleged Pirates in Sweden

Post Syndicated from Andy original https://torrentfreak.com/legal-blackmail-zero-cases-brought-against-alleged-pirates-in-sweden-180525/

While several countries in Europe have wilted under sustained pressure from copyright trolls for more than ten years, Sweden managed to avoid their controversial attacks until fairly recently.

With Germany a decade-old pit of misery, with many hundreds of thousands of letters – by now probably millions – sent out to Internet users demanding cash, Sweden avoided the ranks of its European partners until two years ago.

In September 2016 it was revealed that an organization calling itself Spridningskollen (Distribution Check), headed up by law firm Gothia Law, would begin targeting the public.

Its spokesperson described its letters as “speeding tickets” for pirates, in that they would only target the guilty. But there was a huge backlash and just a couple of months later Spridningskollen headed for the hills, without a single collection letter being sent out.

That was the calm before the storm.

In February 2017, Danish law firm Njord Law was found to be at the center of a new troll operation targeting the subscribers of several ISPs, including Telia, Tele2 and Bredbandsbolaget. Court documents revealed that thousands of IP addresses had been harvested by the law firm’s partners who were determined to link them with real-life people.

Indeed, in a single batch, Njord Law was granted permission from the court to obtain the identities of citizens behind 25,000 IP addresses, from whom it hoped to obtain cash settlements of around US$550. But it didn’t stop there.

Time and again the trolls headed back to court in an effort to reach more people although until now the true scale of their operations has been open to question. However, a new investigation carried out by SVT has revealed that the promised copyright troll invasion of Sweden is well underway with a huge level of momentum.

Data collated by the publication reveals that since 2017, the personal details behind more than 50,000 IP addresses have been handed over by Swedish Internet service providers to law firms representing copyright trolls and their partners. By the end of this year, Njord Law alone will have sent out 35,000 letters to Swedes whose IP addresses have been flagged as allegedly infringing copyright.

Even if one is extremely conservative with the figures, the levels of cash involved are significant. Taking a settlement amount of just $300 per letter, the copyright trolls are quickly looking at $15,000,000 in revenues. At the upper end, assuming $550 will make a supposed lawsuit go away, we’re looking at a potential $27,500,000 in takings.

But of course, this dragnet approach doesn’t have the desired effect on all recipients.

In 2017, Njord Law said that only 60% of its letters received any kind of response, meaning that even fewer would be settling with the company. So what happens when the public ignores the threatening letters?

“Yes, we will [go to court],” said lawyer Jeppe Brogaard Clausen last year.

“We wish to resolve matters as much as possible through education and dialogue without the assistance of the court though. It is very expensive both for the rights holders and for plaintiffs if we go to court.”

But despite the tough-talking, SVT’s investigation has turned up an interesting fact. The nuclear option, of taking people to court and winning a case when they refuse to pay, has never happened.

After trawling records held by the Patent and Market Court and all those held by the District Courts dating back five years, SVT did not find a single case of a troll taking a citizen to court and winning a case. Furthermore, no law firm contacted by the publication could show that such a thing had happened.

“In Sweden, we have not yet taken someone to court, but we are planning to file for the right in 2018,” Emelie Svensson, lawyer at Njord Law, told SVT.

While a case may yet reach the courts, when it does it is guaranteed to be a cut-and-dried one. Letter recipients can often say things to damage their case, even when they’re only getting a letter due to their name being on the Internet bill. These are the people who find themselves under the most pressure to pay, whether they’re guilty or not.

“There is a risk of what is known in English as ‘legal blackmailing’,” says Mårten Schultz, professor of civil law at Stockholm University.

“With [the copyright holders’] legal and economic muscles, small citizens are scared into paying claims that they do not legally have to pay.”

It’s a position shared by Marianne Levine, Professor of Intellectual Property Law at Stockholm University.

“One can only show that an IP address appears in some context, but there is no point in the evidence. Namely, that it is the subscriber who also downloaded illegitimate material,” she told SVT.

Njord Law, on the other hand, sees things differently.

“In Sweden, we have no legal case saying that you are not responsible for your IP address,” Emelie Svensson says.

Whether Njord Law will carry through with its threats remains to be seen, but there can be little doubt that while significant numbers of people keep paying up, this practice will continue and escalate. The trolls have come too far to give up now.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Fully-Loaded Kodi Box Sellers Receive Hefty Jail Sentences

Post Syndicated from Andy original https://torrentfreak.com/fully-loaded-kodi-box-sellers-receive-hefty-jail-sentences-180524/

While users of older peer-to-peer based file-sharing systems have to work relatively hard to obtain content, users of the Kodi media player have things an awful lot easier.

As standard, Kodi is perfectly legal. However, when augmented with third-party add-ons it becomes a media discovery powerhouse, providing most of the content anyone could desire. A system like this can be set up by the user but for many, buying a so-called “fully-loaded” box from a seller is the easier option.

As a result, hundreds – probably thousands – of cottage industries have sprung up to service this hungry market in the UK, with regular people making a business out of setting up and selling such devices. Until three years ago, that’s what Michael Jarman and Natalie Forber of Colwyn Bay, Wales, found themselves doing.

According to reports in local media, Jarman was arrested in January 2015 when police were called to a disturbance at Jarman and Forber’s home. A large number of devices were spotted and an investigation was launched by Trading Standards officers. The pair were later arrested and charged with fraud offenses.

While 37-year-old Jarman pleaded guilty, 36-year-old Forber initially denied the charges and was due to stand trial. However, she later changed her mind and like Jarman, pleaded guilty to participating in a fraudulent business. Forber also pleaded guilty to transferring criminal property by shifting cash from the scheme through various bank accounts.

The pair attended a sentencing hearing before Judge Niclas Parry at Caernarfon Crown Court yesterday. According to local reporter Eryl Crump, the Court heard that the couple had run their business for about two years, selling around 1,000 fully-loaded Kodi-enabled devices for £100 each via social media.

According to David Birrell for the prosecution, the operation wasn’t particularly sophisticated but it involved Forber programming the devices as well as handling customer service. Forber claimed she was forced into the scheme by Jarman but that claim was rejected by the prosecution.

Between February 2013 and January 2015 the pair banked £105,000 from the business, money that was transferred between bank accounts in an effort to launder the takings.

Reporting from Court via Twitter, Crump said that Jarman’s defense lawyer accepted that a prison sentence was inevitable for his client but asked for the most lenient sentence possible.

Forber’s lawyer pointed out she had no previous convictions. The mother-of-two broke up with Jarman following her arrest and is now back in work and studying at college.

Sentencing the pair, Judge Niclas Parry described the offenses as a “relatively sophisticated fraud” carried out over a significant period. He jailed Jarman for 21 months and Forber for 16 months, suspended for two years. She must also carry out 200 hours of unpaid work.

The pair will also face a Proceeds of Crime investigation which could see them paying large sums to the state, should any assets be recoverable.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

AWS GDPR Data Processing Addendum – Now Part of Service Terms

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/aws-gdpr-data-processing-addendum/

Today, we’re happy to announce that the AWS GDPR Data Processing Addendum (GDPR DPA) is now part of our online Service Terms. This means all AWS customers globally can rely on the terms of the AWS GDPR DPA, which will apply automatically from May 25, 2018, whenever they use AWS services to process personal data under the GDPR. The AWS GDPR DPA also includes EU Model Clauses, which were approved by the European Union (EU) data protection authorities, known as the Article 29 Working Party. This means that AWS customers wishing to transfer personal data from the European Economic Area (EEA) to other countries can do so with the knowledge that their personal data on AWS will be given the same high level of protection it receives in the EEA.

As we approach the GDPR enforcement date this week, this announcement is an important GDPR compliance component for us, our customers, and our partners. All customers that use cloud services to process personal data need a data processing agreement in place with their cloud services provider in order to comply with the GDPR. As early as April 2017, AWS announced that it had a GDPR-ready DPA available for its customers. In this way, we started offering our GDPR DPA to customers over a year before the May 25, 2018 enforcement date. Now, with the DPA terms included in our online service terms, there is no extra engagement needed by our customers and partners to be compliant with the GDPR requirement for data processing terms.

The AWS GDPR DPA also provides our customers with a number of other important assurances, such as the following:

  • AWS will process customer data only in accordance with customer instructions.
  • AWS has implemented and will maintain robust technical and organizational measures for the AWS network.
  • AWS will notify its customers of a security incident without undue delay after becoming aware of the security incident.
  • AWS will make available certificates issued in relation to the ISO 27001 certification, the ISO 27017 certification, and the ISO 27018 certification to further help customers and partners in their own GDPR compliance activities.

Customers who have already signed an offline version of the AWS GDPR DPA can continue to rely on that GDPR DPA. By incorporating our GDPR DPA into the AWS Service Terms, we are simply extending the terms of our GDPR DPA to all customers globally who will require it under GDPR.

The AWS GDPR DPA is only part of the story, however. We are continuing to work alongside our customers and partners to help them on their journey towards GDPR compliance.

If you have any questions about the GDPR or the AWS GDPR DPA, please contact your account representative, or visit the AWS GDPR Center at: https://aws.amazon.com/compliance/gdpr-center/

-Chad

Interested in AWS Security news? Follow the AWS Security Blog on Twitter.

Working with the Scout Association on digital skills for life

Post Syndicated from Philip Colligan original https://www.raspberrypi.org/blog/working-with-scout-association-digital-skills-for-life/

Today we’re launching a new partnership between the Scouts and the Raspberry Pi Foundation that will help tens of thousands of young people learn crucial digital skills for life. In this blog post, I want to explain what we’ve got planned, why it matters, and how you can get involved.

This is personal

First, let me tell you why this partnership matters to me. As a child growing up in North Wales in the 1980s, Scouting changed my life. My time with 2nd Rhyl provided me with countless opportunities to grow and develop new skills. It taught me about teamwork and community in ways that continue to shape my decisions today.

As my own kids (now seven and ten) have joined Scouting, I’ve seen the same opportunities opening up for them, and like so many parents, I’ve come back to the movement as a volunteer to support their local section. So this is deeply personal for me, and the same is true for many of my colleagues at the Raspberry Pi Foundation who in different ways have been part of the Scouting movement.

That shouldn’t come as a surprise. Scouting and Raspberry Pi share many of the same values. We are both community-led movements that aim to help young people develop the skills they need for life. We are both powered by an amazing army of volunteers who give their time to support that mission. We both care about inclusiveness, and pride ourselves on combining fun with learning by doing.

Raspberry Pi

Raspberry Pi started life in 2008 as a response to the problem that too many young people were growing up without the skills to create with technology. Our goal is that everyone should be able to harness the power of computing and digital technologies, for work, to solve problems that matter to them, and to express themselves creatively.

In 2012 we launched our first product, the world’s first $35 computer. Just six years on, we have sold over 20 million Raspberry Pi computers and helped kickstart a global movement for digital skills.

The Raspberry Pi Foundation now runs the world’s largest network of volunteer-led computing clubs (Code Clubs and CoderDojos), and creates free educational resources that are used by millions of young people all over the world to learn how to create with digital technologies. And lots of what we are able to achieve is because of partnerships with fantastic organisations that share our goals. For example, through our partnership with the European Space Agency, thousands of young people have written code that has run on two Raspberry Pi computers that Tim Peake took to the International Space Station as part of his Mission Principia.

Digital makers

Today we’re launching the new Digital Maker Staged Activity Badge to help tens of thousands of young people learn how to create with technology through Scouting. Over the past few months, we’ve been working with the Scouts all over the UK to develop and test the new badge requirements, along with guidance, project ideas, and resources that really make them work for Scouting. We know that we need to get two things right: relevance and accessibility.

Relevance is all about making sure that the activities and resources we provide are a really good fit for Scouting and Scouting’s mission to equip young people with skills for life. From the digital compass to nature cameras and the reinvented wide game, we’ve had a lot of fun thinking about ways we can bring to life the crucial role that digital technologies can play in the outdoors and adventure.

Compass Coding with Raspberry Pi

We are beyond excited to be launching a new partnership with the Raspberry Pi Foundation, which will help tens of thousands of young people learn digital skills for life.

We also know that there are great opportunities for Scouts to use digital technologies to solve social problems in their communities, reflecting the movement’s commitment to social action. Today we’re launching the first set of project ideas and resources, with many more to follow over the coming weeks and months.

Accessibility is about providing every Scout leader with the confidence, support, and kit to enable them to offer the Digital Maker Staged Activity Badge to their young people. A lot of work and care has gone into designing activities that require very little equipment: for example, activities at Stages 1 and 2 can be completed with a laptop without access to the internet. For the activities that do require kit, we will be working with Scout Stores and districts to make low-cost kit available to buy or loan.

We’re producing accessible instructions, worksheets, and videos to help leaders run sessions with confidence, and we’ll also be planning training for leaders. We will work with our network of Code Clubs and CoderDojos to connect them with local sections to organise joint activities, bringing both kit and expertise along with them.




Get involved

Today’s launch is just the start. We’ll be developing our partnership over the next few years, and we can’t wait for you to join us in getting more young people making things with technology.

Take a look at the brand-new Raspberry Pi resources designed especially for Scouts, to get young people making and creating right away.

The post Working with the Scout Association on digital skills for life appeared first on Raspberry Pi.

Connect, collaborate, and learn at AWS Global Summits in 2018

Post Syndicated from Tina Kelleher original https://aws.amazon.com/blogs/big-data/connect-collaborate-and-learn-at-aws-global-summits-in-2018/

Regardless of your career path, there’s no denying that attending industry events can provide helpful career development opportunities — not only for improving and expanding your skill sets, but for networking as well. According to this article from PayScale.com, experts estimate that somewhere between 70% and 85% of new positions are landed through networking.

If you’d like to network with cloud computing professionals who are tackling some of today’s most innovative and exciting big data challenges, the big data-focused sessions at an AWS Global Summit are a great place to start.

AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. As the name suggests, these summits are held in major cities around the world, and attract technologists from all industries and skill levels who’re interested in hearing from AWS leaders, experts, partners, and customers.

In addition to networking opportunities with top cloud technology providers, consultants and your peers in our Partner and Solutions Expo, you’ll also hone your AWS skills by attending and participating in a multitude of education and training opportunities.

Here’s a brief sampling of some of the upcoming sessions relevant to big data professionals:

May 31st: Big Data Architectural Patterns and Best Practices on AWS | AWS Summit – Mexico City

June 6th-7th: Various (click on the “Big Data & Analytics” header) | AWS Summit – Berlin

June 20th-21st: [email protected] | Public Sector Summit – Washington DC

June 21st: Enabling Self Service for Data Scientists with AWS Service Catalog | AWS Summit – Sao Paulo

Be sure to check out the main page for AWS Global Summits, where you can see which cities have AWS Summits planned for 2018, register to attend an upcoming event, or provide your information to be notified when registration opens for a future event.

Connect Veeam to the B2 Cloud: Episode 3 — Using OpenDedup

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/opendedup-for-cloud-storage/

Veeam backup to Backblaze B2 logo

In this, the third post in our series on connecting Veeam with Backblaze B2 Cloud Storage, we discuss how to back up your VMs to B2 using Veeam and OpenDedup. In our previous posts, we covered how to connect Veeam to the B2 cloud using Synology, and how to connect Veeam with B2 using StarWind VTL.

Deduplication and OpenDedup

Deduplication is simply the process of eliminating redundant data on disk. Deduplication reduces storage space requirements, improves backup speed, and lowers backup storage costs. The dedup field used to be dominated by a few big-name vendors who sold dedup systems that were too expensive for most of the SMB market. Then an open-source challenger came along in OpenDedup, a project that produced the Space Deduplication File System (SDFS). SDFS provides many of the features of commercial dedup products without their cost.
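
To see what deduplication buys you, here is a toy Python sketch of hash-based block dedup. It uses fixed-size blocks for simplicity (SDFS supports variable block deduplication, which copes far better with shifted data), and it is an illustration of the core idea only, not a reflection of SDFS internals:

```python
import hashlib

def dedup_blocks(data: bytes, block_size: int = 4096):
    """Toy fixed-size block dedup: keep one copy of each unique block."""
    store = {}    # SHA-256 digest -> block bytes (stored once)
    recipe = []   # ordered digests needed to reconstruct the input
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # only new blocks are stored
        recipe.append(digest)
    return store, recipe

# Highly redundant input: 200 logical blocks, only 2 distinct patterns.
data = b"A" * 4096 * 100 + b"B" * 4096 * 100
store, recipe = dedup_blocks(data)
print(f"logical blocks: {len(recipe)}, unique blocks stored: {len(store)}")
# logical blocks: 200, unique blocks stored: 2
```

On repetitive data like VM backups, this same effect is what lets a dedup layer cut the volume of data that actually has to travel to, and be stored in, the cloud.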

OpenDedup provides inline deduplication that can be used with applications such as Veeam, Veritas Backup Exec, and Veritas NetBackup.

Features Supported by OpenDedup:

  • Variable Block Deduplication to cloud storage
  • Local Data Caching
  • Encryption
  • Bandwidth Throttling
  • Fast Cloud Recovery
  • Windows and Linux Support

Why use Veeam with OpenDedup to Backblaze B2?

With your VMs backed up to B2, you have a number of options to recover from a disaster. If the unexpected occurs, you can quickly restore your VMs from B2 to the location of your choosing. You also have the option to bring up cloud compute through B2’s compute partners, thereby minimizing any loss of service and ensuring business continuity.

Veeam logo + OpenDedup logo + Backblaze B2 logo

Backblaze’s B2 is an ideal solution for backing up Veeam’s backup repository due to B2’s combination of low cost and high availability. Users of B2 save up to 75% compared to other cloud solutions such as Microsoft Azure, Amazon AWS, or Google Cloud Storage. When combined with OpenDedup’s no-cost deduplication, you’ve got an efficient and economical solution for backing up VMs to the cloud.

How to Use OpenDedup with B2

For step-by-step instructions for how to set up OpenDedup for use with B2 on Windows or Linux, see Backblaze B2 Enabled on the OpenDedup website.

Are you backing up Veeam to B2 using one of the solutions we’ve written about in this series? If so, we’d love to hear from you in the comments.

View all posts in the Veeam series.

The post Connect Veeam to the B2 Cloud: Episode 3 — Using OpenDedup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Introducing the AWS Machine Learning Competency for Consulting Partners

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/introducing-the-aws-machine-learning-competency-for-consulting-partners/

Today I’m excited to announce a new Machine Learning Competency for Consulting Partners in the Amazon Partner Network (APN). This AWS Competency program allows APN Consulting Partners to demonstrate deep expertise in machine learning on AWS by providing solutions that enable machine learning and data science workflows for their customers. This new AWS Competency is in addition to the Machine Learning Competency for our APN Technology Partners, which we launched at the re:Invent 2017 partner summit.

These APN Consulting Partners help organizations solve their machine learning and data challenges through:

  • Data services that help data scientists and machine learning practitioners prepare their enterprise data for training.
  • Platform solutions that provide data scientists and machine learning practitioners with tools to take their data, train models, and make predictions on new data.
  • SaaS and API solutions that enable predictive capabilities within customer applications.

Why work with an AWS Machine Learning Competency Partner?

The AWS Competency Program helps customers find the most qualified partners with deep expertise. AWS Machine Learning Competency Partners undergo a strict validation of their capabilities to demonstrate technical proficiency and proven customer success with AWS machine learning tools.

If you’re an AWS customer interested in machine learning workloads on AWS, check out our AWS Machine Learning launch partners below:

 

Interested in becoming an AWS Machine Learning Competency Partner?

APN Partners with experience in Machine Learning can learn more about becoming an AWS Machine Learning Competency Partner here. To learn more about the benefits of joining the AWS Partner Network, see our APN Partner website.

Thanks to the AWS Partner Team for their help with this post!
Randall

ISPs Win Landmark Case to Protect Privacy of Alleged Pirates

Post Syndicated from Andy original https://torrentfreak.com/isps-win-landmark-case-protect-privacy-alleged-pirates-180508/

With waves of piracy settlement letters being sent out across the world, the last line of defense for many accused Internet users has been their ISPs.

In a number of regions, notably the United States, Europe, and the UK, most ISPs have given up the fight, handing subscriber details over to copyright trolls with a minimum of resistance. However, there are companies out there prepared to stand up for their customers’ rights, even if it takes them a while to get there.

Over in Denmark, Telenor grew tired of tens of thousands of requests for subscriber details filed by a local law firm on behalf of international copyright troll groups. It previously complied with demands to hand over the details of individuals behind 22,000 IP addresses, around 11% of the 200,000 total handled by ISPs in Denmark. But with no end in sight, the ISP dug in its heels.

“We think there is a fundamental legal problem because the courts do not really decide what is most important: the legal security of the public or the law firms’ commercial interests,” Telenor’s Legal Director Mette Eistrøm Krüger said last year.

Assisted by rival ISP Telia, Telenor subsequently began preparing a case to protect the interests of their customers, refusing in the meantime to comply with disclosure requests in copyright cases. But last October, the District Court ruled against the telecoms companies, ordering them to provide identities to the copyright trolls.

Undeterred, the companies took their case to the Østre Landsret, one of Denmark’s two High Courts. Yesterday their determination paid off with a resounding victory for the ISPs and security for the individuals behind approximately 4,000 IP addresses targeted by Copyright Collection Ltd via law firm Njord Law.

“In its order based on telecommunications legislation, the Court has weighed subscribers’ rights to confidentiality of information regarding their use of the Internet against the interests of rightsholders to obtain information for the purpose of prosecuting claims against the subscribers,” the Court said in a statement.

Noting that the case raised important questions of European Union law and the European Convention on Human Rights, the High Court said that after due consideration it would overrule the decision of the District Court. The rights of the copyright holders do not trump the individuals’ right to privacy, it said.

“The telecommunications companies are therefore not required to disclose the names and addresses of their subscribers,” the Court ruled.

Telenor welcomed the decision, noting that it had received countless requests from law firms to disclose the identities of thousands of subscribers but had declined to hand them over, a decision that has now been endorsed by the High Court.

“This is an important victory for our right to protect our customers’ data,” said Telenor Denmark’s Legal Director, Mette Eistrøm Krüger.

“At Telenor we protect our customers’ data and trust – therefore it has been our conviction that we cannot be forced into almost automatically submitting personal data on our customers simply to support some private actors who are driven by commercial interests.”

Noting that it’s been putting up a fight since 2016 against handing over customers’ data for purposes other than investigating serious crime, Telenor said that the clarity provided by the decision is most welcome.

“We and other Danish telecom companies are required to log customer data for the police to fight serious crime and terrorism – but the legislation has just been insufficient in relation to the use of logged data,” Krüger said.

“Therefore I am pleased that with this judgment the High Court has stated that customers’ legal certainty is most important in these cases.”

The decision was also welcomed by Telia Denmark, with Legal Director Lasse Andersen describing the company as being “really really happy” with “a big win.”

“It is a victory for our customers and for all telecom companies’ customers,” Andersen said.

“They can now feel confident that the data that we collect about them cannot be disclosed for purposes other than the terms under which they are collected as determined by the jurisdiction.

“Therefore, anyone and everybody cannot claim our data. We are pleased that throughout the process we have determined that we will not hand over our data to anyone other than the police with a court order,” Andersen added.

But as the ISPs celebrate, the opposite is true for Njord Law and its copyright troll partners.

“It is a sad message to the Danish film and television industry that the possibilities for self-investigating illegal file sharing are complicated and that the work must be left to the police’s scarce resources,” said Jeppe Brogaard Clausen of Njord Law.

While the ISPs finally stood up for users in these cases, Telenor in particular wishes to emphasize that supporting the activities of pirates is not its aim. The company says it does not support illegal file-sharing “in any way” and is actively working with anti-piracy outfit Rights Alliance to prevent unauthorized downloading of movies and other content.

The full decision of the Østre Landsret can be found here (Danish, pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

MPAA Chief Says Fighting Piracy Remains “Top Priority”

Post Syndicated from Andy original https://torrentfreak.com/mpaa-chief-says-fighting-piracy-remains-top-priority-180425/

After several high-profile years at the helm of the movie industry’s most powerful lobbying group, last year saw the departure of Chris Dodd from the role of Chairman and CEO at the MPAA.

The former Senator, who earned more than $3.5m a year championing the causes of the major Hollywood studios since 2011, was immediately replaced by another political heavyweight.

Charles Rivkin, who took up his new role September 5, 2017, previously served as Assistant Secretary of State for Economic and Business Affairs in the Obama administration. With an underperforming domestic box office year behind him, fortunately overshadowed by massive successes globally, he spoke this week before US movie exhibitors for the first time at CinemaCon in Las Vegas.

“Globally, we hit a record high of $40.6 billion at the box office. Domestically, our $11.1 billion box office was slightly down from the 2016 record. But it exactly matched the previous high from 2015. And it was the second highest total in the past decade,” Rivkin said.

Rivkin, who spent time as President and CEO of The Jim Henson Company, told those in attendance that he shares a deep passion for the movie industry and looks forward optimistically to the future, a future in which content is secured from those who intend on sharing it for free.

“Making sure our creative works are valued and protected is one of the most important things we can do to keep that industry heartbeat strong. At the Henson Company, and WildBrain, I learned just how much intellectual property affects everyone. Our entire business model depended on our ability to license Kermit the Frog, Miss Piggy, and the Muppets and distribute them across the globe,” Rivkin said.

“I understand, on a visceral level, how important copyright is to any creative business and in particular our country’s small and medium enterprises – which are the backbone of the American economy. As Chairman and CEO of the MPAA, I guarantee you that fighting piracy in all forms remains our top priority.”

That tackling piracy is high on the MPAA’s agenda won’t come as a surprise, but at least in terms of the number of headlines plastered over the media, high-profile anti-piracy action has been somewhat lacking in recent years.

With lawsuits against torrent sites seemingly a thing of the past and a faltering Megaupload case that will conclude who-knows-when, the MPAA has taken a broader view, seeking partnerships with sometimes rival content creators and distributors, each with a shared desire to curtail illicit media.

“One of the ways that we’re already doing that is through the Alliance for Creativity and Entertainment – or ACE as we call it,” Rivkin said.

“This is a coalition of 30 leading global content creators, including the MPAA’s six member studios as well as Netflix and Amazon. We work together as a powerful team to ensure our stories are seen as they were intended to be, and that their creators are rewarded for their hard work.”

Announced in June 2017, ACE has become a united anti-piracy powerhouse for a huge range of entertainment industry groups, encompassing the likes of CBS, HBO, BBC, Sky, Bell Canada, Hulu, Lionsgate, Foxtel and Village Roadshow, to name a few.

The coalition was announced by former MPAA Chief Chris Dodd and now, with serious financial input from all companies involved, appears to be picking its fights carefully, focusing on the growing problem of streaming piracy centered around misuse of Kodi and similar platforms.

Beyond threatening relatively small-time producers and distributors of third-party addons and builds (1,2,3), ACE is also attempting to make its mark among the profiteers.

The group now has several lawsuits underway in the United States against people selling piracy-enabled IPTV boxes including Tickbox, Dragon Box, and during the last week, Set TV.

With these important cases pending, Rivkin offered assurances that his organization remains committed to anti-piracy enforcement and he thanked exhibitors for their efforts to prevent people quickly running away with copies of the latest releases.

“I am grateful to all of you for recognizing what is at stake, and for working with us to protect creativity, such as fighting the use of illegal camcorders in theaters,” he said.

“Protecting our creativity isn’t only a fundamental right. It’s an economic necessity, for us and all creative economies. Film and television are among the most valuable – and most impactful – exports we have.”

Thus far at least, Rivkin has a noticeably less aggressive tone on piracy than his predecessor Chris Dodd but it’s unlikely that will be mistaken for weakness among pirates, nor should it. The MPAA isn’t known for going soft on pirates and it certainly won’t be changing course anytime soon.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Tips for Success: GDPR Lessons Learned

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/tips-for-success-gdpr-lessons-learned/

Security is our top priority at AWS, and from the beginning we have built security into the fabric of our services. With the introduction of GDPR (which becomes enforceable on May 25, 2018), privacy and data protection have become even more ingrained into our security-centered culture. Three weeks ago, well ahead of the deadline, we announced that all AWS services are compliant with GDPR, meaning you can use AWS as a data processor as a way to help solve your GDPR challenges (be sure to visit our GDPR Center for additional information).

When it comes to GDPR compliance, many customers are progressing nicely and much of the initial trepidation is gone. In my interactions with customers on this topic, a few themes have emerged as universal:

  • GDPR is important. You need to have a plan in place if you process personal data of EU data subjects, not only because it’s good governance, but because GDPR does carry significant penalties for non-compliance.
  • Solving this can be complex, potentially involving a lot of personnel and multiple tools. Your GDPR process will also likely span across disciplines – impacting people, processes, and technology.
  • Each customer is unique, and there are many methodologies around assessing your compliance with GDPR. It’s important to be aware of your own individual business attributes.

I thought it might be helpful to share some of our own lessons learned. In our experience in solving the GDPR challenge, the following were keys to our success:

  1. Get your senior leadership involved. We have a regular cadence of detailed status conversations about GDPR with our CEO, Andy Jassy. GDPR is high stakes, and the AWS leadership team knows it. If GDPR doesn’t have the attention it needs with the visibility of top management today, it’s time to escalate.
  2. Centralize the GDPR efforts. Driving all work streams centrally is key. This may sound obvious, but managing this in a distributed manner may result in duplicative effort and/or team members moving in a different direction.
  3. The most important single partner in solving GDPR is your legal team. Having non-legal people make assumptions about how to interpret GDPR for your unique environment is both risky and a potential waste of time and resources. You want to avoid analysis paralysis by getting proper legal advice, collaborating on a direction, and then moving forward with the proper urgency.
  4. Collaborate closely with tech leadership. The “process” people in your organization, the ones who already know how to approach governance problems, are typically comfortable jumping right into GDPR. But technical teams, including data owners, have set up their software for business application. They may not even know what kind of data they are storing, processing, or transferring to other parts of the business. In the GDPR exercise they need to be aware of (or at least help facilitate) the tracking of data and data elements between systems. This isn’t a typical ask for technical teams, so be prepared to educate and to fully understand data flow; a minimal inventory sketch follows this list.
  5. Don’t live by the established checklists. There are multiple methodologies to solving the compliance challenges of GDPR. At AWS, we ended up establishing core requirements, mapped out by data controller and data processor functions and then, in partnership with legal, decided upon a group of projects based on our known current state. Be careful about using a set methodology, tool or questionnaire to govern your efforts. These generic assessments can help educate, but letting them drive or limit your work could lead to missing something that is key to your own compliance. In this sense, a generic, “one size fits all” solution might not be helpful.
  6. Don’t be afraid to challenge prior orthodoxy. Many times we changed course based on new information. You shouldn’t be afraid to scrap an effort if you determine it’s not working. You should also not be afraid to escalate issues to senior leadership when needed. This is an executive issue.
  7. Look for ways to leverage your work beyond this compliance activity. GDPR requires serious effort, but are the results limited to GDPR compliance? Certainly not. You can use GDPR workflows as a way to ensure better governance moving forward. Privacy and security will require work for the foreseeable future, so make your governance program scalable and usable for other purposes.
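
As a concrete (and entirely hypothetical) illustration of the data tracking mentioned in point 4, a technical team might start with nothing more elaborate than a machine-readable inventory of which systems hold which personal data elements and where they send them. The system and field names below are invented for the example:

```python
# Hypothetical personal-data inventory: system -> data elements and flows.
# Names are invented for illustration; a real inventory would come from
# interviews with data owners and inspection of the actual systems.
INVENTORY = [
    {"system": "crm",     "elements": ["name", "email"],   "sends_to": ["billing"]},
    {"system": "billing", "elements": ["name", "address"], "sends_to": ["erp"]},
    {"system": "weblogs", "elements": ["ip_address"],      "sends_to": []},
]

def systems_holding(element: str):
    """Which systems hold a given personal data element?

    Useful when answering a data subject access or erasure request."""
    return [entry["system"] for entry in INVENTORY if element in entry["elements"]]

print(systems_holding("name"))   # ['crm', 'billing']
```

Even a simple structure like this makes the downstream questions (where does this element flow, whom do we notify, what do we delete) mechanical rather than archaeological.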

One last tip that has made all the difference: think about protecting data subjects and work backwards from there. Customer focus drives us to ask, “what would customers and data subjects want and expect us to do?” Taking GDPR from a pure legal or compliance standpoint may be technically sufficient, but we believe the objectives of security and personal data protection require a more comprehensive view, and you can most effectively shape that view by starting with the individuals GDPR was meant to protect.

If you would like to find out more about our experiences, as well as how we can help you in your efforts, please reach out to us today.

-Chad Woolf

Vice President, AWS Security Assurance

Interested in additional AWS Security news? Follow the AWS Security Blog on Twitter.

Registrars Suspend 11 Pirate Site Domains, 89 More in the Crosshairs

Post Syndicated from Andy original https://torrentfreak.com/registrars-suspend-11-pirate-site-domains-89-more-in-the-crosshairs-180423/

In addition to website blocking, which is running rampant across dozens of countries right now, targeting the domains of pirate sites is considered to be a somewhat effective anti-piracy tool.

The vast majority of websites are found using a recognizable name so when they become inaccessible, site operators have to work quickly to get the message out to fans. That can mean losing visitors, at least in the short term, and also contributes to the rise of copy-cat sites that may not have users’ best interests at heart.

Nevertheless, crime-fighting has always been about disrupting the enemy’s ability to do business, so with this in mind, authorities in India began taking advice from the UK’s Police Intellectual Property Crime Unit (PIPCU) a couple of years ago.

After studying the model developed by PIPCU, India formed its Digital Crime Unit (DCU), which follows a multi-stage plan.

Initially, pirate sites and their partners are told to cease and desist. Next, complaints are filed with advertisers, who are asked to stop funding site activities. Service providers and domain registrars also receive a written complaint from the DCU, asking them to suspend services to the sites in question.

Last July, the DCU earmarked around 9,000 sites where pirated content was being made available. From there, 1,300 were placed on a shortlist for targeted action. Precisely how many have been contacted thus far is unclear but authorities are now reporting success.

According to local reports, the Maharashtra government’s Digital Crime Unit has managed to have 11 pirate site domains suspended following complaints from players in the entertainment industry.

As is often the case (and to avoid them receiving even more attention), the sites in question aren’t being named, but according to Brijesh Singh, Special Inspector General of Police in Maharashtra, the sites had a significant number of visitors.

Their domain registrars were sent a notice under Section 149 of the Code Of Criminal Procedure, which grants police the power to take preventative action when a crime is suspected. It’s yet to be confirmed officially but it seems likely that pirate sites utilizing local registrars were targeted by the authorities.

“Responding to our notice, the domain names of all these websites, that had a collective viewership of over 80 million, were suspended,” Singh said.

Laxman Kamble, a police inspector attached to the state government’s Cyber Cell, said the pilot project was launched after the government received complaints from Viacom and Star, but back in January there were reports that the MPAA had also become involved.

Using the model pioneered by London’s PIPCU, 19 parameters were applied to a list of pirate sites in order to place them on the shortlist. These are reported to include the type of content being uploaded and downloaded, and the overall number of downloads.

Kamble reports that a further 89 websites, which have domains registered abroad but are very popular in India, are now being targeted. Whether overseas registrars will prove as compliant remains to be seen. After initial successes, even PIPCU itself experienced problems keeping up the momentum with registrars.

In 2014, information obtained by TorrentFreak following a Freedom of Information request revealed that only five out of 70 domain registrars had complied with police requests to suspend domains.

A year later, PIPCU confirmed that suspending pirate domain names was no longer a priority for them after ICANN ruled that registrars don’t have to suspend domain names without a valid court order.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Confused About the Hybrid Cloud? You’re Not Alone

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/confused-about-the-hybrid-cloud-youre-not-alone/

Hybrid Cloud. What is it?

Do you have a clear understanding of the hybrid cloud? If you don’t, it’s not surprising.

Hybrid cloud has been applied to a greater and more varied number of IT solutions than almost any other recent data management term. About the only thing that’s clear about the hybrid cloud is that the term hybrid cloud wasn’t invented by customers, but by vendors who wanted to hawk whatever solution du jour they happened to be pushing.

Let’s be honest. We’re in an industry that loves hype. We can’t resist grafting hyper, multi, ultra, and super and other prefixes onto the beginnings of words to entice customers with something new and shiny. The alphabet soup of cloud-related terms can include various options for where the cloud is located (on-premises, off-premises), whether the resources are private or shared in some degree (private, community, public), what type of services are offered (storage, computing), and what type of orchestrating software is used to manage the workflow and the resources. With so many moving parts, it’s no wonder potential users are confused.

Let’s take a step back, try to clear up the misconceptions, and come up with a basic understanding of what the hybrid cloud is. To be clear, this is our viewpoint. Others are free to do what they like, so bear that in mind.

So, What is the Hybrid Cloud?

To get beyond the hype, let’s start with Forrester Research‘s idea of the hybrid cloud: “One or more public clouds connected to something in my data center. That thing could be a private cloud; that thing could just be traditional data center infrastructure.”

To put it simply, a hybrid cloud is a mash-up of on-premises and off-premises IT resources.

To expand on that a bit, we can say that the hybrid cloud refers to a cloud environment made up of a mixture of on-premises private cloud[1] resources combined with third-party public cloud resources that use some kind of orchestration[2] between them. The advantage of the hybrid cloud model is that it allows workloads and data to move between private and public clouds in a flexible way as demands, needs, and costs change, giving businesses greater flexibility and more options for data deployment and use.

In other words, if you have some IT resources in-house that you are replicating or augmenting with an external vendor, congrats, you have a hybrid cloud!

Private Cloud vs. Public Cloud

The cloud is really just a collection of purpose-built servers. In a private cloud, the servers are dedicated to a single tenant or a group of related tenants. In a public cloud, the servers are shared between multiple unrelated tenants (customers). A public cloud is off-site, while a private cloud can be on-site or off-site — or on-prem or off-prem.

As an example, let’s look at a hybrid cloud meant for data storage, a hybrid data cloud. A company might set up a rule that says all accounting files that have not been touched in the last year are automatically moved off-prem to cloud storage to save cost and reduce the amount of storage needed on-site. The files are still available; they are just no longer stored on your local systems. The rules can be defined to fit an organization’s workflow and data retention policies.
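
As a rough sketch of how such a rule might be automated, the following Python scans a directory tree for files untouched for a year and hands them to an upload helper. The `upload_to_b2` function here is a hypothetical stand-in for whatever client your cloud storage provider offers; the sketch only shows the selection logic:

```python
import time
from pathlib import Path

ONE_YEAR = 365 * 24 * 60 * 60  # seconds

def upload_to_b2(path: Path) -> None:
    """Hypothetical stand-in for a real cloud storage client call."""
    print(f"would upload {path}")

def archive_stale_files(root: str, max_age: float = ONE_YEAR) -> None:
    """Send files untouched for max_age seconds to off-premises storage."""
    root_path = Path(root)
    if not root_path.is_dir():
        return
    cutoff = time.time() - max_age
    for path in root_path.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            upload_to_b2(path)
            # A real rule might then delete or stub out the local copy.

archive_stale_files("/data/accounting")  # example path
```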

The hybrid cloud concept also contains cloud computing. For example, at the end of the quarter, order processing application instances can be spun up off-premises in a hybrid computing cloud as needed to add to on-premises capacity.

Hybrid Cloud Benefits

If we accept that the hybrid cloud combines the best elements of private and public clouds, then the benefits of hybrid cloud solutions are clear, and we can identify the two primary benefits that result from the blending of private and public clouds.

Diagram of the Components of the Hybrid Cloud

Benefit 1: Flexibility and Scalability

Undoubtedly, the primary advantage of the hybrid cloud is its flexibility. It takes time and money to manage in-house IT infrastructure, and adding capacity requires advance planning.

The cloud is ready and able to provide IT resources whenever needed on short notice. The term cloud bursting refers to the on-demand and temporary use of the public cloud when demand exceeds resources available in the private cloud. For example, some businesses experience seasonal spikes that can put an extra burden on private clouds. These spikes can be taken up by a public cloud. Demand also can vary with geographic location, events, or other variables. The public cloud provides the elasticity to deal with these and other anticipated and unanticipated IT loads. The alternative would be fixed cost investments in on-premises IT resources that might not be efficiently utilized.
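
At its core, a cloud bursting decision can be as simple as comparing demand against a fixed private capacity. This toy Python sketch (the capacity figure and the provisioning call are invented for illustration) shows the shape of the logic:

```python
PRIVATE_CAPACITY = 40  # app instances the private cloud can run (example figure)

def instances_to_burst(demand: int, capacity: int = PRIVATE_CAPACITY) -> int:
    """How many instances must temporarily run in the public cloud?"""
    return max(0, demand - capacity)

def provision_public_instances(count: int) -> None:
    """Hypothetical stand-in for a real cloud provider's provisioning API."""
    print(f"spinning up {count} public cloud instances")

# A seasonal spike pushes demand to 55 instances against a 40-instance
# private cloud, so 15 instances burst to the public cloud.
overflow = instances_to_burst(55)
if overflow:
    provision_public_instances(overflow)  # tear these down when demand drops
```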

For a data storage user, on-premises private cloud storage provides, among other benefits, the highest speed of access. For data that is accessed infrequently, or that is not needed with the absolute lowest levels of latency, it makes sense for the organization to move it to a location that is secure but less expensive. The data is still readily available, and the public cloud provides a better platform for sharing the data with specific clients, users, or with the general public.

Benefit 2: Cost Savings

The public cloud component of the hybrid cloud provides cost-effective IT resources without incurring capital expenses and labor costs. IT professionals can determine the best configuration, service provider, and location for each service, thereby cutting costs by matching the resource with the task best suited to it. Services can be easily scaled, redeployed, or reduced when necessary, saving costs through increased efficiency and avoiding unnecessary expenses.

Comparing Private vs Hybrid Cloud Storage Costs

To get an idea of the difference in storage costs between a purely on-premises solution and one that uses a hybrid of private and public storage, we’ll present two scenarios. For each scenario we’ll use data storage amounts of 100 terabytes, 1 petabyte, and 2 petabytes. Each table uses the same format; all we’ve done is change how the data is distributed between the private (on-premises) cloud and the public (off-premises) cloud. We are using the costs for our own B2 Cloud Storage in this example. The math can be adapted for any set of numbers you wish to use.

Scenario 1: 100% of data in on-premises storage

| Data stored | 100 TB | 1,000 TB | 2,000 TB |
|---|---|---|---|
| On-premises (100%) | 100 TB | 1,000 TB | 2,000 TB |
| Monthly cost, low ($12/TB/month) | $1,200 | $12,000 | $24,000 |
| Monthly cost, high ($20/TB/month) | $2,000 | $20,000 | $40,000 |

Scenario 2: 20% of data on-premises with 80% public cloud storage (B2)

| Data stored | 100 TB | 1,000 TB | 2,000 TB |
|---|---|---|---|
| On-premises (20%) | 20 TB | 200 TB | 400 TB |
| Public cloud (80%) | 80 TB | 800 TB | 1,600 TB |
| On-premises monthly cost, low ($12/TB/month) | $240 | $2,400 | $4,800 |
| On-premises monthly cost, high ($20/TB/month) | $400 | $4,000 | $8,000 |
| Public cloud monthly cost, low ($5/TB/month, B2) | $400 | $4,000 | $8,000 |
| Public cloud monthly cost, high ($20/TB/month) | $1,600 | $16,000 | $32,000 |
| Combined monthly cost, low | $640 | $6,400 | $12,800 |
| Combined monthly cost, high | $2,000 | $20,000 | $40,000 |

As can be seen in the numbers above, using a hybrid cloud solution and storing 80% of the data in the cloud with a provider such as Backblaze B2 can result in significant savings over storing only on-premises. For other cost scenarios, see the B2 Cost Calculator.
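
The math behind the tables is simple enough to script. Here is a minimal Python sketch of the same calculation, using the low-end example rates above; swap in your own figures as the post suggests:

```python
def monthly_cost(total_tb, onprem_fraction, onprem_rate, cloud_rate):
    """Monthly storage cost for a hybrid split, in dollars.

    total_tb        -- total data stored, in terabytes
    onprem_fraction -- share of data kept on-premises (0.0 to 1.0)
    onprem_rate     -- on-premises cost per TB per month
    cloud_rate      -- public cloud cost per TB per month
    """
    onprem_tb = total_tb * onprem_fraction
    cloud_tb = total_tb - onprem_tb
    return onprem_tb * onprem_rate + cloud_tb * cloud_rate

# Example figures from the tables above: $12/TB/month on-premises (low end),
# $5/TB/month for B2 cloud storage.
for total in (100, 1_000, 2_000):
    all_onprem = monthly_cost(total, 1.0, 12, 5)
    hybrid = monthly_cost(total, 0.2, 12, 5)
    print(f"{total:>5} TB: all on-prem ${all_onprem:,.0f}/mo, "
          f"80/20 hybrid ${hybrid:,.0f}/mo")
```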

When Hybrid Might Not Always Be the Right Fit

There are circumstances where the hybrid cloud might not be the best solution. Smaller organizations operating on a tight IT budget might best be served by a purely public cloud solution. The cost of setting up and running private servers is substantial.

An application that requires the highest possible speed might not be suitable for hybrid, depending on the specific cloud implementation. While latency does play a factor in data storage for some users, it is less of a factor for uploading and downloading data than it is for organizations using the hybrid cloud for computing. Because Backblaze recognized the importance of speed and low latency for customers wishing to use computing on data stored in B2, we directly connected our data centers with those of our computing partners, ensuring that latency would not be an issue even for a hybrid cloud computing solution.

It is essential to have a good understanding of workloads and their essential characteristics in order to make the hybrid cloud work well for you. Each application needs to be examined for the right mix of private cloud, public cloud, and traditional IT resources that fit the particular workload in order to benefit most from a hybrid cloud architecture.

The Hybrid Cloud Can Be a Win-Win Solution

From the high altitude perspective, any solution that enables an organization to respond in a flexible manner to IT demands is a win. Avoiding big upfront capital expenses for in-house IT infrastructure will appeal to the CFO. Being able to quickly spin up IT resources as they’re needed will appeal to the CTO and VP of Operations.

Should You Go Hybrid?

We’ve arrived at the bottom line and the question is, should you or your organization embrace hybrid cloud infrastructures?

According to 451 Research, by 2019, 69% of companies will operate in hybrid cloud environments, and 60% of workloads will be running in some form of hosted cloud service (up from 45% in 2017). That indicates that the benefits of the hybrid cloud appeal to a broad range of companies.

In Two Years, More Than Half of Workloads Will Run in Cloud

Clearly, depending on an organization’s needs, there are advantages to a hybrid solution. While it might have been possible to dismiss the hybrid cloud in the early days of the cloud as nothing more than a buzzword, that’s no longer true. The hybrid cloud has evolved beyond the marketing hype to offer real solutions for an increasingly complex and challenging IT environment.

If an organization approaches the hybrid cloud with sufficient planning and a structured approach, a hybrid cloud can deliver on-demand flexibility, empower legacy systems and applications with new capabilities, and become a catalyst for digital transformation. The result can be an elastic and responsive infrastructure that has the ability to quickly respond to changing demands of the business.

As data management professionals increasingly recognize the advantages of the hybrid cloud, we can expect more and more of them to embrace it as an essential part of their IT strategy.

Tell Us What You’re Doing with the Hybrid Cloud

Are you currently embracing the hybrid cloud, or are you still uncertain or hanging back because you’re satisfied with how things are currently? Maybe you’ve gone totally hybrid. We’d love to hear your comments below on how you’re dealing with the hybrid cloud.


[1] Private cloud can be on-premises or a dedicated off-premises facility.

[2] Hybrid cloud orchestration solutions are often proprietary, vertical, and task dependent.

The post Confused About the Hybrid Cloud? You’re Not Alone appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.