Tag Archives: Partners

Introducing Cloudflare’s Technology Partner Program

Post Syndicated from Matt Lewis original https://blog.cloudflare.com/technology-partner-program/

The Internet is built on a series of shared protocols, all working in harmony to deliver the collective experience that has changed the way we live and work. These open standards have created a platform on which a myriad of companies can build unique services and products that work together seamlessly. As a steward and supporter of an open Internet, we aspire to provide an interoperable platform that works with all the complementary technologies that our customers use across their technology stack. This has been the guiding principle for the multiple partnerships we have launched over the last few years.

One example is our Bandwidth Alliance — launched in 2018, this alliance with 18 cloud and storage providers aims to reduce egress fees, also known as data transfer fees, for our customers. The Bandwidth Alliance has broken the norms of the cloud industry so that customers can move data more freely. Since then, we have launched several technology partner programs with more than 40 partners, including:

  • Analytics — Visualize Cloudflare logs and metrics easily, and help customers better understand events and trends from websites and applications on the Cloudflare network.
  • Network Interconnect — Partnerships with best-in-class Interconnection platforms offer private, secure, software-defined links with near instant-turn-up of ports.
  • Endpoint Protection Partnerships — With these integrations, every connection to our customers’ corporate applications gets an additional layer of identity assurance without the need to connect to a VPN.
  • Identity Providers — Easily integrate your organization’s single sign-on provider and benefit from the ease-of-use and functionality of Cloudflare Access.

These partner programs have helped us serve our customers better alongside our partners with our complementary solutions. The integrations we have driven have made it easy for thousands of customers to use Cloudflare with other parts of their stack.

We aim to continue expanding the Cloudflare Partner Network to make it seamless for our customers to use Cloudflare. To support our growing ecosystem of partners, we are excited to launch our Technology Partner Program.

Announcing Cloudflare’s Technology Partner Program

Cloudflare’s Technology Partner Program facilitates innovative integrations that create value for our customers, our technology partners, and Cloudflare. Our partners not only benefit from technical integrations with us, but also have the opportunity to drive sales and marketing efforts to better serve mutual customers and prospects.

This program offers a guiding structure so that our partners can benefit across three key areas:

  • Build with Cloudflare: Sandbox access to Cloudflare enterprise features and APIs to build and test integrations. Opportunity to collaborate with Cloudflare’s product teams to build innovative solutions.
  • Market with Cloudflare: Develop joint solution briefs and host joint events to drive awareness and adoption of integrations. Leverage a range of our partner tools and resources to bring our joint solutions to market.
  • Sell with Cloudflare: Align with our sales teams to jointly target relevant customer segments across geographies.

Technology Partner Tiers

Depending on the maturity of the integration and fit with Cloudflare’s product portfolio, we have two types of partners:

  • Strategic partners: Strategic partners have mature integrations across the Cloudflare product suite. They are leaders in their industries and have a significant overlap with our customer base. These partners are strategically aligned with our sales and marketing efforts, and they collaborate with our product teams to bring innovative solutions to market.
  • Integration partners: Integration partners are early participants in Cloudflare’s partnership ecosystem. They already have or are on a path to build validated, functional integrations with Cloudflare. These partners have programmatic access to resources that will help them experiment with and build integrations with Cloudflare.

Work with Us

If you are interested in working with our Technology Partnerships team to develop and bring to market a joint solution, we’d love to hear from you!  Partners can complete the application on our Technology Partner Program website and we will reach out quickly to discuss how we can help build solutions for our customers together.

Measuring Hyper-Threading and Turbo Boost

Post Syndicated from Sung Park original https://blog.cloudflare.com/measuring-hyper-threading-and-turbo-boost/

We often put together experiments that measure hardware performance to improve our understanding and provide insights to our hardware partners. We recently wanted to know more about Hyper-Threading and Turbo Boost. The last time we assessed these two technologies was when we were still deploying the Intel Xeons (Skylake/Purley), but beginning with our Gen X servers we switched over to the AMD EPYC (Zen 2/Rome). This blog is about our latest attempt at quantifying the performance impact of Hyper-Threading and Turbo Boost on our AMD-based servers running our software stack.

Intel briefly introduced Hyper-Threading with NetBurst (Northwood) back in 2002, then reintroduced Hyper-Threading six years later with Nehalem along with Turbo Boost. AMD presented its own implementation of these technologies with Zen in 2017, but AMD’s version of Turbo Boost actually dates back to AMD K10 (Thuban) in 2010, when it was called Turbo Core. Since Zen, AMD’s equivalents of Hyper-Threading and Turbo Boost have been known as simultaneous multithreading (SMT) and Core Performance Boost (CPB), respectively. The underlying implementations differ between the two vendors, but the high-level concept remains the same.

Hyper-Threading or simultaneous multithreading creates a second hardware thread within a processor’s core, also known as a logical core, by duplicating various parts of the core to support the context of a second application thread. The two hardware threads execute simultaneously within the core, across their dedicated and remaining shared resources. If the two hardware threads do not contend over a particular shared resource, throughput can be drastically increased.

Turbo Boost or Core Performance Boost opportunistically allows the processor to operate beyond its rated base frequency as long as the processor operates within guidelines set by Intel or AMD. Generally speaking, the higher the frequency, the faster the processor finishes a task.

Simulated Environment

CPU Specification

Our Gen X or 10th generation servers are powered by the AMD EPYC 7642, based on the Zen 2 microarchitecture. The vast majority of Zen 2-based processors, along with the Zen 3-based successors that power our Gen 11 servers, support simultaneous multithreading and Core Performance Boost.

Similar to Intel’s Hyper-Threading, AMD implemented 2-way simultaneous multithreading. The AMD EPYC 7642 has 48 cores, and with simultaneous multithreading enabled it can simultaneously execute 96 hardware threads. Core Performance Boost allows the AMD EPYC 7642 to operate anywhere between 2.3 and 3.3 GHz, depending on the workload and limitations imposed on the processor. With Core Performance Boost disabled, the processor operates at 2.3 GHz, the rated base frequency of the AMD EPYC 7642. We took our usual simulated traffic pattern of 10 KiB cached assets over HTTPS, provided by our performance team, to generate a sustained workload that saturated the processor to 100% CPU utilization.

Results

After establishing a baseline with simultaneous multithreading and Core Performance Boost disabled, we started enabling one feature at a time. When we enabled Core Performance Boost, the processor operated near its peak turbo frequency, hovering between 3.2 and 3.3 GHz, which is more than 39% higher than the base frequency. The higher operating frequency directly translated into 40% additional requests per second. We then disabled Core Performance Boost and enabled simultaneous multithreading. Similar to Core Performance Boost, simultaneous multithreading alone improved requests per second by 43%. Lastly, by enabling both features, we observed an 86% improvement in requests per second.

Latencies were generally lowered by Core Performance Boost, simultaneous multithreading, or both. While Core Performance Boost consistently maintained a lower latency than the baseline, simultaneous multithreading gradually took longer to process a request as it reached tail latencies. Though not depicted in the figure below, when we examined beyond p9999, or the 99.99th percentile, latency with simultaneous multithreading, even with the help of Core Performance Boost, increased by more than 150% over the baseline, presumably due to the two hardware threads contending over a shared resource within the core.

Production Environment

Moving into production, since our traffic fluctuates throughout the day, we took four identical Gen X servers and measured in parallel during peak hours. The only changes we made to the servers were enabling and disabling simultaneous multithreading and Core Performance Boost to create a comprehensive test matrix. We conducted the experiment in two different regions to identify any anomalies and mismatching trends. All trends were alike.

Before diving into the results, we should note that the baseline server operated at a higher CPU utilization than the others. Every generation, our servers deliver a noticeable improvement in performance, so our load balancer, named Unimog, sends a different number of connections to the target server based on its generation to balance out the CPU utilization. When we disabled simultaneous multithreading and Core Performance Boost, the baseline server’s performance degraded to the point where Unimog encountered a “guard rail”, the lower limit on the requests sent to the server, and so its CPU utilization rose instead. Operating at that higher CPU utilization, the baseline server processed more requests per second to meet the minimum performance threshold.

Results

Due to the skewed baseline, we only observed 7% additional requests per second when Core Performance Boost was enabled. Next, simultaneous multithreading alone improved requests per second by 41%. Lastly, with both features enabled, we saw an 86% improvement in requests per second.

Though we lack concrete baseline data, we can normalize requests per second by CPU utilization to approximate the improvement for each scenario. Once normalized, the estimated improvements in requests per second from Core Performance Boost and simultaneous multithreading were 36% and 80%, respectively. With both features enabled, requests per second improved by 136%.
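
To make the normalization concrete, here is a minimal sketch in Python. The CPU utilization figures are placeholders of our own, not our measured values, so the output lands near, but not exactly on, the percentages quoted above:

# normalize_rps.py -- illustrative only; the utilization figures are made up
scenarios = {
    # name: (requests_per_second_relative, cpu_utilization)
    "baseline (SMT off, CPB off)": (1.00, 0.90),
    "CPB only":                    (1.07, 0.70),
    "SMT only":                    (1.41, 0.70),
    "SMT + CPB":                   (1.86, 0.70),
}

base_rps, base_cpu = scenarios["baseline (SMT off, CPB off)"]
base_norm = base_rps / base_cpu

for name, (rps, cpu) in scenarios.items():
    norm = rps / cpu  # requests per second per unit of CPU utilization
    print(f"{name}: {norm / base_norm - 1:+.0%} vs. baseline")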

Latency was not as interesting since the baseline server operated at a higher CPU utilization, and in turn, it produced a higher tail latency than we would have otherwise expected. All other servers maintained a lower latency due to their lower CPU utilization in conjunction with Core Performance Boost, simultaneous multithreading, or both.

At this point, our experiment had not gone as we had planned. Our baseline was skewed, and we only got partially useful answers. However, we find experimenting to be important because we usually end up finding other helpful insights along the way.

Let’s add power data. Since our baseline server was operating at a higher CPU utilization, we knew it was serving more requests and, therefore, consumed more power than it needed to. Enabling Core Performance Boost allowed the processor to run up to its peak turbo frequency, increasing power consumption by 35% over the skewed baseline. More interestingly, enabling simultaneous multithreading increased power consumption by only 7%. Combining Core Performance Boost with simultaneous multithreading resulted in a 58% increase in power consumption.

AMD’s implementation of simultaneous multithreading appears to be power efficient, as it achieves 41% additional requests per second while consuming only 7% more power compared to the skewed baseline. For completeness, we bridged the performance and power data together to obtain performance per watt and summarize power efficiency. We divided the non-normalized requests per second by power consumption to produce the requests per watt figure below. Our Gen X servers attained the best performance per watt by enabling just simultaneous multithreading.

Conclusion

In our assessment of AMD’s implementation of Hyper-Threading and Turbo Boost, the original experiment we designed to measure requests per second and latency did not pan out as expected. As soon as we entered production, our baseline measurement was skewed due to the imbalance in CPU utilization and only partially reproduced our lab results.

We added power to the experiment and found other meaningful insights. By analyzing the performance and power characteristics of simultaneous multithreading and Core Performance Boost, we concluded that simultaneous multithreading can be a power-efficient mechanism to attain additional requests per second. One drawback of simultaneous multithreading is its long tail latency, which is currently curtailed by enabling Core Performance Boost. While the higher frequency enabled by Core Performance Boost provides latency reduction and more requests per second, we are mindful that the increase in power consumption is quite significant.

Do you want to help shape the Cloudflare network? This blog was a glimpse of the work we do at Cloudflare. Come join us and help complete the feedback loop for our developers and hardware partners.

Upgrading the Cloudflare China Network: better performance and security through product innovation and partnership

Post Syndicated from Patrick R. Donahue original https://blog.cloudflare.com/upgrading-the-cloudflare-china-network/

Core to Cloudflare’s mission of helping build a better Internet is making it easy for our customers to improve the performance, security, and reliability of their digital properties, no matter where in the world they might be. This includes Mainland China. Cloudflare has had customers using our service in China since 2015 and recently, we expanded our China presence through a partnership with JD Cloud, the cloud division of Chinese Internet giant, JD.com. We’ve also had a local office in Beijing for several years, which has given us a deep understanding of the Chinese Internet landscape as well as local customers.

The new Cloudflare China Network built in partnership with JD Cloud has been live for several months, with significant performance and security improvements compared to the previous in-country network. Today, we’re excited to describe the improvements we made to our DNS and DDoS systems, and provide data demonstrating the performance gains customers are seeing. All customers licensed to operate in China can now benefit from these innovations, with the click of a button in the Cloudflare dashboard or via the API.

Serving DNS inside China

With over 14% of all domains on the Internet using Cloudflare’s nameservers, we are the largest DNS provider. Furthermore, we pride ourselves on consistently being among the fastest authoritative nameservers, answering about 12 million DNS queries per second on average (in Q2 2021). We achieve this scale and performance by running our DNS platform on our global network in more than 200 cities, in over 100 countries.

Not too long ago, a user in mainland China accessing a website using Cloudflare DNS did not fully benefit from these advantages. Their DNS queries had to leave the country and, in most cases, cross the Pacific Ocean to reach our nameservers outside of China. This network distance introduced latency and sometimes even packet drops, resulting in a poor user experience.

With the new China Network offering built on JD Cloud’s infrastructure, customers are now able to serve their DNS in mainland China. This means DNS queries are answered directly from one of the JD Cloud Points of Presence (PoPs), leading to faster response times and improved reliability.

Once a user signs up a domain and opts in to serve their DNS in China, we assign two nameservers from two of the following three domains:

cf-ns.com
cf-ns.net
cf-ns.tech

We selected these Top Level Domains (TLDs) because they offer the best possible performance from within mainland China. They are chosen to always be different from the TLD of the domain using them. For example, example.com will be assigned nameservers using the .tech and .net TLDs. This gives us “glueless delegations” for customers’ nameservers, allowing us to dynamically return nameserver IP addresses instead of static glue records.

A “glue record” (or just “glue”) is a mapping between nameservers and IPs that’s added by registrars to break circular lookup dependencies when a domain uses a nameserver with the same TLD. For example, imagine a resolver asks the .com TLD nameserver: “Where do I find the nameservers for example.com?” and this domain is using ns1.example.com and ns2.example.com as nameservers. If .com just replied: “Go and ask ns1.example.com or ns2.example.com.” the resolver would come back to .com with the same question and this would never stop. One solution is to add glue at .com, so the answer can be: “The nameservers for example.com are ns1.example.com and ns2.example.com, and they can be reached at 192.0.2.78 and 203.0.113.55.”.

By using different TLDs, as described above, we don’t need to rely on glue records for customers’ nameservers. This way, we can ensure that queries will always be answered from the nearest point of presence (PoP) leading to a faster DNS response. Another advantage of serving dynamic nameserver IPs is the ability to distribute queries across different PoPs, which helps to spread load more efficiently and mitigate attacks.
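
As an illustration of that selection rule, here is a minimal sketch in Python. The ns1/ns2 hostnames are our own invention for the example, not Cloudflare's actual naming:

NS_DOMAINS = ["cf-ns.com", "cf-ns.net", "cf-ns.tech"]

def tld(domain):
    # Return the last label of the domain, e.g. "com" for "example.com"
    return domain.rsplit(".", 1)[-1].lower()

def assign_nameservers(customer_domain):
    # Drop any nameserver domain that shares the customer's TLD, then
    # take two of the remaining domains (hostnames here are invented).
    candidates = [d for d in NS_DOMAINS if tld(d) != tld(customer_domain)]
    return ["ns1." + candidates[0], "ns2." + candidates[1]]

print(assign_nameservers("example.com"))   # uses the .net and .tech domains
print(assign_nameservers("example.tech"))  # uses the .com and .net domains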

Mitigating DDoS attacks within China

Everywhere in the world except for China and India, we use a technique known as anycast routing to distribute DDoS attacks and absorb them in data centers as close to the traffic source as possible. But as we first wrote in 2015, the Internet in China works a bit differently than the rest of the world so anycast-based mitigation was not an option:

Unlike much of the rest of the world where network routing is open, in China core Internet access is largely controlled by two ISPs: China Telecom and China Unicom. [Today this list also includes China Mobile.] These ISPs control IP address allocation and routing inside the country. Even the Chinese Internet giants rarely own their own IP address allocations, or use BGP to control routing across the Chinese Internet. This makes BGP Anycast and many of the other routing techniques we use across Cloudflare’s network impossible inside of China.

The lack of anycast in China requires a different approach to mitigating attacks, and our expansion with JD Cloud pushed us to further improve the edge-based mitigation system we wrote about earlier this year. Most importantly, we pushed the detection and mitigation of application (L7) attacks to the edge, reducing our time to mitigate and improving the resiliency of the system by removing a dependency on other core data centers for instructions. In the first quarter of 2021, we mitigated 81% of all L7 attacks at the edge.

For the larger network-based (L3/L4) attacks, we worked closely with JD Cloud to augment our in-data center protections with remote signaling to China Telecom, China Unicom, and China Mobile. These integrations allow us to remotely — and automatically — signal from our edge-based mitigation systems when we want upstream filtering assistance from the ISP. Mitigating attacks at the edge is faster than relying on centralized data centers, and in the first quarter of 2021 98.6% of all L3/4 DDoS attacks were mitigated without centralized communication. Attacks exceeding certain thresholds can also be re-routed to large scrubbing centers, a technique that doesn’t make sense in an anycast world but is useful when unicast is the only option.

Beyond the improved mitigation controls, we also developed new traffic engineering processes to move traffic from overloaded data centers to locations with more spare resources. These controls are already used outside of China, but doing so within the country required integration with our DNS systems.

Lastly, because all of our data centers run the same software stack, the work we did to improve the underlying components of DDoS detection and mitigation systems within China has already made its way back to our data centers outside of China.

Improving performance

Cloudflare on JD Cloud is significantly faster than our previous in-country network, allowing us to accelerate the delivery of our customers’ web properties in China.

To compare the Cloudflare PoPs on JD Cloud with our previous in-country network, we deployed a test zone to simulate a customer website on both China networks. We tested each zone against the same two origins, both hosted on commonly used public cloud providers: one in the northwestern United States, and the other in Western Europe.

For both zones, we assigned DNS nameservers in China to reduce the out-of-country latency incurred during DNS lookups (more details on DNS above). To test our caching, we used a monitoring and benchmarking service with a wide variety of clients in various Chinese cities and provinces to download 100 kilobyte, 1 megabyte, and 10 megabyte files every 15 minutes over the course of 36 hours.

Latency, as measured by Round Trip Time (RTT) from the client to our JD Cloud PoPs, was reduced by at least 30% across tests for all file sizes. This subsequently reduced our Time to First Byte (TTFB) metrics. Reducing latency — and making it more consistent, i.e., improving jitter — has the most impact on other performance metrics, as latency and the slow-start process are the bottleneck for the vast majority of TCP connections.
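
To see why, consider a rough back-of-the-envelope model of TCP slow start. This is a simplification of our own that ignores the handshake, loss, and congestion avoidance, and assumes a common initial window of 10 segments:

import math

def transfer_rtts(size_bytes, mss=1460, init_cwnd=10):
    # After n round trips, slow start has delivered roughly
    # init_cwnd * (2**n - 1) segments; solve for n.
    segments = math.ceil(size_bytes / mss)
    return math.ceil(math.log2(segments / init_cwnd + 1))

for size_kib in (100, 1024, 10240):
    for rtt_ms in (30, 10):  # illustrative "before" and "after" RTTs
        t = transfer_rtts(size_kib * 1024) * rtt_ms
        print(f"{size_kib:>6} KiB at {rtt_ms:>2} ms RTT: ~{t} ms (model only)")

Under this model, a 100 KiB object needs about four round trips of window growth, so cutting RTT from 30 ms to 10 ms cuts delivery time roughly threefold regardless of raw bandwidth.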

Our latency reduction comes from the quality of the JD Cloud network, their placement of the PoPs within China, and our ability to direct clients to the closest PoP. As we continue to add more capacity and PoPs in partnership with JD Cloud in the future, we only expect our latency metrics to get even better.

Dynamic Content

Static Content

DNS Response Time

Looking forward and welcoming new customers in China

Cloudflare’s sustained product investments in China, in partnership with JD Cloud, have resulted in significant performance and security improvements over our previous in-country network first launched in 2015.

Specifically, innovations in DNS and DDoS mitigation technology, alongside an improved network design and distribution of PoPs, have resulted in better security for our customers and at least a 30% performance boost.

This new network is open for business, and interested customers should reach out to learn more.

Expanding Cloudflare to 25+ Cities in Brazil

Post Syndicated from Jen Kim original https://blog.cloudflare.com/expanding-to-25-plus-cities-in-brazil/

Today, we are excited to announce an expansion we’ve been working on behind the scenes for the last two years: a 25+ city partnership with one of the largest ISPs in Brazil. This is one of the largest simultaneous single-country expansions we’ve done so far.

With this partnership, Brazilians throughout the country will see significant improvement to their Internet experience. Already, the 25th-percentile latency of non-bot traffic (we use that measure as an approximation of physical distance from our servers to end users) has dropped from the mid-20 millisecond range to sub-10 milliseconds. This benefit extends not only to the 25 million Internet properties on our network, but to the entire Internet with Cloudflare services like 1.1.1.1 and WARP. We expect that as we approach 25 cities in Brazil, latency will continue to drop while throughput increases.

25th percentile latency of non-bot traffic in Brazil has more than halved as new cities have gone live.

This partnership is part of our mission to help build a better Internet and deliver the best development experience for all — not just those in major population centers or Western markets — and we are excited to take this step on that journey. Whether you live in the heart of São Paulo or on the outskirts of the Amazon rainforest in Manaus, expect an upgrade to your Internet experience soon.

We have already launched in Porto Alegre, Belo Horizonte, Brasília, Campinas, Curitiba, and Fortaleza, with additional presences coming soon to Manaus, São Paulo, Blumenau, Joinville, Florianópolis, Itajai, Belém, Goiânia, Salvador, São José do Rio Preto, Americana, and Sorocaba.

From there, we’re planning on adding presences in the following cities: Guarulhos, Mogi das Cruzes, São José dos Campos, Vitória, Londrina, Maringá, Campina Grande, Caxias do Sul, Cuiabá, Lajeado, Natal, Recife, Osasco, Santo André, and Rio. The result will be a net expansion of Cloudflare in Brazil by 12 to 16 times.

We celebrate the benefits that this partnership will bring to Latin America. Our President and Chief Operating Officer Michelle Zatlyn likes to say that “we’re just getting started”. In that spirit, expect more exciting news about the Cloudflare network not only in Latin America, but worldwide!

Do you work at an ISP who is interested in bringing a better Internet experience to your users and better control over your network? Please reach out to our Edge Partnerships team at [email protected].

Are you passionate about working to expand our network to make the best edge platform on the globe? Do you thrive in an exciting, rapid-growth environment? Check out open roles on the Infrastructure team here!

Enable secure access to applications with Cloudflare WAF and Azure Active Directory

Post Syndicated from Abhi Das original https://blog.cloudflare.com/cloudflare-waf-integration-azure-active-directory/

Cloudflare and Microsoft Azure Active Directory have partnered to provide an integration specifically for web applications using Azure Active Directory B2C. From today, customers using both services can follow the simple integration steps to protect B2C applications with Cloudflare’s Web Application Firewall (WAF) on any custom domain. Microsoft has detailed this integration as well.

Cloudflare Web Application Firewall

The Web Application Firewall (WAF) is a core component of the Cloudflare platform and is designed to keep any web application safe. It blocks more than 70 billion cyber threats per day. That is 810,000 threats blocked every second.

The WAF is available through an intuitive dashboard or a Terraform integration, and it enables users to build powerful rules. Every request to the WAF is inspected against the rule engine and the threat intelligence built from protecting approximately 25 million internet properties. Suspicious requests can be blocked, challenged or logged as per the needs of the user, while legitimate requests are routed to the destination regardless of where the application lives (i.e., on-premise or in the cloud). Analytics and Cloudflare Logs enable users to view actionable metrics.

The Cloudflare WAF is an intelligent, integrated, and scalable solution to protect business-critical web applications from malicious attacks, with no changes to customers’ existing infrastructure.

Azure AD B2C

Azure AD B2C is a customer identity management service that enables custom control of how your customers sign up, sign in, and manage their profiles when using iOS, Android, .NET, single-page (SPA), and other applications and web experiences. It uses standards-based authentication protocols including OpenID Connect, OAuth 2.0, and SAML. You can customize the entire user experience with your brand so that it blends seamlessly with your web and mobile applications. It integrates with most modern applications and commercial off-the-shelf software, providing business-to-customer identity as a service. Customers of businesses of all sizes use their preferred social, enterprise, or local account identities to get single sign-on access to their applications and APIs. It takes care of the scaling and safety of the authentication platform, monitoring and automatically handling threats like denial-of-service, password spray, or brute force attacks.

Integrated solution

When setting up Azure AD B2C, many customers prefer to customize their authentication endpoint by hosting the solution under their own domain — for example, under store.example.com — rather than using a Microsoft owned domain. With the new partnership and integration, customers can now place the custom domain behind Cloudflare’s Web Application Firewall while also using Azure AD B2C, further protecting the identity service from sophisticated attacks.

This defense-in-depth approach allows customers to leverage both Cloudflare WAF capabilities along with Azure AD B2C native Identity Protection features to defend against cyberattacks.

Instructions on how to set up the integration are provided on the Azure website and all it requires is a Cloudflare account.

Customer benefit

Azure customers need support for a strong set of security and performance tools once they implement Azure AD B2C in their environment. Integrating the Cloudflare Web Application Firewall with Azure AD B2C gives customers the ability to write custom security rules (including rate limiting rules), mitigate DDoS attacks, and deploy advanced bot management features. The Cloudflare WAF works by proxying and inspecting traffic towards your application and analyzing the payloads to ensure only non-malicious content reaches your origin servers. By incorporating the Cloudflare integration into Azure AD B2C, customers can ensure that their application is protected against sophisticated attack vectors including zero-day vulnerabilities, malicious automated botnets, and other generic attacks such as those listed in the OWASP Top 10.

Conclusion

This integration is a great match for any B2C businesses that are looking to enable their customers to authenticate themselves in the easiest and most secure way possible.

Please give it a try and let us know how we can improve it. Reach out to us with other use cases for your applications on Azure. Register here to express your interest in and feedback on the Azure integration, and to hear about upcoming webinars on this topic.

Congratulations to Cloudflare’s 2020 Partner Award Winners

Post Syndicated from Matthew Harrell original https://blog.cloudflare.com/2020-partner-awards/

We are privileged to share Cloudflare’s inaugural set of Partner Awards. These Awards recognize our partner companies and representatives worldwide who stood out this past year for their investments in acquiring technical expertise in our offerings, for delivering innovative applications and services built on Cloudflare, and for their commitment to customer success.

The unprecedented challenges in 2020 have reinforced how critical it is to have a secure, performant, and reliable Internet. Throughout these turbulent times, our partners have been busy innovating and helping organizations of all sizes and in various industries. By protecting and accelerating websites, applications, and teams with Cloudflare, our partners have helped these organizations adjust, seize new opportunities, and thrive.

Congratulations to each of our award winners.  Cloudflare’s mission of helping build a better Internet is more important than ever.  And our partners are more critical than ever to achieving our mission. Testifying to Cloudflare’s global reach, our honorees represent companies headquartered in 16 countries.

Cloudflare Partner of the Year Honorees, 2020

Worldwide MSP Partner of the Year: Rackspace Technology
Honors the top performing managed services provider (MSP) partner across Cloudflare’s three sales geographies: Americas, APAC, and EMEA.

Cloudflare Americas Partner Awards

Partner of the Year: Optiv Security
Honors the top performing partner that has demonstrated phenomenal sales achievement in 2020.

Technology Partner of the Year: Sumo Logic
Honors the technology alliance that has delivered stellar business outcomes and demonstrated continued commitment to our joint customers.

New Partner of the Year: GuidePoint Security
Honors the partner who, although new to the Cloudflare Partner Network in 2020, has already made substantial investments to grow our shared business.

Partner Systems Engineers (SEs) of the Year:
Honors the partner SEs who have demonstrated depth of knowledge and expertise in Cloudflare solutions through earned certifications and also outstanding delivery of customer service in the practical application of Cloudflare technology solutions to customers’ technical and business challenges.

Most Valuable Players (MVPs) of the Year:
Honors top achievers who not only provided stellar service to our joint customers, but also built new business value by tapping into the power of network, relationships, and ecosystems.

Cloudflare APAC Partner Awards

Distributor of the Year: Softdebut Co., Ltd
Honors the top performing distributor that has best represented Cloudflare and positioned partners to secure customer sales and growth revenue streams.

Technology Partner of the Year: Pacific Tech Pte Ltd
Honors the technology alliance that has delivered stellar business outcomes and demonstrated continued commitment to our joint customers.

Partner Systems Engineers (SEs) of the Year:
Honors the first three individuals who have achieved four key certifications and have demonstrated depth of knowledge and expertise in those fields.

Most Valuable Players (MVPs) of the Year:
Honors top achievers who not only provided stellar service to our joint customers, but also built new business value by tapping into the power of network, relationships, and ecosystems.

Cloudflare EMEA Partner Awards

Partner of the Year: Safenames
Honors the top performing partner that has demonstrated phenomenal sales achievement in 2020.

Distributor of the Year: V-Valley
Honors the top performing distributor that has best represented Cloudflare and positioned partners to secure customer sales and growth revenue streams.

New Partner of the Year: Synopsis
Honors a new partner to the Cloudflare Partner Network this year that has already made substantial investments to grow our shared business.

Cloudflare Certification Champions: KUWAITNET, Origo, WideOps
Honors partner companies whose teams earned the highest total number of Cloudflare certifications.

Partner Systems Engineers (SEs) of the Year:
Honors the partner SEs who have demonstrated depth of knowledge and expertise in Cloudflare solutions through earned certifications and also outstanding delivery of customer service in the practical application of Cloudflare technology solutions to customers’ technical and business challenges.

Are you a services or solutions provider interested in joining the Cloudflare Partner Network?  Check out the short video below on our program and visit our partner portal for more information.

GitHub reduces Marketplace transaction fees, revamps Technology Partner Program

Post Syndicated from Ryan J. Salva original https://github.blog/2021-02-04-github-reduces-marketplace-transaction-fees-revamps-technology-partner-program/

At GitHub, our community is at the heart of everything we do. We want to make it easier to build the things you love, with the tools you prefer to use—which is why we’re committed to maintaining an open platform for developers. Launched in 2017 and now home to the world’s largest DevOps ecosystem, GitHub Marketplace is the single destination for developers to find, sell, and share tools and solutions that help simplify and improve the process of building software.

Whether buying or selling, our goal is to provide the best possible Marketplace experience for developers. Today, we’re announcing some changes worth celebrating 🎉; changes to increase your revenue, simplify the application verification process, and make it easier for everyone to build with GitHub.

Supporting our Marketplace partners

In the spirit of helping developers both thrive and profit, we’re increasing developers’ take-home pay for apps sold in the Marketplace from 75% to 95%. GitHub will only keep a 5% transaction fee. This change puts more revenue in the pockets of the developers who are doing the work of building tools that support the GitHub community.

Learn more

Simplifying app verification process on the Marketplace

We know our partners are excited to get on Marketplace, and we’ve made changes to make this as easy as possible. Previously, a deep review of app security and functionality was required before an app could be added to Marketplace. Moving forward, we’ll verify your organization’s identity and common-sense security precautions by:

  1. Validating your domain with a simple DNS TXT record (see the sketch after this list)
  2. Validating the email address on record
  3. Requiring two-factor authentication for your GitHub organization
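
For instance, a publisher could self-check the first item before submitting. This sketch uses dnspython; the record value shown is a hypothetical placeholder, not GitHub's actual token format:

import dns.resolver  # pip install dnspython

def txt_token_present(domain, expected_token):
    # Look up the domain's TXT records and check for the expected token.
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    # A TXT answer may be split into multiple character strings; join them.
    values = [b"".join(rdata.strings).decode() for rdata in answers]
    return any(expected_token in value for value in values)

# Token format is hypothetical -- use whatever the verification flow issues.
print(txt_token_present("example.com", "github-verification=abc123"))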

You can track your app submission’s progress from your organization’s profile settings to fix issues faster. Now developers can get their solutions added to the Marketplace faster and the community can moderate app quality.

Screenshot of app publisher verification process in Marketplace

Soon, we’ll move all “verified apps” to the validated publisher model, updating the green “verified” badge to indicate that publishers, not apps, are scrutinized. Learn more

GitHub Technology Partner Program updates

We’ve also made some updates to our Technology Partner Program. If you’re interested in the GitHub Marketplace but unsure how to build integrations with the GitHub platform, want to co-market with us, or want to learn about partner events and opportunities, our technology partner program can help you get started. You can also check out the partner-centric resources section or reach out to us at [email protected].

Screenshot of Technology Partner Program Resource page

You’re now one step away from the technical and go-to-market resources you need to integrate with GitHub and help improve the lives of all software developers. Looking forward to seeing you on the Marketplace.

Happy coding. 👾

Automate thousands of mainframe tests on AWS with the Micro Focus Enterprise Suite

Post Syndicated from Kevin Yung original https://aws.amazon.com/blogs/devops/automate-mainframe-tests-on-aws-with-micro-focus/

Micro Focus is an AWS Advanced Technology Partner and a global infrastructure software company with 40 years of experience in delivering and supporting enterprise software.

We have seen mainframe customers encounter scalability constraints that prevent them from supporting their development and test workforce at the scale required by the business. These constraints can lead to delays, reduce product or feature releases, and leave teams unable to respond to market requirements. Furthermore, limits in capacity and scale often affect the quality of changes deployed, and are linked to unplanned or unexpected downtime in products or services.

The conventional approach to address these constraints is to scale up, meaning to increase the MIPS/MSU capacity of the mainframe hardware available for development and testing. The cost of this approach, however, is excessively high, and to meet time-to-market goals, you may reject it at the expense of quality and functionality. If you’re wrestling with these challenges, this post is written specifically for you.

To accompany this post, we developed an AWS prescriptive guidance (APG) pattern for developer instances and CI/CD pipelines: Mainframe Modernization: DevOps on AWS with Micro Focus.

Overview of solution

In the APG, we introduce DevOps automation and AWS CI/CD architecture to support mainframe application development. Our solution enables you to embrace both Test Driven Development (TDD) and Behavior Driven Development (BDD). Mainframe developers and testers can automate the tests in CI/CD pipelines so they’re repeatable and scalable. To speed up automated mainframe application tests, the solution uses team pipelines to run functional and integration tests frequently, and uses systems test pipelines to run comprehensive regression tests on demand. For more information about the pipelines, see Mainframe Modernization: DevOps on AWS with Micro Focus.

In this post, we focus on how to automate and scale mainframe application tests in AWS. We show you how to use AWS services and Micro Focus products to automate mainframe application tests with best practices. The solution can scale your mainframe application CI/CD pipeline to run thousands of tests in AWS within minutes, and you only pay a fraction of your current on-premises cost.

The following diagram illustrates the solution architecture.

Figure: Mainframe DevOps on AWS Architecture Overview (the conventional mainframe development environment on the left; the CI/CD pipelines for mainframe tests in AWS on the right)

Best practices

Before we get into the details of the solution, let’s recap the following mainframe application testing best practices:

  • Create a “test first” culture by writing tests for mainframe application code changes
  • Automate preparing and running tests in the CI/CD pipelines
  • Provide fast and quality feedback to project management throughout the SDLC
  • Assess and increase test coverage
  • Scale your test’s capacity and speed in line with your project schedule and requirements

Automated smoke test

In this architecture, mainframe developers can automate running functional smoke tests for new changes. This testing phase typically “smokes out” regression of core and critical business functions. You can achieve these tests using tools such as py3270 with x3270 or Robot Framework Mainframe 3270 Library.

The following code shows a feature test written in Behave and test step using py3270:

# home_loan_calculator.feature
Feature: calculate home loan monthly repayment
  the bankdemo application provides a monthly home loan repayment calculator
  The user inputs the home loan amount, interest rate, and loan maturity in months into the transaction.
  The user is shown the monthly home loan repayment amount

  Scenario Outline: As a customer I want to calculate my monthly home loan repayment via a transaction
      Given home loan amount is <amount>, interest rate is <interest rate> and maturity date is <maturity date in months> months 
       When the transaction is submitted to the home loan calculator
       Then it shall show the monthly repayment of <monthly repayment>

    Examples: Homeloan
      | amount  | interest rate | maturity date in months | monthly repayment |
      | 1000000 | 3.29          | 300                     | $4894.31          |

 

# home_loan_calculator_steps.py
import os
from py3270 import Emulator
from behave import *

@given("home loan amount is {amount}, interest rate is {rate} and maturity date is {maturity_date} months")
def step_impl(context, amount, rate, maturity_date):
    context.home_loan_amount = amount
    context.interest_rate = rate
    context.maturity_date_in_months = maturity_date

@when("the transaction is submitted to the home loan calculator")
def step_impl(context):
    # Setup connection parameters
    tn3270_host = os.getenv('TN3270_HOST')
    tn3270_port = os.getenv('TN3270_PORT')
	# Setup TN3270 connection
    em = Emulator(visible=False, timeout=120)
    em.connect(tn3270_host + ':' + tn3270_port)
    em.wait_for_field()
	# Screen login
    em.fill_field(10, 44, 'b0001', 5)
    em.send_enter()
	# Input screen fields for home loan calculator
    em.wait_for_field()
    em.fill_field(8, 46, context.home_loan_amount, 7)
    em.fill_field(10, 46, context.interest_rate, 7)
    em.fill_field(12, 46, context.maturity_date_in_months, 7)
    em.send_enter()
    em.wait_for_field()    

    # collect monthly replayment output from screen
    context.monthly_repayment = em.string_get(14, 46, 9)
    em.terminate()

@then("it shall show the monthly repayment of {amount}")
def step_impl(context, amount):
    print("expected amount is " + amount.strip() + ", and the result from screen is " + context.monthly_repayment.strip())
assert amount.strip() == context.monthly_repayment.strip()

To run this functional test in Micro Focus Enterprise Test Server (ETS), we use AWS CodeBuild.

We first need to build an Enterprise Test Server Docker image and push it to an Amazon Elastic Container Registry (Amazon ECR) registry. For instructions, see Using Enterprise Test Server with Docker.

Next, we create a CodeBuild project and use the Enterprise Test Server Docker image in its configuration.

The following is an example AWS CloudFormation code snippet of a CodeBuild project that uses Windows Container and Enterprise Test Server:

  BddTestBankDemoStage:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Sub '${AWS::StackName}BddTestBankDemo'
      LogsConfig:
        CloudWatchLogs:
          Status: ENABLED
      Artifacts:
        Type: CODEPIPELINE
        EncryptionDisabled: true
      Environment:
        ComputeType: BUILD_GENERAL1_LARGE
        Image: !Sub "${EnterpriseTestServerDockerImage}:latest"
        ImagePullCredentialsType: SERVICE_ROLE
        Type: WINDOWS_SERVER_2019_CONTAINER
      ServiceRole: !Ref CodeBuildRole
      Source:
        Type: CODEPIPELINE
        BuildSpec: bdd-test-bankdemo-buildspec.yaml

In the CodeBuild project, we need to create a buildspec to orchestrate the commands for preparing the Micro Focus Enterprise Test Server CICS environment and issue the test command. In the buildspec, we define the location for CodeBuild to look for test reports and upload them into the CodeBuild report group. The following buildspec code uses custom scripts DeployES.ps1 and StartAndWait.ps1 to start your CICS region, and runs Python Behave BDD tests:

version: 0.2
phases:
  build:
    commands:
      - |
        # Run Command to start Enterprise Test Server
        CD C:\
        .\DeployES.ps1
        .\StartAndWait.ps1

        py -m pip install behave

        Write-Host "waiting for server to be ready ..."
        do {
          Write-Host "..."
          sleep 3  
        } until(Test-NetConnection 127.0.0.1 -Port 9270 | ? { $_.TcpTestSucceeded } )

        CD C:\tests\features
        MD C:\tests\reports
        $Env:Path += ";c:\wc3270"

        $address=(Get-NetIPAddress -AddressFamily Ipv4 | where { $_.IPAddress -Match "172\.*" })
        $Env:TN3270_HOST = $address.IPAddress
        $Env:TN3270_PORT = "9270"
        
        behave.exe --color --junit --junit-directory C:\tests\reports
reports:
  bankdemo-bdd-test-report:
    files: 
      - '**/*'
    base-directory: "C:\\tests\\reports"

In the smoke test, the team may run both unit tests and functional tests. Ideally, these tests run in parallel to speed up the pipeline. In AWS CodePipeline, we can set up a stage to run multiple actions in parallel. In our example, the pipeline runs both BDD tests and Robot Framework (RPA) tests.

The following CloudFormation code snippet runs two different tests. Using the same RunOrder value indicates that the actions run in parallel.

#...
        - Name: Tests
          Actions:
            - Name: RunBDDTest
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: 1
              Configuration:
                ProjectName: !Ref BddTestBankDemoStage
                PrimarySource: Config
              InputArtifacts:
                - Name: DemoBin
                - Name: Config
              RunOrder: 1
            - Name: RunRbTest
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: 1
              Configuration:
                ProjectName : !Ref RpaTestBankDemoStage
                PrimarySource: Config
              InputArtifacts:
                - Name: DemoBin
                - Name: Config
              RunOrder: 1  
#...

The following screenshot shows the example actions on the CodePipeline console that use the preceding code.

Screenshot of CodePipeline parallel test actions using the same RunOrder value

Figure – Screenshot of CodePipeline parallel execution tests

Both BDD and RPA tests produce JUnit-format reports, which CodeBuild can ingest and show on the CodeBuild console. This is a great way for project management and business users to track the quality trend of an application. The following screenshot shows the CodeBuild report generated from the BDD tests.

CodeBuild report generated from the BDD tests showing 100% pass rate

Figure – CodeBuild report generated from the BDD tests

Automated regression tests

After you test the changes in the project team pipeline, you can automatically promote them to another stream with other team members’ changes for further testing. The scope of this testing stream is significantly more comprehensive, with a greater number and wider range of tests and higher volume of test data. The changes promoted to this stream by each team member are tested in this environment at the end of each day throughout the life of the project. This provides a high-quality delivery to production, with new code and changes to existing code tested together with hundreds or thousands of tests.

In enterprise architecture, it’s commonplace to see an application client consuming web services APIs exposed from a mainframe CICS application. One approach to regression testing mainframe applications is to use Micro Focus Verastream Host Integrator (VHI) to record and capture 3270 data stream processing and encapsulate these 3270 data streams as business functions, which in turn are packaged as web services. When these web services are available, they can be consumed by a test automation product, which in our environment is Micro Focus UFT One. This uses the Verastream server as the orchestration engine that translates the web service requests into 3270 data streams that integrate with the mainframe CICS application. The application is deployed in Micro Focus Enterprise Test Server.
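
As a purely hypothetical sketch of that flow (the endpoint, operation path, and payload shapes below are our assumptions, not VHI's actual interface), a test client might drive the encapsulated business function like this:

import requests

# Hypothetical endpoint for the VHI-generated service (not a real URL).
VHI_ENDPOINT = "http://vhi-server.example.internal:9680/bankdemo"

def get_checking_details(account_id):
    # VHI translates this web service request into 3270 data streams that
    # drive the CICS application deployed in Enterprise Test Server.
    resp = requests.post(
        VHI_ENDPOINT + "/getCheckingDetails",
        json={"accountId": account_id},  # field names are assumptions
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

details = get_checking_details("1000001")
print(details)  # the test framework would assert on the returned fields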

The following diagram shows the end-to-end testing components.

Regression test end-to-end components: ECS containers for Enterprise Test Server, Verastream Host Integrator, and UFT One, with all integration points behind Network Load Balancers

Figure – Regression Test Infrastructure end-to-end Setup

To ensure we have the coverage required for large mainframe applications, we sometimes need to run thousands of tests against very large production volumes of test data. We want the tests to run faster and complete as soon as possible so we reduce AWS costs — we only pay for the infrastructure while provisioning and running the tests.

Therefore, the design of the test environment needs to scale out. The batch feature in CodeBuild allows you to run tests in batches and in parallel rather than serially. Furthermore, our solution needs to minimize interference between batches so that a failure in one batch doesn’t affect another running in parallel. The following diagram depicts the high-level design, with each batch build running in its own independent infrastructure. Each infrastructure is launched as part of test preparation, and then torn down in the post-test phase.

Regression tests in a CodeBuild project set up to use batch mode, with three batches running on independent containerized infrastructure

Figure – Regression tests in a CodeBuild project set up to use batch mode

Building and deploying regression test components

Following the design of the parallel regression test environment, let’s look at how we build each component and how they are deployed. The following steps to build our regression tests use a working-backward approach, starting from deployment in the Enterprise Test Server:

  1. Create a batch build in CodeBuild.
  2. Deploy to Enterprise Test Server.
  3. Deploy the VHI model.
  4. Deploy UFT One Tests.
  5. Integrate UFT One into CodeBuild and CodePipeline and test the application.

Creating a batch build in CodeBuild

We update two components to enable a batch build. First, in the CodePipeline CloudFormation resource, we set BatchEnabled to be true for the test stage. The UFT One test preparation stage uses the CloudFormation template to create the test infrastructure. The following code is an example of the AWS CloudFormation snippet with batch build enabled:

#...
        - Name: SystemsTest
          Actions:
            - Name: Uft-Tests
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: 1
              Configuration:
                ProjectName : !Ref UftTestBankDemoProject
                PrimarySource: Config
                BatchEnabled: true
                CombineArtifacts: true
              InputArtifacts:
                - Name: Config
                - Name: DemoSrc
              OutputArtifacts:
                - Name: TestReport                
              RunOrder: 1
#...

Second, in the buildspec configuration of the test stage, we provide a build matrix setting. We use the custom environment variable TEST_BATCH_NUMBER to indicate which set of tests runs in each batch. See the following code:

version: 0.2
batch:
  fast-fail: true
  build-matrix:
    static:
      ignore-failure: false
    dynamic:
      env:
        variables:
          TEST_BATCH_NUMBER:
            - 1
            - 2
            - 3 
phases:
  pre_build:
    commands:
#...
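
How each batch maps TEST_BATCH_NUMBER to a set of tests is up to the build commands. The following Python sketch is one possibility of ours, not part of the APG; the suite paths and runner command are hypothetical:

import os
import subprocess

# Hypothetical mapping from batch number to a slice of the UFT test suite.
BATCH_SUITES = {
    "1": r"C:\tests\uft\suite-accounts",
    "2": r"C:\tests\uft\suite-loans",
    "3": r"C:\tests\uft\suite-payments",
}

batch = os.environ["TEST_BATCH_NUMBER"]  # set by the CodeBuild build matrix
suite = BATCH_SUITES[batch]
print(f"Batch {batch}: running tests under {suite}")
subprocess.run(["run-uft-suite.cmd", suite], check=True)  # hypothetical runner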

After setting up the batch build, CodeBuild creates multiple batches when the build starts. The following screenshot shows the batches on the CodeBuild console.

Regression tests CodeBuild project run in batch mode, with three batches running in parallel successfully

Figure – Regression tests CodeBuild project run in batch mode

Deploying to Enterprise Test Server

ETS is the transaction engine that processes all the online (and batch) requests that are initiated through external clients, such as 3270 terminals, web services, and WebSphere MQ. This engine provides support for various mainframe subsystems, such as CICS, IMS TM, and JES, as well as code-level support for COBOL and PL/I. The following screenshot shows the Enterprise Test Server administration page.

Enterprise Server Administrator window showing configuration for CICS

Figure – Enterprise Server Administrator window

In this mainframe application testing use case, the regression tests are CICS transactions, initiated from 3270 requests (encapsulated in a web service). For more information about Enterprise Test Server, see the Enterprise Test Server and Micro Focus websites.

In the regression pipeline, after the mainframe artifact compilation stage, we bake the artifact into an ETS Docker image and upload the image to an Amazon ECR repository. This way, we have an immutable artifact for all the tests.
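The post doesn't show the exact commands for this step; a minimal sketch, assuming a Dockerfile that layers the compiled artifact onto an ETS base image, and reusing the repository name from the task definition below (the account ID is a placeholder), might look like this:

# Illustrative only: build the ETS image with the compiled artifact and push it to ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com
docker build -t systems-test/ets:latest .
docker tag systems-test/ets:latest ${ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/systems-test/ets:latest
docker push ${ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/systems-test/ets:latest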

During each batch's test preparation stage, a CloudFormation stack is deployed to create an Amazon ECS service on Windows EC2 instances. The stack uses a Network Load Balancer as the integration point for VHI.

The following code is an example of the CloudFormation snippet to create an Amazon ECS service using an Enterprise Test Server Docker image:

#...
  EtsService:
    DependsOn:
    - EtsTaskDefinition
    - EtsContainerSecurityGroup
    - EtsLoadBalancerListener
    Properties:
      Cluster: !Ref 'WindowsEcsClusterArn'
      DesiredCount: 1
      LoadBalancers:
        -
          ContainerName: !Sub "ets-${AWS::StackName}"
          ContainerPort: 9270
          TargetGroupArn: !Ref EtsPort9270TargetGroup
      HealthCheckGracePeriodSeconds: 300          
      TaskDefinition: !Ref 'EtsTaskDefinition'
    Type: "AWS::ECS::Service"

  EtsTaskDefinition:
    Properties:
      ContainerDefinitions:
        -
          Image: !Sub "${AWS::AccountId}.dkr.ecr.us-east-1.amazonaws.com/systems-test/ets:latest"
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Ref 'SystemsTestLogGroup'
              awslogs-region: !Ref 'AWS::Region'
              awslogs-stream-prefix: ets
          Name: !Sub "ets-${AWS::StackName}"
          Cpu: 4096
          Memory: 8192
          PortMappings:
            -
              ContainerPort: 9270
          EntryPoint:
          - "powershell.exe"
          Command: 
          - '-F'
          - .\StartAndWait.ps1
          - 'bankdemo'
          - C:\bankdemo\
          - 'wait'
      Family: systems-test-ets
    Type: "AWS::ECS::TaskDefinition"
#...

Deploying the VHI model

In this architecture, VHI is the bridge between the mainframe and its clients.

We use the VHI designer to capture the 3270 data streams and encapsulate the relevant data streams into a business function. We can then deliver this function as a web service that can be consumed by a test management solution, such as Micro Focus UFT One.

The following screenshot shows the setup for getCheckingDetails in VHI. Alongside this procedure, other procedures (for example, calcCostLoan) are defined and generated as web services. The properties associated with the procedure are available on this screen and define the mapping of fields between the associated 3270 screens and the exposed web service.

example of VHI designer to capture the 3270 data streams and encapsulate the relevant data streams into a business function getCheckingDetails

Figure – Setup for getCheckingDetails in VHI

The following screenshot shows the editor for this procedure, which is opened by selecting the Procedure Editor. This screen presents the 3270 screens involved in the business function that is generated as a web service.

VHI designer Procedure Editor shows the procedure

Figure – VHI designer Procedure Editor shows the procedure

After you define the required functional web services in VHI designer, the resultant model is saved and deployed into a VHI Docker image. We use this image and the associated model (from VHI designer) in the pipeline outlined in this post.

For more information about VHI, see the VHI website.

The pipeline contains two steps to deploy a VHI service. First, it installs the VHI models into a VHI Docker image and pushes the image to Amazon ECR. Second, a CloudFormation stack is deployed to create an Amazon ECS Fargate service that uses the latest built Docker image. In AWS CloudFormation, the VHI ECS task definition defines an environment variable for the ETS Network Load Balancer's DNS name, so the VHI can bootstrap and point to an ETS service. The VHI stack also uses a Network Load Balancer as the integration point for UFT One test integration.

The following code is an example of an ECS task definition CloudFormation snippet that creates a VHI service in Amazon ECS Fargate and integrates it with an ETS server:

#...
  VhiTaskDefinition:
    DependsOn:
    - EtsService
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: systems-test-vhi
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      ExecutionRoleArn: !Ref FargateEcsTaskExecutionRoleArn
      Cpu: 2048
      Memory: 4096
      ContainerDefinitions:
        - Cpu: 2048
          Name: !Sub "vhi-${AWS::StackName}"
          Memory: 4096
          Environment:
            - Name: esHostName 
              Value: !GetAtt EtsInternalLoadBalancer.DNSName
            - Name: esPort
              Value: "9270"
          Image: !Sub "${AWS::AccountId}.dkr.ecr.us-east-1.amazonaws.com/systems-test/vhi:latest"
          PortMappings:
            - ContainerPort: 9680
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Ref 'SystemsTestLogGroup'
              awslogs-region: !Ref 'AWS::Region'
              awslogs-stream-prefix: vhi

#...

Deploying UFT One Tests

UFT One is a test client that uses the web services created by the VHI designer to orchestrate running the associated business functions. Parameter data is supplied to each function, and validations are configured against the data returned. Multiple test suites are configured, each covering different business functions and their associated data.

The following screenshot shows the test suite API_Bankdemo3, which is used in this regression test process.

the screenshot shows the test suite API_Bankdemo3 in UFT One test setup console, the API setup for getCheckingDetails

Figure – API_Bankdemo3 in UFT One Test Editor Console

For more information, see the UFT One website.

Integrating UFT One and testing the application

The last step is to integrate UFT One into CodeBuild and CodePipeline to test our mainframe application. First, we set up CodeBuild to use a UFT One container; the Docker image is available on Docker Hub. Then we author our buildspec. The buildspec has the following three phases:

  • Setting up a UFT One license and deploying the test infrastructure
  • Starting the UFT One test suite to run regression tests
  • Tearing down the test infrastructure after tests are complete

The following code is an example of a buildspec snippet in the pre_build stage. The snippet shows the command to activate the UFT One license:

version: 0.2
batch: 
# . . .
phases:
  pre_build:
    commands:
      - |
        # Activate License
        $process = Start-Process -NoNewWindow -RedirectStandardOutput LicenseInstall.log -Wait -File 'C:\Program Files (x86)\Micro Focus\Unified Functional Testing\bin\HP.UFT.LicenseInstall.exe' -ArgumentList @('concurrent', 10600, 1, ${env:AUTOPASS_LICENSE_SERVER})        
        Get-Content -Path LicenseInstall.log
        if (Select-String -Path LicenseInstall.log -Pattern 'The installation was successful.' -Quiet) {
          Write-Host 'Licensed Successfully'
        } else {
          Write-Host 'License Failed'
          exit 1
        }
#...

The following command in the buildspec deploys the test infrastructure using the AWS Command Line Interface (AWS CLI):

aws cloudformation deploy --stack-name $stack_name `
--template-file cicd-pipeline/systems-test-pipeline/systems-test-service.yaml `
--parameter-overrides EcsCluster=$cluster_arn `
--capabilities CAPABILITY_IAM

Because ETS and VHI are both deployed with a load balancer, the build detects when the load balancers become healthy before starting the tests. The following AWS CLI commands detect the load balancer’s target group health:

$vhi_health_state = (aws elbv2 describe-target-health --target-group-arn $vhi_target_group_arn --query 'TargetHealthDescriptions[0].TargetHealth.State' --output text)
$ets_health_state = (aws elbv2 describe-target-health --target-group-arn $ets_target_group_arn --query 'TargetHealthDescriptions[0].TargetHealth.State' --output text)          
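The snippet above reads each target group's state once; in practice the build needs to wait until both targets report healthy before proceeding. A minimal sketch of such a wait loop, reusing the variables above (the 15-minute timeout is an illustrative value, not from the original pipeline):

# Illustrative polling loop; times out rather than waiting forever
$deadline = (Get-Date).AddMinutes(15)
do {
  Start-Sleep -Seconds 30
  $vhi_health_state = (aws elbv2 describe-target-health --target-group-arn $vhi_target_group_arn --query 'TargetHealthDescriptions[0].TargetHealth.State' --output text)
  $ets_health_state = (aws elbv2 describe-target-health --target-group-arn $ets_target_group_arn --query 'TargetHealthDescriptions[0].TargetHealth.State' --output text)
} until ((($vhi_health_state -eq 'healthy') -and ($ets_health_state -eq 'healthy')) -or ((Get-Date) -gt $deadline))
# Fail the build if the services never became healthy
if (($vhi_health_state -ne 'healthy') -or ($ets_health_state -ne 'healthy')) { exit 1 }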

When the targets are healthy, the build moves into the build stage, and it uses the UFT One command line to start the tests. See the following code:

$process = Start-Process -Wait  -NoNewWindow -RedirectStandardOutput UFTBatchRunnerCMD.log `
-FilePath "C:\Program Files (x86)\Micro Focus\Unified Functional Testing\bin\UFTBatchRunnerCMD.exe" `
-ArgumentList @("-source", "${env:CODEBUILD_SRC_DIR_DemoSrc}\bankdemo\tests\API_Bankdemo\API_Bankdemo${env:TEST_BATCH_NUMBER}")

The next release of Micro Focus UFT One (November or December 2020) will provide an exit status to indicate a test’s success or failure.

When the tests are complete, the post_build stage tears down the test infrastructure. The following AWS CLI commands tear down the CloudFormation stack:


#...
  post_build:
    finally:
      - |
        Write-Host "Clean up ETS, VHI Stack"
        #...
        aws cloudformation delete-stack --stack-name $stack_name
        aws cloudformation wait stack-delete-complete --stack-name $stack_name

At the end of the build, the buildspec is set up to upload the UFT One test reports as an artifact to Amazon Simple Storage Service (Amazon S3). The following screenshot shows an example of a test report in HTML format generated by UFT One in CodeBuild and CodePipeline.

UFT One HTML report shows regression test results and test details

Figure – UFT One HTML report
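The report upload itself is driven by the buildspec's artifacts section. A minimal sketch, assuming UFT One writes its HTML report to a Results folder (the folder name is an assumption, though the pipeline's output artifact is named TestReport above):

# Illustrative buildspec artifacts section; the report path is an assumption
artifacts:
  files:
    - '**/*'
  base-directory: Results
  name: TestReport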

A new release of Micro Focus UFT One will provide test report formats supported by CodeBuild test report groups.

Conclusion

In this post, we introduced the solution to use Micro Focus Enterprise Suite, Micro Focus UFT One, Micro Focus VHI, AWS developer tools, and Amazon ECS containers to automate provisioning and running mainframe application tests in AWS at scale.

The on-demand model allows you to create the same test capacity infrastructure in minutes at a fraction of your current on-premises mainframe cost. It also significantly increases your testing and delivery capacity to increase quality and reduce production downtime.

A demo of the solution is available on the AWS Partner Micro Focus website under AWS Mainframe CI/CD Enterprise Solution. If you're interested in modernizing your mainframe applications, please visit Micro Focus and contact AWS mainframe business development at [email protected].

References

Micro Focus

 

Peter Woods

Peter Woods

Peter has been with Micro Focus for almost 30 years, in a variety of roles and geographies including Technical Support, Channel Sales, Product Management, Strategic Alliances Management and Pre-Sales, primarily based in Europe but for the last four years in Australia and New Zealand. In his current role as Pre-Sales Manager, Peter is charged with driving and supporting sales activity within the Application Modernization and Connectivity team, based in Melbourne.

Leo Ervin

Leo Ervin

Leo Ervin is a Senior Solutions Architect working with Micro Focus Enterprise Solutions on the ANZ team. After completing a mathematics degree, Leo started as a PL/1 programmer with a local insurance company. The next step in Leo's career involved consulting work in PL/1 and COBOL before he joined a start-up company as a technical director and partner. This company became the first distributor of Micro Focus software in the ANZ region in 1986. Leo's involvement with Micro Focus technology has continued from this distributorship through to today, with his current focus on cloud strategies for both DevOps and re-platform implementations.

Kevin Yung

Kevin Yung

Kevin is a Senior Modernization Architect in the AWS Professional Services Global Mainframe and Midrange Modernization (GM3) team. Kevin currently focuses on leading and delivering mainframe and midrange application modernization for large enterprise customers.

Cloudflare and Human Rights: Joining the Global Network Initiative (GNI)

Post Syndicated from Patrick Day original https://blog.cloudflare.com/cloudflare-and-human-rights-joining-the-global-network-initiative-gni/

Cloudflare and Human Rights: Joining the Global Network Initiative (GNI)

Consistent with our mission to help build a better Internet, Cloudflare has long recognized the importance of conducting our business in a way that respects the rights of Internet users around the world. We provide free services to important voices online – from human rights activists to independent journalists to the entities that help maintain our democracies – who would otherwise be vulnerable to cyberattack. We work hard to develop internal mechanisms and build products that empower user privacy. And we believe that being transparent about the types of requests we receive from government entities and how we respond is critical to maintaining customer trust.

As Cloudflare continues to expand our global network, we think there is more we can do to formalize our commitment to help respect human rights online. To that end, we are excited to announce that we have joined the Global Network Initiative (GNI), one of the world's leading human rights organizations in the information and communications technology (ICT) sector, as observers.

Business + Human Rights

Understanding Cloudflare’s new partnership with GNI requires some additional background on how human rights concepts apply to businesses.

In 1945, following the end of World War II, 850 delegates representing forty-six nations gathered in San Francisco to create an international organization dedicated to preserving peace and helping to build a better world. In drafting the eventual United Nations (UN) Charter, the delegates included seven mentions of the need for a set of formal, universal rights that would provide basic protections for all people, everywhere.

Although human rights have traditionally been the purview of nation-states, in 2005 the UN Secretary-General appointed Harvard Professor John Ruggie as the first Special Representative for Business and Human Rights. Ruggie was tasked with determining whether businesses, particularly multinational corporations operating in many jurisdictions, ought to have their own unique obligations under international human rights law.

In 2008, Ruggie proposed the “Protect, Respect and Remedy” framework on business and human rights to the UN Human Rights Council. He argued that while states have an obligation to protect human rights, businesses have a responsibility to respect human rights. Essentially, businesses should act with due diligence in all of their operations to avoid infringing on the rights of others, and remedy any harms that do occur. Moreover, businesses’ responsibilities exist independent of any state’s ability or willingness to meet their own human rights obligations.

Ruggie’s framework was later incorporated into the UN Guiding Principles on Business and Human Rights, which was unanimously endorsed by the UN Human Rights Council in 2011.

Cloudflare + Human Rights

With Cloudflare’s network now operating in more than 100 countries, with more than a billion unique IP addresses passing through Cloudflare’s network every day, we believe we have a significant responsibility to respect, and a tremendous potential to promote, human rights.

For example, privacy is a protected human right under Article 17 of the International Covenant on Civil and Political Rights, which means that the law enforcement and other government orders we periodically receive to access customers' personal information affect the human rights of our customers and users. Twice a year, we publish a Transparency Report that describes exactly how many orders, subpoenas, and warrants we receive, how we respond, and how many customers or domains are affected.

In responding to law enforcement requests for subscriber information, Cloudflare has relied on three principles: due process, respect for our customers' privacy, and notice for those affected. As we say often, Cloudflare's intent is to follow the law, and not to make law enforcement's job any harder, or easier.

However, as Cloudflare expands internationally, relying on due process and transparency alone may not always be sufficient. For one thing, those kinds of protections are dependent on good-faith implementation of legal processes like rule of law and judicial oversight. They also require a government and legal system that carries some political legitimacy, like those established through democratic processes.  

To improve the consistency of how we work with governments around the world, Cloudflare will be incorporating human rights training and due diligence tools like human rights impact assessments into our decision-making processes throughout our company.  

We recognize that as a private company there are limits to what Cloudflare can or should do as we operate around the world. But we are committed to more formally documenting our efforts to respect human rights under the UN Guiding Principles on Business and Human Rights, and we’ve partnered with GNI to help us get there.

Cloudflare and Human Rights: Joining the Global Network Initiative (GNI)

Who is GNI?

GNI is a non-profit organization launched in 2008. GNI members include ICT companies, civil society organizations (including human rights and press freedom groups), academic experts, and investors from around the world. Its mission is to protect and advance freedom of expression and privacy rights in the ICT sector by setting a global standard for responsible decision making and serving as a multistakeholder voice in the face of government restrictions and demands.

GNI provides companies with concrete guidance on how to implement the GNI Principles, which are based on international human rights law and standards, and are informed by the United Nations Guiding Principles on Business and Human Rights.  

When companies join GNI, they agree to have their implementation of the GNI Principles assessed independently by participating in GNI’s assessment process. The assessment is made up of a review of relevant internal systems, policies and procedures for implementing the Principles and an examination of specific cases or examples that show how the company is implementing them in practice.

What’s Next?

Cloudflare will serve in observer status in GNI for the next year while we work with GNI's staff to implement and document formal human rights practices and training throughout our company. Cloudflare will undergo its first independent assessment after becoming a full GNI member.

It’s an exciting step for us as a company. But, we also think there is more Cloudflare can do to help build a better, more sustainable Internet. As we say often, we are just getting started.

Introducing the Rally + GitHub integration

Post Syndicated from Jared Murrell original https://github.blog/2020-08-18-introducing-the-rally-github-integration/

GitHub’s Professional Services Engineering team has decided to open source another project: Rally + GitHub. You may have seen our most recent open source project, Super Linter. Well, the team has done it again, this time to help users ensure that Rally stays up to date with the latest development in GitHub! 🎉

Rally + GitHub

This project integrates GitHub Enterprise Server (and GitHub's cloud offering, if you host the integration yourself) with Broadcom's Rally project management platform.

Every time a pull request is created or updated, Rally + GitHub will check for a reference to a Rally User Story or Defect in the title, body, or commit messages, and then validate that it exists and is in the correct state within Rally.

Animation showing a pull request being created

Why was it created?

GitHub Enterprise Server had a legacy Services integration with Rally. The deprecation of legacy Services for GitHub was announced in 2018, and the release of GitHub Enterprise Server 2.20 officially removed this functionality. As a result, many GitHub Enterprise users will be left without the ability to integrate the two platforms when upgrading to recent releases of GitHub Enterprise Server.

While Broadcom created a new integration for github.com, this functionality does not extend to GitHub Enterprise Server environments.

Get Started

We encourage you to check out this project and set it up with your existing Rally instance. A good place to start is the Get Started guide in the project's README.md.

We invite you to join us in developing this project! Come engage with us by opening an issue, even if it's just to share your experience with the project.

Animation showing Rally and GitHub integration

Cloudflare Network Interconnection Partnerships Launch

Post Syndicated from Steven Pack original https://blog.cloudflare.com/cloudflare-network-interconnect-partner-program/

Cloudflare Network Interconnection Partnerships Launch

Today we’re excited to announce Cloudflare’s Network Interconnection Partner Program, in support of our new CNI product. As ever more enterprises turn to Cloudflare to secure and accelerate their branch and core networks, the ability to connect privately and securely becomes increasingly important. Today’s announcement significantly increases the interconnection options for our customers, allowing them to connect with us in the location of their choice using the method or vendors they prefer.

In addition to our physical locations, our customers can now interconnect with us at any of 23 metro areas across five continents using software-defined layer 2 networking technology. Following the recent release of CNI (which includes PNI support for Magic Transit), customers can now order layer 3 DDoS protection in any of the markets below, without requiring physical cross connects, providing private and secure links, with simpler setup.

Launch Partners

We’re very excited to announce that five of the world’s premier interconnect platforms are available at launch. Console Connect by PCCW Global in 14 locations, Megaport in 14 locations, PacketFabric in 15 locations, Equinix ECX Fabric™ in 8 locations and Zayo Tranzact in 3 locations, spanning North America, Europe, Asia, Oceania and Africa.


What is an Interconnection Platform?

Like much of the networking world, the interconnection space has many terms for the same thing: Cloud Exchange, Virtual Cross Connect Platform, and Interconnection Platform are all synonyms. They are platforms that allow two networks to interconnect privately at layer 2, without requiring additional physical cabling. Instead, the customer can order a port and a virtual connection on a dashboard, and the interconnection 'fabric' will establish the connection. Since many large customers are already connected to these fabrics for their connections to traditional cloud providers, it is a very convenient method to establish private connectivity with Cloudflare.

Cloudflare Network Interconnection Partnerships Launch

Why interconnect virtually?

Cloudflare has an extensive peering infrastructure and already has private links to thousands of other networks. Virtual private interconnection is particularly attractive to customers with strict security postures and demanding performance requirements, but without the added burden of ordering and managing additional physical cross connects and expanding their physical infrastructure.

Key Benefits of Interconnection Platforms

Secure
Similar to physical PNI, traffic does not pass across the Internet. Rather, it flows from the customer router, to the Interconnection Platform’s network and ultimately to Cloudflare. So while there is still some element of shared infrastructure, it’s not over the public Internet.

Efficient
Modern PNIs are typically a minimum of 1Gbps, but if you have the security motivation without the sustained 1Gbps data transfer rates, then you will have idle capacity. Virtual connections provide for “sub-rate” speeds, which means less than 1Gbps, such as 100Mbps, meaning you only pay for what you use. Most providers also allow some level of “burstiness”, which is to say you can exceed that 100Mbps limit for short periods.

Performance
By avoiding the public Internet, virtual links avoid Internet congestion.

Price
The major cloud providers typically have different pricing for egressing data to the Internet compared to an Interconnect Platform. By connecting to your cloud via an Interconnect Partner, you can benefit from those reduced egress fees between your cloud and the Interconnection Platform. This builds on our Bandwidth Alliance to give customers more options to continue to drive down their network costs.

Less Overhead
By virtualizing, you reduce physical cable management to just one connection into the Interconnection Platform. From there, everything is defined and managed in software. For example, ordering a 100Mbps link to Cloudflare can be a few clicks in a dashboard, as would be a 100Mbps link into Salesforce.

Data Center Independence
Is your infrastructure in the same metro, but in a different facility from Cloudflare? An Interconnection Platform can bring us together without the need for additional physical links.

Where can I connect?

  1. In any of our physical facilities
  2. In any of the 23 metro areas where we are currently connected to an Interconnection Platform (see below)
  3. If you’d like to connect virtually in a location not yet listed below, simply get in touch via our interconnection page and we’ll work out the best way to connect.

Metro Areas

The metro areas below have currently active connections. New providers and locations can be turned up on request.

Cloudflare Network Interconnection Partnerships Launch

What’s next?

Our customers have been asking for direct on-ramps to our global network for a long time, and we're excited to deliver that today with both physical and virtual connectivity via the world's leading Interconnection Platforms.

Already a Cloudflare customer and connected with one of our Interconnection partners? Then contact your account team today to get connected and benefit from improved reliability, security and privacy of Cloudflare Network Interconnect via our interconnection partners.

Are you an Interconnection Platform with customers demanding direct connectivity to Cloudflare? Head to our partner program page and click “Become a partner”. We’ll continue to add platforms and partners according to customer demand.

"Equinix and Cloudflare share the vision of software-defined, virtualized and API-driven network connections. The availability of Cloudflare on the Equinix Cloud Exchange Fabric demonstrates that shared vision and we’re excited to offer it to our joint customers today."
Joseph Harding, Equinix, Vice President, Global Product & Platform Marketing

"Cloudflare and Megaport are driven to offer greater flexibility to our customers. In addition to accessing Cloudflare’s platform on Megaport’s global internet exchange service, customers can now provision on-demand, secure connections through our Software Defined Network directly to Cloudflare Network Interconnect on-ramps globally. With over 700 enabled data centres in 23 countries, Megaport extends the reach of CNI onramps to the locations where enterprises house their critical IT infrastructure. Because Cloudflare is interconnected with our SDN, customers can point, click, and connect in real time. We’re delighted to grow our partnership with Cloudflare and bring CNI to our services ecosystem — allowing customers to build multi-service, securely-connected IT architectures in a matter of minutes."
Matt Simpson, Megaport, VP of Cloud Services

“The ability to self-provision direct connections to Cloudflare’s network from Console Connect is a powerful tool for enterprises as they come to terms with new demands on their networks. We are really excited to bring together Cloudflare’s industry-leading solutions with PCCW Global’s high-performance network on the Console Connect platform, which will deliver much higher levels of network security and performance to businesses worldwide.”
Michael Glynn, PCCW Global, VP of Digital Automated Innovation

"Our customers can now connect to Cloudflare via a private, secure, and dedicated connection via the PacketFabric Marketplace. PacketFabric is proud to be the launch partner for Cloudflare’s Interconnection program. Our large U.S. footprint provides the reach and density that Cloudflare customers need."
Dave Ward, PacketFabric CEO

Cloudflare and Rackspace Technology Expand Partnership with Managed Services

Post Syndicated from Matthew Harrell original https://blog.cloudflare.com/cloudflare-and-rackspace-expand-partnership-with-managed-services/

Cloudflare and Rackspace Technology Expand Partnership with Managed Services

Cloudflare and Rackspace Technology Expand Partnership with Managed Services

Last year, Cloudflare announced the planned expansion of our partner program to help managed and professional service partners efficiently engage with Cloudflare and join us in our mission to help build a better Internet. We’ve been hard at work growing and expanding our partnerships with some amazing global teams that help us support digital transformation and security needs around the world, and today we’d like to highlight one of our Elite global partners, Rackspace Technology.

Today, we are announcing the expansion of our worldwide reseller partnership with Rackspace Technology to include a series of managed services offerings for Cloudflare. As a result, with Cloudflare Security, Performance, and Reliability with Rackspace Managed Services, customers will not only have access to the scalability of Cloudflare's global network and integrated cloud platform of security, performance, and reliability solutions, but will also benefit from a team of certified, enabled Rackspace experts to configure, onboard, and deploy Cloudflare solutions. Because more than 1 billion unique IP addresses pass through Cloudflare's global network every day, Cloudflare, together with its solutions providers, can build real-world intelligence on the communications occurring over the Internet and how well they perform. We've enjoyed enabling Rackspace's teams to leverage this scope of information, and we're excited that Rackspace is ready to deliver this support to our shared customers.

Rackspace engineers are now trained on the features and configuration of Cloudflare to manage our capabilities alongside customers’ Rackspace-hosted websites. These managed services make it easy for Rackspace customers to have an Internet presence that is secure, performant, and reliable, with no tradeoffs.

Cloudflare Security, Performance, and Reliability with Rackspace Managed Services includes:

  • Onboarding support for customers consuming Cloudflare Managed DNS, Content Delivery Network (CDN), Advanced DDoS Protection, and Web Application Firewall (WAF).
  • Configuration and consultation for additional Cloudflare products and services, such as Bot Management, Load Balancing, Argo Smart Routing, Argo Tunnel, and Rate Limiting.
  • Ongoing management of Cloudflare services and products in the form of incident triage, troubleshooting and diagnostics, whitelisting, cache and security settings, and access rules.
  • Review and management of Cloudflare Analytics and Logs to optimize Cloudflare features.

With this announcement, we are building on strong existing field engagement and momentum across all of our geographies. We encourage you to read more about how our global partnerships are securing and accelerating business-critical Internet properties for our shared customers, and check out this short video clip from Rackspace Technology’s VP of Alliances and Channel Chief, Lisa McLin.

This new facet of our partnership will allow us to provide additional value to our global customers, while allowing Rackspace customers to benefit from Cloudflare’s industry-leading technology. As we evolve this new phase of our partnership, Rackspace and Cloudflare will continue to explore opportunities that expand our services while aligning to our core value of reducing the complexity of Internet security, performance, and reliability.

Our diverse network of partners is essential to our mission of helping to build a better Internet, and we are dedicated to their success. We ensure our shared customers have the best technology and expertise available to them as they look for solutions to protect their critical applications, infrastructure, and teams.

We’re looking forward to further strengthening our global alliance with Rackspace Technology and other partners around the world. Interested in learning more? Get in touch with Cloudflare and Rackspace.

Empowering our Customers and Service Partners

Post Syndicated from Dan Hollinger original https://blog.cloudflare.com/empowering-our-customers-and-service-partners/

Empowering our Customers and Service Partners

Last year, Cloudflare announced the planned expansion of our partner program to help managed and professional service partners efficiently engage with Cloudflare and join us in our mission to help build a better Internet. Today, we want to highlight some of those amazing partners and our growing support and training for MSPs around the globe. We want to make sure service partners have the enablement and resources they need to bring a more secure and performant Internet experience to their customers.

This partner program tier is specifically designed for professional service firms and Managed Service Providers (MSPs and MSSPs) that want to build value-added services and support Cloudflare customers. While Cloudflare is hyper-focused on building highly scalable and easy-to-use products, we recognize that some customers may want to engage a professional services firm to assist them in maximizing the value of our offerings. From building Cloudflare Workers and implementing multi-cloud load balancing to managing WAF and DDoS events, our partner training and support enable sales and technical teams to position and support the Cloudflare platform as well as enhance their services businesses.

Training

Our training and certification is meant to help partners through each stage of Cloudflare adoption, from discovery and sale to implementation, operation and continuous optimization. The program includes hands-on education, partner support and success resources, and access to account managers and partner enablement engineers.  

  • Accredited Sales Professional – Learn about key product features and how to identify opportunities and find the best solution for customers.
  • Accredited Sales Engineer – Learn about Cloudflare's technical differentiation that drives a smarter, faster and safer Internet.
  • Accredited Configuration Engineer – Learn about implementation, best practices, and supporting Cloudflare.
  • Accredited Services Architect – Launching in May, our Architect accreditation dives deeper into cybersecurity management, performance optimization, and migration services for Cloudflare.
  • Accredited Workers Developer (In Development) – Learn how to develop and deploy serverless applications with Cloudflare Workers.
Empowering our Customers and Service Partners
Cloudflare Partner Accreditation

Service Opportunities

Over the past year, the partners we’ve engaged with have found success throughout Cloudflare’s lifecycle by helping customers understand how to transform their network in their move to hybrid and multi-cloud solutions, develop serverless applications, or manage the Cloudflare platform.

Network Digital Transformations

“Cloudflare is streamlining our migration from on-prem to the cloud. As we tap into various public cloud services, Cloudflare serves as our independent, unified point of control — giving us the strategic flexibility to choose the right cloud solution for the job, and the ability to easily make changes down the line.” — Dr. Isabel Wolters, Chief Technology Officer, Handelsblatt Media Group

Serverless Architecture Development

“At Queue-it we pride ourselves on being the leading developer of virtual waiting room technology, providing a first-in, first-out online waiting system. By partnering with Cloudflare, we’ve made it easier for our joint customers to bring our solution to their applications through Cloudflare Apps and our Cloudflare Workers Connector that leverages the power of edge computing.”  – Henrik Bjergegaard, VP Sales, Queue-It

Managed Security & Insights

“Opticca Security supports our clients with proven and reliable solutions to ensure business continuity and protection of your online assets. Opticca Security has grown our partnership with Cloudflare over the years to support the quick deployment, seamless integration, and trusted expertise of Cloudflare Security solutions, Cloudflare Workers, and more.” — Joey Campione, President, Opticca Security

Partner Showcase – Zilker Technology

We wanted to highlight the success of one of our managed service partners who, together with Cloudflare, is delivering a more secure, higher-performing, and more reliable Internet experience for customers.

Empowering our Customers and Service Partners

Zilker Technology engaged Cloudflare when one of their eCommerce clients, the retail store of a major NFL team, was facing carding attacks and other malicious activity on their sites. “Our client activated their Cloudflare subscription on a Thursday, and we were live with Cloudflare in production the following Tuesday, ahead of Black Friday later that same week,” says Drew Harris, Director of Managed Services for Zilker. “It was crazy fast and easy!”

Carding – also known as credit card stuffing, fraud, or verification – happens when cyber criminals attempt to make small purchases with large volumes of stolen credit card numbers on one eCommerce platform.

In addition to gaining enhanced security and protection from the Cloudflare WAF, advanced DDoS protection, and rate limiting, Zilker replaced the client's legacy CDN with Cloudflare CDN, improving site performance and user experience. Zilker provides full-stack managed services and 24/7 support for the client, including Cloudflare monitoring and management.

“Partnering with Cloudflare gives us peace of mind that we can deliver on customer expectations of security and performance all the time, every day. Even as new threats emerge, Cloudflare is one step ahead of the game,” says Matthew Fox, VP of Business Development.

Just getting started

Cloudflare is committed to making our service partners successful and to ensuring our customers have the best technology and expertise available to them as they accelerate and protect their critical applications, infrastructure, and teams. As Cloudflare grows our product set, we've seen increased demand for the services provided by our partners. Cloudflare is excited and grateful to work with amazing agencies, professional services firms, and managed security providers across the globe. The diverse Cloudflare Partner Network is essential to our mission of helping to build a better Internet, and we are dedicated to the success of our partners. We'll continue our commitment to our customers and partners to make Cloudflare the easiest and most rewarding solution to implement.


Addressing the Web’s Client-Side Security Challenge

Post Syndicated from Swapnil Bhalode (Guest Author) original https://blog.cloudflare.com/addressing-the-webs-client-side-security-challenge/

Addressing the Web’s Client-Side Security Challenge

Modern web architecture relies heavily on JavaScript and enabling third-party code to make client-side network requests. These innovations are built on client-heavy frameworks such as Angular, Ember, React, and Backbone that leverage the processing power of the browser to enable the execution of code directly on the client interface/web browser. These third-party integrations provide richness (chat tools, images, fonts) or extract analytics (Google Analytics). Today, up to 70% of the code executing and rendering on your customer’s browser comes from these integrations. All of these software integrations provide avenues for potential vulnerabilities.

Addressing the Web’s Client-Side Security Challenge

Unfortunately, these unmanaged, unmonitored integrations operate without security consideration, providing an expansive attack surface that attackers have routinely exploited to compromise websites. Today, only 2% of the Alexa 1000 global websites have been found to deploy client-side security measures that protect websites and web applications against attacks such as Magecart, XSS, credit card skimming, session redirects, and website defacement.

Improving website security and ensuring performance with Cloudflare Workers

In this post, we focus on how Cloudflare Workers can be used to improve security and ensure the high performance of web applications. Tala has joined Cloudflare’s marketplace to further our common goals of ensuring website security, preserving data privacy and assuring the integrity of web commerce. Tala’s innovative and unobtrusive solution, coupled with Cloudflare’s global reach, offers a compelling, highly effective solution for combatting the acceleration of client-side website attacks.

About Cloudflare Workers

Cloudflare Workers is a globally distributed serverless compute platform that runs across Cloudflare’s network of 200+ locations worldwide. Workers is designed for flexibility, with multiple use cases ranging from customizing configuration of Cloudflare services and features to building full, independent applications.

Cloudflare & Tala

Tala has integrated its "web module" capabilities into the Cloudflare Workers platform to enable serverless, instantaneous deployment. This allows customers to activate enterprise-grade website security quickly and efficiently from Cloudflare's 200+ reliable and redundant edge locations around the world. Tala automates the activation of standards-based, browser-native security controls to deliver highly effective security, without impacting website performance or user experience.

About Tala

Tala secures millions of web sessions for large providers in verticals such as financial services, online retail, payment processing, tech, fintech and education. We secure websites and web applications by continuously interrogating application architecture to enable the automation and continuous deployment of precise, browser-native, standards-based policies & controls. Our technology allows organizations to deploy standards-based website security with near-zero impact to performance and without the operational burdens associated with the application and administration of these policies.

How Tala Works

Tala’s solution is enabled with an analytics engine that evaluates over 150 unique indicators of a web page’s behavior and integrations. This dynamic analytics engine scans continuously, working in conjunction with an AI-assisted automation engine that activates and tunes standards-based security capabilities, like Content Security Policy (CSP), Subresource Integrity (SRI), Strict Transport (HSTS), Sandboxing (iFrame rules), Referrer Policy, Trusted Types, Certificate Stapling, Clear Site Data and others.

The automation of browser-native security controls provides comprehensive security without requiring any changes to application code and has near-zero impact on website performance. Tala’s solution can be installed via the Cloudflare Workers Integration to deliver instantaneous client-side security.

With Tala, rich website analytics become available without the risk of client-side website attacks. Website performance is preserved, administration is accelerated, and the need for costly and continuous administration, remediation, or incident response is minimized.

Addressing the Web’s Client-Side Security Challenge

How Tala Integrates with Cloudflare Workers

Customers can deploy Tala-generated security policies (discussed in the section above) on their website using Cloudflare Workers. The customer installs the Tala Service Worker on their Cloudflare account using Tala's installation scripts. These scripts invoke Cloudflare's APIs to upload and enable the Tala Service Worker, as well as upload the customized Tala security policies to Cloudflare's KV store.
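The installation scripts are Tala's own; purely as an illustration of the Cloudflare API calls involved (the account ID, API token, script name, namespace ID, key, and file names are all placeholders), uploading a Worker script and writing a policy to Workers KV looks roughly like this:

# Illustrative sketch of the underlying Cloudflare API calls; names are placeholders
curl -X PUT "https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/workers/scripts/tala-worker" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/javascript" \
  --data-binary @tala-worker.js

curl -X PUT "https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/storage/kv/namespaces/${NAMESPACE_ID}/values/policy" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  --data-binary @policy.json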

Once the installation is complete, the Tala Service Worker is invoked every time an end user requests the customer's site. While handling the response from Cloudflare, the Tala Service Worker applies the appropriate Tala security policies. Here are the steps involved:

  • Tala Service Worker sees the HTML content coming from the origin web server
  • Tala Service Worker parses the HTML page
  • Based on the content of the page, the Tala Service Worker inserts the appropriate security controls (e.g., CSP, SRI) which could include a combination of HTTP security headers (e.g., referrer policy, CSP, HSTS) as well as page insertions (e.g., nonces, SRI hashes)

Periodically, the Tala Service Worker polls the Tala cloud service to check for security policy updates and, if required, fetches the latest policies. For more details on how to install Tala on Cloudflare Workers, please read the installation manual.
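Tala's Worker itself is proprietary, but the general pattern of rewriting an origin response in a Worker can be sketched as follows (the header values here are placeholders, not Tala-generated policies):

// Minimal illustrative sketch of the response-rewriting pattern; not Tala's code
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // Fetch the HTML content from the origin web server
  const originResponse = await fetch(request)

  // Clone the response so its headers can be modified
  const response = new Response(originResponse.body, originResponse)

  // Placeholder security controls; Tala computes real policies per page
  response.headers.set('Content-Security-Policy', "default-src 'self'")
  response.headers.set('Strict-Transport-Security', 'max-age=31536000')
  response.headers.set('Referrer-Policy', 'strict-origin-when-cross-origin')

  return response
}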

Deploy Client-Side Website Security

Client-side vulnerability is a significant and accelerating problem. Workers can provide the speed and capability to ensure your organization isn't the next victim of a growing volume of successful attacks targeting widespread website and web application vulnerabilities. Standards-based security offers the most effective, comprehensive solution to safeguard against these attacks.

The combination of Cloudflare and Tala can help you expedite deployment. We’d love to hear from you and explore a Workers deployment!

The Tala solution is available today!

  • Cloudflare Enterprise Customers: Reach out to your dedicated Cloudflare account manager to learn more and start the process.
  • Tala Customers and Cloudflare Customers, reach out to Tala to learn more and start the process. You can sign up for and learn more about using Cloudflare Workers here!

Impact of Cache Locality

Post Syndicated from Sung Park original https://blog.cloudflare.com/impact-of-cache-locality/

Impact of Cache Locality

Impact of Cache Locality

In the past, we didn’t have the opportunity to evaluate as many CPUs as we do today. The hardware ecosystem was simple – Intel had consistently delivered industry leading processors. Other vendors could not compete with them on both performance and cost. Recently it all changed: AMD has been challenging the status quo with their 2nd Gen EPYC processors.

This is not the first time Intel has been challenged; previously there was Qualcomm, and we worked with AMD to evaluate their 1st Gen EPYC processors based on the original Zen architecture, but ultimately, Intel prevailed. AMD did not give up and unveiled their 2nd Gen EPYC processors, codenamed Rome, based on the latest Zen 2 architecture.


Zen 2 made many improvements over its predecessors, including a die shrink from 14nm to 7nm, a doubling of the top-end core count from 32 to 64, and a larger L3 cache. Let's emphasize the size of that L3 cache again: 32 MiB of L3 cache per Core Complex Die (CCD).

This time around, we have taken steps to understand our workloads at the hardware level through the use of hardware performance counters and profiling tools. Using these specialized CPU registers and profilers, we collected data on the AMD 2nd Gen EPYC and Intel Skylake-based Xeon processors in a lab environment, then validated our observations in production against other generations of servers from the past.
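The post doesn't name the exact profiling tools; on Linux, one common way to read such hardware counters is perf. A minimal illustrative example:

# Illustrative: read instruction, cycle, and last-level-cache counters system-wide for 60s
perf stat -a -e instructions,cycles,LLC-loads,LLC-load-misses sleep 60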

Simulated Environment

CPU Specifications

Impact of Cache Locality

We evaluated several Intel Cascade Lake and AMD 2nd Gen EPYC processors, trading off various factors between power and performance; the AMD EPYC 7642 CPU came out on top. The majority of Cascade Lake processors have 1.375 MiB of L3 cache per core shared across all cores, a common theme that started with Skylake. On the other hand, the 2nd Gen EPYC processors start at 4 MiB per core. The AMD EPYC 7642 is a unique SKU, since it has 256 MiB of L3 cache shared across its 48 cores. Having a cache this large, approximately 5.33 MiB per core, sitting right next to each core means that a program will spend fewer cycles fetching data from RAM, with more data readily available in the L3 cache.

Impact of Cache Locality
Before (Intel)
Impact of Cache Locality
After (AMD)

The traditional cache layout has also changed with the introduction of 2nd Gen EPYC, a byproduct of AMD using a multi-chip module (MCM) design. The 256 MiB L3 cache is formed by 8 individual Core Complex Dies (CCDs), each made up of 2 Core Complexes (CCXs), with each CCX containing 16 MiB of L3 cache (8 x 2 x 16 MiB = 256 MiB).

Impact of Cache Locality
Core Complex (CCX) – Up to four cores
Impact of Cache Locality
Core Complex Die (CCD) – Created by combining two CCXs
Impact of Cache Locality
AMD 2nd Gen EPYC 7642 – Created with 8x CCDs plus an I/O die in the center

Methodology

Our production traffic shares many characteristics of a sustained workload, which typically does not induce large variation in operating frequencies nor enter periods of idle time. We picked a simulated traffic pattern that closely resembled our production traffic behavior: a cached 10 KiB PNG served via HTTPS. We were interested in assessing the CPU's maximum throughput, or requests per second (RPS), one of our key metrics. That said, we did not disable Intel Turbo Boost or AMD Precision Boost, nor did we match the frequencies clock-for-clock, while measuring requests per second, instructions retired per second (IPS), L3 cache miss rate, and sustained operating frequency.

Results

The 1P AMD 2nd Gen EPYC 7642 powered server took the lead, processing 50% more requests per second than our Gen 9 2P Intel Xeon Platinum 6162 server.

Impact of Cache Locality

We are running a sustained workload, so we should end up with a sustained operating frequency that is higher than the base clock. The AMD EPYC 7642's operating frequency, or the number of cycles the processor had at its disposal, was approximately 20% greater than that of the Intel Xeon Platinum 6162, so frequency alone was not enough to explain the 50% gain in requests per second.

Impact of Cache Locality

Taking a closer look, the number of instructions retired over time was far greater on the AMD 2nd Gen EPYC 7642 server, thanks to its low L3 cache miss rate.

Impact of Cache Locality
Impact of Cache Locality

Production Environment

CPU Specifications

Impact of Cache Locality

Methodology

Our most predominant bottleneck appears to be cache memory, and we saw significant improvement in requests per second, as well as in time to process a request, due to the low L3 cache miss rate. The data we present in this section was collected at a point of presence that spanned Gen 7 to Gen 9 servers. We also collected data from a secondary region to gain additional confidence that the data we present here was not unique to one particular environment. Gen 9 is the baseline, just as in the previous section.

We put the 2nd Gen EPYC-based Gen X into production with hopes that the results would closely mirror what we had seen in the lab. We found that the requests per second did not quite align with the results we had hoped for, but the AMD EPYC server still outperformed all previous generations, including beating the Intel Gen 9 server by 36%.

Impact of Cache Locality

Sustained operating frequency was nearly identical to what we have seen back in the lab.

Impact of Cache Locality

Due to the lower than expected requests per second, we also saw lower instructions retired over time and higher L3 cache miss rate but maintained a lead over Gen 9, with 29% better performance.

Impact of Cache Locality
Impact of Cache Locality

Conclusion

The single AMD EPYC 7642 performed very well during our lab testing, beating our Gen 9 server with dual Intel Xeon Platinum 6162 processors with the same total number of cores. Key factors we noticed were its large L3 cache, which led to a low L3 cache miss rate, as well as a higher sustained operating frequency. The AMD 2nd Gen EPYC 7642 did not have as big an advantage in production, but it nevertheless still outperformed all previous generations. The observations we made in production were based on a PoP that could have been influenced by a number of other factors, such as, but not limited to, ambient temperature, timing, and new products that will shape our traffic patterns in the future, such as WebAssembly on Cloudflare Workers. The AMD EPYC 7642 opens up the possibility for our upcoming Gen X server to maintain the same core count while processing more requests per second than its predecessor.

Got a passion for hardware? I think we should get in touch. We are always looking for talented and curious individuals to join our team. The data presented here would not have been possible if it was not for the teamwork between many different individuals within Cloudflare. As a team, we strive to work together to create highly performant, reliable, and secure systems that will form the pillars of our rapidly growing network that spans 200 cities in more than 90 countries and we are just getting started.

An EPYC trip to Rome: AMD is Cloudflare’s 10th-generation Edge server CPU

Post Syndicated from Rob Dinh original https://blog.cloudflare.com/an-epyc-trip-to-rome-amd-is-cloudflares-10th-generation-edge-server-cpu/

An EPYC trip to Rome: AMD is Cloudflare's 10th-generation Edge server CPU

An EPYC trip to Rome: AMD is Cloudflare's 10th-generation Edge server CPU

More than 1 billion unique IP addresses pass through the Cloudflare Network each day, serving on average 11 million HTTP requests per second and operating within 100ms of 95% of the Internet-connected population globally. Our network spans 200 cities in more than 90 countries, and our engineering teams have built an extremely fast and reliable infrastructure.

We’re extremely proud of our work and are determined to help make the Internet a better and more secure place. Cloudflare engineers who are involved with hardware get down to servers and their components to understand and select the best hardware to maximize the performance of our stack.

Our software stack is compute intensive and very much CPU bound, driving our engineers to work continuously at optimizing Cloudflare's performance and reliability at all layers of our stack. Within a server, a straightforward solution for increasing computing power is to have more CPU cores. The more cores we can include in a server, the more output we can expect. This is important for us since the diversity of our products and customers has grown over time, with increasing demand that requires our servers to do more. To help drive compute performance, we needed to increase core density, and that's what we did. Below are the processor details for servers we've deployed since 2015, including the core counts:

                   Gen 6                   Gen 7                   Gen 8                   Gen 9
Start of service   2015                    2016                    2017                    2018
CPU                Intel Xeon E5-2630 v3   Intel Xeon E5-2630 v4   Intel Xeon Silver 4116  Intel Xeon Platinum 6162
Physical Cores     2 x 8                   2 x 10                  2 x 12                  2 x 24
TDP                2 x 85W                 2 x 85W                 2 x 85W                 2 x 150W
TDP per Core       10.65W                  8.50W                   7.08W                   6.25W

In 2018, we made a big jump in the total number of cores per server with Gen 9. Our physical footprint was reduced by 33% compared to Gen 8, giving us increased capacity and computing power per rack. Thermal Design Power (TDP, i.e., typical power usage) is listed above to highlight that we've also become more power efficient over time. Power efficiency is important to us: first, because we'd like to be as carbon friendly as we can; and second, so we can better utilize the power provisioned for us by the data centers. But we know we can do better.

Our main defining metric is Requests per Watt. We can increase our Requests per Second number with more cores, but we have to stay within our power budget envelope. We are constrained by the data centers' power infrastructure which, along with our selected power distribution units, leads to a power cap for each server rack. Adding servers to a rack obviously increases power consumption at the rack level. Our operational costs significantly increase if we go over a rack's power cap and have to provision another rack. What we need is more compute power inside the same power envelope, which will drive a higher (better) Requests per Watt number – our key metric.
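As a rough illustration of the rack-level math (all numbers here are invented for the example, not Cloudflare figures):

# Illustrative only: how a rack power cap bounds rack-level throughput
rack_power_cap_w = 8000       # assumed power budget for one rack
server_power_w = 400          # assumed draw per server under load
requests_per_second = 100000  # assumed RPS per server

servers_per_rack = rack_power_cap_w // server_power_w   # 20 servers
rack_rps = servers_per_rack * requests_per_second       # 2,000,000 RPS
requests_per_watt = rack_rps / rack_power_cap_w         # 250 requests/s per watt
print(servers_per_rack, rack_rps, requests_per_watt)

Raising Requests per Watt means more throughput inside the same rack power envelope.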

As you might imagine, we look at power consumption carefully in the design stage. From the table above, it isn’t worth deploying a more power-hungry CPU if its TDP per core is higher than our current generation’s, since that would hurt our Requests per Watt metric. As we started looking at production-ready systems to power our Gen X solution, we took a long look at what the market offers today, and we made our decision. We’re moving on from Gen 9’s 48-core setup of dual socket Intel® Xeon® Platinum 6162s to a 48-core single socket AMD EPYC™ 7642.

Gen X server setup with single socket 48-core AMD EPYC 7642

                   Intel Xeon Platinum 6162     AMD EPYC 7642
Microarchitecture  “Skylake”                    “Zen 2”
Codename           “Skylake SP”                 “Rome”
Process            14nm                         7nm
Physical Cores     2 x 24                       48
Frequency          1.9 GHz                      2.4 GHz
L3 Cache / socket  24 x 1.375MiB                16 x 16MiB
Memory / socket    6 channels, up to DDR4-2400  8 channels, up to DDR4-3200
TDP                2 x 150W                     225W
PCIe / socket      48 lanes                     128 lanes
ISA                x86-64                       x86-64

From the specs, we see that with the AMD chip we keep the same number of cores at a lower TDP. Gen 9’s TDP per core was 6.25W; Gen X’s will be 4.69W, a 25% decrease. Given the higher frequency, and perhaps the simpler single socket setup, we could speculate that the AMD chip will perform better. The rest of this blog walks through a series of tests, simulations, and live production results to see how much better AMD performs.
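
The per-core arithmetic is simple enough to verify directly; a quick worked check in Go:

```go
package main

import "fmt"

func main() {
	// Gen 9: dual socket Intel Xeon Platinum 6162 (2 x 150W TDP, 2 x 24 cores).
	gen9 := (2 * 150.0) / (2 * 24) // 6.25 W per core
	// Gen X: single socket AMD EPYC 7642 (225W TDP, 48 cores).
	genX := 225.0 / 48 // ~4.69 W per core

	fmt.Printf("Gen 9: %.2f W/core, Gen X: %.2f W/core, decrease: %.0f%%\n",
		gen9, genX, (gen9-genX)/gen9*100) // prints a 25% decrease
}
```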

As a side note before we go further: TDP is a simplified metric from the manufacturers’ datasheets that we use in the early stages of our server design and CPU selection process. AMD and Intel define TDP differently, which makes the spec unreliable for cross-vendor comparisons. Actual CPU power draw, and more importantly overall server system power draw, are what we really factor into our final decisions.

Ecosystem Readiness

At the beginning of our journey to choose our next CPU, we got a variety of processors from different vendors that could fit well with our software stack and services, which are written in C, LuaJIT, and Go. We explained the details of benchmarking our stack when we benchmarked Qualcomm’s ARM® chip in the past. We run the same suite of tests from Vlad’s blog this time around, since it makes for a quick and easy “sniff test” that lets us evaluate a number of CPUs within a manageable time period, before we commit more engineering effort to porting and tuning our full software stack.

We tried a variety of CPUs with different numbers of cores, sockets, and frequencies. Since we’re explaining how we chose the AMD EPYC 7642, all of the graphs in this blog compare AMD against our Gen 9 Intel Xeon Platinum 6162 CPU as a baseline.

Our results are per server node for both CPUs tested; that is, the numbers pertain to 2x 24-core processors for Intel and 1x 48-core processor for AMD: a two socket Intel based server versus a one socket AMD EPYC powered server. Before we started our testing, we changed the Cloudflare lab test servers’ BIOS settings to match our production server settings. This yielded average CPU frequencies of 3.03 GHz for AMD and 2.50 GHz for Intel, with very little variation. With gross simplification, scaling by frequency alone (3.03 / 2.50 ≈ 1.21), we would expect AMD to perform about 21% better than Intel with the same number of cores. Let’s start with our crypto tests.

Cryptography

The results look promising for AMD: in public key cryptography, it does 18% better. For symmetric key cryptography, AMD loses on AES-128-GCM but is comparable overall.

Compression

We do a lot of compression at the edge to save bandwidth and help deliver content faster. We test both the zlib and brotli libraries, which are written in C. All tests compress the blog.cloudflare.com HTML file held in memory.
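
To illustrate the methodology, here is a rough Go approximation of such a throughput test: fetch the page once, hold it in memory, and measure compression throughput at each quality level. Note that the production tests used the zlib and brotli C libraries rather than Go’s compress/gzip, and the iteration count here is an arbitrary choice:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Fetch the test page once and keep it in memory.
	resp, err := http.Get("https://blog.cloudflare.com/")
	if err != nil {
		panic(err)
	}
	html, err := io.ReadAll(resp.Body)
	resp.Body.Close()
	if err != nil {
		panic(err)
	}

	const iterations = 100
	for level := gzip.BestSpeed; level <= gzip.BestCompression; level++ {
		start := time.Now()
		for i := 0; i < iterations; i++ {
			var buf bytes.Buffer
			w, _ := gzip.NewWriterLevel(&buf, level)
			w.Write(html)
			w.Close()
		}
		elapsed := time.Since(start)
		mbPerSec := float64(len(html)*iterations) / elapsed.Seconds() / 1e6
		fmt.Printf("gzip level %d: %.1f MB/s\n", level, mbPerSec)
	}
}
```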

AMD wins by an average of 29% using gzip across all quality levels. It does even better with brotli below quality 7, which we use for dynamic compression. There’s a throughput cliff starting at brotli-9; Vlad’s explanation is that Brotli consumes a lot of memory and thrashes the cache. Nevertheless, AMD wins by a healthy margin.

A lot of our services are written in Go. In the following graphs we redo the crypto and compression tests in Go, along with RegExp tests on 32KB strings and tests of the strings library.
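
These tests lean on the standard library’s benchmark harness (run with `go test -bench .`). Below is a minimal sketch of what such benchmarks can look like; the payload sizes and the regular expression are illustrative choices, not the exact inputs used in our suite:

```go
package bench

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"regexp"
	"testing"
)

func BenchmarkAES128GCM(b *testing.B) {
	key := make([]byte, 16)   // AES-128 key
	nonce := make([]byte, 12) // standard GCM nonce size
	payload := make([]byte, 32*1024)
	rand.Read(key)
	rand.Read(nonce)
	rand.Read(payload)

	block, _ := aes.NewCipher(key)
	aead, _ := cipher.NewGCM(block)

	b.SetBytes(int64(len(payload)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		aead.Seal(nil, nonce, payload, nil)
	}
}

func BenchmarkECDSAP256Sign(b *testing.B) {
	priv, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	hash := sha256.Sum256([]byte("benchmark payload"))

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		ecdsa.Sign(rand.Reader, priv, hash[:])
	}
}

func BenchmarkRegexp32KB(b *testing.B) {
	re := regexp.MustCompile(`[a-z]+[0-9]+`) // hypothetical pattern
	input := make([]byte, 32*1024)
	rand.Read(input)

	b.SetBytes(int64(len(input)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		re.FindAll(input, -1)
	}
}
```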

Go Cryptography

Go Compression

Go Regexp

Go Strings

AMD performs better in all of our Go benchmarks except ECDSA P256 Sign, where it loses by 38%. That is peculiar, since it does 24% better in the C version of the same test, and it’s worth investigating what’s going on there. Other than that, AMD doesn’t win by as wide a margin as in the C tests, but it still proves to be better.

LuaJIT

We rely a lot on LuaJIT in our stack. As Vlad said, it’s the glue that holds Cloudflare together. We’re glad to show that AMD wins here as well.

Overall, our tests show a single EPYC 7642 to be more than competitive with two Xeon Platinum 6162s. While there are a couple of tests where AMD loses out, such as OpenSSL AES-128-GCM and Go ECDSA-P256 Sign, AMD wins all the others. Treating all tests equally, AMD does on average 25% better than Intel.

Performance Simulations

After our ‘sniff’ tests, we put the servers through another series of emulations which apply synthetic workloads simulating our edge software stack. Here we simulate the different types of requests we see in production, which vary by asset size, by whether they go over HTTP or HTTPS, and by whether they exercise the WAF, Workers, or one of many additional variables. The chart below compares the throughput of the two CPUs for the request types we see most often.

The results above are ratios using Gen 9’s Intel CPUs as the baseline, normalized at 1.0 on the X-axis. For example, for simple requests of 10KiB assets over HTTPS, AMD does 1.50x better than Intel in Requests per Second. On average across the tests shown above, AMD performs 34% better than Intel. Considering that the TDP for the single AMD EPYC 7642 is 225W, compared to 300W for the two Intel chips, the best case works out to 1.50 x (300 / 225) = 2.0x better Requests per Watt for AMD!

By this time, we were already leaning heavily toward a single socket setup with AMD EPYC 7642 as our CPU for Gen X. We were excited to see exactly how well AMD EPYC servers would do in production, so we immediately shipped a number of the servers out to some of our data centers.

Live Production

Step one, of course, was to set up all our test servers for a production environment. All of the machines in our fleet run the same processes and services, which makes for a great apples-to-apples comparison. Like data centers everywhere, we have multiple generations of servers deployed, and we deploy them in clusters such that each cluster is largely homogeneous by server generation. In some environments this can lead to varying utilization curves between clusters. This is not the case for us: our engineers have optimized CPU utilization across all server generations so that, whether a machine’s CPU has 8 cores or 24 cores, CPU usage is generally the same.

As you can see above, and to illustrate the ‘similar CPU utilization’ point, there is no significant difference in CPU usage between the Gen X AMD powered servers and the Gen 9 Intel based servers. This means both test and baseline servers are equally loaded, which is exactly what we want for a fair comparison. The two graphs below compare the number of requests processed at the single-core and all-core (whole server) level.

We see that AMD handles on average about 23% more requests. That’s really good! We talked a lot about bringing more muscle in the Gen 9 blog. With the same number of cores, AMD does more work, and does it with less power. Having seen only the core counts and TDP specs at the outset, it’s really nice to confirm that AMD also delivers significantly more performance with better power efficiency.

But as we mentioned earlier, TDP isn’t a standardized spec across manufacturers, so let’s look at real power usage. Measuring server power consumption alongside requests per second (RPS) yields the graph below:

Looking at our servers’ request rate over their power consumption, the AMD Gen X server performs 28% better. We might have expected more from AMD, since its TDP is 25% lower, but keep in mind that TDP is very ambiguous. In fact, we saw AMD’s actual power draw run close to its spec TDP, since it sustained frequencies well above base; Intel’s draw was far below its spec. That is another reason TDP is becoming a less reliable estimate of power draw. Moreover, the CPU is just one component contributing to the overall power of the system. Recall that the Intel CPUs are integrated into a multi-node system, as described in the Gen 9 blog, while AMD sits in a regular 1U form-factor machine. That actually doesn’t favor AMD, since multi-node systems are designed for high density at lower power per node, yet AMD still outperformed the Intel system on a power-per-node basis.

Across the majority of comparisons, from the datasheets to the test simulations to live production performance, the 1P AMD EPYC 7642 configuration performed significantly better than the 2P Intel Xeon 6162. We’ve seen AMD do up to 36% better in some live production environments, and we believe we can achieve that consistently with some optimization of both our hardware and software.

So that’s it. AMD wins.

The additional graphs below show the median and p99 NGINX processing latencies (mostly on-CPU time) for the two CPUs over 24 hours. At the median, AMD processes requests about 25% faster; at p99, it is about 20-50% faster depending on the time of day.

Conclusion

Hardware and Performance engineers at Cloudflare do significant research and testing to figure out the best server configuration for our customers. Solving big problems like this is why we love working here, and we’re also helping solve yours with services like serverless edge compute and an array of security solutions such as Magic Transit, Argo Tunnel, and DDoS protection. All of the servers on the Cloudflare Network are designed to make our products work reliably, and we strive to make each new generation of our server design better than its predecessor. We believe the AMD EPYC 7642 is the answer to our Gen X processor question.

With Cloudflare Workers, developers have enjoyed deploying their applications to our Network, which is ever expanding across the globe. We’ve been proud to empower our customers by letting them focus on writing their code while we are managing the security and reliability in the cloud. We are now even more excited to say that their work will be deployed on our Gen X servers powered by 2nd Gen AMD EPYC processors.

Expanding Rome to a data center near you

Thanks to AMD, the EPYC 7642 allows us to increase our capacity and expand into more cities more easily. Rome wasn’t built in a day, but it will soon be very close to many of you.

In the last couple of years, we’ve been experimenting with many Intel and AMD x86 chips along with ARM CPUs. We look forward to having these CPU manufacturers partner with us for future generations so that together we can help build a better Internet.

Cloudflare’s Gen X: Servers for an Accelerated Future

Post Syndicated from Nitin Rao original https://blog.cloudflare.com/cloudflares-gen-x-servers-for-an-accelerated-future/

“Every server can run every service.”

We designed and built Cloudflare’s network to be able to grow capacity quickly and inexpensively; to allow every server, in every city, to run every service; and to allow us to shift customers and traffic across our network efficiently. We deploy standard, commodity hardware, and our product developers and customers do not need to worry about the underlying servers. Our software automatically manages the deployment and execution of our developers’ code and our customers’ code across our network. Since we manage the execution and prioritization of code running across our network, we are both able to optimize the performance of our highest tier customers and effectively leverage idle capacity across our network.

An alternative approach might have been to run several fragmented networks with specialized servers designed to run specific features, such as the Firewall, DDoS protection or Workers. However, we believe that approach would have resulted in wasted idle resources and given us less flexibility to build new software or adopt the newest available hardware. And a single optimization target means we can provide security and performance at the same time.

We use Anycast to route a web request to the nearest Cloudflare data center (from among 200 cities), improving performance and maximizing the surface area to fight attacks.

Once a data center is selected, we use Unimog, Cloudflare’s custom load balancing system, to dynamically balance requests across diverse generations of servers. We load balance at different layers: between cities, between physical deployments located across a city, between external Internet ports, between internal cables, between servers, and even between logical CPU threads within a server.

As demand grows, we can scale out by simply adding new servers, points of presence (PoPs), or cities to the global pool of available resources. If any server component has a hardware failure, it is gracefully de-prioritized or removed from the pool, to be batch repaired by our operations team. This architecture has enabled us to have no dedicated Cloudflare staff at any of the 200 cities, instead relying on help for infrequent physical tasks from the ISPs (or data centers) hosting our equipment.

Gen X: Intel Not Inside

We recently turned up our tenth generation of servers, “Gen X”, already deployed across major US cities, and in the process of being shipped worldwide. Compared with our prior server (Gen 9), it processes as much as 36% more requests while costing substantially less. Additionally, it enables a ~50% decrease in L3 cache miss rate and up to 50% decrease in NGINX p99 latency, powered by a CPU rated at 25% lower TDP (thermal design power) per core.

Notably, for the first time, Intel is not inside. We are not using their hardware for any major server components such as the CPU, board, memory, storage, network interface card (or any type of accelerator). Given how critical Intel is to our industry, this would until recently have been unimaginable, and is in contrast with prior generations which made extensive use of their hardware.

Intel-based Gen 9 server

This time, AMD is inside.

We were particularly impressed by the 2nd Gen AMD EPYC processors because they proved to be far more efficient for our customers’ workloads. Since the pendulum of technology leadership swings back and forth between providers, we wouldn’t be surprised if that changes over time. However, we were happy to adapt quickly to the components that made the most sense for us.

Compute

CPU efficiency is very important to our server design. Since we have a compute-heavy workload, our servers are typically limited by the CPU before other components. Cloudflare’s software stack scales quite well with additional cores. So, we care more about core-count and power-efficiency than dimensions such as clock speed.

We selected the AMD EPYC 7642 processor in a single-socket configuration for Gen X. This CPU has 48 cores (96 threads), a base clock speed of 2.4 GHz, and 256 MB of L3 cache. While the rated power (225W) may seem high, it is lower than the combined TDP in our Gen 9 servers, and we preferred this CPU’s performance over that of lower-power variants. Although AMD offers a higher core count option with 64 cores, the performance gains for our software stack and usage weren’t compelling enough.

We have deployed the AMD EPYC 7642 in half a dozen Cloudflare data centers; it is considerably more powerful than a dual-socket pair of high-core count Intel processors (Skylake as well as Cascade Lake) we used in the last generation.

Readers of our blog might remember our excitement around ARM processors. We even ported our entire software stack to run on ARM, just as it does on x86, and have maintained it ever since, even though that means slightly more work for our software engineering teams. We did this leading up to the launch of Qualcomm’s Centriq server CPU, which was eventually shuttered. While none of the off-the-shelf ARM CPUs available at the moment are interesting to us, we remain optimistic about high core count offerings launching in 2020 and beyond, and look forward to a day when our servers are a mix of x86 (Intel and AMD) and ARM.

We aim to replace servers when the efficiency gains enabled by new equipment outweigh their cost.

The performance we’ve seen from the AMD EPYC 7642 processor has encouraged us to accelerate replacement of multiple generations of Intel-based servers.

Compute is our largest investment in a server. Our heaviest workloads, from the Firewall to Workers (our serverless offering), often require more compute than other server resources. Also, the average size in kilobytes of a web request across our network tends to be small, influenced in part by the relative popularity of APIs and mobile applications. Our approach to server design is very different from that of traditional content delivery networks engineered to deliver large-object video libraries, for whom storage-focused servers might make more sense and for whom re-architecting to offer serverless would be prohibitively capital intensive.

Our Gen X server is intentionally designed with an “empty” PCIe slot for a potential add-on card, should one perform certain functions more efficiently than the primary CPU. Would that be a GPU, FPGA, SmartNIC, custom ASIC, TPU, or something else? We’re intrigued to explore the possibilities.

In accompanying blog posts over the next few days, our hardware engineers will describe how the AMD EPYC 7642 performed against the benchmarks we care about. We are thankful for their hard work.

Memory, Storage & Network

Since we are typically limited by CPU, Gen X represented an opportunity to grow components such as RAM and SSD more slowly than compute.

For memory, we continue to use 256GB of RAM, as in our prior generation, but now rated at 2933MHz. For storage, we continue to have ~3TB, but moved to a 3x1TB configuration using NVMe flash (instead of SATA) with more available IOPS and higher endurance, which enables full disk encryption using LUKS without penalty. For the network card, we continue to use a Mellanox 2x25G NIC.

We moved from our multi-node chassis back to a simple 1U form factor, designed to be lighter and less error prone during operational work at the data center. We also added multiple new ODM partners to diversify how we manufacture our equipment and to take advantage of additional global warehousing.

Network Expansion

Our newest generation of servers give us the flexibility to continue to build out our network even closer to every user on Earth. We’re proud of the hard work from across engineering teams on Gen X, and are grateful for the support of our partners. Be on the lookout for more blogs about these servers in the coming days.

Get Cloudflare insights in your preferred analytics provider

Post Syndicated from Simon Steiner original https://blog.cloudflare.com/cloudflare-partners-with-analytics-providers/

Today, we’re excited to announce our partnerships with Chronicle Security, Datadog, Elastic, Looker, Splunk, and Sumo Logic to make it easy for our customers to analyze Cloudflare logs and metrics using their analytics provider of choice. In a joint effort, we have developed pre-built dashboards that are available as a Cloudflare App in each partner’s platform. These dashboards help customers better understand events and trends from their websites and applications on our network.


Cloudflare insights in the tools you’re already using

Data analytics is a frequent theme in conversations with Cloudflare customers. Our customers want to understand how Cloudflare speeds up their websites and saves them bandwidth, to rank their fastest and slowest pages, and to be alerted if they are under attack. While providing insights is a core tenet of Cloudflare’s offering, the data analytics market has matured, and many of our customers have started using third-party providers to analyze data, including Cloudflare logs and metrics. By aggregating data from multiple applications, infrastructure, and cloud platforms in one dedicated analytics platform, customers gain a single pane of glass and better end-to-end visibility over their entire stack.

While these analytics platforms provide great benefits in terms of functionality and flexibility, they can take significant time to configure: from ingesting logs, to specifying data models that make data searchable, all the way to building dashboards to get the right insights out of the raw data. We see this as an opportunity to partner with the companies our customers are already using to offer a better and more integrated solution.

Providing flexibility through easy-to-use integrations

To address these complexities of aggregating, managing, and displaying data, we have developed a number of product features and partnerships to make it easier to get insights out of Cloudflare logs and metrics. In February we announced Logpush, which allows customers to automatically push Cloudflare logs to Google Cloud Storage and Amazon S3. Both of these cloud storage solutions are supported by the major analytics providers as a source for collecting logs, making it possible to get Cloudflare logs into an analytics platform with just a few clicks. With today’s announcement of Cloudflare’s Analytics Partnerships, we’re releasing a Cloudflare App—a set of pre-built and fully customizable dashboards—in each partner’s app store or integrations catalogue to make the experience even more seamless.
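
For example, here is a minimal sketch in Go of creating a Logpush job through the Cloudflare API so that HTTP request logs land in an S3 bucket your analytics platform ingests from. The zone ID, bucket path, field list, and the CLOUDFLARE_API_TOKEN environment variable are placeholders; consult the Logpush documentation for the current job schema:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

func main() {
	zoneID := "ZONE_ID" // placeholder: your zone identifier

	// Hypothetical job definition: push selected HTTP request fields to S3.
	body := []byte(`{
	  "name": "s3-analytics",
	  "enabled": true,
	  "destination_conf": "s3://my-log-bucket/cloudflare?region=us-east-1",
	  "logpull_options": "fields=ClientIP,ClientRequestHost,EdgeResponseStatus&timestamps=rfc3339"
	}`)

	req, err := http.NewRequest("POST",
		fmt.Sprintf("https://api.cloudflare.com/client/v4/zones/%s/logpush/jobs", zoneID),
		bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("CLOUDFLARE_API_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

Once the job is running and logs are flowing into the bucket, the pre-built dashboards described below can sit directly on top of the ingested data.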

By using these dashboards, customers can immediately analyze events and trends of their websites and applications without first needing to wade through individual log files and build custom searches. The dashboards feature all 55+ fields available in Cloudflare logs and include 90+ panels with information about the performance, security, and reliability of customers’ websites and applications.

Ultimately, we want to provide flexibility to our customers and make it easier to use Cloudflare with the analytics tools they already use. Improving our customers’ ability to get better data and insights continues to be a focus for us, so we’d love to hear about what tools you’re using—tell us via this brief survey. To learn more about each of our partnerships and how to get access to the dashboards, please visit our developer documentation or contact your Customer Success Manager. Similarly, if you’re an analytics provider who is interested in partnering with us, use the contact form on our analytics partnerships page to get in touch.

Why I’m helping Cloudflare build its partnerships worldwide

Post Syndicated from Matthew Harrell original https://blog.cloudflare.com/helping-cloudflare-build-its-partnerships-worldwide/

Cloudflare has always had an audacious mission: to help build a better Internet. From its inception, the company realized that a mission this big couldn’t be taken on alone. Such an undertaking would require the help of an extraordinary group of partners. Early in the company’s history, Cloudflare built strong relationships with many hosting providers to protect and accelerate Internet traffic. And through the years, Cloudflare has continued to build some amazing Enterprise partnerships and strategic alliances.

As we continue to grow and foster our partner ecosystem, we are excited to announce Cloudflare’s next iteration of its Partner Program—to engage and enable an equally audacious set of partners that want to help build a better Internet, together.

I recently joined Cloudflare to run Global Channel Sales & Partnerships after spending over nine years at Google Cloud in various indirect and direct leadership roles. At Google, I witnessed the powerful impact that a strong partner ecosystem could have on solving complex organizational and societal problems. By combining innovative technologies provided by the manufacturer, with deep domain expertise provided by the partner, we delivered valuable industry solutions to our customers. And through this process, we helped our partners build valuable businesses, accelerate growth, and bring new innovation economies to all parts of the globe.

I joined Cloudflare because I strongly believe in its mission to help build a better Internet, and believe this mission, paired with its massive global network, will enable the company to continue to deliver incredibly innovative solutions to customers of all segments. Cloudflare has strong brand recognition, a market leading product portfolio, an ambitious vision, and a leadership team that is 100% committed to building out the channel and partner program.

I’m excited to connect with Cloudflare partners, and my first priority as the global channel leader is to provide our partners with the tools and programs which allow them to build a compelling business around our products. I’m eager to continue developing a world class program and organization that is:

  • Focused on helping partners build compelling businesses: Cloudflare has a history of democratizing Internet technologies that were once difficult to access or complicated to use and understand, such as free SSL, unmetered DDoS mitigation, and wholesale Registrar. We plan to take a similar market-shifting approach with our partners. We are redesigning our partner program with a vision of developing best-in-class revenue share models and value-added professional and managed services that scale through our partners.
  • Easy to do business with: Cloudflare has always prided itself on its ease of use, and we want the partner experience to be just as seamless. We have redesigned how our partners engage with us, from initial sign-up to ongoing engagement, to make it even easier for partners to do business with us. This includes a simpler deal registration process, smooth product training for partner reps, straightforward deal tracking, and making it easier overall for partners to profit from their relationship with Cloudflare.
  • Strategically focused: Cloudflare has always relied on valuable partnerships on its mission to help build a better Internet. We are expanding that commitment by diving deeper with those partners that are committed to building their businesses around Cloudflare. We plan to invest resources and design partner-first programs that reward partners for leaning in and investing in Cloudflare’s mission.

Today, you’ll see a few important announcements around the future of our program and how we continue to scale to support some of our most complex partnerships.

We look forward to helping you build your business with Cloudflare!

For those partners who will be in London, please join us at Cloudflare Connect // London, our second annual London gathering of distinguished businesses and technologists, including many Cloudflare customers, partners, and developers. This is Cloudflare’s marquee customer event, which means the content and experience are built for you. I plan to be there personally to formally announce our new partner program and provide insights on what’s to come.

You can register here: CloudflareConnect.com


More Information: