Tag Archives: cloud computing

Apple Adds a Backdoor to iMessage and iCloud Storage

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/apple-adds-a-backdoor-to-imesssage-and-icloud-storage.html

Apple’s announcement that it’s going to start scanning photos for child abuse material is a big deal. (Here are five news stories.) I have been following the details, and discussing it in several different email lists. I don’t have time right now to delve into the details, but wanted to post something.

EFF writes:

There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The other feature scans all iMessage images sent or received by child accounts — that is, accounts designated as owned by a minor — for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.
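Mechanically, the iCloud-upload half of this design is a lookup: each photo's fingerprint is matched against the NCMEC database of known hashes as the photo heads to iCloud Photos. A toy sketch of that matching flow (the cryptographic hash here is a stand-in for Apple's NeuralHash perceptual hash, which is designed so visually similar images collide, and the function names are illustrative, not Apple's):

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    # Toy stand-in for a perceptual hash. Apple's real system uses a
    # neural-network hash (NeuralHash); a cryptographic hash like this
    # does NOT match visually similar images, only exact bytes.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical blocklist of known-CSAM hashes (in reality distributed
# by NCMEC and matched via a private set intersection protocol).
KNOWN_HASHES = {image_hash(b"known-bad-image")}

def scan_before_upload(image_bytes: bytes) -> bool:
    """Return True if the photo matches the database and gets flagged."""
    return image_hash(image_bytes) in KNOWN_HASHES

# Every photo is checked on-device as it is uploaded to iCloud Photos.
assert scan_before_upload(b"known-bad-image") is True
assert scan_before_upload(b"holiday-photo") is False
```

The real protocol is considerably more elaborate (blinded hashes, threshold secret sharing before Apple can decrypt any match vouchers), but the surveillance concern is visible even in this toy: whoever controls `KNOWN_HASHES` controls what gets reported.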

This is pretty shocking coming from Apple, which is generally really good about privacy. It opens the door for all sorts of other surveillance, since now that the system is built it can be used for all sorts of other messages. And it breaks end-to-end encryption, despite Apple’s denials:

Does this break end-to-end encryption in Messages?

No. This doesn’t change the privacy assurances of Messages, and Apple never gains access to communications as a result of this feature. Any user of Messages, including those with communication safety enabled, retains control over what is sent and to whom. If the feature is enabled for the child account, the device will evaluate images in Messages and present an intervention if the image is determined to be sexually explicit. For accounts of children age 12 and under, parents can set up parental notifications which will be sent if the child confirms and sends or views an image that has been determined to be sexually explicit. None of the communications, image evaluation, interventions, or notifications are available to Apple.

Notice Apple changing the definition of “end-to-end encryption.” No longer is the message a private communication between sender and receiver. A third party is alerted if the message meets certain criteria.

This is a security disaster. Read tweets by Matthew Green and Edward Snowden. Also this. I’ll post more when I see it.

Beware the Four Horsemen of the Information Apocalypse. They’ll scare you into accepting all sorts of insecure systems.

EDITED TO ADD: This is a really good write-up of the problems.

EDITED TO ADD: Alex Stamos comments.

An open letter to Apple criticizing the project.

A leaked Apple memo responding to the criticisms. (What are the odds that Apple did not intend this to leak?)

EDITED TO ADD: John Gruber’s excellent analysis.

EDITED TO ADD (8/11): Paul Rosenzweig wrote an excellent policy discussion.

EDITED TO ADD (8/13): Really good essay by EFF’s Kurt Opsahl. Ross Anderson did an interview with Glenn Beck. And this news article talks about dissent within Apple about this feature.

The Economist has a good take. Apple responds to criticisms. (It’s worth watching the Wall Street Journal video interview as well.)

EDITED TO ADD (8/14): Apple released a threat model.

EDITED TO ADD (8/20): Follow-on blog posts here and here.

Storing Encrypted Photos in Google’s Cloud

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/storing-encrypted-photos-in-googles-cloud.html

New paper: “Encrypted Cloud Photo Storage Using Google Photos”:

Abstract: Cloud photo services are widely used for persistent, convenient, and often free photo storage, which is especially useful for mobile devices. As users store more and more photos in the cloud, significant privacy concerns arise because even a single compromise of a user’s credentials gives attackers unfettered access to all of the user’s photos. We have created Easy Secure Photos (ESP) to enable users to protect their photos on cloud photo services such as Google Photos. ESP introduces a new client-side encryption architecture that includes a novel format-preserving image encryption algorithm, an encrypted thumbnail display mechanism, and a usable key management system. ESP encrypts image data such that the result is still a standard format image like JPEG that is compatible with cloud photo services. ESP efficiently generates and displays encrypted thumbnails for fast and easy browsing of photo galleries from trusted user devices. ESP’s key management makes it simple to authorize multiple user devices to view encrypted image content via a process similar to device pairing, but using the cloud photo service as a QR code communication channel. We have implemented ESP in a popular Android photos app for use with Google Photos and demonstrate that it is easy to use and provides encryption functionality transparently to users, maintains good interactive performance and image quality while providing strong privacy guarantees, and retains the sharing and storage benefits of Google Photos without any changes to the cloud service.
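To see what “format-preserving” buys you, here is a deliberately toy illustration: a key-seeded permutation of image blocks whose output is still a valid grid of blocks that a photo service will happily store. This is neither ESP’s actual algorithm nor secure (a block permutation leaks a great deal); it only demonstrates the property the abstract describes, that the ciphertext remains a standard image:

```python
import random

def permute_blocks(blocks, key: str, decrypt: bool = False):
    """Keyed shuffle of image blocks. The output is still a list of
    blocks, i.e. still a well-formed 'image' a photo service accepts."""
    order = list(range(len(blocks)))
    random.Random(key).shuffle(order)  # key-derived permutation
    out = [None] * len(blocks)
    for dst, src in enumerate(order):
        if decrypt:
            out[src] = blocks[dst]     # invert the permutation
        else:
            out[dst] = blocks[src]     # apply the permutation
    return out

blocks = ["B0", "B1", "B2", "B3", "B4", "B5"]  # stand-ins for 8x8 pixel blocks
enc = permute_blocks(blocks, key="device-key")
# The round trip recovers the original image exactly:
assert permute_blocks(enc, key="device-key", decrypt=True) == blocks
```

ESP’s real construction encrypts within the JPEG coefficient space so the result survives the service’s re-encoding; the point the toy shares with it is that nothing about the container format changes.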

Apple Will Offer Onion Routing for iCloud/Safari Users

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/06/apple-will-offer-onion-routing-for-icloud-safari-users.html

At this year’s Apple Worldwide Developer Conference, Apple announced something called “iCloud Private Relay.” That’s basically its private version of onion routing, which is what Tor does.

Private Relay is built into both the forthcoming iOS and macOS versions, but it will only work if you’re an iCloud Plus subscriber and you have it enabled from within your iCloud settings.

Once it’s enabled and you open Safari to browse, Private Relay splits up two pieces of information that — when delivered to websites together as normal — could quickly identify you. Those are your IP address (who and exactly where you are) and your DNS request (the address of the website you want, in numeric form).

Once the two pieces of information are split, Private Relay encrypts your DNS request and sends both the IP address and now-encrypted DNS request to an Apple proxy server. This is the first of two stops your traffic will make before you see a website. At this point, Apple has already handed over the encryption keys to the third party running the second of the two stops, so Apple can’t see what website you’re trying to access with your encrypted DNS request. All Apple can see is your IP address.

Although it has received both your IP address and encrypted DNS request, Apple’s server doesn’t send your original IP address to the second stop. Instead, it gives you an anonymous IP address that is approximately associated with your general region or city.
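The two-hop split described above can be sketched as a toy data flow: the client encrypts the DNS request so only the second relay can read it, and the first (Apple-run) hop swaps the real IP for a coarse regional one before forwarding. Everything here, the XOR “cipher”, the addresses, the function names, is an illustrative stand-in, not Apple’s actual protocol (which uses proper public-key encryption to the second hop):

```python
from itertools import cycle

def xor(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration only.
    return bytes(a ^ b for a, b in zip(data, cycle(key)))

SECOND_HOP_KEY = b"relay-2-secret"

def client(real_ip: str, dns_query: str):
    # The client encrypts the DNS request so only hop 2 can read it.
    return real_ip, xor(dns_query.encode(), SECOND_HOP_KEY)

def first_hop(real_ip: str, enc_dns: bytes):
    # Hop 1 (Apple) sees the real IP but not the DNS query.
    # It substitutes an anonymous, region-level IP before forwarding.
    return "203.0.113.0/region", enc_dns

def second_hop(region_ip: str, enc_dns: bytes):
    # Hop 2 (a third party) sees the DNS query but only the region IP.
    return region_ip, xor(enc_dns, SECOND_HOP_KEY).decode()

seen_ip, query = second_hop(*first_hop(*client("198.51.100.7", "example.com")))
assert query == "example.com"        # hop 2 can resolve the site...
assert seen_ip != "198.51.100.7"     # ...but never learns who asked
```

The design property is that neither hop holds both identifying pieces: Apple sees who you are but not where you’re going, and the third party sees where you’re going but not who you are.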

Not available in China, of course — and also Belarus, Colombia, Egypt, Kazakhstan, Saudi Arabia, South Africa, Turkmenistan, Uganda, and the Philippines.

AWS Managed Services by Anchor 2021-05-27 07:02:18

Post Syndicated from Gerald Bachlmayr original https://www.anchor.com.au/blog/2021/05/death-by-nodevops/

The CEO of ‘Waterfall & Silo’ walks into the meeting room and asks his three internal advisors: “How are we progressing with our enterprise transformation towards DevOps, business agility and simplification?”

The well-prepared advisors, who had read at least a book and a half about organisational transformation and watched a considerable number of YouTube videos, confidently reply: “We are nearly there. We only need to get one more team on board. We have the first CI/CD pipelines established, and the containers are already up and running.”

Unfortunately the advisors overlooked some details.

Two weeks later, the CEO asks the same question, and this time the response is: “We only need to get two more teams on board, agree on common tooling and a delivery methodology, and relaunch our community of practice.”

A month later, an executive decision is made to go back to the previous processes, tooling and perceived ‘customer focus’.

Two years later, the business closes its doors whilst other competitors achieve record revenues.

What has gone wrong, and why does this happen so often?

To answer this question, let’s have a look… 

Why do you need to transform your business?

Without transforming your business, you will run the risk of falling behind because you are potentially: 

  1. Dealing with the drag of outdated processes and ways of working. Therefore your organisation cannot react swiftly to new business opportunities and changing market trends.
  2. Wasting a lot of time and money on Undifferentiated Heavy Lifting (UHL). These are tasks that don’t differentiate your business from others but can easily be done better, faster and cheaper by someone else, for example, providing cloud infrastructure. Every minute you spend on UHL distracts you from focusing on your customer.
  3. Not focusing enough on what your customers need. If you don’t have sufficient data insights or experiment with new customer features, you will probably mainly focus on your competition. That makes you a follower. Customer-focused organisations will figure out earlier what works for them and what doesn’t. They will take the lead. 

How do you get started?

The biggest enablers for your transformation are the people in your business. If they work together in a collaborative way, they can leverage synergies and coach each other. This will ultimately motivate them. Delivering customer value is like a team sport: the winner is not the team with the best player, but the team with the best strategy and overall performance.

How do we get there?

Establishing top-performing DevOps teams

Moving towards cross-functional DevOps teams, also called squads, helps to reduce manual hand-offs and waiting times in your delivery. It is also a very scalable model that is used by many modern organisations that put good customer experience at the forefront. This applies to a variety of industries, from financial services to retail and professional services. Squad members have different skills and work together towards a shared outcome. A top-performing squad that understands the business goals will not only figure out how to deliver effectively but also how to simplify the solution and reduce Undifferentiated Heavy Lifting. A mature DevOps team will always try out new ways to solve problems. The experimental aspect is crucial for continuous improvement, and it keeps the team excited. Regular feedback in the form of metrics and retrospectives will make it easier for the team to know that they are on the right track.

Understand your customer needs and value chain

There are different methodologies to identify customer needs. Amazon has the “working backwards from the customer” methodology to come up with new ideas, and Google has the “design sprint” methodology. Identifying your actual opportunities and understanding the landscape you are operating in are big challenges. It is easy to get lost in detail and head in the wrong direction. Getting the strategy right is only one aspect of the bigger picture. You also need to get the execution right, experiment with new approaches and establish strong feedback loops between execution and strategy. 

This brings us to the next point that describes how we link those two aspects.

A bidirectional governance approach

DevOps teams operate autonomously and figure out how to best work together within their scope. They do not necessarily know what capabilities are required across the business. Hence you will need a governing working group that has complete visibility of this. That way, you can leverage synergies organisation-wide and not just within a squad. It is important that this working group gets feedback from the individual squads who are closer to specific business domains. One size does not fit all, and for some edge cases, you might need different technologies or delivery approaches. A bidirectional feedback loop will make sure you can improve customer focus and execution across the business.

Key takeaways

Establishing a mature DevOps model is a journey, and it may take some time. Each organisation and industry deals with different challenges, and therefore the journey does not always look the same. It is important to continuously tweak the approach and measure progress to make sure the customer focus can improve.

But if you don’t start the DevOps journey, you could turn into another ‘Waterfall & Silo’.

The post appeared first on AWS Managed Services by Anchor.

AWS Managed Services by Anchor 2021-03-23 07:38:38

Post Syndicated from Gerald Bachlmayr original https://www.anchor.com.au/blog/2021/03/why-cloud-native-is-essential-for-your-2021-retail-strategy-and-how-to-get-started/

The retail market has changed a lot over the last few years, and COVID-19 is often referenced as the main driver for digital transformation and self-service offerings. Retail customers can easily compare products and customer feedback online via various comparison websites and search engines.

Customers interact with an e-commerce application that allows them to search for products, purchase them and stay updated on the delivery status. Customers do not care where the application is hosted or what the technology stack is. They care about things like usability, speed and features, and they want to interact with the application on different devices.

What is Cloud Native?

Cloud Native is an approach where your application leverages the advantages of the cloud computing delivery model. Cloud-native systems are designed to embrace rapid change, large scale, and resilience. With this approach you let AWS do the Undifferentiated Heavy Lifting and your team can focus on the actual application. For example, you can deploy your code to fully managed runtime environments that scale automatically and AWS manages all the operational aspects and security of those runtimes for you.

Why is Cloud Native a retail enabler?

Taking a customer centric view, you want to focus on the things that provide value to the customer. The most visible aspect of the retail solution is the actual application or service – not the IT infrastructure behind it. Therefore you want to make sure that your application keeps improving without wasting time and budget on things that can be commoditised.

Let’s look at an example: You run a coffee shop. You grind the beans so the coffee is fresh.  Your customers can then enjoy a great tasting experience. This is the ultimate business value that the customer can see. You would not generate the electricity yourself, as an energy provider does that in a much more efficient way.

This is exactly the same with all the underlying infrastructure of your retail application: AWS can manage this for you in a much more efficient, secure and cost effective way. AWS calls all those activities that do not differentiate your business from others ‘Undifferentiated Heavy Lifting’. By handing all those Undifferentiated Heavy Lifting activities over to AWS you can focus on the things that really matter to your customers – like good coffee!

How do you get started?

If you start from scratch then you have an easier journey ahead, because you can tap into all the cloud native offerings right from the beginning. For now we will assume that you already have an application and you want to move it to the cloud, leveraging the advantages of Cloud Native services. At the beginning of your journey, make sure you have answers to some typical discovery questions, such as:

  1. Understand your current state and pain points
    1. Time to market:
      Do you get new features out quickly and often enough? If not, what is causing those delays?
    2. Data insights and metrics:
      What insights do you need to understand what your customers want and how you can increase your conversion rate?
    3. Quality assurance and security:
      Are there sufficient quality checks in place before you release new features or product catalogue items? Do you have guardrails in place that protect your team from security breaches?
  2. Understand the Return on Investment of Cloud Native and why you want to migrate
    1. Lost opportunity:
      What is the impact of not moving to cloud native? For example, you will be slower in releasing new features than your competitors.
    2. Operational simplification:
      How can you focus more on your customer facing application when you remove the Undifferentiated Heavy Lifting?
    3. Business agility:
      Do you need geographic data isolation to meet regulatory requirements or do you need temporary environments for testing or demos?
  3. Are your ways of working aligned with where you want to be in the future?
    1. Internal collaboration:
      Is your internal communication structure future-proof? “Conway’s Law” describes how organisations design systems that mirror their own communication structure. This is one of many reasons why organisations move towards cross-functional delivery squads.
    2. Team hand-offs:
      Do you have many hand-offs during your software delivery life-cycle? This will slow down the process due to waiting times between team hand-offs and also potential communication gaps.
    3. Skills:
      Does your team have the required skills? By offloading the Undifferentiated Heavy Lifting to AWS the required skill set becomes narrower and your team can focus on training that is relevant for the application development and test automation.

How to expertly execute a Cloud Native approach

  1. Understand your strategy:
    1. Strategy:
      The strategy articulates why you want to achieve change and what principles will guide the target state.
    2. Target State:
      The target state describes where you eventually want to be. Words like ‘customer focus’ and ‘simplification’ should be at the forefront of your mind. Amazon’s “Working backwards from the customer” framework and the AWS Well-Architected Framework can help you here.
    3. Transitions:
      The transition states describe how to get to your target state. The transition states are individual architecture blueprints that describe your transformation stages.
  2. Build a roadmap
    1. Define a backlog:
      The backlog articulates the expected business outcomes, typically in the form of user stories that can be completed within a sprint (1-2 weeks). Good user stories also include acceptance criteria and test cases.
    2. Understand dependencies:
      The backlog is driven by business outcomes, but there will be some technical dependencies that dictate the order in which some activities need to be completed. Understanding those dependencies is important to make sure the team can be productive and does not have unnecessary wait times.
    3. Identify skill gaps and build a learning plan:
      Once you build your backlog you get a better understanding of the required skills. This helps you to plan for training courses and other learning initiatives.
  3. Build a governance framework
    1. Strategic guidelines:
      Having clearly articulated guidelines in place will help you speed up the decision process for any changes you perform. Make sure the required teams are represented in your governance working group so that you don’t miss any requirements or concerns.
    2. Align with best practices:
      There are lots of best practices that can be utilised rather than reinventing the wheel. The AWS Well-Architected Framework, for example, can help you with architecture guidelines and principles.
    3. Define how you measure success:
      You need to know what good looks like: what does a good customer experience look like and what are your milestones? What is the productivity, team happiness and customer satisfaction that you need as a successful and sustainable retail business? Agree on a set of metrics that you can compare against. You can gradually build up these metrics.
  4. Establish cross-functional teams (squads)
    1. Squads:
      A squad will have team members representing architecture, development, testing and technical business analysis. The goal is to establish an autonomous team that can tackle the user stories from the backlog. Depending on your organisation structure the squad will be represented by members from different business units.
    2. Ceremonies:
      Since the squad members can come from different business units, they might not have worked together before. Good team collaboration is therefore crucial, and agile ceremonies will help with that. Some of the key ceremonies are sprint planning, daily standups (maximum 15 minutes), a demo at the end of the sprint to show stakeholders the produced outputs, followed by a retrospective to get feedback from the team.
    3. Experiment:
      When you change your ways of working, it is easier to start small and pick an initiative that is not overly time-critical. This way you can start with a smaller team, establish short feedback loops and tweak the approach for your organisation. The insights from the retrospective will help you improve the process. Once you have established one successful squad, you can start rolling out the new process further.
  5. Measure your outcomes:
    1. Feedback from your team:
      Your team will provide feedback during the retrospective session at the end of each sprint. You can measure aspects like: how much did the team learn, and did they feel they delivered value? This gives you visibility of any trends, and of whether changes to the process result in better feedback.
    2. Feedback from the customer:
      There are several ways to measure this. Customer surveys are insightful if you ask the right questions. Statistics from your website will be very helpful for any retail organisation. You can measure things like average time on a page, bounce rate, exit rate and conversion rate. If you can link those numbers back to your releases and release changes, you can see which website updates change customer behaviour. Machine learning is another way to identify customer patterns and determine the sentiment of online chats or phone calls to a virtual call centre like Amazon Connect.
    3. Insights from your automation tools:
      Your automation tools can provide metrics such as the number of incidents, their criticality, the ratio of successful deployments, test coverage and many more. Once you capture those metrics, you can run historical comparisons and project trends. If you link incidents to releases, you will also get insights into the root cause of problems.
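Most of the metrics in the list above are simple ratios over event logs, which makes them cheap to start collecting. A minimal sketch of two of them, conversion rate from website sessions and deployment success ratio from pipeline runs (the field names are hypothetical, not from any particular analytics or CI tool):

```python
def conversion_rate(sessions: list) -> float:
    """Share of website sessions that ended in a purchase."""
    purchases = sum(1 for s in sessions if s["purchased"])
    return purchases / len(sessions)

def deployment_success_ratio(deployments: list) -> float:
    """Share of pipeline runs that deployed cleanly."""
    ok = sum(1 for d in deployments if d["status"] == "success")
    return ok / len(deployments)

# Hypothetical event logs exported from analytics and CI tooling:
sessions = [{"purchased": True}, {"purchased": False},
            {"purchased": False}, {"purchased": True}]
deploys = [{"status": "success"}, {"status": "failed"},
           {"status": "success"}, {"status": "success"}]

assert conversion_rate(sessions) == 0.5
assert deployment_success_ratio(deploys) == 0.75
```

Tracking these per release, as the text suggests, is just a matter of grouping the same events by a release identifier before computing the ratios.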

Key Cloud Native takeaways

Adopting Cloud Native is not just a technical challenge, it is a journey. If you want to turn it into a success story you need to consider the cultural changes and also a governance process that makes sure you are heading in the right direction. This can be complex and challenging when you haven’t done it before. The good news is that Anchor have championed it many times and we can help you on the journey.


AWS Managed Services by Anchor 2021-02-12 02:20:26

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2021/02/is-it-possible-to-downsize-it-staff-by-making-the-switch-to-aws/

If you’re an SMB or enterprise business with a sizable reliance on digital infrastructure, it is common to wonder whether moving your online services to a cloud provider could allow you to simplify your services, benefit from a network that is perceived to be infallible, and ultimately downsize your technical staff and slim down your IT spend.

Many businesses believe that without having to purchase their own server hardware, pay for data centre rackspace costs, or pay for quite so many staff members to manage it all, a significant amount of money can be saved on IT costs. However, while it is true that moving to the AWS cloud would reduce hardware and rackspace costs to nil, there are a number of new costs and challenges to consider.

Is the cloud actually cheaper?

Upon completing the migration from data centre hosting services to cloud hosting services, many businesses mistakenly believe that they will be able to lower their costs by downsizing on the number of IT staff they need to manage their technological infrastructure. This is not always the case. Cloud can require more extensive expertise to both set up and maintain on an ongoing basis as a trade-off for the other benefits offered.

AWS is a complex beast, and without proper support and planning, businesses can find their costs higher than they originally were, their services more complex and difficult to manage, and their online assets failing even more often. Wasted cloud spend is very common within the cloud services industry, with many cloud users not optimising costs where they can. A 2019 Flexera report measured actual cloud spend waste at 35 percent.
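That 35 percent figure turns into a quick sanity check against your own bill. A minimal sketch, with the default ratio taken from the Flexera finding and the example bill purely hypothetical:

```python
def wasted_spend(monthly_bill: float, waste_ratio: float = 0.35) -> float:
    """Estimate dollars lost each month to idle or oversized resources,
    defaulting to the industry-average waste ratio Flexera measured."""
    return round(monthly_bill * waste_ratio, 2)

# A hypothetical $10,000/month AWS bill at the industry-average rate:
assert wasted_spend(10_000) == 3500.0
```

Even if your own ratio is half the average, the output is the budget ceiling you have to work with for optimisation effort or management fees before they stop paying for themselves.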

Why is it not such a simple switch?

Cloud is fundamentally a different approach to hosting and provides more opportunity, scale and complexity. Understanding how to make the most of those requires a thorough assessment of your infrastructure and business. It is therefore important to ensure that the IT staff you do intend to retain are properly trained and qualified to manage cloud services.

Check out our blog, “What’s the difference between Traditional Hosting and AWS Cloud Hosting?” for more information on how the two environments greatly differ.

If your IT staff are not certified in AWS cloud management, you could be looking at higher costs than you started with. You would therefore need to factor in the costs of hiring new, properly qualified staff, or of upskilling existing staff, at the risk of losing that investment should the staff member leave in future.

The costs of qualified staff.

Certain types of AWS-certified professionals command some of the highest salaries in the cloud industry, due to the high level of expertise and capability they can provide to a business. AWS engineers can maintain the performance and security of high-demand websites and software, optimising them for lower cost and better performance.

Large enterprises conducting a high volume of online transactions, or businesses that handle sensitive data, are in particular need of high-quality architects and engineers to keep their cloud environments optimised, reliable and safe. Even as a small business, though, the building, deployment and operation of AWS services is always best conducted by experienced, AWS-certified professionals to ensure the integrity and availability of your online services.

A win-win alternative.

What many businesses have discovered is that there is an alternative to managing their own AWS infrastructure. AWS management service providers can act as the vital middleman between your business and your cloud services, ensuring your digital infrastructure is being set up, maintained and cost-optimised by industry-leading AWS professionals.

Oftentimes, the cost optimisations achieved by a high-quality AWS management service provider completely pay for themselves in what would otherwise be wasted spend. Check out our blog, “4 Important Considerations To Avoid Wasted Cloud Spend” to learn more about wasted cloud spend.

One of the most beneficial things an AWS management service provider can offer your business is ensuring that you’re only paying for what your business needs. It may save your business significantly more money in the long run, even when factoring in management fees.

If you’re interested in learning more about how managed AWS services can help your business to potentially slim down on IT spend, please contact our friendly AWS experts on 1300 883 979, or submit an enquiry through our website anytime.


Why businesses are migrating to AWS Managed Cloud Hosting during the COVID-19 pandemic

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2021/02/why-businesses-are-migrating-to-aws-managed-cloud-hosting-during-the-covid-19-pandemic/

COVID-19 has been an eye-opening experience for many of us. Prior to the current pandemic, many of us, as individuals, had never experienced the impacts of a global health crisis before. The same can very much be said for the business world. Quite simply, many businesses never considered it, nor had a plan in place to survive it.

As a consequence, we’ve seen the devastating toll that COVID-19 has had on some businesses and even entire sectors. According to an analysis by Oxford Economics, McKinsey and McKinsey Global Institute, certain sectors such as manufacturing, accommodation and food services, arts, entertainment and recreation, transportation and warehousing, and educational services will take at least five years to recover from the impact of COVID-19 and return to pre-pandemic contributions to GDP. There is one industry, however, that was impacted by the pandemic in the very opposite way: technology services.

The growth of our digital landscape

With many countries going into varying levels of lockdown, schools and workplaces shutting down their premises, and social distancing enforced in many facets of our new COVID-safe lives, our reliance on technology skyrocketed throughout 2020. In 2020, “buy online” searches increased by 50% over 2019, almost doubling during the first wave of the pandemic in March. Statistics from the recent Black Friday sales event give us a further staggering example of how much our lives have transitioned into the digital world.

In the US, Black Friday online searches increased by 34.13% this year. Even here in Australia, where there is significantly less tradition surrounding the Thanksgiving/Black Friday events, online searches for Black Friday still also increased by 34.39%. Globally, when you compare October 2019 to October 2020, online retail traffic numbers grew by a massive 30%, which accounts for billions of visitors.

Retail isn’t the only sector that now relies on the cloud far more heavily than ever before. Enterprises have also had to move even more of their operations into the cloud to accommodate the sudden need for remote working facilities. With lockdowns occurring all over the world for sometimes unknown lengths of time, businesses have had to quickly adapt to allow employees to continue their roles from their own homes. Likewise, the education sector is another that has had to adapt to providing its resources and services to students remotely. Cloud computing platforms, such as AWS, are the only viable platforms set up to handle such vast volumes of data while remaining both reliable and affordable.

Making the transition to online

With such clear growth in the digital sector, it makes sense that businesses who already existed online, or were quick to transition to an online presence at the start of the pandemic, have by and large had the best chance of surviving. In the early months of the pandemic, many bricks-and-mortar businesses returned to their innovative roots, finding ways to digitise and mobilise their products and services. Many in the hospitality industry, for example, had to quickly adapt to online ordering and delivery to stay afloat, while many other businesses and sectors transitioned in new and unexpected ways too.

What many of these businesses had in common was having to decide, somewhere along the way, how to get online quickly while being mindful of costs and long-term sustainability. When it comes to flexibility, availability and reliability, there really is no competitor to cloud computing for consistently delivering all three.

What is AWS Managed Cloud Hosting?

Amazon Web Services has become one of the world’s leading public cloud infrastructure providers, offering an abundance of products and services that can greatly assist you in bringing your business presence online.

AWS provides pay-as-you-go infrastructure that allows businesses to scale their IT resources with the business demand. Prior to the proliferation of cloud providers, businesses would turn to smaller localised companies, such as web hosts and software development agencies, to provide them with what they needed. Over recent years, things have greatly progressed as cloud services have become more expansive, integrated and able to cater to more complex business requirements than ever before.

When you first create an account with AWS and open up the console menu for the first time, the expansive nature of the services that they provide becomes very apparent.

Here, you’ll find all of the expected services, such as object storage (S3), database hosting (RDS), DNS hosting (Route 53) and computing (EC2). But it doesn’t stop there: other popular services include Lambda, Lightsail and VPC, creating an array of infrastructure options large enough to host any environment. At the time of writing, there are 161 different services on offer in the AWS Management Console, spread across 26 broader categories.

AWS Cloud Uptake during the Pandemic

Due to the flexible, scalable and highly reliable nature of AWS cloud hosting, the uptake of managed cloud services has continued to rise steadily throughout the pandemic. So far in 2020, AWS has experienced 29% growth, bringing its revenue up to a sizable $10.8bn.

With the help of an accredited and reputable AWS MSP (Managed Service Provider), businesses of all scales are able to digitise their operations quickly and cost-effectively. Whether you’re an SMB in the retail space who needs to provide a reliable platform for customers to find and purchase your goods, or an enterprise level business with thousands of staff members who rely on internal networks to perform their work remotely, AWS provides a vast array of services to cater to every need.

If you’re interested in finding out what AWS cloud hosting could do for your business, please don’t hesitate to get in touch with our expert team for a free consultation.

The post Why businesses are migrating to AWS Managed Cloud Hosting during the COVID-19 pandemic appeared first on AWS Managed Services by Anchor.

AWS Managed Services by Anchor 2021-02-12 01:52:57

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2021/02/25645/

The thought of downtime can bring a chill to the bones of any IT team. Depending on the online demand you have for your products or services, even an hour or two of downtime can result in significant financial losses or catastrophic consequences of various other kinds.

As such, avoiding downtime should be a high-priority item for any IT or Operations Manager. So, is the AWS cloud completely immune to downtime? We’ll discuss the various aspects of this question below.

The true cost of downtime

The true cost of downtime varies from business to business, but whether you’re an SMB or an enterprise, any business with critical services in the cloud should design those services from the ground up for high availability.

Gartner has reported the average cost of downtime to be $5,600 per minute. Because no two businesses are run or set up in exactly the same way, this varies considerably: the average works out to roughly $140,000 per hour at the low end and $300,000 per hour at the high end.

To further break down their findings, Gartner’s research showed that 98% of organisations experience costs of over $100,000 from a single hour of downtime. 81% of respondents said that 60 minutes of downtime costs their business in excess of $300,000. And 33% of enterprises found that one hour of downtime cost them anywhere between $1-5 million.
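To put those figures in context, the per-minute average above can be turned into a rough annual exposure estimate with a quick calculation. This is a minimal sketch: the per-minute rate is the Gartner average quoted above, while the expected outage duration is an illustrative assumption, not a benchmark.

```python
# Rough downtime-cost estimator based on the Gartner average quoted above.
# The outage durations below are illustrative assumptions only.

COST_PER_MINUTE = 5_600  # Gartner average, USD per minute of downtime

def downtime_cost(minutes: float, cost_per_minute: float = COST_PER_MINUTE) -> float:
    """Estimated cost of an outage of the given length, in USD."""
    return minutes * cost_per_minute

# One hour of downtime at the average rate:
hourly = downtime_cost(60)        # 336,000 USD
# Four hours of unplanned downtime across a year:
annual = downtime_cost(4 * 60)    # 1,344,000 USD
```

Even at the average rate, an hour of downtime lands squarely in the six-figure range the survey respondents reported, which is why high availability pays for its own planning effort.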

Some of the causes of such significant losses during and after a period of downtime include the following:

  • Loss of sales
  • Certain business-critical data can become corrupted, depending on the outage
  • Costs of reviewing and resolving systems issues and processes
  • Detrimental reputational effect with stakeholders and customers
  • A drop in employee morale
  • A reduction in employee productivity

The always-online cloud services fallacy

Many businesses have migrated to the cloud and assumed that high availability is all a part of the cloud package, and doesn’t require any further expertise, investigation or implementation – however, this is not the case. To ensure high availability and uptime of internal systems and tools, a business must plan for this during its initial implementation. Properly setting up a business AWS environment for high availability requires an in-depth understanding of all that AWS has to offer, which is where a business can benefit greatly from outsourcing to an MSP that specialises in AWS cloud services.

Your business could experience more downtime with AWS than with a traditional hosting service.

Many people are surprised to learn that simply migrating to the cloud doesn’t automatically mean that their services will effectively become bullet-proof. In fact, the opposite can often be true.

AWS cloud services are complex and require extensive experience and in-depth knowledge to properly manage. This means there is a far greater chance for error when AWS services are being configured by an inexperienced user, leaving the services more vulnerable to security threats or performance issues that could ultimately result in downtime.

However, on the other hand, when AWS cloud services have been properly planned and configured from the ground up by certified professionals, the cloud can offer significantly greater availability and protection from downtime than traditional hosting services.

High Availability, Redundancy and Backups

‘High Availability’ is a term often attributed to cloud services, and refers to serving your website or application from multiple geographical regions (as opposed to end-users always relaying requests back to a single server in one location). Because of the dynamic, data-replicating nature of the cloud, some businesses mistake high availability as including redundancy and backups.


High availability can provide redundancy in the sense that should one geographical access point suffer an outage, another can automatically step in to serve an end-user’s request. However, it does not mean that your website or application does not still require an effective backup and disaster recovery plan.


Should something go wrong with your cloud services, or certain parts of your environment become unavailable, you will need to rely on your own plan for replication or recovery. AWS offers a range of options to cater to this, and these should always be carefully considered and implemented during the planning and building phases.
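The failover side of high availability described above can be sketched in a few lines: traffic goes to the highest-priority endpoint that is currently healthy, and falls back to a secondary when the primary is down. The region names and health data below are illustrative assumptions; in a real AWS environment this routing is typically handled by managed services such as Route 53 health checks rather than hand-rolled code.

```python
# Minimal sketch of priority-based failover: serve from the primary region
# while it is healthy, fall back to a secondary when it is not.
# Endpoint addresses and health flags are illustrative assumptions.

def pick_endpoint(endpoints: list) -> str:
    """Return the first healthy endpoint, in priority order."""
    for ep in endpoints:
        if ep["healthy"]:
            return ep["address"]
    # No region is healthy - this is where a disaster recovery plan kicks in.
    raise RuntimeError("No healthy endpoints available")

endpoints = [
    {"address": "ap-southeast-2.example.com", "healthy": False},  # primary is down
    {"address": "us-east-1.example.com", "healthy": True},        # secondary
]

print(pick_endpoint(endpoints))  # the secondary automatically takes over
```

Note the final branch: if every region is unavailable, no amount of failover logic helps, which is exactly why the backup and disaster recovery planning discussed above still matters.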

How can you best protect your business from downtime?

So, to answer the question “Are AWS cloud services immune to downtime?”: no, as with any form of technology. At this time, no technology can truly claim to be entirely failsafe. However, AWS cloud services can get your business as close to failsafe as it is possible to get – if it’s done right.

For businesses that are serious about ensuring their online operations are available as much as possible, such as those involved in providing critical care, high demand eCommerce environments, or enterprise-level tools and systems, it’s essential to have your cloud services designed by a team of certified AWS professionals who have the correct credentials and expertise. If you’re interested in discussing this further, please don’t hesitate to get in touch with our expert team for a free consultation.


The post appeared first on AWS Managed Services by Anchor.

AWS Managed Services by Anchor 2021-01-13 03:31:09

Post Syndicated from Douglas Chang original https://www.anchor.com.au/blog/2021/01/25624/

If you’re an IT Manager or Operations Manager who has considered moving your company’s online assets into the AWS cloud, you may have started by wondering, what is it truly going to involve?

One of the first decisions you will need to make is whether you are going to approach the cloud with the assistance of an AWS managed service provider (AWS MSP), or whether you intend to fully self-manage.

Whether or not a fully managed service is the right option for you comes down to two pivotal questions:

  1. Do you have the technical expertise required to properly deploy and maintain AWS cloud services?
  2. Do you, or your team, have the time/capacity to take this on – not just right now, but in an ongoing capacity too?

Below, we’ll briefly cover some of the considerations you’ll need to make when choosing between fully managed AWS Cloud Services and Self-Managed AWS Cloud Services.

Self-Managed AWS Services

Why outsource the management of your AWS when you can train your own in-house staff to do it?

With self-managed AWS services, you’re responsible for every aspect of the service from start to finish. Managing your own services allows for the benefit of ultimate control, which may be beneficial if you require very specific deployment conditions or software versions to run your applications. It can also allow you to very gradually test your applications within their new infrastructure, and learn as you go.

This will result in knowing how to manage and control your own services at a closer level, but it comes with the downside of a very steep learning curve and time investment if you have never entered the cloud environment before. In the context of a business or corporate environment, you’d also need to ensure that multiple staff members go through this process, to provide redundancy for staff availability and turnover. In either case, you’d also need to invest in continuous development to keep up with the latest best practices and security protocols, because the cloud, like any technical landscape, is fast-paced and ever-changing.

This can end up being a significant investment in training and staff development. As employees are never guaranteed to stay, there is the risk of that investment, or at least substantial portions of it, disappearing at some point.

At the time of writing, there are 450 items in the AWS learning library, for those looking to self-learn. In terms of taking exams to obtain official accreditation, AWS offers 3 levels of certification at present, starting with Foundational, through to Associate, and finally, Professional. To reach the Professional level, AWS requires “Two years of comprehensive experience designing, operating, and troubleshooting solutions using the AWS Cloud”.

Fully Managed AWS Services

Hand the reins over to accredited professionals.

Fully-managed AWS services mean you’ll reap all of the extensive benefits of moving your online infrastructure into the cloud, without taking on the responsibility of setting up or maintaining those services.

You will hand over the stress of managing backups, high availability, software versions, patches, fixes, dependencies, cost optimisation, network infrastructure, security, and various other aspects of keeping your cloud services secure and cost-effective. You won’t need to spend anything on staff training or development, and there is no risk of losing control of your services when internal staff come and go. Essentially, you will be handing the reins over to a team of experts who have already obtained their AWS certifications at the highest level, with collective decades of experience in all manner of business operations and requirements.

The main risk here lies in choosing the right place to outsource your AWS management. When choosing to outsource AWS cloud management, you’ll want to be sure the AWS partner you choose offers the level of support you are going to require, as well as holding all relevant certifications. When partnered with the right AWS MSP team, you’ll also often find that the management fees pay for themselves due to the greater level of AWS cost optimisation that can be achieved by seasoned professionals.

If you’re interested in finding out what professional AWS cloud management would cost your business, or in discussing how your business operations could be improved or revolutionised through the AWS cloud platform, please don’t hesitate to get in touch with our expert team for a free consultation. Our expert team can conduct a thorough assessment of your current infrastructure and business, and provide you with a report on how your business can specifically benefit from a migration to the AWS cloud platform.

The post appeared first on AWS Managed Services by Anchor.

Why AWS Dominates The Cloud Services Marketplace

Post Syndicated from Laurent Harvey original https://www.anchor.com.au/blog/2020/11/why-aws-dominates-the-cloud-services-marketplace/

Year after year, AWS maintains a very significant lead in the cloud marketplace over its closest competitors, including Microsoft Azure, Google Cloud Platform, as well as a number of other smaller cloud providers.

According to recent research published by Synergy Research Group, Amazon has 33% of the cloud infrastructure market share, with Microsoft trailing behind at 18%, and Google sitting at 9%.

So why has Amazon always been the leader of the pack when it comes to the major cloud service providers? The reasons for AWS’ significant lead may be simpler than you would first think. Below, we go into just a few of the many reasons AWS has maintained such a dominant lead since its inception.

It’s Been Around The Longest

In any race, one of the most valuable things you can have is a head start. Amazon launched Amazon Web Services (AWS) back in 2006 and began offering its initial cloud computing services, primarily to developers. These initial offerings included Amazon S3 cloud storage, SQS, and EC2.

Google and Microsoft had dabbled in their own cloud offerings at the time, but didn’t put the same resources into them that Amazon did. Google launched its Cloud Platform in 2008, and Microsoft Azure followed in 2010. However, neither Google nor Microsoft invested the same amount of resources early on. As a result, Amazon was able to establish a firm lead in the cloud services market, and other providers have been fighting a never-ending battle of catch-up ever since.

Constant Innovation

Although we can attribute a lot of AWS’ success to their early foothold, they wouldn’t be able to maintain such a significant market share on that alone. Since the beginning, Amazon has continually innovated year after year.

Since 2006, Amazon have greatly increased their service offerings and created many innovative services. In fact, at the time of writing, AWS offers an astounding 175 individual products and services to consumers. Many of these services are original Amazon innovations. You would be hard-pressed to find a task you can’t accomplish with one of Amazon’s many services, and they’re only adding more and more to their catalogue each year. We expect to see a specific focus on Artificial Intelligence Services from Amazon in the next few years, as it’s one of the fastest-growing areas of cloud computing.

Price Cuts

One of the biggest reasons AWS not only stays incredibly competitive but continues to lead the market is its constant effort to reduce consumer costs. In fact, research published in 2018 by TSO Logic found that AWS costs get lower every year. AWS has no problem maintaining its existing customer base with ever-diminishing prices, while also attracting new customers. Plus, the larger AWS gets, the more it can achieve even higher economies of scale, passing further price cuts on to its customers.

In Amazon’s own words, they state the following on their website:

“Benefit from massive economies of scale – By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay as-you-go prices.”

Backed by Amazon

With the full long-term backing of Amazon, which ranks #3 worldwide among all public corporations by market capitalisation, AWS is quite simply a juggernaut of resources and capability. At the time of writing, Amazon has an estimated net value of $1.14 trillion. Amazon’s founder, Jeff Bezos, has an estimated worth of $190 billion.

With these kinds of numbers, Amazon are of course a formidable opponent for any newcomers to the cloud services marketplace. They also don’t look to be slowing down anytime soon in terms of their vision for the future and upcoming technological innovations.


AWS provides a platform which is ever-evolving and constantly becoming more financially accessible for businesses of all sizes. With new offerings, technology, features and opportunities for performance improvements every year, AWS provides a solid and proven platform for businesses who are looking to bring their business into the cloud.

If you think your business may benefit from taking advantage of AWS’ huge range of services, get in touch with us today for an expert consultation on how we can assist you in your journey to the cloud.

The post Why AWS Dominates The Cloud Services Marketplace appeared first on AWS Managed Services by Anchor.

3 Common Problems Businesses Face Which Can Be Solved By AWS Cloud.

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/11/3-common-problems-businesses-face-which-can-be-solved-by-aws-cloud/

Business leaders know all too well the long list of challenges involved in taking any business to the next level. Cash flow, human resources, infrastructure, growing marketing spend, refining or expanding on processes, and company culture are just a few of the many considerations. One particularly important challenge is choosing the right software and tools early on, to allow your business to provide its products or services efficiently and cost-effectively.

One of the greatest ways to achieve reliable and harmonious business practices is to ensure the technological infrastructure that your business is built upon is carefully planned to not only cater to your immediate business needs but also to be flexible for future growth.

Cloud computing services are more popular than ever before, and even in the face of the COVID-19 pandemic, have continued to grow just as steadily. Below, we’ve outlined three common business problems that are solved by migration to AWS cloud. If you’ve been considering the potential advantages of AWS for your business, read on!

Common problem: Convoluted/expensive/unnecessary services within pre-packaged traditional hosting plans.

With traditional hosting services, products tend to be pre-packaged with a selection of commonly required services and tiered/set resources. As a business grows and requires more heavy-duty online infrastructure, the cost of pre-packaged services can become much more expensive than it needs to be, because you may not be using some of the inclusions, or may need less of one resource and more of another. Pre-packaged pricing also generally factors in the cost of software licences needed to deliver all of the inclusions offered. If you’re not using these services, why should you be paying for them?

How AWS cloud computing solves this: With AWS cloud hosting, each individual service is billed separately, and charges are based on different metrics, such as the number of hours or seconds a service is online, or how much data transfer takes place. This gives a business very granular control over where it directs its spend, as well as the ability to avoid paying for service inclusions that it is simply not using.
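The usage-based billing model described above can be illustrated with a toy estimator: compute is billed by the hour and data transfer by the gigabyte, and nothing else is charged. The rates below are made-up placeholders for illustration, not real AWS prices; always check the AWS pricing pages for actual figures.

```python
# Toy illustration of per-service, usage-based billing.
# The hourly and per-GB rates are placeholder assumptions, not AWS prices.

def monthly_cost(instance_hours: float, rate_per_hour: float,
                 gb_transferred: float, rate_per_gb: float) -> float:
    """Bill compute by the hour and data transfer by the gigabyte."""
    return instance_hours * rate_per_hour + gb_transferred * rate_per_gb

# An instance running ~12 hours a day over a 30-day month:
cost = monthly_cost(
    instance_hours=12 * 30,   # 360 hours - you pay nothing for the other 360
    rate_per_hour=0.05,       # placeholder rate
    gb_transferred=200,
    rate_per_gb=0.09,         # placeholder rate
)
# 360 * 0.05 + 200 * 0.09 = 36.0
```

The point of the sketch is the shape of the bill: halve the hours the service runs and the compute line item halves with it, which is precisely what pre-packaged hosting plans cannot offer.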

Common problem: Cost creep over time.

Cost creep is a common problem both in traditional hosting services and cloud computing services. As your business grows and evolves, your online infrastructure may need access to more services, features or integrations, as well as more computing resources. Each of these things almost always comes with an additional cost.

How AWS cloud computing solves this: Between traditional hosting and cloud computing, cloud is the only one that offers a plethora of ways to prevent, monitor and even reverse cost creep over time. Cost creep is a common occurrence for many businesses, especially in the early deployment stages, when traffic is least predictable and resource requirements are difficult to allocate in advance. This improves greatly over time as usage data becomes available, along with the traffic and resource usage patterns of your website or application. With proper maintenance and the use of AWS reserved instances (which can provide the same resources at a greatly reduced cost), there are many opportunities to minimise, and even reverse, cost creep over time with cloud services.
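To see how reserved instances can claw back cost creep, compare the annual cost of an always-on instance at an on-demand rate against a lower reserved rate for the same capacity. Both rates below are illustrative placeholders, not real AWS prices; the real discount depends on instance type, term and payment option.

```python
# Sketch of the reserved-instance saving mentioned above.
# Both hourly rates are placeholder assumptions, not AWS prices.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_cost(rate_per_hour: float, hours: int = HOURS_PER_YEAR) -> float:
    """Yearly cost of an instance running continuously at the given rate."""
    return rate_per_hour * hours

on_demand = annual_cost(0.10)  # placeholder on-demand rate
reserved = annual_cost(0.06)   # placeholder effective reserved rate
saving = on_demand - reserved  # roughly 350 USD/year in this toy example
```

For a steadily-running workload, the reservation is a straightforward win; the expertise lies in knowing which workloads are steady enough to commit to, which is where usage data and proper management come in.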

Common problem: Infrastructure that lacks horizontal scaling.

Horizontal scaling means adding or removing computing resources as demand changes, and only paying for them while you are actually using them. For example, say you were running a food delivery application that requires a lot of computing resources to handle traffic during the lunch and dinner rush. A computing instance powerful enough to handle the rush hour would be an expensive waste of resources when business is quiet at 4 am. This is where horizontal scaling comes in, maximising efficiency by adding and removing computing power as needed.

Traditional hosting services rarely offer horizontal scalability, meaning you will be overpaying for resources or services that you aren’t utilising a lot of the time.

How AWS cloud computing solves this: AWS offers powerful options for horizontally scaling computing power on demand. Scaling horizontally means adding computing instances to support an application running on an existing instance, as needed.

One of the greatest advantages of cloud computing services such as AWS is that they are billed by the amount of time you are using them, so horizontal scaling translates directly into cost efficiency: resources are added and removed as demand changes, and you only pay for them while they are running.
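The lunch-rush example above can be sketched as a simple capacity calculation: divide the incoming request rate by what one instance can handle, round up, and clamp the result between a floor and a ceiling. The request rates and per-instance capacity are illustrative assumptions; in practice an AWS auto-scaling policy would adjust capacity against real metrics.

```python
# Sketch of demand-driven horizontal scaling for the food-delivery example.
# Request rates and per-instance capacity are illustrative assumptions.

import math

def desired_instances(requests_per_sec: float, capacity_per_instance: float,
                      minimum: int = 1, maximum: int = 20) -> int:
    """How many instances the current load needs, within sane bounds."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(minimum, min(maximum, needed))

print(desired_instances(900, 100))  # lunch rush: scales out to 9 instances
print(desired_instances(30, 100))   # 4 am lull: scales in to the minimum of 1
```

Because billing is time-based, the eight instances that exist only during the rush cost nothing for the rest of the day, which is the cost efficiency the section above describes.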

The post 3 Common Problems Businesses Face Which Can Be Solved By AWS Cloud. appeared first on AWS Managed Services by Anchor.

4 Important Considerations To Avoid Wasted Cloud Spend

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/11/4-important-considerations-to-avoid-wasted-cloud-spend/

Growth for cloud services is at an all-time high in 2020, partly due to the COVID-19 pandemic and businesses scrambling to migrate to the cloud as soon as possible. But with that record growth, wasted spend on unnecessary or unoptimised cloud usage is also at an all-time high.


Wasted cloud spend generally boils down to paying for services or resources that you aren’t using. It is most commonly attributable to services that aren’t being used at all in either the development or production stages, services that are often idle (not being used 80-90% of the time), or simply over-provisioned resources (more resources than necessary).


Wasted cloud spend is expected to reach as high as $17.6 billion in 2020. In a 2019 report, Flexera measured the actual waste of cloud spending at 35 percent of all cloud services revenue. This highlights how much money a business can save by having an experienced and dedicated AWS management team looking after its cloud services. In many cases, having the right team managing your cloud services can more than repay any associated management costs. Read on below for some further insight into the most common pitfalls of wasted cloud spending.

Lack Of Research, Skills and/or Management

A lack of proper research, skills or management during a migration to cloud services is probably the most frequent and costly pitfall. Without AWS cloud migration best practices and a comprehensive strategy in place, businesses may dive into setting up their services without realising how steep the initial learning curve can be to manage their cloud spend sufficiently. It’s common for not just businesses, but anyone first experimenting with the cloud, to see a bill that’s much higher than they anticipated. This can lead a business to believe the cloud is significantly more expensive than it really needs to be.


It’s absolutely crucial to have a strategy in place for all potential usage situations, so that you don’t end up paying much more than you should. This is something that a managed cloud provider can expertly design for you, to ensure that you’re only paying for exactly what you need and potentially quite drastically reducing your spend over time.

Unused Or Unnecessary Snapshots

Snapshots create a point-in-time backup of your AWS services. Each snapshot contains all of the information needed to restore your data to the point when the snapshot was taken. This is an incredibly important and useful tool when managed correctly. However, it’s also one of the biggest sources of wasted AWS cloud spend.


Charges for snapshots are based on the amount of data stored, and each snapshot increases the amount of data you’re storing. Many users take and store a large number of snapshots and never delete them when they’re no longer needed, often without realising how much this is inflating their cloud spend.
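A basic retention check goes a long way here: flag any snapshot older than an agreed window for review and deletion. The snapshot records below are illustrative assumptions; in practice you would pull the real list from your EBS snapshot inventory, and review flagged snapshots carefully before deleting anything.

```python
# Sketch of a snapshot retention check: find snapshots older than the
# retention window. Snapshot IDs and dates are illustrative assumptions.

from datetime import datetime, timedelta

def expired_snapshots(snapshots: list, retention_days: int, now: datetime) -> list:
    """Return the IDs of snapshots created before the retention cutoff."""
    cutoff = now - timedelta(days=retention_days)
    return [s["id"] for s in snapshots if s["created"] < cutoff]

now = datetime(2020, 11, 1)
snapshots = [
    {"id": "snap-old", "created": datetime(2020, 1, 15)},   # long forgotten
    {"id": "snap-new", "created": datetime(2020, 10, 25)},  # still in window
]

print(expired_snapshots(snapshots, retention_days=30, now=now))  # ['snap-old']
```

Running a check like this on a schedule, with the results reviewed rather than auto-deleted, keeps snapshot storage costs proportional to what you actually need to restore.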

Idle Resources

Idle resources account for another of the largest parts of cloud waste. Idle resources are resources that aren’t being used for anything, yet you’re still paying for them. They can be useful in the event of a resource spike, but for the most part may not be worth paying for when you look at your average usage over a period of time. A good analogy would be paying rent on a holiday home all year round when you only spend two weeks there every Christmas. This is where horizontal scaling comes into play: when set up by skilled AWS experts, it can turn services and resources on or off depending on when they are actually needed.
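One simple way to surface idle resources is to compare average utilisation over a period against a small threshold. The utilisation samples and the 5% threshold below are illustrative assumptions; real numbers would come from your monitoring metrics, and a flagged instance warrants investigation rather than automatic shutdown.

```python
# Sketch of idle-resource detection: an instance whose average CPU stays
# under a small threshold is a candidate for switching off or downsizing.
# The samples and threshold are illustrative assumptions.

def is_idle(cpu_samples: list, threshold_pct: float = 5.0) -> bool:
    """True when average CPU utilisation over the period is below the threshold."""
    return sum(cpu_samples) / len(cpu_samples) < threshold_pct

print(is_idle([1.2, 0.8, 2.5, 1.1]))  # True - paying rent on the holiday home
print(is_idle([40.0, 55.0, 62.0]))    # False - doing real work
```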

Over-Provisioned Services

This issue ties in with idle resources, as seen above. Over-provisioned services refers to paying for entire instances that are not in use at all, or only minimally. This could be an Amazon RDS service for a database that’s not in use, an Amazon EC2 instance that’s idle 100% of the time, or any number of other services. It’s important to have a cloud strategy that involves frequently auditing which services your business is and isn’t using, in order to minimise your cloud spend as much as possible.


As you can see from the Flexera statistics above, wasted cloud spend is one of the most significant problems facing businesses that have migrated to the cloud. But with the right team of experts in place, wasted spend can easily be avoided, and can even offset management costs, leaving you in a far better position in terms of service performance, reliability and support, and overall costs.

The post 4 Important Considerations To Avoid Wasted Cloud Spend appeared first on AWS Managed Services by Anchor.

Why is it vital for businesses to outsource AWS management?

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/10/why-is-it-vital-for-businesses-to-outsource-aws-management/

Some of us may have learnt during 2020 that there are simply some things that one cannot DIY without proper skills and expertise. Perhaps during the pandemic lockdown, your local hairdresser was closed, and you turned to a DIY YouTube tutorial and learnt this the hardest way of all. But even if you survived 2020 without a fringe 2 inches too short, managing AWS services is a whole other ball game, requiring years of training and dedicated skill to deploy and manage services properly and keep expenses under control.

As powerful as AWS is, and as much as it can do for your business, it can be all but impossible to do it right if you have never set foot in the AWS Management Console before. AWS is complex, and requires expertise to truly get the most from it. While you may be able to perform basic provisioning tasks and perhaps get a service up and running, ensuring that the service performs optimally and cost-efficiently is where professional AWS management can truly revolutionise your infrastructure strategy and budget.

Managed AWS services are one of the largest outsourced areas of the IT industry. According to a recent Gartner forecast, almost 19% of cloud budgets are spent on cloud management-related services, such as cloud consulting, implementation, migration and managed services – with this percentage expected to increase over the next few years (and for good reason). In this article, we delve into just a few of the reasons why you’re far better off putting your AWS management in the hands of experts.

Cost Savings

Wasted cloud spend is a very common occurrence within the cloud services industry, with many cloud users not optimising costs where they can. In a 2019 report from Flexera, they measured the actual waste of cloud spending at 35 percent.

One of the most beneficial things an AWS management service provider can offer your business is ensuring that you’re only paying for what your business needs. It may save your business significantly more money in the long run, even when factoring in management fees.

Free Up Your Time

You should focus on what you and your business do best. Sure, you could put in many hours to understand as much as possible and get up and running yourself, but many businesses find that time is much better spent focusing on core service offerings and leaving management to a managed service provider.

On top of the initial learning curve, there is also the time investment needed for ongoing training as new AWS cloud services are released and new management tools are developed. Best practice changes very frequently, and it can be a significant undertaking to try and keep your finger on the pulse while simultaneously trying to handle every other area of your business.

Proactive Management

Ensuring that your business leverages AWS’ ability to scale and adjust depending on your current needs is essential. An AWS partner and managed service provider can help you understand your business’s needs, and adjust course as necessary to meet each new scenario.

A good example of scaling to meet current needs is the COVID-19 pandemic. The cloud services industry has seen significant growth during 2020 due to its ability to rapidly scale and support sudden growth. For example, web traffic and bandwidth requirements skyrocketed in 2020 with more people turning to eCommerce to acquire their everyday household items as well as remotely attend school and work.

Avoiding Downtime and Increasing Stability

Any number of things can happen to your hosted services, and when they do, it’s crucial that you have an experienced team on hand to tackle whatever comes your way. There’s nothing worse than hosting your mission-critical services on AWS and not having the experience to get services up and running as soon as possible when things go wrong.

A qualified AWS management team will also put best practice measures into place to improve the resilience of your configuration, and minimise the chance of anything going wrong in the first place.


When deciding what is the best course of action for your business, it’s imperative to ensure that your mission-critical cloud services are in good hands. In many cases, having AWS experts handle your business’s cloud needs more than repays the associated management fees, leaving you better off in terms of both support and costs.

If you’re looking for advice on AWS cloud migration best practices, don’t hesitate to get in touch with one of our expert cloud architects today.

The post Why is it vital for businesses to outsource AWS management? appeared first on AWS Managed Services by Anchor.

Cloud Services VS COVID-19: How has the pandemic affected the Cloud Hosting industry?

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/10/cloud-services-vs-covid-19-how-has-the-pandemic-affected-the-cloud-hosting-industry/

2020 has surely been a questionable year for the human race. An unexpected hail storm, if you will. But for the cloud services market? According to recent statistics, not a cloud in the sky.

Despite the unprecedented chaos that has befallen many facets of our daily lives during the COVID-19 pandemic, the world’s cloud spending has continued to significantly and consistently increase year-on-year. When it comes to the growth of cloud services, 2020 is no exception.

Ultimately, this makes sense. Demand for, and reliance on, digital services have greatly increased this year as many businesses hurry to transform their bricks and mortar presences into digital income streams, in an attempt to survive such uncertain times. Grocery and home goods purchases, learning, working and even many social activities are now conducted primarily online as we fight the global challenges of COVID-19.

According to recent research published by Synergy Research Group, cloud spending passed $30 billion in the second quarter of 2020, an increase of $7.5 billion when compared to the same time last year. In terms of region-specific growth, cloud services have continued to grow steadily all around the world, seemingly regardless of how each region has uniquely been affected by the pandemic.

Of the largest Cloud service providers, AWS has continued to maintain a dominant lead in market share, steadily towering over Google and Microsoft’s cloud service offerings. At the time of writing, the market share is dispersed approximately as follows:

Amazon AWS: 33%

Microsoft Azure: 18%

Google Cloud Platform: 9%

Some of the smaller providers that trail this list include Alibaba Cloud, IBM, Salesforce, Tencent and Oracle.

Cloud infrastructure market share figures, including IaaS, PaaS, and Hosted Private Cloud: Q2 2020

For those of you who may have a penchant for the numbers, we’ve delved a little further into the figures for Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and hosted private cloud in 2020.

Public IaaS and PaaS services have maintained the majority of the market share, which grew by 34% in Q2. The lead that Amazon, Google, Microsoft Azure, Alibaba, and IBM hold over their competitors is even more significant in public cloud, where they control close to 80% of the market combined.

Regarding their findings, chief analyst at Synergy Research, John Dinsdale, had this to say:

“As far as cloud market numbers go, it’s almost as if there were no COVID-19 pandemic raging around the world. As enterprises struggle to adapt to new norms, the advantages of public cloud are amplified.” 

“The percentage growth rate is coming down, as it must when a market reaches enormous scale, but the incremental growth in absolute dollar terms remains truly impressive. The market remains on track to grow by well over 30% in 2020.”

As their findings indicate, the global pandemic certainly hasn’t slowed down the growth rate of Cloud services. With the pressure of more and more companies being forced online, or to shift their entire organisation to being able to work from home, the need to migrate to the cloud is clearly far greater than ever before.

John Dinsdale also had this to say:

“If anything, the pandemic has helped to highlight some of the main benefits of public cloud.”

Chief among those benefits is the flexibility and scalability that cloud hosting offers. In a time where millions of workers have had to change the way they conduct their daily lives, and companies have had to quickly shift resources, the ability to be able to scale horizontally or vertically to accommodate for that shift in lifestyle is more important than ever. Where other industries have been devastated by COVID-19, the cloud services industry has proven to be more durable, largely due to its capacity to assist companies in tackling the numerous challenges that a pandemic inflicts.

Ready to tap in?

If you have been considering a switch to the cloud, or believe that your business could benefit from additional scalability, flexibility and durability during the new era of heightened online commerce, we strongly suggest consulting with a certified AWS partner, to make your move as smooth, secure and cost-efficient as possible.

Cloud services have proven to be one of the most resilient (and thriving) industries in a COVID-19 world. This can also be said of most companies who have utilised them to bring their businesses up-to-speed and online. If you’d like to tap into this success for your own business, get in touch with one of our AWS-qualified experts today to learn how we can assist you.

The post Cloud Services VS COVID-19: How has the pandemic affected the Cloud Hosting industry? appeared first on AWS Managed Services by Anchor.

Anchor Joins AWS Service Delivery Program: Amazon EC2 for Windows Server

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/10/anchor-joins-aws-service-delivery-program-amazon-ec2-for-windows-server/

We are excited to announce that Anchor has joined the Amazon EC2 for Windows Service Delivery Program (SDP). This new SDP classification complements an expanding portfolio of AWS certifications, substantiating our commitment to both our AWS partnership and enabling the cloud for Australian businesses.

The AWS Service Delivery Program acknowledges select partners within the AWS Partner Network (APN) who have demonstrated technical proficiency across specialised solution areas. Achieving SDP status involves a stringent validation process to certify a deep understanding of, and adherence to, AWS architectural best practices. 

Amazon EC2 for Windows Server Partners are certified for delivering Windows Server environments on Amazon Elastic Compute Cloud (Amazon EC2). They are recommended by AWS for managing secure, reliable, and high-performance environments for deploying Windows-based applications and workloads. 

As part of the validation process for this competency, Anchor was required to demonstrate proven customer success through real customer engagement to validate that we had the technical proficiency and resources to help customers migrate, manage, or deploy Microsoft Workloads to AWS.

This AWS SDP reinforces Anchor’s expertise in helping businesses both modernise and future-proof legacy Windows applications by replatforming onto the AWS Cloud.

“Our new SDP status for Amazon EC2 for Windows provides our customers with the highest level of confidence in our technical aptitude and alignment to AWS best practices.” – Josh Chiswell, Director of Architecture and Professional Services, APAC, Anchor Systems.

This proficiency also provides exclusive access to service-specific funding programs, which partners can pass on to customers. If your business is looking to modernise or replatform old Windows workloads, contact our cloud consultants for a complimentary cloud assessment.


Why Anchor? We exist to help SMBs and emerging enterprises who need managed AWS and cloud engineering services. Anchor enables the cloud by deeply engaging with your business. We architect, deploy, run and optimise cloud workloads and advocate for cloud best practices. Anchor’s team of certified engineers can support your workloads in three different time zones with 24x7x365 coverage from Sydney.

The post Anchor Joins AWS Service Delivery Program: Amazon EC2 for Windows Server appeared first on AWS Managed Services by Anchor.

Is your cloud hosting backup plan ready for 2021?

Post Syndicated from Ross Krumbeck original https://www.anchor.com.au/blog/2020/10/is-your-cloud-hosting-backup-plan-ready-for-2021/

Most businesses have a whole array of backup plans in place. A backup plan for when staff members call in sick, a backup plan for recouping damaged or lost stock, a backup plan for emergency expenses… but what about a backup plan for their cloud hosting services?


All too often, backing up one’s website or application, particularly when housed on cloud hosting, is a task left a little more neglected than it should be. This can be due to it being put in the “too hard” basket, or the “something to eventually get around to” basket, or simply from being overlooked due to a business believing they have a plan in place but never actually testing that it works. Staff turnover can also play a part: if the staff member or team who initially set up your backup plan has since moved on, it can be a chaotic experience to unravel the plan when disaster suddenly strikes.


In particular, if a business conducts significant online trade, unexpected downtime of their websites or applications can mean heavy losses. In 2015, one of Anchor’s larger e-commerce customers transacted more than $100 million in revenue through their Magento store. Crunching those numbers means a single hour of downtime equals a potential revenue loss of around $11,415. It should really go without saying that if a website or application is a critical part of a business’s income stream, they should be taking every precaution to guard against outages in the same way they protect themselves against any other challenges.
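That back-of-the-envelope figure is easy to verify, assuming revenue is spread evenly across the year:

```python
# Rough downtime cost from the figures above: ~$100M in annual revenue
# through a single store, assuming revenue is spread evenly across the year.
ANNUAL_REVENUE = 100_000_000  # dollars
HOURS_PER_YEAR = 365 * 24     # 8,760 hours

loss_per_hour = ANNUAL_REVENUE / HOURS_PER_YEAR
print(f"${int(loss_per_hour):,} per hour of downtime")  # $11,415 per hour
```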


We must always keep in mind that no technology is completely fail-proof. Even cloud services are not exempt from experiencing occasional outages and unavoidable technical challenges, especially if not regularly maintained and managed by AWS-qualified professionals.


As we continue to wade our way through 2020, reliance on the digital world has become far heavier and more demanding than ever before. According to research published by Synergy Research Group, spending on cloud services has continued to rise during the pandemic, passing $30 billion in the second quarter of 2020 – a massive increase of $7.5 billion when compared to the second quarter of 2019. As more and more businesses turn to cloud services to continue their survival, the greater the need for a focus on preparing for downtime.


As we swiftly approach the Christmas shopping period, loss of profits in the event of an outage could be far worse should one strike during the busiest time for online sales. Realistically, the real cost could be four or five times your ‘business as usual’ number. If you add to that the reputational damage to your brand, the financial impacts keep growing.


Fortunately, every cloud provider offers some form of Service Level Agreement (SLA), including an uptime guarantee, and AWS is no different. SLAs and guarantees set out to give us confidence in the resilience of the network, infrastructure and services while describing how we may be compensated should an unscheduled outage occur. But even a 99.95% uptime guarantee means your website or app can be offline for nearly 22 minutes each and every month without compensation – and that can add up to a lot of lost sales for a busy online business.
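The downtime allowance behind an uptime guarantee is simple to work out. A minimal sketch, assuming a 30-day (43,200-minute) month:

```python
# Allowed monthly downtime implied by an uptime guarantee,
# assuming a 30-day month (43,200 minutes).
def allowed_downtime_minutes(uptime_pct: float) -> float:
    month_minutes = 30 * 24 * 60
    return (1 - uptime_pct / 100) * month_minutes

print(f"{allowed_downtime_minutes(99.95):.1f} minutes")  # 21.6 minutes/month
print(f"{allowed_downtime_minutes(99.5):.0f} minutes")   # 216 minutes/month (3.6 hours)
```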


With that being the case, the best thing you can do is ensure you are well prepared to get back online as quickly as possible. As well as ensuring that you have a disaster recovery plan in place, it’s just as important to regularly test it too. Relying on a cloud provider’s uptime guarantee is never an alternative to taking the necessary steps to ensure your deployment is highly available. It’s worth investing a little more to protect your bottom line.


To add a further complication, there are several conditions that may prevent you from claiming any SLA compensation. If you aren’t aware of these conditions, it’s entirely possible (even likely in many cases!) that you may have already voided any SLA protections.


If you’d like to know more about ensuring your business is eligible for SLA protections, you can download our free eBook here.


If your business doesn’t have a professional backup plan in place for your cloud hosting services, or you haven’t thoroughly tested that your existing plan works lately, our cloud experts can assist you in ensuring that your business backup plan is ready for the busy Christmas period, as well as future-proofed for 2021 – because after the way things have gone in 2020, who knows what’s in store for us next!


The post Is your cloud hosting backup plan ready for 2021? appeared first on AWS Managed Services by Anchor.

Netflix at AWS re:Invent 2019

Post Syndicated from Netflix Technology Blog original https://medium.com/netflix-techblog/netflix-at-aws-re-invent-2019-e09bfc144831?source=rss----2615bd06b42e---4

by Shefali Vyas Dalal

AWS re:Invent is a couple weeks away and our engineers & leaders are thrilled to be in attendance yet again this year! Please stop by our “Living Room” for an opportunity to connect or reconnect with Netflixers. We’ve compiled our speaking events below so you know what we’ve been working on. We look forward to seeing you there!

Monday — December 2

1pm-2pm CMP 326-R Capacity Management Made Easy with Amazon EC2 Auto Scaling

Vadim Filanovsky, Senior Performance Engineer & Anoop Kapoor, AWS

Abstract: Amazon EC2 Auto Scaling offers a hands-free capacity management experience to help customers maintain a healthy fleet, improve application availability, and reduce costs. In this session, we deep-dive into how Amazon EC2 Auto Scaling works to simplify continuous fleet management and automatic scaling with changing load. Netflix delivers shows like Sacred Games, Stranger Things, Money Heist, and many more to more than 150 million subscribers across 190+ countries around the world. Netflix shares how Amazon EC2 Auto Scaling allows its infrastructure to automatically adapt to changing traffic patterns in order to keep its audience entertained and its costs on target.

4:45pm-5:45pm NFX 202 A day in the life of a Netflix Engineer

Dave Hahn, SRE Engineering Manager

Abstract: Netflix is a large, ever-changing ecosystem serving millions of customers across the globe through cloud-based systems and a globally distributed CDN. This entertaining romp through the tech stack serves as an introduction to how we think about and design systems, the Netflix approach to operational challenges, and how other organizations can apply our thought processes and technologies. In this session, we discuss the technologies used to run a global streaming company, growing at scale, billions of metrics, benefits of chaos in production, and how culture affects your velocity and uptime.

4:45pm-5:45pm NFX 209 File system as a service at Netflix

Kishore Kasi, Senior Software Engineer

Abstract: As Netflix grows in original content creation, its need for storage is also increasing at a rapid pace. Technology advancements in content creation and consumption have also increased its data footprint. To sustain this data growth at Netflix, it has deployed open-source software Ceph using AWS services to achieve the required SLOs of some of the post-production workflows. In this talk, we share how Netflix deploys systems to meet its demands, Ceph’s design for high availability, and results from our benchmarking.

Tuesday — December 3

11:30am-12:30pm NFX 208 Netflix’s container journey to bare metal Amazon EC2

Andrew Spyker, Compute Platform Engineering Manager

Abstract: In 2015, Netflix started supporting containers as part of their compute platform. Over the years, this platform took on support for both elastic online services and fully featured batch workloads supporting use cases across Netflix engineering. It launches more than four million containers per week across thousands of underlying hosts. The release of Amazon EC2 bare metal instances gave direct access to host processors and memory while providing a control plane for these container hosts. In 2019, Netflix moved thousands of container hosts to bare metal. This talk explores the journey, learnings, and improvements to performance analysis, efficiency, reliability, and security.

5:30pm-6:30pm CMP 326-R Capacity Management Made Easy

Vadim Filanovsky, Senior Performance Engineer & Anoop Kapoor, AWS

Abstract: Amazon EC2 Auto Scaling offers a hands-free capacity management experience to help customers maintain a healthy fleet, improve application availability, and reduce costs. In this session, we deep-dive into how Amazon EC2 Auto Scaling works to simplify continuous fleet management and automatic scaling with changing load. Netflix delivers shows like Sacred Games, Stranger Things, Money Heist, and many more to more than 150 million subscribers across 190+ countries around the world. Netflix shares how Amazon EC2 Auto Scaling allows its infrastructure to automatically adapt to changing traffic patterns in order to keep its audience entertained and its costs on target.

Wednesday — December 4

10am-11am NFX 203 From Pitch to Play: The technology behind going from ideas to streaming

Ryan Schroeder, Senior Software Engineer

Abstract: It takes a lot of different technologies and teams to get entertainment from the idea stage through being available for streaming on the service. This session looks at what it takes to accept, produce, encode, and stream your favorite content. We explore all the systems necessary to make and stream content from Netflix.

1pm-2pm NFX 207 Benchmarking stateful services in the cloud

Vinay Chella, Data Platform Engineering Manager

Abstract: AWS cloud services make it possible to achieve millions of operations per second in a scalable fashion across multiple regions. Netflix runs dozens of stateful services on AWS under strict sub-millisecond tail-latency requirements, which brings unique challenges. In order to maintain performance, benchmarking is a vital part of our system’s lifecycle. In this session, we share our philosophy and lessons learned over the years of operating stateful services in AWS. We showcase our case studies, open-source tools in benchmarking, and how we ensure that AWS cloud services are serving our needs without compromising on tail latencies.

3:15pm-4:15pm OPN 209 Netflix’s application deployment at scale

Andy Glover, Director Delivery Engineering & Paul Roberts, AWS

Abstract: Spinnaker is an open-source continuous-delivery platform created by Netflix to improve its developers’ efficiency and reduce the time it takes to get an application into production. Netflix has over 140 million members, and in this session, Netflix shares the tooling it uses to deploy applications to meet its customers’ needs. Join us to learn why Netflix created Spinnaker, how the platform is being used at scale, how the company works with the broader open-source community, and the work it’s doing with AWS to build out a new functions compute primitive.

4pm-5pm OPN 303-R BPF Performance Analysis

Brendan Gregg, Senior Performance Engineer

Abstract: Extended BPF (eBPF) is an open-source Linux technology that powers a whole new class of software: mini programs that run on events. Among its many uses, BPF can be used to create powerful performance-analysis tools capable of analyzing everything: CPUs, memory, disks, file systems, networking, languages, applications, and more. In this session, Netflix’s Brendan Gregg tours BPF tracing capabilities, including many new open-source performance analysis tools he developed for his new book “BPF Performance Tools: Linux System and Application Observability.” The talk also includes examples of using these tools in the Amazon Elastic Compute Cloud (Amazon EC2) cloud.

Thursday — December 5

12:15pm-1:15pm NFX 205 Monitoring anomalous application behavior

Travis McPeak, Application Security Engineering Manager & William Bengtson, Director, HashiCorp

Abstract: AWS CloudTrail provides a wealth of information on your AWS environment. In addition, teams can use it to perform basic anomaly detection by adding state. In this talk, Travis McPeak of Netflix and Will Bengtson introduce a system built strictly with off-the-shelf AWS components that tracks CloudTrail activity across multi-account environments and sends alerts when applications perform anomalous actions. By watching applications for anomalous actions, security and operations teams can monitor unusual and erroneous behavior. We share everything attendees need to implement CloudTrail in their own organizations.

1pm-2pm OPN 303-R1 BPF Performance Analysis

Brendan Gregg, Senior Performance Engineer

Abstract: Extended BPF (eBPF) is an open-source Linux technology that powers a whole new class of software: mini programs that run on events. Among its many uses, BPF can be used to create powerful performance-analysis tools capable of analyzing everything: CPUs, memory, disks, file systems, networking, languages, applications, and more. In this session, Netflix’s Brendan Gregg tours BPF tracing capabilities, including many new open-source performance analysis tools he developed for his new book “BPF Performance Tools: Linux System and Application Observability.” The talk also includes examples of using these tools in the Amazon Elastic Compute Cloud (Amazon EC2) cloud.

1:45pm-2:45pm NFX 201 More Data Science with less engineering: ML Infrastructure

Ville Tuulos, Machine Learning Infrastructure Engineering Manager

Abstract: Netflix is known for its unique culture that gives an extraordinary amount of freedom to individual engineers and data scientists. Our data scientists are expected to develop and operate large machine learning workflows autonomously without the need to be deeply experienced with systems or data engineering. Instead, we provide them with delightfully usable ML infrastructure that they can use to manage a project’s lifecycle. Our end-to-end ML infrastructure, Metaflow, was designed to leverage the strengths of AWS: elastic compute; high-throughput storage; and dynamic, scalable notebooks. In this session, we present our human-centric design principles that enable the autonomy our engineers enjoy.

Netflix at AWS re:Invent 2019 was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Spinnaker Sets Sail to the Continuous Delivery Foundation

Post Syndicated from Netflix Technology Blog original https://medium.com/netflix-techblog/spinnaker-sets-sail-to-the-continuous-delivery-foundation-e81cd2cbbfeb?source=rss----2615bd06b42e---4

Author: Andy Glover

Since releasing Spinnaker to the open source community in 2015, the platform has flourished with the addition of new cloud providers, triggers, pipeline stages, and much more. Myriad new features, improvements, and innovations have been added by an ever growing, actively engaged community. Each new innovation has been a step towards an even better Continuous Delivery platform that facilitates rapid, reliable, safe delivery of flexible assets to pluggable deployment targets.

Over the last year, Netflix has improved overall management of Spinnaker by enhancing community engagement and transparency. At the Spinnaker Summit in 2018, we announced that we had adopted a formalized project governance plan with Google. Moreover, we also realized that we’ll need to share the responsibility of Spinnaker’s direction as well as yield a level of long-term strategic influence over the project so as to maintain a healthy, engaged community. This means enabling more parties outside of Netflix and Google to have a say in the direction and implementation of Spinnaker.

A strong, healthy, committed community benefits everyone; however, open source projects rarely reach this critical mass. It’s clear Spinnaker has reached this special stage in its evolution; accordingly, we are thrilled to announce two exciting developments.

First, Netflix and Google are jointly donating Spinnaker to the newly created Continuous Delivery Foundation (or CDF), which is part of the Linux Foundation. The CDF is a neutral organization that will grow and sustain an open continuous delivery ecosystem, much like the Cloud Native Computing Foundation (or CNCF) has done for the cloud native computing ecosystem. The initial set of projects to be donated to the CDF are Jenkins, Jenkins X, Spinnaker, and Tekton. Second, Netflix is joining as a founding member of the CDF. Continuous Delivery powers innovation at Netflix and working with other leading practitioners to promote Continuous Delivery through specifications is an exciting opportunity to join forces and bring the benefits of rapid, reliable, and safe delivery to an even larger community.

Spinnaker’s success is in large part due to the amazing community of companies and people that use it and contribute to it. Donating Spinnaker to the CDF will strengthen this community. This move will encourage contributions and investments from additional companies who are undoubtedly waiting on the sidelines. Opening the doors to new companies increases the innovations we’ll see in Spinnaker, which benefits everyone.

Donating Spinnaker to the CDF doesn’t change Netflix’s commitment to Spinnaker, and what’s more, current users of Spinnaker are unaffected by this change. Spinnaker’s previously defined governance policy remains in place. Over time, new stakeholders will emerge and play a larger, more formal role in shaping Spinnaker’s future. The prospect of an even healthier and more engaged community focused on Spinnaker and the manifold benefits of Continuous Delivery is tremendously exciting, and we’re looking forward to seeing it continue to flourish.

Spinnaker Sets Sail to the Continuous Delivery Foundation was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Connect, collaborate, and learn at AWS Global Summits in 2018

Post Syndicated from Tina Kelleher original https://aws.amazon.com/blogs/big-data/connect-collaborate-and-learn-at-aws-global-summits-in-2018/

Regardless of your career path, there’s no denying that attending industry events can provide helpful career development opportunities — not only for improving and expanding your skill sets, but for networking as well. According to this article from PayScale.com, experts estimate that somewhere between 70-85% of new positions are landed through networking.

Narrowing our focus to networking with cloud computing professionals who’re tackling some of today’s most innovative and exciting big data solutions, the big data-focused sessions at an AWS Global Summit are a great place to start.

AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. As the name suggests, these summits are held in major cities around the world, and attract technologists from all industries and skill levels who’re interested in hearing from AWS leaders, experts, partners, and customers.

In addition to networking opportunities with top cloud technology providers, consultants and your peers in our Partner and Solutions Expo, you’ll also hone your AWS skills by attending and participating in a multitude of education and training opportunities.

Here’s a brief sampling of some of the upcoming sessions relevant to big data professionals:

May 31st: Big Data Architectural Patterns and Best Practices on AWS | AWS Summit – Mexico City

June 6th-7th: Various (click on the “Big Data & Analytics” header) | AWS Summit – Berlin

June 20th-21st: [email protected] | Public Sector Summit – Washington DC

June 21st: Enabling Self Service for Data Scientists with AWS Service Catalog | AWS Summit – Sao Paulo

Be sure to check out the main page for AWS Global Summits, where you can see which cities have AWS Summits planned for 2018, register to attend an upcoming event, or provide your information to be notified when registration opens for a future event.

Backblaze at NAB 2018 in Las Vegas

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backblaze-at-nab-2018-in-las-vegas/

Backblaze B2 Cloud Storage NAB Booth

Backblaze just returned from exhibiting at NAB in Las Vegas, April 9-12, where the response to our recent announcements was tremendous. In case you missed the news, Backblaze B2 Cloud Storage continues to extend its lead as the most affordable, high performance cloud on the planet.

Backblaze’s News at NAB

Backblaze at NAB 2018 in Las Vegas

The Backblaze booth just before opening

What We Were Asked at NAB

Our booth was busy from start to finish with attendees interested in learning more about Backblaze and B2 Cloud Storage. Here are the questions we were asked most often in the booth.

Q. How long has Backblaze been in business?
A. The company was founded in 2007. Today, we have over 500 petabytes of data from customers in over 150 countries.

B2 Partners at NAB 2018

Q. Where is your data stored?
A. We have data centers in California and Arizona and expect to expand to Europe by the end of the year.

Q. How can your services be so inexpensive?
A. Backblaze’s goal from the beginning was to offer cloud backup and storage that was easy to use and affordable. All the existing options were simply too expensive to be viable, so we created our own infrastructure. Our purpose-built storage system — the Backblaze Storage Pod — is recognized as one of the most cost-efficient storage platforms available.

Q. Tell me about your hardware.
A. Backblaze’s Storage Pods hold 60 HDDs each, containing as much as 720TB of data per pod, stored using Reed-Solomon error correction. Drives across pods are grouped into Tomes, with twenty Storage Pods making up a Vault.
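As a rough sketch of what that layout implies for usable capacity, assuming the 17-data + 3-parity shard split Backblaze has described publicly for its Vaults (the shard counts are our assumption, not stated in this post):

```python
# Storage efficiency of a (data, parity) Reed-Solomon scheme, assuming
# 17 data + 3 parity shards per Vault (an assumed layout, not from this post).
DATA_SHARDS = 17
PARITY_SHARDS = 3

efficiency = DATA_SHARDS / (DATA_SHARDS + PARITY_SHARDS)  # fraction of raw space holding data
vault_raw_tb = 20 * 720  # 20 Storage Pods x 720TB each

print(f"{efficiency:.0%} efficient: ~{vault_raw_tb * efficiency:,.0f}TB usable of {vault_raw_tb:,}TB raw")
```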

Q. Where do you fit in the data workflow?
A. People typically use B2 for archiving completed projects. All data is readily available for download from B2, making it more convenient than off-line storage. In addition, DAM and MAM systems such as CatDV, axle ai, Cantemo, and others have integrated with B2 to store raw images behind the proxies.

Q. Who uses B2 in the M&E business?
A. KLRU-TV, the PBS station in Austin, Texas, uses B2 to archive their entire 43-year catalog of Austin City Limits episodes and related materials. WunderVu, the production house for Pixvana, uses B2 to back up and archive their local storage systems on which they build virtual reality experiences for their customers.

Q. You’re the company that publishes the hard drive stats, right?
A. Yes, we are!

Backblaze Case Studies and Swag at NAB 2018 in Las Vegas

Were You at NAB?

If you were, we hope you stopped by the Backblaze booth to say hello. We’d like to hear what you saw at the show that was interesting or exciting. Please tell us in the comments.

The post Backblaze at NAB 2018 in Las Vegas appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.