[$] 5.18 Merge window, part 1

Post Syndicated from original https://lwn.net/Articles/888736/

As of this writing, 4,127 non-merge changesets have found their way into
the mainline repository for the 5.18 development cycle. That may seem like
a relatively slow start to the merge window, but there are a lot of changes
packed into those commits. Read on for a summary of the most
significant changes to land in the first half of the 5.18 merge window.

Security updates for Friday

Post Syndicated from original https://lwn.net/Articles/889265/

Security updates have been issued by Debian (tiff), Fedora (nicotine+ and openvpn), openSUSE (bind, libarchive, python3, and slirp4netns), Oracle (cyrus-sasl, httpd, httpd:2.4, and openssl), Red Hat (httpd and httpd:2.4), Scientific Linux (httpd), SUSE (bind, libarchive, python3, and slirp4netns), and Ubuntu (firefox).

The Digital Citizen’s Guide to Navigating Cyber Conflict

Post Syndicated from Jen Ellis original https://blog.rapid7.com/2022/03/25/the-digital-citizens-guide-to-navigating-cyber-conflict/


As security professionals, we are currently being bombarded with warnings and alerts of a heightened threat level due to the possibility that Russia will start to more aggressively leverage cyberattacks as part of their offensive. If you are feeling the pressure of getting everything done, check out this post that identifies the 8 most important emergency conflict actions for your security program.

This post is meant as a companion piece that gives advice for non-security-pro digital citizens to protect themselves and, by extension, help protect their organizations.

As security pros, we do not live in a perfect technical vacuum where we make system-wide Decisions That Will Be Obeyed by Everyone in Our Domain. Rather, we must acknowledge that our users are part of the equation. They can be tricked and manipulated. They lose devices or leave them unlocked in public. They may not follow policy, connecting to unsecured networks, using personal devices for work, or buying unvetted apps.

In other words, they make your life more complicated. But they are likely also watching the same news reports you are and may be wondering what they can do to help protect against the prospect of Russian aggression. This is your opportunity to harness that desire to help, and educate your non-security friends, family, and end users on making it through a cyber conflict. This could be a step toward inspiring them to think more about security in the long term.

1. Control who can access your accounts, apps, or devices

Password hygiene and password managers

These days, most technical devices or apps will give you the option to set up a password, PIN, or pattern. It’s highly recommended that you do so, and also that you avoid reusing passwords and change them regularly.

If you follow that advice, you’ll end up with a lot of information to remember. This is where a password manager comes in. They automatically store and fill passwords as needed. They’ll also help you generate passwords if you want them to, ensuring each one is unique and adheres to the requirements of the site, app, or device. Some also offer other benefits, such as working across multiple devices so your passwords can sync across your laptop, tablet, and mobile. Another cool feature is the ability to share selected passwords with designees — for example, if you want to give a family member access to your Netflix account. There are plenty of decent and inexpensive password managers out there. Examples include LastPass, Bitwarden, 1Password, and Dashlane.

Turn on a second layer of protection for your accounts (2FA/MFA)

Having unique and hard-to-guess passwords is important, but it’s not a magical fix that will make you invulnerable to hacking. Cyberattackers will try to trick you into giving them your password, or they may try to guess or crack it. If you are reusing passwords (which is a bad bet), they may already have your password from a previous successful hack.

In situations such as these, having a second way to prove who you are when accessing your accounts is critical to help you protect your private data and accounts. This is referred to as two-step verification, two-factor authentication (2FA), or multi-factor authentication (MFA). The second step or factor might be a code sent to a trusted device, a physical token (such as a scannable key tab or a yubikey), or a biometric (such as your fingerprint or a facial scan). You don’t have to set up 2FA on everything (though it definitely doesn’t hurt to do so), but we strongly recommend you add it to anything holding very sensitive information, such as your online or mobile banking apps, your mobile phone, or other devices.

2. Pay attention to experts

Listen to your employer or other affiliated organization

Pay attention to all internal communications from your work/school/organization, as they likely have situation-specific guidance pertaining to any malicious activity against that organization. Be sure to follow any guidance or policies they issue.

Look out for alerts from apps and services

The vendors and other organizations you do business with should notify you if they are victims of a cyberattack. Look out for communications from them, but be cautious of anything asking you for info or to take an action, as these could be fraudulent. If you receive a communication asking you to take an action, instead of clicking on links within the email, we recommend going directly to the company website or using a search engine to find the information. You should find information to indicate whether it’s legitimate or a scam.

Relevant regional information

Ensure you know where to find information on local services and infrastructure — for example, your local government’s website, social media feeds, or other forms of local media, such as TV, radio and print.

Credit reports

One way that Russia may try to gain footholds in organizations is through identity theft targeting individuals. Signing up for credit reports — and actually paying attention to them — is one way to catch and respond to this activity early on. Many credit card companies offer this service for free.

3. Hope for the best, but prepare for the worst

Attacks against critical infrastructure

There is a lot of speculation that Russia will target cyberattacks at critical infrastructure. A great deal of effort is going into building resilience into these organizations and systems, and we hope that widespread outages will not occur. However, the Colonial Pipeline, JBS, and HSE attacks in 2021 highlighted the scale of disruption that can be caused by cyberattacks against critical infrastructure. In the same way you would plan for warnings of incoming hurricane activity, we recommend you consider what you might need to weather outages of power, water, or other critical services.

Back up your data offline

The major technology companies typically invest a great deal in cybersecurity to ensure your data is protected; however, they may also make attractive targets for Russian hacking, and they will be just as affected as everyone else in the case of power outages. If you are worried about being able to access information in these events, you may want to create an offline backup of your most essential data.

The guidance above focuses on the most critical actions to help individuals navigate the current threats of cyber conflict related to the Russian invasion of Ukraine. For more general advice to individuals for protecting your digital identity, check out this guide, which was created in a collaboration with the UK government’s Cyber Aware campaign.


Ridiculously easy to use Tunnels

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/ridiculously-easy-to-use-tunnels/


A little over a decade ago, Cloudflare launched at TechCrunch Disrupt. At the time, we talked about three core principles that differentiated Cloudflare from traditional security vendors: be more secure, more performant, and ridiculously easy to use. Ease of use is at the heart of every decision we make, and this is no different for Cloudflare Tunnel.

That’s why we’re thrilled to announce today that creating tunnels, which previously required up to 14 commands in the terminal, can now be accomplished in just three simple steps directly from the Zero Trust dashboard.

If you’ve heard enough, jump over to sign-up/teams to unplug your VPN and start building your private network with Cloudflare. If you’re interested in learning more about our motivations for this release and what we’re building next, keep scrolling.

Our connector

Cloudflare Tunnel is the easiest way to connect your infrastructure to Cloudflare, whether that be a local HTTP server, web services served by a Kubernetes cluster, or a private network segment. This connectivity is made possible through our lightweight, open-source connector, cloudflared. Our connector offers high-availability by design, creating four long-lived connections to two distinct data centers within Cloudflare’s network. This means that whether an individual connection, server, or data center goes down, your network remains up. Users can also maintain redundancy within their own environment by deploying multiple instances of the connector in the event a single host goes down for one reason or another.

Historically, the best way to deploy our connector has been through the cloudflared CLI. Today, we’re thrilled to announce that we have launched a new solution to remotely create, deploy, and manage tunnels and their configuration directly from the Zero Trust dashboard. This new solution allows our customers to provide their workforce with Zero Trust network access in 15 minutes or less.

CLI? GUI? Why not both

Command line interfaces are exceptional at what they do. They allow users to pass commands at their console or terminal and interact directly with the operating system. This precision grants users exact control over their interactions with a given program or service in cases where that level of control is required.

However, they also have a higher learning curve and can be less intuitive for new users. This means users need to carefully research the tools they wish to use prior to trying them out. Many users don’t have the luxury to perform this level of research, only to test a program and find it’s not a great fit for their problem.

Conversely, GUIs, like our Zero Trust dashboard, have the flexibility to teach by doing. Little to no program knowledge is required to get started. Users can be intuitively led to their desired results and only need to research how and why they completed certain steps after they know this solution fits their problem.

When we first released Cloudflare Tunnel, it had fewer than ten distinct commands to get started. We now have far more than that, as well as a myriad of new use cases that invoke them. This has turned what used to be an easy-to-navigate CLI library into something more cumbersome for users just discovering our product.

Simple typos led to immense frustration for some users. Imagine, for example, a user needs to advertise IP routes for their private network tunnel. It can be burdensome to remember cloudflared tunnel route ip add <IP/CIDR>. Through the Zero Trust dashboard, you can forget all about the semantics of the CLI library. All you need to know is the name of your tunnel and the network range you wish to connect through Cloudflare. Simply enter my-private-net (or whatever you want to call it), copy the installation script, and input your network range. It’s that simple. If you accidentally type an invalid IP or CIDR block, the dashboard will provide an actionable, human-readable error and get you on track.
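
For readers who never used the locally managed flow, here is a rough sketch of the CLI steps the dashboard now replaces (the tunnel name and IP range are placeholders, and exact flags can vary between cloudflared releases):

$ # Authenticate cloudflared against your Cloudflare account; this issues cert.pem
$ cloudflared tunnel login

$ # Create a named tunnel; this issues a <Tunnel-UUID>.json credentials file
$ cloudflared tunnel create my-private-net

$ # Advertise a private network range through the tunnel
$ cloudflared tunnel route ip add 10.0.0.0/8 my-private-net

$ # Start the connector
$ cloudflared tunnel run my-private-net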

Whether you prefer the CLI or GUI, they ultimately achieve the same outcome through different means. Each has merit and ideally users get the best of both worlds in one solution.


Eliminating points of friction

Tunnels have typically required a locally managed configuration file to route requests to their appropriate destinations. This configuration file was never created by default, but was required for almost every use case. This meant that users needed to use the command line to create and populate their configuration file using examples from developer documentation. As functionality has been added to cloudflared, configuration files have become unwieldy to manage. Understanding the parameters and values to include, as well as where to include them, has become a burden for users. These issues were often difficult to catch with the naked eye and painful for users to troubleshoot.
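
For illustration only (the hostname, port, and paths are placeholders, and the exact keys depend on your cloudflared version), a locally managed configuration file typically looked something like this:

$ cat ~/.cloudflared/config.yml
tunnel: <Tunnel-UUID>
credentials-file: /home/user/.cloudflared/<Tunnel-UUID>.json
ingress:
  # Route requests for this hostname to a local web service
  - hostname: app.example.com
    service: http://localhost:8000
  # Catch-all rule, required as the last entry
  - service: http_status:404

With remotely managed tunnels, these values are captured as public hostname entries in the dashboard instead.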

We also wanted to improve the concept of tunnel permissions with our latest release. Previously, users were required to manage two distinct tokens: the cert.pem file and the Tunnel_UUID.json file. In short, cert.pem, issued during the cloudflared tunnel login command, granted the ability to create, delete, and list tunnels for their Cloudflare account through the CLI. Tunnel_UUID.json, issued during the cloudflared tunnel create <NAME> command, granted the ability to run a specified tunnel. However, since tunnels can now be created directly from your Cloudflare account in the Zero Trust dashboard, there is no longer a requirement to authenticate your origin prior to creating a tunnel. This action is already performed during the initial Cloudflare login event.

With today’s release, users no longer need to manage configuration files or tokens locally. Instead, Cloudflare will manage this for you based on the inputs you provide in the Zero Trust dashboard. If users typo a hostname or service, they’ll know well before attempting to run their tunnel, saving time and hassle. We’ll also manage your tokens for you, and if you need to refresh your tokens at some point in the future, we’ll rotate the token on your behalf as well.

Client or clientless Zero Trust

We commonly refer to Cloudflare Tunnel as an “on-ramp” to our Zero Trust platform. Once connected, you can seamlessly pair it with WARP, Gateway, or Access to protect your resources with Zero Trust security policies, so that each request is validated against your organization’s device- and identity-based rules.

Clientless Zero Trust

Users can achieve a clientless Zero Trust deployment by pairing Cloudflare Tunnel with Access. In this model, users will follow the flow laid out in the Zero Trust dashboard. First, users name their tunnel. Next, users will be provided a single installation script tailored to the origin’s operating system and system architecture. Finally, they’ll create either public hostnames or private network routes for their tunnel. As outlined earlier, this step eliminates the need for a configuration file. Public hostname values will now replace ingress rules for remotely managed tunnels. Simply add the public hostname through which you’d like to access your private resource. Then, map the hostname value to a service behind your origin server. Finally, create a Cloudflare Access policy to ensure only those users who meet your requirements are able to access this resource.

Client-based Zero Trust

Alternatively, users can pair Cloudflare Tunnel with WARP and Gateway if they prefer a client-based approach to Zero Trust. Here, they’ll follow the same flow outlined above but instead of creating a public hostname, they’ll add a private network. This step replaces the cloudflared tunnel route ip add <IP/CIDR> step from the CLI library. Then, users can navigate to the Cloudflare Gateway section of the Zero Trust dashboard and create two rules to test private network connectivity and get started.

  1. Name: Allow <current user> for <IP/CIDR>
    Policy: Destination IP in <IP/CIDR> AND User Email is <Current User Email>
    Action: Allow
  2. Name: Default deny for <IP/CIDR>
    Policy: Destination IP in <IP/CIDR>
    Action: Block

It’s important to note that, with either approach, most use cases will only require a single tunnel. A tunnel can advertise both public hostnames and private networks without conflicts, which helps keep orchestration simple. In fact, we suggest starting with as few tunnels as possible and using replicas, rather than additional tunnels, to handle redundancy. This, of course, depends on each user’s environment, but generally it’s smart to start with a single tunnel and create more only when there is a need to keep networks or services logically separated.

What’s next

Since we launched Cloudflare Tunnel, hundreds of thousands of tunnels have been created. That’s many tunnels that need to be migrated over to our new orchestration method. We want to make this process frictionless. That’s why we’re currently building out tooling to seamlessly migrate locally managed configurations to Cloudflare managed configurations. This will be available in a few weeks.

At launch, we also will not support the global configuration options listed in our developer documentation. These parameters require case-by-case support, and we’ll be adding them incrementally over time. Most notably, this means the best way to adjust cloudflared logging levels will still be to modify the Cloudflare Tunnel service start command and append the --loglevel flag to your service run command. This will become a priority after we release the migration wizard.

As you can see, we have exciting plans for the future of remote tunnel management and will continue investing in this as we move forward. Check it out today and deploy your first Cloudflare Tunnel from the dashboard in three simple steps.

Webhooks in Zabbix

Post Syndicated from Andrey Biba original https://blog.zabbix.com/webhooks-in-zabbix/19935/

Zabbix is not only a flexible and versatile monitoring system but also a convenient tool for generating alerts and integrating with existing service desks. Among the various integration methods, webhooks have become the most popular. In this blog post, we will take a look at what webhooks are, how they can be used to integrate Zabbix with external solutions, and some example use cases for webhook integrations.

What is a webhook?

Generally speaking, a webhook is a method of augmenting or altering the behavior of a web page or web application with custom callbacks. But to put it simply, a webhook is an automatic reaction to an event. If an event occurs (for example, a problem appears), then the webhook makes a call (via HTTP / HTTPS) to a third-party service to notify it about the event. Many existing solutions provide an API that allows you to interact with them via webhooks.

Webhooks in Zabbix are implemented in JavaScript, so writing one does not require knowledge of any Zabbix-specific syntax, and because JavaScript is so widespread, you can find many examples, tips, and guides on the internet.

How does a webhook work?

Essentially, a webhook is code that makes a sequence of calls to achieve some result. In the case of Zabbix, JavaScript code is executed that accesses the service’s API and transfers, updates, and retrieves data from it. For example, suppose we need to open a ticket in the service desk and leave a comment on the ticket containing information about the problem. To do this, we need to:

  • Log in to the service and obtain a token
  • Make a request with the token to create a ticket
  • Add a comment to the newly created ticket using the token

The details may differ from service to service, but the general idea stays the same.
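
As a service-agnostic sketch of that sequence (the endpoints, payload fields, and credentials below are invented for illustration; an actual Zabbix webhook implements the same steps in JavaScript against the target service’s real API):

$ # 1. Log in to the service and obtain a token
$ TOKEN=$(curl -s -X POST https://helpdesk.example.com/api/login \
      -H 'Content-Type: application/json' \
      -d '{"user": "zabbix", "password": "secret"}' | jq -r '.token')

$ # 2. Create a ticket using the token
$ TICKET_ID=$(curl -s -X POST https://helpdesk.example.com/api/tickets \
      -H "Authorization: Bearer $TOKEN" \
      -H 'Content-Type: application/json' \
      -d '{"subject": "Zabbix problem: High CPU load on web01"}' | jq -r '.id')

$ # 3. Add a comment with the problem details to the new ticket
$ curl -s -X POST "https://helpdesk.example.com/api/tickets/$TICKET_ID/comments" \
      -H "Authorization: Bearer $TOKEN" \
      -H 'Content-Type: application/json' \
      -d '{"text": "Trigger: CPU load too high for 5 minutes. Severity: High."}'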

How to use it?

Our integration team constantly communicates with the community and monitors the most popular services to develop official out-of-the-box integrations for them. At the moment, Zabbix provides a vast selection of out-of-the-box webhooks for the most popular services, and we review new ones and improve current ones every day.

In most cases, setting up a ready-made webhook comes down to 3-4 steps, which are described in the README file in the Zabbix repository. Usually, it is necessary to generate an API key in the service, set it in Zabbix, set the URL to the service endpoint URL, and specify a couple of parameters required for the webhook to work.

In addition to the ready-made solutions, there is a GitHub community repository where custom templates and webhooks are published. If you are the author of a webhook or a template, please share it with the community by submitting it to this repository!

Example – Telegram webhook

Theory is good, but we are all interested in how this works in practice. Let’s look at the Telegram webhook as an example: the messenger is very popular right now, which makes it a fitting choice.

First of all, let’s go to the Zabbix repository or navigate to the Zabbix website integrations section to read the setup instructions. In the repository, all templates and notification methods are located in the /templates folder, and for each of them, there is a README file with a detailed description.

On the Telegram side, we need to create a bot following the instructions, get its token, and set it in the Token parameter.

After that, we create a user, set up the Telegram media type for this user, and enter the ID of the user or group chat in the “Send to” field.

Voila! Your webhook is set up and ready to send notifications or event information!

As you may have noticed, the setup did not take much time and did not require deep knowledge. Naturally, for finer tuning, it is possible to adjust the content of messages, the types of problems, intervals, and other parameters. But even without additional changes, notifications are ready to go.

Is it difficult to write a webhook on your own?

Of course, creating a webhook requires certain skills.

First of all, knowledge of JavaScript is required. The language itself is not difficult and can be mastered relatively quickly. The Zabbix documentation site has a guideline for writing webhooks with recommendations and best practices.

Secondly, you need an understanding of how Zabbix works. This does not require in-depth knowledge of Zabbix; the ability to follow basic instructions is enough. You can read more about setting up notification methods in the official documentation. It is important to properly configure the webhook itself, grant rights to users, and set up a notification action for the necessary triggers.

And thirdly, you need to study the documentation of the service for which the webhook will be written. Although all APIs work on the same principle, they can differ greatly from each other in their methods and request structure. It is also necessary to understand the service itself: it is difficult to write an integration if it is not clear how Zabbix should properly interact with the service being integrated.

Summarizing

Webhooks are a modern and flexible integration method that helps make Zabbix a universal solution. Since real-world environments involve a large number of different systems, and as a result many people working together, webhooks are becoming an indispensable tool for notification automation. A properly written and configured webhook is an effective solution for flexible notifications.

In the next article, we will go through the basic methods and requests needed to send alerts, receive updates, and assign tags. For this purpose, we will inspect one webhook in close detail.

Questions

Q: We have a ready-made notification system built on scripts. Does it make sense to rewrite it to a webhook?

A: Certainly. Firstly, a webhook executes natively in Zabbix, which is much more efficient than an external script. Secondly, a webhook is much more flexible, more functional, and much easier to modify.

 

Q: We have a service for which we would like to write an integration, but we do not have qualified specialists who could do it. Is it possible to request such integration from Zabbix?

A: Yes, if you are a Zabbix partner, you can submit a request to have such an integration created.

 


Gus Simmons’s Memoir

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/03/gus-simmonss-memoir.html

Gus Simmons is an early pioneer in cryptography and computer security. I know him best for his work on authentication and covert channels, specifically as related to nuclear treaty verification. His work is cited extensively in Applied Cryptography.

He has written a memoir of growing up dirt-poor in 1930s rural West Virginia. I’m in the middle of reading it, and it’s fascinating.


“Servant of the People” versus the propaganda

Post Syndicated from Светла Енчева original https://toest.bg/sluga-na-naroda-sreshtu-propagandata/

It is not news that Bulgaria is the target of deliberate Russian propaganda. Strictly speaking, since 1989 we have in one way or another always been caught in its nets, even though we officially parted ways with pro-Soviet ideology. The “packaging” of energy dependence on Russia as “energy independence,” and of hostility toward NATO and the EU as “Bulgarian national interest,” did not start yesterday. And if we peer further back into the past, we will realize that, to a significant degree, Bulgaria’s identity as a modern nation-state has been shaped under strong Russian influence ever since 1878. Western influence was predominantly German and, logically, was curtailed after 1944.

Cultural influence involves not just messages but also deep, worldview-level attitudes.

In Bulgaria, for example, as in Russia, it is taken as self-evident that language and nation are two sides of the same coin. It is inconceivable to us that people whose native language is Bulgarian might not see themselves as Bulgarians. Denying the existence of the Macedonian language is, in essence, a denial of Macedonian national identity – “your language is Bulgarian, so you are Bulgarians too.” At the same time, we do not perceive members of our own minorities who speak Turkish or Romani as Bulgarians. Yet with such an attitude we cannot explain why Austrians speak German but are not Germans, or why the Swiss have four official languages (and even more dialects). Nor why Dutch and Flemish are counted as different languages even though they are closer to each other than the German dialects spoken in Berlin and Bavaria (to say nothing of Swiss German).

According to Russian propaganda, one of the main reasons for the war in Ukraine is that it has a Russian-speaking population that is discriminated against by the nationalist Ukrainian authorities and that, in the Donetsk and Luhansk regions, has even been subjected to “genocide.” No evidence of that genocide has ever been produced. More important, however, is that to us, too, it seems natural that a Russian-speaking population would prefer to be under Russian political rule. This is why

the Ukrainian series “Servant of the People” can trigger cognitive dissonance in us.

In “Servant of the People,” Volodymyr Zelensky plays his future real-life role – president of Ukraine. The comedy series is available on YouTube (in Russian, with part of the first season also subtitled in English) and, more recently, on Netflix. Its three seasons, and the feature film that served as the basis for the second season, were broadcast between 2015 and 2019.

Although it can be watched purely on its own terms, the series also has political dimensions that go beyond television entertainment. After the end of the first season, the Party of Decisive Change was founded; around the end of the second season it was renamed “Servant of the People,” and at the beginning of 2019 it nominated Zelensky for president. The airing of the third season coincided with the presidential campaign. The final episode aired on March 28, 2019 – immediately before the first round of the presidential election on March 31.

What is “Servant of the People” about?

Zelensky’s character is named Vasiliy Goloborodko (in Ukrainian, Vasyl Holoborodko, but the series uses the Russian variant of the name). The surname Goloborodko hints at the hero’s youth and inexperience – in Bulgarian it could be rendered as Golobradov, roughly “bare-bearded.” Goloborodko is a history teacher who, unexpectedly even to himself, becomes president. At the same time, he goes on sharing a home with his parents, his sister, and her daughter. His desire to defeat corruption collides with a system in which it is almost impossible to find people who are independent of the oligarchs and untempted by corruption. Goloborodko appoints his former classmates and his ex-wife to key positions, even though they have no clue about the jobs in question, because he trusts them rather than the representatives of the corrupt political class.

Of course, Goloborodko is not Zelensky, and the plot of the series does not reproduce real events, even though it starts from real problems in Ukrainian society. So we cannot draw direct parallels between the comedy series and life in Ukraine. Insofar as “Servant of the People” also has a political function, however, the series can be viewed as a kind of election platform. And it had a political effect – a full 73.22% of those who voted in the second round of the 2019 presidential election backed the comedian running for head of state. That is more than 13.5 million people. Through the series, then, we can reconstruct

what the voters of Volodymyr Zelensky cast their ballots for.

First, the series is almost entirely in Russian. (Incidentally, the evening show in which Zelensky appeared before becoming president was also in Russian.) Ukrainian is spoken rarely – mostly in official statements in parliament or in television news. But even in the news reports, people speak Russian. The ones who insist on speaking Ukrainian are the members of a paramilitary nationalist group, which is portrayed as a caricature and as every bit as entangled with the oligarchs as the rest of the political mainstream.

Second, although filming of “Servant of the People” began soon after the annexation of Crimea and the start of the local war in the Donbas (the area of the Luhansk and Donetsk regions, which pro-Russian separatists are trying to split off from Ukraine), Russian aggression is not a theme in the storylines. At the same time, Putin is mocked more than once. Already in the first episode it is mentioned that he wears a Hublot watch, a phonetic nod to a popular Ukrainian insult aimed at the Russian president. And when he wants to calm down quarreling members of parliament, Goloborodko shouts “They’ve toppled Putin!” – and it works every time. Later in the series, it is mentioned that Putin has indeed been replaced.

Third, although Goloborodko is a history teacher, history serves him as a source of wisdom and lessons to draw on in critical moments, not as grounds for national pride with which to legitimize his political role.

All these features of the series are tied to one guiding political idea, one that Volodymyr Zelensky also upholds in his capacity as president:

Ukraine is a civic nation, not an ethnic one.

For both Goloborodko and Zelensky, it does not matter what language a Ukrainian speaks or what religion they profess. What matters is only that Ukrainians contribute whatever they can to the development of their country, so that it becomes a place people want to live in rather than leave. That is why, in the series, putting an end to corruption and defeating the oligarchs is of paramount importance. The message to Ukrainians is to rely on their own strength rather than on outside help – because for wealthy countries and international institutions to respect Ukraine, it has to show that it can manage on its own. And that can happen only if Ukrainians are united.

At the end of the third and final season there is a key moment. Ukraine, fragmented into a multitude of small statelets, begins to reunite under the leadership of Goloborodko, who has at last managed to defeat corruption, and people have started to live better. Only two regions refuse to return to Ukraine, because they cannot stand each other. These are the pro-Russian Donbas, which ironically bears the name USSR (Union of Free Self-Sufficient Republics), and the state west of Lviv, run by the nationalists who insist that only Ukrainian be spoken. Then a serious accident happens in Lviv, and the USSR comes to the rescue. The two feuding sides reconcile, which leads to the final reunification of Ukraine.

Had the scene been played the other way around, with the Ukrainian nationalists helping the USSR, the message would have been entirely different. The episode, broadcast immediately before the first round of Ukraine’s presidential election, instead demonstrates

a hand extended toward pro-Russian voters, rather than confrontation and a sense of superiority.

The aim of the gesture is not a curtsy to Russia, nor a willingness to compromise on Crimea and the Donbas, but rather an attempt at dialogue and at resolving conflicts peacefully. Against this background, the official Russian claims that Ukraine is Nazi and must be “denazified” look all the more cynical. As in many countries, there are nationalists in Ukraine, and some of them are far right. It is normal that, when there is armed action against a country’s territorial integrity, nationalists radicalize and take up arms to defend it. It is also understandable if the state does not turn down their help at such a moment. No conclusions about the country’s overall values can be drawn from that.

As for the use of the Russian language, it is not confined to the series and Zelensky’s evening show. The Ukrainian president and his team often speak Russian, even now, in the context of Russia’s invasion of Ukraine. Oleksiy Arestovych, the adviser to the presidential administration who has become known for his calm and reassuring messages during the war, also addresses his audience in Russian.

One interpretation of the use of Russian is that it is understood by a much wider audience. That alone would not be reason enough to use the language of the aggressor. For Ukrainians, however, Russian is not merely the language of the state that attacked them but also their lingua franca, even though Ukrainian is the official language. After long years within the Russian Empire and then the Soviet Union, a large share of Ukrainians use Russian in their everyday communication. Intermediate varieties between Ukrainian and Russian are also spoken in the country, while Ukrainian predominates in certain areas, especially in the west. And if the pro-Russian residents of Ukraine speak Russian, it by no means follows that everyone who speaks Russian thinks Ukraine should be part of Russia. Nor does it say anything about ethnic identity, still less about national identity.

Here in Bulgaria, the Ukrainian president has been compared to Slavi Trifonov.

Both are comedians, both had an evening show, and both went into politics. The analogy, however, holds only on the surface. There is a fundamental difference both in the two men’s public presence and in their political priorities.

Trifonov assumes the role of a leader, a prophet of sorts – he speaks to the people from on high and is inaccessible to journalists and to his own voters, let alone to his critics. Zelensky, like Goloborodko as his political projection, is down-to-earth and open to dialogue. He does not speak as the final authority but tries to persuade everyone, even his enemies. Trifonov offers bombastic populist innovations with unclear practical effect, such as his reform of the electoral system. For Goloborodko, the most important thing is to find people who do not steal, and to make the thieves give back what they have taken so that there is money in the budget. Trifonov flirts with nationalism, the idealization of history, and National Revival songs. For Goloborodko, and for Zelensky himself, as we can see from his statements during the war, national identity is not a function of the past, ethnicity, and language.

That is why Zelensky resembles Kiril Petkov more than he does Trifonov. Of course, we cannot know how Petkov would behave in the event of a military attack on Bulgaria, and let us hope we never have to find out. But Petkov, like Zelensky, sees the fight against corruption and putting a stop to theft as a top priority. In a similar way he seeks dialogue between feuding factions, and sometimes, to everyone’s surprise, he even succeeds. In the same way he likes to say “I am not afraid” (of what, and how much, history will show). And he approaches the assembling of his team with almost the same naivety as Vasiliy Goloborodko, bringing in people from his Harvard course at Sofia University because he knows them.

The series “Servant of the People” has been called “prophetic”

because Zelensky, who played the president of Ukraine in it, actually became president. But that is a matter not of prophecy but of a successful political campaign that caught the right messages and delivered them in the right way.

There are, however, moments in the comedy series that really can be read as prophetic, even if they do not come true exactly as written in the television script. This is especially true of the final episodes, in which Ukraine, before finally setting off in the right direction, has practically ceased to exist, and Europe is flooded by a wave of Ukrainian refugees. The show’s writers could hardly have imagined that this would actually happen, and in such a brutal way.

May the second part of the prophecy also come true: a Ukraine that is free, united, and prosperous, a country people want to live in.

Cover photo: A fragment of the poster for the series “Servant of the People”

Source

Where are your convicted oligarchs, Geshev?

Post Syndicated from Венелина Попова original https://toest.bg/de-sa-ti-geshev-osudenite-oligarsi/

On March 24, in an online press conference from Dubai, the businessman Vasil Bozhkov, who has been charged with 19 crimes, declared that he is ready to return to Bulgaria, provided that the prosecution withdraws the Interpol red notice against him. If we set aside the many interpretations of the topic and refrain from going into its legal aspects, it is only logical to ask:

does the state prosecution really want to question Vasil Bozhkov, or would it prefer that he remain outside the country?

The name of the emblematic gambling boss resurfaced after the arrests of Boyko Borissov, Vladislav Goranov, and Sevdelina Arnaudova, carried out by the police on the basis of the information contained in the complaint Bozhkov filed with the prosecution two years ago. The complaint was also sent to the president, the parliament, the Supreme Judicial Council, the Supreme Court of Cassation, Bulgaria’s political parties, and the embassies of the United Kingdom and the United States. In it, the businessman claims to have given around 60 million leva to the former prime minister and his finance minister in the form of a “peace of mind” fee.

Bozhkov says he does not know how the prosecution has interpreted the facts in the complaint, which over the years was passed back and forth between the Supreme Cassation Prosecution, the Specialized Prosecution, and the tax authorities, who were supposed to check where the money he withdrew from his bank accounts had gone. “Whatever information I hand to this prosecution service, it disappears,” the gambling boss said at the press conference. When he first filed the complaint, he announced that he had ample supporting evidence, including documents, bank statements, security camera footage, photographs, and witness testimony, but at Thursday’s press conference he declined to give further details about it.

The arrests of Borissov, Goranov, and Arnaudova, whom Vasil Bozhkov described to the journalists not merely as an organized criminal group but outright as a “junta and mafia,” ratcheted up tension in society and between institutions, and GERB announced a march to bring down the government and force new elections. The 24-hour detention of the leader of Bulgaria’s largest opposition party also had an international echo, which was one of the aims of taking him into custody. The chairman of the EPP group in the European Parliament, Manfred Weber, said on Wednesday that Borissov had been “abducted” and demanded a review of the legality of his detention. Two days earlier, the Progressive Alliance of Socialists and Democrats group in the EP issued a position calling on the judicial authorities in Sofia and Brussels to answer the questions surrounding Borissov’s rule, especially after the European Public Prosecutor’s Office confirmed that it had received numerous reports of suspected corruption and misuse of European funds.

As events unfolded after Borissov, Goranov, and Arnaudova were released from custody, another question was raised:

can Vasil Bozhkov be questioned remotely from Dubai?

According to opinions publicly expressed by jurists, among them Interior Minister Boyko Rashkov, there are no procedural obstacles to such questioning. Evidently there was no obstacle for the European Public Prosecutor’s Office either, which, by Bozhkov’s account, located and questioned him on March 8 of this year from the capital of the United Arab Emirates. A similar interrogation, though in written form, was also carried out by the National Police Directorate, he announced at the press conference. From him we also learned that he has been in contact with the US authorities, to whom he has given far more information than to the European prosecutors, in an effort to be removed from the Magnitsky list, on which he says he ended up “by mistake.” And what does it turn out in the end: only for Ivan Geshev’s prosecution service are there procedural obstacles to questioning Vasil Bozhkov remotely. The scenario of prosecutors questioning the businessman in Dubai has not even been discussed publicly.

Vasil Bozhkov, nicknamed “The Skull,” is considered one of the richest oligarchs in Bulgaria, and in internal correspondence of the US State Department he has been called “Bulgaria’s most notorious gangster.” Whether he was an accomplice of the “gang,” as he calls the people around former prime minister Borissov who he says racketeered him and whom he paid so that his gambling business could prosper, only a court can say. Just as only a court can say whether the 60 million, which he says he handed to those same people between 2017 and 2019 as a percentage of the profits of the National Lottery, meets the legal definition of a bribe. But the story Bozhkov has been presenting to the media from Dubai for two years certainly does not look crudely stitched together. Besides Borissov, Goranov, and Arnaudova, it features Menda Stoyanova, Danail Kirilov, Valeri Simeonov, Kiril Domuschiev, wads of 500-euro notes that fit into the pocket of a man’s jacket, and one (un)lit cigar.

This plot can be unraveled only by the investigating authorities in Bulgaria.

And the prosecution, the master of the investigation, can and must question Vasil Bozhkov. All the more so since he has declared his readiness to return to Bulgaria and has set a single condition for doing so. Which means he could then be investigated on all the charges brought against him, some of which he dismisses to the media as baseless and unsupported by evidence. If the chief prosecutor does not arrange for Vasil Bozhkov to be questioned, in whatever form, he will confirm to the public and to our partners in Brussels that he was chosen for this post to cover up corruption in the upper echelons of power, not to expose it.

With his actions so far, Ivan Geshev has also demonstrated double standards toward our homegrown oligarchs and businessmen, arresting and charging selectively only those who, for one reason or another, have become inconvenient for those in power. Yet to date the specialized judiciary, which is on the verge of being shut down, has not concluded a single one of these cases with a conviction. And some of those detained have filed claims against the chief prosecutor and his deputies for the moral damages inflicted on them. These actions of Geshev’s give the governing majority ample legitimacy to demand his resignation. And behind that demand stands the force of a disgusted public expecting justice.

Cover photo: A still from Vasil Bozhkov’s online press conference of March 24 this year, broadcast on his Facebook page

Source

Insights for CTOs: Part 3 – Growing your business with modern data capabilities

Post Syndicated from Syed Jaffry original https://aws.amazon.com/blogs/architecture/insights-for-ctos-part-3-growing-your-business-with-modern-data-capabilities/

This post was co-written with Jonathan Hwang, head of Foundation Data Analytics at Zendesk.


In my role as a Senior Solutions Architect, I have spoken to chief technology officers (CTOs) and executive leadership of large enterprises like big banks, software as a service (SaaS) businesses, mid-sized enterprises, and startups.

In this 6-part series, I share insights gained from various CTOs and engineering leaders during their cloud adoption journeys at their respective organizations. I have taken these lessons and summarized architecture best practices to help you build and operate applications successfully in the cloud. This series also covers building and operating cloud applications, security, cloud financial management, modern data and artificial intelligence (AI), cloud operating models, and strategies for cloud migration.

In Part 3, I’ve collaborated with the head of Foundation Analytics at Zendesk, Jonathan Hwang, to show how Zendesk incrementally scaled their data and analytics capabilities to effectively use the insights they collect from customer interactions. Read how Zendesk built a modern data architecture using Amazon Simple Storage Service (Amazon S3) for storage, Apache Hudi for row-level data processing, and AWS Lake Formation for fine-grained access control.

Why Zendesk needed to build and scale their data platform

Zendesk is a customer service platform that connects over 100,000 brands with hundreds of millions of customers via telephone, chat, email, messaging, social channels, communities, review sites, and help centers. They use data from these channels to make informed business decisions and create new and updated products.

In 2014, Zendesk’s data team built the first version of their big data platform in their own data center using Apache Hadoop for incubating their machine learning (ML) initiative. With that, they launched Answer Bot and Zendesk Benchmark report. These products were so successful they soon overwhelmed the limited compute resources available in the data center. By the end of 2017, it was clear Zendesk needed to move to the cloud to modernize and scale their data capabilities.

Incrementally modernizing data capabilities

Zendesk built and scaled their workload to use data lakes on AWS, but soon encountered new architecture challenges:

  • The General Data Protection Regulation (GDPR) “right to be forgotten” rule made it difficult and costly to maintain data lakes, because deleting a small piece of data required reprocessing large datasets.
  • Security and governance were harder to manage as the data lake scaled to a larger number of users.

The following sections show you how Zendesk is addressing GDPR rules by evolving from plain Apache Parquet files on Amazon S3 to Hudi datasets on Amazon S3 to enable row-level inserts, updates, and deletes. To address security and governance, Zendesk is migrating to AWS Lake Formation’s centralized security model for fine-grained access control at scale.

Zendesk’s data platform

Figure 1 shows Zendesk’s current data platform. It consists of three data pipelines: “Data Hub,” “Data Lake,” and “Self Service.”


Figure 1. Zendesk data pipelines

Data Lake pipelines

The Data Lake and Data Hub pipelines cover the entire lifecycle of the data from ingestion to consumption.

The Data Lake pipelines consolidate the data from Zendesk’s highly distributed databases into a data lake for analysis.

Zendesk uses AWS Database Migration Service (AWS DMS) for change data capture (CDC) from over 1,800 Amazon Aurora MySQL databases in eight AWS Regions. It detects transaction changes and applies them to the data lake using Amazon EMR and Hudi.

Zendesk ticket data consists of over 10 billion events and petabytes of data. The data lake files in Amazon S3 are transformed and stored in Apache Hudi format and registered on the AWS Glue catalog to be available as data lake tables for analytics querying and consumption via Amazon Athena.
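
For example, once those tables are registered in the Glue catalog, an analyst could run an ad hoc query from the AWS CLI along these lines (the database, table, column, and bucket names are hypothetical):

$ # Submit the query to Athena
$ QUERY_ID=$(aws athena start-query-execution \
      --query-string "SELECT ticket_id, status, updated_at FROM tickets WHERE brand_id = 42 LIMIT 10" \
      --query-execution-context Database=zendesk_lake \
      --result-configuration OutputLocation=s3://example-athena-results/ \
      --output text --query QueryExecutionId)

$ # Retrieve the results once the query has completed
$ aws athena get-query-results --query-execution-id "$QUERY_ID"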

Data Hub pipelines

The Data Hub pipelines focus on real-time events and streaming analytics use cases with Apache Kafka. Any application at Zendesk can publish events to a global Kafka message bus. Apache Flink ingests these events into Amazon S3.

The Data Hub provides high-quality business data that is highly available and scalable.

Self-managed pipeline

The self-managed pipelines empower product engineering teams to use the data lake for those use cases that don’t fit into our standard integration patterns. All internal Zendesk product engineering teams can use standard tools such as Amazon EMR, Amazon S3, Athena, and AWS Glue to publish their own analytics datasets and share them with other teams.

A notable example of this is Zendesk’s fraud detection engineering team. They publish their fraud detection data and findings through our self-managed data lake platform and use Amazon QuickSight for visualization.

You need fine-grained security and compliance

Data lakes can accelerate growth through faster decision making and product innovation. However, they can also bring new security and compliance challenges:

  • Visibility and auditability. Who has access to what data? What level of access do they have, and how, when, and by whom is the data being accessed?
  • Fine-grained access control. How do you define and enforce least privilege access to subsets of data at scale without creating bottlenecks or key person/team dependencies?

Lake Formation helps address these concerns by auditing data access and offering row- and column-level security and a delegated access control model to create data stewards for self-managed security and governance.

Zendesk used Lake Formation to build a fine-grained access control model that uses row-level security. It detects personally identifiable information (PII) while scaling the data lake for self-managed consumption.

Some Zendesk customers opt out of having their data included in ML or market research. Zendesk uses Lake Formation to apply row-level security to filter out records associated with a list of customer accounts who have opted out of queries. They also help data lake users understand which data lake tables contain PII by automatically detecting and tagging columns in the data catalog using AWS Glue’s PII detection algorithm.
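
As a sketch of how such a row-level filter can be defined with the AWS CLI (the account ID, database, table, and filter expression below are hypothetical, and the exact JSON shape should be checked against the Lake Formation documentation):

$ aws lakeformation create-data-cells-filter \
      --table-data '{
        "TableCatalogId": "111122223333",
        "DatabaseName": "zendesk_lake",
        "TableName": "tickets",
        "Name": "exclude-opted-out-accounts",
        "RowFilter": {"FilterExpression": "account_id NOT IN (1001, 1002)"},
        "ColumnWildcard": {}
      }'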

The value of real-time data processing

When you process and consume data closer to the time of its creation, you can make faster decisions. Streaming analytics design patterns, implemented using services like Amazon Managed Streaming for Apache Kafka (Amazon MSK) or Amazon Kinesis, create an enterprise event bus to exchange data between heterogeneous applications in near real time.

For example, it is common to use streaming to augment the traditional database CDC ingestion into the data lake with additional streaming ingestion of application events. CDC is a common data ingestion pattern, but the information can be too low level. This requires application context to be reconstructed in the data lake and business logic to be duplicated in two places, inside the application and in the data lake processing layer. This creates a risk of semantic misrepresentation of the application context.

Zendesk faced this challenge with their CDC data lake ingestion from their Aurora clusters. They created an enterprise event bus built with Apache Kafka to augment their CDC with higher-level application domain events to be exchanged directly between heterogeneous applications.

Zendesk’s streaming architecture

A CDC database ticket table schema can sometimes contain unnecessary and complex attributes that are application-specific and do not capture the domain model of the ticket. This makes it hard for downstream consumers to understand and use the data. A ticket domain object may span several database tables when modeled in third normal form, which makes downstream querying difficult for analysts. This is also a brittle integration method because downstream data consumers can easily be impacted when the application logic changes, which makes it hard to derive a common data view.

To move towards event-based communication between microservices, Zendesk created the Platform Data Architecture (PDA) project, which uses a standard object model to represent a higher level, semantic view of their application data. Standard objects are domain objects designed for cross-domain communication and do not suffer from the lower level fragmented scope of database CDC. Ultimately, Zendesk aims to transition their data architecture from a collection of isolated products and data silos into a cohesive unified data platform.


Figure 2. An application view of Zendesk’s streaming architecture

Figure 3 shows how all Zendesk products and users integrate through common standard objects and standard events within the Data Hub. Applications publish and consume standard objects and events to/from the event bus.

For example, a complete ticket standard object will be published to the message bus whenever it is created, updated, or changed. On the consumption side, these events get used by product teams to enable platform capabilities such as search, data export, analytics, and reporting dashboards.

Summary

As Zendesk’s business grew, their data lake evolved from simple Parquet files on Amazon S3 to a modern Hudi-based, incrementally updateable data lake. Their original coarse-grained IAM security policies have now been replaced with fine-grained access control through Lake Formation.

We have repeatedly seen this incremental architecture evolution achieve success because it reduces the business risk associated with the change and provides sufficient time for your team to learn and evaluate cloud operations and managed services.

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!


Horn: Racing against the clock

Post Syndicated from original https://lwn.net/Articles/889183/

Jann Horn describes in great detail the process he went through to exploit a tiny race window in the kernel.

Luckily for us, the race window contains the first few memory accesses to the struct file; therefore, by making sure that the struct file is not present in the fastest CPU caches, we can widen the race window by as much time as the memory accesses take. The standard way to do this is to use an eviction pattern / eviction set; but instead we can also make the cache line dirty on another core.

Ekstrand: How to write a Vulkan driver in 2022

Post Syndicated from original https://lwn.net/Articles/889176/

Over on the Collabora blog, Jason Ekstrand has a detailed look at writing a Vulkan graphics driver in today’s world. “Not only has Vulkan grown, but Mesa has as well, and we’ve built up quite a suite of utilities and helpers for making writing Vulkan drivers easier.” The blog post takes the form of a tutorial of sorts, though the end result is not a functioning Vulkan driver; rather, the framework of one is shown.

At the time we were developing ANV (the Intel Vulkan driver), the Vulkan spec itself was still under development and everything was constantly in flux. There were no best practices; there were barely even tools. Everyone working on Vulkan was making it up as they went because it was a totally new API. Most of the code we wrote was purpose-built for the Intel driver because there were no other Mesa drivers to share code. (Except for the short-lived LunarG Intel driver based in ilo, which we were replacing.) If we had tried to build abstractions, they could have gotten shot to pieces at any moment by a spec change. (We rewrote the descriptor set layout code from scratch at least five or six times before the driver ever shipped.) It was frustrating, exhausting, and a whole lot of fun.

These days, however, the Vulkan spec has been stable and shipping for six years, the tooling and testing situation is pretty solid, and there are six Vulkan drivers in the Mesa tree with more on the way. We’ve also built up a lot of common infrastructure. This is important both because it makes writing a Vulkan driver easier and because it lets us fix certain classes of annoying bugs in a common place instead of everyone copying and pasting those bugs.

AWS Lambda Now Supports Up to 10 GB Ephemeral Storage

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/aws-lambda-now-supports-up-to-10-gb-ephemeral-storage/

Serverless applications are event-driven, using ephemeral compute functions ranging from web APIs, mobile backends, and streaming analytics to data processing stages in machine learning (ML) and high-performance applications. While AWS Lambda includes a 512 MB temporary file system (/tmp) for your code, this is an ephemeral scratch resource not intended for durable storage such as Amazon Elastic File System (Amazon EFS).

However, extract, transform, and load (ETL) jobs and content generation workflows such as creating PDF files or media transcoding require fast, scalable local storage to process large amounts of data quickly. Data-intensive applications require large amounts of temporary data specific to the invocation, or cached data that can be reused across all invocations in the same execution environment in a highly performant manner. With the previous limit of 512 MB, customers had to selectively load data from Amazon Simple Storage Service (Amazon S3) and Amazon EFS, or increase the allocated function memory and thus increase their cost, just to handle large objects downloaded from Amazon S3. Because customers could not cache larger data locally in the Lambda execution environment, every function invocation had to read the data from its source again, which made scaling out harder for customers.

Today, we are announcing that AWS Lambda now allows you to configure ephemeral storage (/tmp) between 512 MB and 10,240 MB. You can now control the amount of ephemeral storage a function gets for reading or writing data, allowing you to use AWS Lambda for ETL jobs, ML inference, or other data-intensive workloads.

With increased AWS Lambda ephemeral storage, you get access to a secure, low-latency ephemeral file system of up to 10 GB. You can continue to use up to 512 MB for free and are charged for the amount of storage you configure over the free limit, for the duration of your invocations.
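To make this concrete, here is a minimal Python sketch of a Lambda handler that stages a large object from Amazon S3 in /tmp before processing it. The bucket, key, and processing step are hypothetical placeholders, and the sketch assumes the function has been configured with enough ephemeral storage to hold the object.

import os
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key, used for illustration only.
BUCKET = "example-media-bucket"
KEY = "videos/large-input.mp4"
LOCAL_PATH = "/tmp/large-input.mp4"

def handler(event, context):
    # With up to 10,240 MB of ephemeral storage configured, the function can
    # stage multi-gigabyte objects in /tmp instead of holding them in memory.
    if not os.path.exists(LOCAL_PATH):
        s3.download_file(BUCKET, KEY, LOCAL_PATH)

    size_mb = os.path.getsize(LOCAL_PATH) / (1024 * 1024)

    # Placeholder for the real work (transcoding, ETL, ML inference, and so on).
    return {"staged_file": LOCAL_PATH, "size_mb": round(size_mb, 1)}

Because /tmp contents can persist across warm invocations in the same execution environment, the sketch checks for an existing file before downloading again.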

Setting Larger Ephemeral Storage for Your Lambda Function
To configure your Lambda function with larger ephemeral storage, open the Configuration tab and choose General configuration in the AWS Lambda console. You will see a new Ephemeral storage setting, which defaults to 512 MB.

When you click the Edit button, you can configure the ephemeral storage from 512 MB to 10,240 MB in 1 MB increments for your Lambda functions.

With the AWS Command Line Interface (AWS CLI), you can update your desired size of ephemeral storage using the update-function-configuration command.

$ aws lambda update-function-configuration --function-name PDFGenerator \
              --ephemeral-storage '{"Size": 10240}'

You can also configure ephemeral storage through the Lambda API using the AWS SDKs, or with AWS CloudFormation. To learn more, see Configuring function options in the AWS Documentation.
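For example, a minimal boto3 sketch (reusing the placeholder function name from the CLI example above) might set the same value programmatically:

import boto3

lambda_client = boto3.client("lambda")

# Raise the ephemeral storage of the example function to the 10 GB maximum.
response = lambda_client.update_function_configuration(
    FunctionName="PDFGenerator",          # placeholder function name
    EphemeralStorage={"Size": 10240},     # size in MB, between 512 and 10240
)

print(response.get("EphemeralStorage"))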

As a review, AWS Lambda provides a comprehensive range of storage options. To learn more, see a great blog post, Choosing between AWS Lambda data storage options in web apps, written by my colleague James Beswick. I want to quote the table to show the differences between these options and common use-cases to help you choose the right one for your own applications.

| Features | Ephemeral Storage (/tmp) | Lambda Layers | Amazon EFS | Amazon S3 |
|---|---|---|---|---|
| Maximum size | 10,240 MB | 50 MB (direct upload) | Elastic | Elastic |
| Persistence | Ephemeral | Durable | Durable | Durable |
| Content | Dynamic | Static | Dynamic | Dynamic |
| Storage type | File system | Archive | File system | Object |
| Lambda event source integration | N/A | N/A | N/A | Native |
| Operations supported | Any file system operation | Immutable | Any file system operation | Atomic with versioning |
| Object tagging and metadata | N | N | N | Y |
| Pricing model | Included in Lambda (charged over 512 MB) | Included in Lambda | Storage + data transfer + throughput | Storage + requests + data transfer |
| Shared across all invocations | N | Y | Y | Y |
| Sharing/permissions model | Function-only | IAM | IAM + NFS | IAM |
| Source for AWS Glue and Amazon QuickSight | N | N | N | Y |
| Relative data access speed from Lambda | Fastest | Fastest | Very fast | Fast |

Available Now
You can now configure up to 10 GB of ephemeral storage per Lambda function instance in all Regions where AWS Lambda is available. With 10 GB container image support, 10 GB function memory, and now 10 GB of ephemeral function storage, you can support workloads such as processing large temporary files, data and media processing, machine learning inference, and financial analysis.

Support is also available through many AWS Lambda Partners such as HashiCorp (Terraform), Pulumi, Datadog, Splunk (SignalFx), Lumigo, Thundra, Dynatrace, Slalom, Cloudwiry, and Contino.

For this feature, you are charged for the storage you configure over the 512 MB free limit for the duration of your function invocations. To learn more, visit the AWS Lambda product and pricing page and send feedback through the AWS re:Post for AWS Lambda or your usual AWS Support contacts.

Channy

What to consider when migrating data warehouse to Amazon Redshift

Post Syndicated from Lewis Tang original https://aws.amazon.com/blogs/big-data/what-to-consider-when-migrating-data-warehouse-to-amazon-redshift/

Customers are migrating data warehouses to Amazon Redshift because it’s fast, scalable, and cost-effective. However, data warehouse migration projects can be complex and challenging. In this post, I help you understand the common drivers of data warehouse migration, migration strategies, and what tools and services are available to assist with your migration project.

Let’s first discuss the big data landscape, the meaning of a modern data architecture, and what you need to consider for your data warehouse migration project when building a modern data architecture.

Business opportunities

Data is changing the way we work, live, and play. All of this behavior change and the movement to the cloud has resulted in a data explosion over the past 20 years. The proliferation of the Internet of Things and smartphones has accelerated the amount of data generated every day. Business models have shifted, and so have the needs of the people running these businesses. We have moved from talking about terabytes of data just a few years ago to now petabytes and exabytes of data. By putting data to work efficiently and building deep business insights from the data collected, businesses in different industries and of various sizes can achieve a wide range of business outcomes. These can be broadly categorized into the following core business outcomes:

  • Improving operational efficiency – By making sense of the data collected from various operational processes, businesses can improve customer experience, increase production efficiency, and increase sales and marketing agility
  • Making more informed decisions – By developing more meaningful insights from a full picture of the data across an organization, businesses can make more informed decisions
  • Accelerating innovation – Combining internal and external data sources enables a variety of AI and machine learning (ML) use cases that help businesses automate processes and unlock business opportunities that were previously impossible or too difficult to pursue

Business challenges

Exponential data growth has also presented business challenges.

First of all, businesses need to access all data across the organization, and data may be distributed in silos. It comes from a variety of sources, in a wide range of data types and in large volume and velocity. Some data may be stored as structured data in relational databases. Other data may be stored as semi-structured data in object stores, such as media files and the clickstream data that is constantly streaming from mobile devices.

Secondly, to build insights from data, businesses need to dive deep into the data by conducting analytics. These analytics activities generally involve dozens or even hundreds of data analysts who need to access the system simultaneously. Having a performant system that scales to meet the query demand is often a challenge. It gets more complex when businesses need to share the analyzed data with their customers.

Last but not least, businesses need a cost-effective solution to address data silos, performance, scalability, security, and compliance challenges. Being able to visualize and predict cost is necessary for a business to measure the cost-effectiveness of its solution.

To solve these challenges, businesses need a future-proof modern data architecture and a robust, efficient analytics system.

Modern data architecture

A modern data architecture enables organizations to store any amount of data in open formats, break down disconnected data silos, empower users to run analytics or ML using their preferred tool or technique, and manage who has access to specific pieces of data with the proper security and data governance controls.

The AWS data lake architecture is a modern data architecture that enables you to store data in a data lake and use a ring of purpose-built data services around the lake, as shown in the following figure. This allows you to make decisions with speed and agility, at scale, and cost-effectively. For more details, refer to Modern Data Architecture on AWS.

Modern data warehouse

Amazon Redshift is a fully managed, scalable, modern data warehouse that accelerates time to insights with fast, easy, and secure analytics at scale. With Amazon Redshift, you can analyze all your data and get performance at any scale with low and predictable costs.

Amazon Redshift offers the following benefits:

  • Analyze all your data – With Amazon Redshift, you can easily analyze all your data across your data warehouse and data lake with consistent security and governance policies. We call this the modern data architecture. With Amazon Redshift Spectrum, you can query data in your data lake with no need for loading or other data preparation. And with data lake export, you can save the results of an Amazon Redshift query back into the lake. This means you can take advantage of real-time analytics and ML/AI use cases without re-architecture, because Amazon Redshift is fully integrated with your data lake. With new capabilities like data sharing, you can easily share data across Amazon Redshift clusters both internally and externally, so everyone has a live and consistent view of the data. Amazon Redshift ML makes it easy to do more with your data—you can create, train, and deploy ML models using familiar SQL commands directly in Amazon Redshift data warehouses.
  • Fast performance at any scale – Amazon Redshift is a self-tuning and self-learning system that allows you to get the best performance for your workloads without the undifferentiated heavy lifting of tuning your data warehouse with tasks such as defining sort keys and distribution keys; newer capabilities like materialized views, auto-refresh, and auto-query rewrite reduce that effort further. Amazon Redshift scales to deliver consistently fast results from gigabytes to petabytes of data, and from a few users to thousands. As your user base scales to thousands of concurrent users, the concurrency scaling capability automatically deploys the necessary compute resources to manage the additional load. Amazon Redshift RA3 instances with managed storage separate compute and storage, so you can scale each independently and only pay for the storage you need. AQUA (Advanced Query Accelerator) for Amazon Redshift is a new distributed and hardware-accelerated cache that automatically boosts certain types of queries.
  • Easy analytics for everyone – Amazon Redshift is a fully managed data warehouse that abstracts away the burden of detailed infrastructure management and performance optimization. You can focus on getting to insights, rather than performing maintenance tasks like provisioning infrastructure, creating backups, setting up the layout of data, and other tasks. You can operate on data in open formats, use familiar SQL commands, and take advantage of query visualizations available through the new Query Editor v2. You can also access data from any application through a secure data API without configuring software drivers or managing database connections. Amazon Redshift is compatible with business intelligence (BI) tools, opening up the power and integration of Amazon Redshift to business users who operate from within the BI tool.

A modern data architecture with a data lake architecture and modern data warehouse with Amazon Redshift helps businesses in all different sizes address big data challenges, make sense of a large amount of data, and drive business outcomes. You can start the journey of building a modern data architecture by migrating your data warehouse to Amazon Redshift.

Migration considerations

Data warehouse migration presents a challenge in terms of project complexity and poses a risk in terms of resources, time, and cost. To reduce the complexity of data warehouse migration, it’s essential to choose the right migration strategy based on your existing data warehouse landscape and the amount of transformation required to migrate to Amazon Redshift. The following are the key factors that can influence your migration strategy decision:

  • Size – The total size of the source data warehouse to be migrated is determined by the objects, tables, and databases that are included in the migration. A good understanding of the data sources and data domains required for moving to Amazon Redshift leads to an optimal sizing of the migration project.
  • Data transfer – Data warehouse migration involves data transfer between the source data warehouse servers and AWS. You can either transfer data over a network interconnection between the source location and AWS such as AWS Direct Connect or transfer data offline via the tools or services such as the AWS Snow Family.
  • Data change rate – How often do data updates or changes occur in your data warehouse? Your existing data warehouse’s data change rate determines the update intervals required to keep the source data warehouse and the target Amazon Redshift in sync. A source with a high data change rate requires the switchover from the source to Amazon Redshift to complete within an update interval, which leads to a shorter migration cutover window.
  • Data transformation – Moving your existing data warehouse to Amazon Redshift is a heterogenous migration involving data transformation such as data mapping and schema change. The complexity of data transformation determines the processing time required for an iteration of migration.
  • Migration and ETL tools – The selection of migration and extract, transform, and load (ETL) tools can impact the migration project. For example, the efforts required for deployment and setup of these tools can vary. We look closer at AWS tools and services shortly.

After you have factored in all these considerations, you can pick a migration strategy option for your Amazon Redshift migration project.

Migration strategies

You can choose from three migration strategies: one-step migration, two-step migration, or wave-based migration.

One-step migration is a good option for databases that don’t require continuous operation, such as continuous replication to keep ongoing data changes in sync between the source and destination. You can extract existing databases as comma-separated values (CSV) files or in a columnar format like Parquet, then use AWS Snow Family services such as AWS Snowball to deliver the datasets to Amazon Simple Storage Service (Amazon S3) for loading into Amazon Redshift. You then test the destination Amazon Redshift database for data consistency with the source. After all validations have passed, the database is switched over to AWS.

Two-step migration is commonly used for databases of any size that require continuous operation, such as continuous replication. During the migration, the source databases have ongoing data changes, and continuous replication keeps data changes in sync between the source and Amazon Redshift. The breakdown of the two-step migration strategy is as follows:

  • Initial data migration – The data is extracted from the source database, preferably during non-peak usage to minimize the impact. The data is then migrated to Amazon Redshift by following the one-step migration approach described previously.
  • Changed data migration – Data that changed in the source database after the initial data migration is propagated to the destination before switchover. This step synchronizes the source and destination databases. After all the changed data is migrated, you can validate the data in the destination database and perform necessary tests. If all tests are passed, you then switch over to the Amazon Redshift data warehouse.

Wave-based migration is suitable for large-scale data warehouse migration projects. The principle of wave-based migration is to divide a complex migration project into multiple logical and systematic waves. This strategy can significantly reduce the complexity and risk. You start from a workload that covers a good number of data sources and subject areas with medium complexity, then add more data sources and subject areas in each subsequent wave. With this strategy, you run both the source data warehouse and Amazon Redshift production environments in parallel for a certain amount of time before you can fully retire the source data warehouse. See Develop an application migration methodology to modernize your data warehouse with Amazon Redshift for details on how to identify and group data sources and analytics applications to migrate from the source data warehouse to Amazon Redshift using the wave-based migration approach.

To guide your migration strategy decision, refer to the following table to map the consideration factors with a preferred migration strategy.

| Consideration | One-Step Migration | Two-Step Migration | Wave-Based Migration |
|---|---|---|---|
| Number of subject areas in migration scope | Small | Medium to Large | Medium to Large |
| Data transfer volume | Small to Large | Small to Large | Small to Large |
| Data change rate during migration | None | Minimal to Frequent | Minimal to Frequent |
| Data transformation complexity | Any | Any | Any |
| Migration change window for switching from source to target | Hours | Seconds | Seconds |
| Migration project duration | Weeks | Weeks to Months | Months |

Migration process

In this section, we review the three high-level steps of the migration process. The two-step migration strategy and wave-based migration strategy involve all three migration steps. However, the wave-based migration strategy includes a number of iterations. Because only databases that don’t require continuous operations are good fits for one-step migration, only Steps 1 and 2 in the migration process are required.

Step 1: Convert schema and subject area

In this step, you make the source data warehouse schema compatible with the Amazon Redshift schema by converting the source data warehouse schema using schema conversion tools such as the AWS Schema Conversion Tool (AWS SCT) and other tools from AWS partners. In some situations, you may also be required to use custom code to conduct complex schema conversions. We dive deeper into AWS SCT and migration best practices in a later section.

Step 2: Initial data extraction and load

In this step, you complete the initial data extraction and load the source data into Amazon Redshift for the first time. You can use AWS SCT data extractors to extract data from the source data warehouse and load it to Amazon S3 if your data size and data transfer requirements allow you to transfer data over the interconnected network. Alternatively, if there are limitations such as network capacity, you can load the data to an AWS Snowball device, from which it is then loaded to Amazon S3. When the data from the source data warehouse is available on Amazon S3, it’s loaded into Amazon Redshift. In situations where the source data warehouse’s native tools do a better unload and load job than the AWS SCT data extractors, you may choose to use those native tools to complete this step.
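Once the extracted files are available in Amazon S3, the load into Amazon Redshift is typically a COPY operation. The following sketch issues such a command through the Amazon Redshift Data API with boto3; the cluster identifier, database, user, table, bucket, and IAM role are all hypothetical placeholders, and the exact load options depend on your file format.

import boto3

redshift_data = boto3.client("redshift-data")

# All identifiers below are illustrative placeholders.
copy_sql = """
    COPY sales.orders
    FROM 's3://example-migration-staging/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftCopyRole'
    FORMAT AS PARQUET;
"""

response = redshift_data.execute_statement(
    ClusterIdentifier="example-redshift-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql=copy_sql,
)

# The Data API is asynchronous; poll describe_statement with this Id to
# check whether the COPY has finished.
print(response["Id"])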

Step 3: Delta and incremental load

In this step, you use AWS SCT and sometimes the source data warehouse’s native tools to capture and load delta or incremental changes from sources to Amazon Redshift. This is often referred to as change data capture (CDC). CDC is a process that captures changes made in a database and ensures that those changes are replicated to a destination such as a data warehouse.
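AWS Database Migration Service (AWS DMS), which AWS SCT can orchestrate as described in the next section, is one common way to run CDC into Amazon Redshift. As a rough sketch only, the boto3 call below creates an ongoing-replication task; it assumes the source and target endpoints and a replication instance already exist, and every ARN and identifier is a hypothetical placeholder.

import json
import boto3

dms = boto3.client("dms")

# Replicate ongoing changes for every table in a hypothetical "sales" schema.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-sales-schema",
        "object-locator": {"schema-name": "sales", "table-name": "%"},
        "rule-action": "include",
    }]
}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="example-cdc-task",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",    # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",    # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",  # placeholder
    MigrationType="cdc",  # capture and apply ongoing changes only
    TableMappings=json.dumps(table_mappings),
)

print(task["ReplicationTask"]["Status"])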

You should now have enough information to start developing a migration plan for your data warehouse. In the following section, I dive deeper into the AWS services that can help you migrate your data warehouse to Amazon Redshift, and the best practices of using these services to accelerate a successful delivery of your data warehouse migration project.

Data warehouse migration services

Data warehouse migration involves a set of services and tools to support the migration process. You begin with creating a database migration assessment report and then converting the source data schema to be compatible with Amazon Redshift by using AWS SCT. To move data, you can use the AWS SCT data extraction tool, which has integration with AWS Data Migration Service (AWS DMS) to create and manage AWS DMS tasks and orchestrate data migration.

To transfer source data over the interconnected network between the source and AWS, you can use AWS Storage Gateway, Amazon Kinesis Data Firehose, Direct Connect, AWS Transfer Family services, Amazon S3 Transfer Acceleration, and AWS DataSync. For data warehouse migration involving a large volume of data, or if there are constraints with the interconnected network capacity, you can transfer data using the AWS Snow Family of services. With this approach, you can copy the data to the device, send it back to AWS, and have the data copied to Amazon Redshift via Amazon S3.

AWS SCT is an essential service to accelerate your data warehouse migration to Amazon Redshift. Let’s dive deeper into it.

Migrating using AWS SCT

AWS SCT automates much of the process of converting your data warehouse schema to an Amazon Redshift database schema. Because the source and target database engines can have many different features and capabilities, AWS SCT attempts to create an equivalent schema in your target database wherever possible. If no direct conversion is possible, AWS SCT creates a database migration assessment report to help you convert your schema. The database migration assessment report provides important information about the conversion of the schema from your source database to your target database. The report summarizes all the schema conversion tasks and details the action items for schema objects that can’t be converted to the DB engine of your target database. The report also includes estimates of the amount of effort that it will take to write the equivalent code in your target database that can’t be converted automatically.

Storage optimization is the heart of a data warehouse conversion. When using your Amazon Redshift database as a source and a test Amazon Redshift database as the target, AWS SCT recommends sort keys and distribution keys to optimize your database.

With AWS SCT, you can convert the following data warehouse schemas to Amazon Redshift:

  • Amazon Redshift
  • Azure Synapse Analytics (version 10)
  • Greenplum Database (version 4.3 and later)
  • Microsoft SQL Server (version 2008 and later)
  • Netezza (version 7.0.3 and later)
  • Oracle (version 10.2 and later)
  • Snowflake (version 3)
  • Teradata (version 13 and later)
  • Vertica (version 7.2 and later)

At AWS, we continue to release new features and enhancements to improve our product. For the latest supported conversions, visit the AWS SCT User Guide.

Migrating data using AWS SCT data extraction tool

You can use an AWS SCT data extraction tool to extract data from your on-premises data warehouse and migrate it to Amazon Redshift. The agent extracts your data and uploads the data to either Amazon S3 or, for large-scale migrations, an AWS Snowball Family service. You can then use AWS SCT to copy the data to Amazon Redshift. Amazon S3 is a storage and retrieval service. To store an object in Amazon S3, you upload the file you want to store to an S3 bucket. When you upload a file, you can set permissions on the object and also on any metadata.
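To illustrate that last point, here is a minimal boto3 sketch that uploads an extracted file to S3 with object metadata and an access control setting attached; the file name, bucket, key, metadata values, and ACL choice are hypothetical.

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket, key, and metadata, used for illustration only.
with open("orders_extract_0001.parquet", "rb") as body:
    s3.put_object(
        Bucket="example-migration-staging",
        Key="orders/orders_extract_0001.parquet",
        Body=body,
        ACL="bucket-owner-full-control",  # optional canned ACL
        Metadata={
            "source-system": "onprem-dw",
            "extract-batch": "0001",
        },
    )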

In large-scale migrations involving data upload to an AWS Snowball Family service, you can use wizard-based workflows in AWS SCT to automate the process, in which the data extraction tool orchestrates AWS DMS to perform the actual migration.

Considerations for Amazon Redshift migration tools

To improve and accelerate data warehouse migration to Amazon Redshift, consider the following tips and best practices. This list is not exhaustive. Make sure you have a good understanding of your data warehouse profile and determine which best practices you can use for your migration project.

  • Use AWS SCT to create a migration assessment report and scope migration effort.
  • Automate migration with AWS SCT where possible. The experience from our customers shows that AWS SCT can automatically create the majority of DDL and SQL scripts.
  • When automated schema conversion is not possible, use custom scripting for the code conversion.
  • Install AWS SCT data extractor agents as close as possible to the data source to improve data migration performance and reliability.
  • To improve data migration performance, properly size the Amazon Elastic Compute Cloud (Amazon EC2) instances, or equivalent virtual machines, that the data extractor agents are installed on.
  • Configure multiple data extractor agents to run multiple tasks in parallel to improve data migration performance by maximizing the usage of the allocated network bandwidth.
  • Adjust AWS SCT memory configuration to improve schema conversion performance.
  • Use Amazon S3 to store the large objects such as images, PDFs, and other binary data from your existing data warehouse.
  • To migrate large tables, use virtual partitioning and create sub-tasks to improve data migration performance.
  • Understand the use cases of AWS services such as Direct Connect, the AWS Transfer Family, and the AWS Snow Family. Select the right service or tool to meet your data migration requirements.
  • Understand AWS service quotas and make informed migration design decisions.

Summary

Data is growing in volume and complexity faster than ever. However, only a fraction of this invaluable asset is available for analysis. Traditional on-premises data warehouses have rigid architectures that don’t scale for modern big data analytics use cases. These traditional data warehouses are expensive to set up and operate, and require large upfront investments in both software and hardware.

In this post, we discussed Amazon Redshift as a fully managed, scalable, modern data warehouse that can help you analyze all your data and achieve performance at any scale with low and predictable cost. To migrate your data warehouse to Amazon Redshift, you need to consider a range of factors, such as the total size of the data warehouse, data change rate, and data transformation complexity, before picking a suitable migration strategy and process to reduce the complexity and cost of your data warehouse migration project. With AWS services such as AWS SCT and AWS DMS, and by adopting the tips and best practices of these services, you can automate migration tasks, scale migration, accelerate the delivery of your data warehouse migration project, and delight your customers.


About the Author

Lewis Tang is a Senior Solutions Architect at Amazon Web Services based in Sydney, Australia. Lewis provides partners with guidance on a broad range of AWS services and helps them accelerate their AWS practice growth.

4 Fallacies That Keep SMBs Vulnerable to Ransomware, Pt. 1

Post Syndicated from Ryan Weeks original https://blog.rapid7.com/2022/03/24/four-fallacies-that-keep-smbs-vulnerable-to-ransomware-pt-1/


Ransomware has focused on big-game hunting of large enterprises in recent years, and those events often make the headlines. The risk can be even more serious for small and medium-sized businesses (SMBs), who both struggle to understand the changing nature of the threats and lack the resources to become cyber resilient. Ransomware poses a greater threat to SMBs’ core ability to continue operating, as recovery can be impossible or expensive beyond their means.

SMBs commonly seek assistance from managed services providers (MSPs) for the foundational IT needs of running their business — MSPs have been the virtual CIOs for SMBs for years. Increasingly, SMBs are also turning to their MSP partners to help them fight the threat of ransomware, implicitly asking them to take on the role of a virtual CISO as well. These MSPs have working knowledge of ransomware and are uniquely situated to assist SMBs that are ready to go on a cyber resilience journey.

With this expert assistance available, one would think that we would be making more progress on ransomware. However, MSPs are still meeting resistance when working to implement a cyber resilience plan for many SMBs.

In our experience working with MSPs and hearing the challenges they face with SMBs, we have come to the conclusion that much of this resistance they meet is based on under-awareness, biases, or fallacies.

In this two-part blog series, we will present four common mistakes SMBs make when thinking about ransomware risk, allowing you to examine your own beliefs and draw new conclusions. We contend that until SMBs that resist resilience improvements do the work to unwind these critical flaws in thinking, ransomware will continue to be a growing, even existential, problem for them.

1. Relying on flawed thinking

I’m concerned about the potential impacts of ransomware, but I do not have anything valuable that an attacker would want, so ransomware is not likely to happen to me.

Formal fallacies

These arguments are the most common form of resistance toward implementing adequate cyber resilience for SMBs, and they create a rationalization for inaction as well as a false sense of safety. However, they are formal fallacies, relying on common beliefs that are partially informed by cognitive biases.

Formal fallacies can best be classified simply as deductively invalid arguments that typically commit an easily recognizable logical error when properly examined. Either the premises are untrue, or the argument is invalid due to a logical flaw.

Looking at this argument, the conclusion “ransomware will not happen to me” is the logical conclusion of the prior statement, “I have nothing of value to an attacker.” The flaw in this argument is that the attacker does not need the data they steal or hold ransom to be intrinsically valuable to them — they only need it to be valuable to the attack target.

Data that is intrinsically valuable is nice to have for an attacker, as they can monetize it outside of the attack by exfiltrating it and selling it (potentially multiple times), but the primary objective is to hold it ransom, because you need it to run your business. Facing this fact, we can see that the conclusion “ransomware will not happen to me” is logically invalid based on the premise “I have nothing of value to an attacker.”

Confirmation bias

The belief “ransomware will not happen to me” can also be a standalone argument. The challenge here is that the premise of the argument is unknown. This means we need data to support probability. With insufficient reporting data to capture accurate rates of ransomware on SMBs, this is problematic and can lead to confirmation bias. If I can’t find data on others like me as an SMB, then I may conclude that this confirms I’m not at risk.

Anchoring bias

I may be able to find data in aggregate that states that my SMB’s industries are not as commonly targeted. This piece of data can lead to an anchoring bias, which is the tendency to rely heavily on the first piece of information we are given. While ransomware might not be as common in your industry, that does not mean it does not exist. We need to research further rather than latching onto this data to anchor our belief.

Acknowledge and act

The best way to combat these formal fallacies and biases is for the SMB and their MSP to acknowledge these beliefs and act to challenge them through proper education. Below are some of the most effective exercises we have seen SMBs and MSPs use to better educate themselves on real versus perceived ransomware risk likelihood:

  1. Threat profiling is an exercise that collects information, from vendor partners and open-source intelligence sources, to inform which threat actors are likely to target the business, using which tactics.
  2. Data flow diagrams can help you to map out your unique operating environment and see how all your systems connect together to better inform how data moves and resides within your IT environment.
  3. A risk assessment uses the threat profile information and overlays on the data flow diagram to determine where the business is most susceptible to attacker tactics.
  4. Corrective action planning is the last exercise, where you prioritize the largest gaps in protection using a threat- and risk-informed approach.

2. Being resigned to victimhood

“Large companies and enterprises get hit with ransomware all the time. As an SMB, I don’t stand a chance. I don’t have the resources they do. This is hopeless; there’s nothing I can do about it.”

Victim mentality

This past year has seen a number of companies that were supposedly “too large and well-funded to be hacked” reporting ransomware breaches. It feels like there is a constant stream of information reinforcing the mentality that, even with a multi-million dollar security program, an SMB will not be able to effectively defend against the adverse outcomes of ransomware. This barrage of information can make SMBs feel a loss of control and that the world is against them.

Learned helplessness

These frequent negative outcomes for “prepared” organizations are building a sense of learned helplessness, or powerlessness, within the SMB space. If a well-funded and organized company can’t stop ransomware, why should we even try?

This mentality takes a binary view on a ransomware attack, viewing it as an all-or-nothing event. In reality, there are degrees of success of a ransomware attack. The goal of becoming immune to ransomware can spark feelings of learned helplessness, but if you reframe it as minimizing the damage a successful attack will have, this allows you to regain a sense of control in what otherwise may feel like an impossible effort.

Pessimism bias

This echo chamber of successful attacks (and thus presumed unsuccessful mitigations) is driving a pessimism bias. As empathetic beings, we feel the pain of these attacked organizations as though it were our own. We then tie this negative emotion to our expectation of an event (i.e. a ransomware attack), creating the expectation of a negative outcome for our own organization.

Acknowledge and act

Biases and beliefs shape our reality. If an SMB believes they are going to fall victim to ransomware and fails to protect against it, they actually make that exact adverse outcome more likely.

Despite the fear and uncertainty, the most important variable missing from this mental math is environment complexity. The more complex the environment, the more difficult it is to protect. SMBs have an advantage over their large-business counterparts, as the SMB IT environment is usually easier to control with the right in-house tech staff and/or MSP partners. That means SMBs are better situated than large companies to deter and recover from attacks — with the right strategic investments.

Check back with us next week, when we’ll tackle the third and fourth major fallacies that hold SMBs back from securing themselves against ransomware.


Migration updates announced at re:Invent 2021

Post Syndicated from Angélica Ortega original https://aws.amazon.com/blogs/architecture/migration-updates-announced-at-reinvent-2021/

re:Invent is a yearly event that offers learning and networking opportunities for the global cloud computing community. 2021 marks the launch of several new features in different areas of cloud services and migration.

In this blog, we’ll cover some of the most important recent announcements.

AWS Mainframe Modernization (Preview)

Mainframe modernization has become a necessity for many companies. One of the main drivers fueling this requirement is the need for agility, as the market constantly demands new functionalities. The mainframe platform, due to its complex dependencies, long procurement cycles, and escalating costs, makes it impossible for companies to innovate at the needed pace.

Mainframe modernization can be a complex undertaking. To assist you, we have launched a comprehensive platform, called AWS Mainframe Modernization, that enables two popular migration patterns: replatforming, and automated refactoring.

Figure 1. AWS Mainframe Modernization flow

AWS Migration and Modernization Competency

Application modernization is becoming an important migration strategy, especially for strategic business applications. It brings many benefits: software licensing and operation cost optimization, better performance, agility, resilience, and more. Selecting a partner with the required expertise can help reduce the time and risk for these kinds of projects. In the next section, you’ll find a summary of the experience required by a partner to get the AWS Migration and Modernization Competency. More information can be found at AWS Migration Competency Partners.

AWS Application Migration Service (AWS MGN)

AWS MGN is recommended as the primary migration service for lift and shift migrations. Customers currently using AWS Server Migration Service are encouraged to switch to it for future migrations.

Starting in November 2021, AWS MGN supports agentless replication from VMware vCenter versions 6.7 and 7.0 to the AWS Cloud. This new feature is intended for users who want to rehost their applications to AWS, but cannot install the AWS Replication Agent on individual servers due to company policies or technical restrictions.

AWS Elastic Disaster Recovery

Two of the pillars of the Well-Architected Framework are Operational Excellence and Reliability. Both are directly concerned with the capability of a service to recover and work efficiently. AWS Elastic Disaster Recovery is a new service that helps you minimize downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications, using storage, compute, point-in-time recovery, and cost optimization.

AWS Resilience Hub

AWS Resilience Hub is a service designed to help customers define, measure, and manage the resilience of their applications in the cloud. This service helps you define RTO (Recovery Time Objective) and RPO (Recovery Point Objective) and evaluates the configuration to meet the requirements defined. Aligned with the AWS Well-Architected Framework, this service can recover applications deployed with AWS CloudFormation, and integrates with AWS Fault Injection Simulator, AWS Systems Manager, or Amazon CloudWatch.

AWS Migration Hub Strategy Recommendations

One of the critical tasks in a migration is determining the right strategy. AWS Migration Hub can help you build a migration and modernization strategy for applications running on-premises or in AWS. AWS Migration Hub Strategy Recommendations were announced in October 2021. The feature is designed to be the starting point for your cloud journey and helps you assess the appropriate strategy to transform your portfolios to use the full benefits of cloud services.

AWS Migration Hub Refactor Spaces (Preview)

Refactoring is the migration strategy that requires the biggest effort, but it permits you to take full advantage of cloud-native features to improve agility, performance, and scalability. AWS Migration Hub Refactor Spaces is the starting point for incremental application refactoring to microservices in AWS. It will help you reduce the undifferentiated heavy lifting of building and operating your AWS infrastructure for incremental refactoring.

AWS Database Migration Service

AWS Database Migration Service (AWS DMS) is a service that helps you migrate databases to AWS quickly and securely.

AWS DMS Fleet Advisor is a new free feature of AWS DMS that enables you to quickly build a database and analytics migration plan, by automating the discovery and analysis of your fleet. AWS DMS Fleet Advisor is intended for users looking to migrate a large number of database and analytic servers to AWS.

AWS Microservice Extractor for .NET is a new free tool that simplifies the process of re-architecting applications into smaller code projects. Modernize and transform your .NET applications with an assistive tool that analyzes source code and runtime metrics. It creates a visual representation of your application and its dependencies.

This tool visualizes your application’s source code, helps with code refactoring, and assists in extracting the code base into separate code projects. Teams can then develop, build, and operate these independently to improve agility, uptime, and scalability.

AWS Migration Evaluator

AWS Migration Evaluator (ME) is a migration assessment service that helps you create a directional business case for AWS Cloud planning and migration. Building a business case for the cloud can be a time-consuming process on your own. With Migration Evaluator, organizations can accelerate their evaluation and decision-making for migration to AWS. During 2021, several improvements were introduced:

  • Quick Insights. This new capability of Migration Evaluator provides customers with a one-page summary of their projected AWS costs, based on measured on-premises provisioning and utilization.
  • Enhanced Microsoft SQL Discovery. This is a new feature of the Migration Evaluator Collector that assists you by including your SQL Server environment in your migration assessment.
  • Agentless Collection for Dependency Mapping. The ME Collector now enables agentless network traffic collection to be sent to the customer’s AWS Migration Hub account.

AWS Amplify Studio

This is a visual development environment that offers frontend developers new features to accelerate UI development with minimal coding, while integrating with Amplify. Read Introducing AWS Amplify Studio.

Conclusion

Migration is a crucial process for many enterprises as they move from on-premises systems to the cloud. It helps accelerate your cloud journey, and offers additional tools and methodologies created by AWS. AWS has created and is continually improving services and features to optimize the migration process and help you reach your business goals faster.


Calling All Security Researchers: Join the Backblaze Bug Bounty Program

Post Syndicated from Ola Nordstrom original https://www.backblaze.com/blog/calling-all-security-researchers-join-the-backblaze-bug-bounty-program/

Here at Backblaze, we help people build applications, host content, manage media, back up and archive data, and more, all securely in the cloud—and that “securely” part of the equation has always been paramount. We use a variety of tools and techniques to stay ahead of potential security threats, including our participation over the past year-plus in the Bugcrowd security platform. Today, we are opening up our Bugcrowd Bug Bounty Program to all security researchers.

Now, anyone can join Bugcrowd and start hacking away at our desktop and mobile apps, APIs, or web applications in order to help us find any vulnerabilities and strengthen the security of our services. Read on to learn more about the program and the other measures we take to spot and address potential security vulnerabilities.

Join Ola Nordstrom, Lead Application Security Engineer; Chris Vickery, Senior Risk Assessment Specialist; and Pat Patterson, Chief Developer Evangelist, on April 21, 2022 at 1 p.m. PDT to learn more about why we decided to implement the Bugcrowd Bug Bounty Program, how it fits into the Backblaze security portfolio, and how you can join in on either side: as hacker or hackee.
 
➔ Register for the Webinar Today

How Backblaze Keeps Customer Data Safe

Over the years, Backblaze has consistently invested in maintaining and upgrading its security portfolio. User files are encrypted by default; we also support server-side encryption for the Backblaze S3 Compatible API; and we have doubled the size of our Security team over the last year under the leadership of CISO Mark Potter.

But all of those security features, and frankly all software, not just Backblaze’s, are vulnerable to security bugs that can expose user information and data. Oftentimes, these are caused by implementation mistakes or by changes in how a piece of software is used over time. The recent Log4j (aka Log4Shell) vulnerability affected nearly everyone due to the library’s ubiquitous use across software platforms and the industry as a whole.

I’ve been working to secure software my whole career. Before the advent of crowdsourced security platforms such as Bugcrowd, managing vulnerability reports was a painful task. Emails, typically sent to [email protected], were copied back and forth between bug tracking platforms. Reviewing submissions and gathering metrics was difficult since every engineering team or organization always had their own process for tagging and categorizing bug reports. Everything was copied back and forth to make any sense of the data (Think Excel spreadsheets!). In a world where zero-day vulnerabilities are commonplace, such processes are just too slow and you end up playing catch-up with the bad guys.

How Does Bugcrowd Fit Into the Backblaze Security Portfolio?

Bugcrowd takes the grunt work out of the process to let us focus on addressing the vulnerability and communicating with researchers. Bugcrowd encourages white hat hackers to attack businesses, find vulnerabilities in their software and processes, and aid in guiding the remediation of those vulnerabilities before they can be exploited by anyone else.

What’s more, and perhaps most important to security researchers around the world, is that Bugcrowd allows us to pay security researchers for finding vulnerabilities. Without Bugcrowd, Backblaze wouldn’t have a cost-effective way to pay for a bug report from a researcher in another country or another continent. It’s only fair we pay for the work they do to help us out, and in addition, having a public program ensures transparency and fairness for everyone.

How You Can Join the Backblaze Bugcrowd Bug Bounty Program

Backblaze’s private beta has been running for over a year, but now that the program is public, any interested security researcher can sign up to hack away at the company’s in-scope products and networks. If you think you’ve found a vulnerability or you’d like more information about the in-scope products, URLs, or bounty ranges, check out the Backblaze Bugcrowd Bug Bounty Program here. And, don’t forget to register for our webinar to learn more about the program.

The post Calling All Security Researchers: Join the Backblaze Bug Bounty Program appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.
