Tag Archives: windows

Remote Log Collection on Windows

Post Syndicated from Bozho original https://techblog.bozho.net/remote-log-collection-on-windows/

Every organization needs to collect logs from multiple sources in order to put them in either a log collector or SIEM (or a dedicated audit trail solution). And there are two options for that – agent-based and agentless collection.

Using an agent is easy – you install a piece of software on each machine that generates logs and it forwards them wherever needed. This is however not preferred by many organizations as it complicates things – upgrading to new versions, keeping track of dozens of configurations, and potentially impacting performance of the target machines.

So some organizations prefer to collect logs remotely, or use standard tooling, already present on the target machine. For Linux that’s typically syslog, where forwarding is configured. Logs can also be read remotely via SCP/SSH.

However, on Windows things are less straightforward. You need to access the Windows Event Log facility remotely, but there is barely a single place that describes all the required steps. This blogpost comes close, but I’d like to provide the full steps, as there are many, many things that one may miss. It is a best practice to use a non-admin service account for that, and you have to grant multiple permissions to allow reading the event logs remotely.

There are also multiple ways to read the logs remotely:

  • Through the Event Viewer UI – it’s the simplest to get right, as only one domain group is required for access
  • Through Win32 native API calls (and DCOM) – i.e. EvtOpenSession and the related methods
  • Through PowerShell Get-WinEvent (Get-EventLog is a legacy cmdlet that doesn’t support remoting) – see the example right after this list
  • Through WMI directly (e.g. this or this). To be honest, I don’t know whether the native calls and the PowerShell cmdlets use WMI and/or CIM underneath as well – they probably do.
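
For example, once the permissions described below are in place, remote reading via Get-WinEvent might look like this (a minimal sketch – the domain, server name and service account are placeholders):

# Prompt for the service account's password, then pull the 50 newest Security events from the target host
$cred = Get-Credential CORP\svc-logreader
Get-WinEvent -ComputerName target-server -LogName Security -MaxEvents 50 -Credential $cred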

So, in order to get these options running, the following configurations have to be done:

  1. Allow the necessary network connections to the target machines (through network rules and firewall rules, if applicable)
  2. Go to Windows Firewall -> Inbound rules and enable the rules regarding “Remote log management”
  3. Create a service account and configure it in the remote collector. The other option is to have an account on the collector machine that is given the proper access, so that you can use the integrated AD authentication
  4. Add the account to the following domain groups: Event Log Readers and Distributed COM Users. The linked article above mentions “Remote Management Users” as well, but that’s optional if you just want to read the logs
  5. Give the “Manage auditing and security log” privilege to the service account through group policies (GPO) or via “local security policy”. Find it under User Rights Assignment > Manage auditing and security log
  6. Give WMI access – open “wmimgmt” -> right click -> Properties -> Security -> Advanced and allow the service account to “Execute Methods”, “Provider Write”, “Enable Account” and “Remote Enable”. To be honest, I’m not sure exactly which namespace (folder) that should be applied to, and applying it to the root may be too wide, so you’d have to experiment
  7. Give registry permissions: Regedit -> Local machine -> System\CurrentControlSet\Services\eventlog\Security -> right click -> permissions and add the service account. According to the linked post you also have to modify a particular registry entry, but that’s not required just for reading the log. This step is probably the most bizarre and unexpected one.
  8. Make sure you have DCOM rights. This comes automatically with the Distributed COM Users group, but double check via DCOMCnfg -> right click -> COM Security
  9. Grant permissions for the service account on c:\windows\system32\winevt. This step is not required for “simple” reading of the logs, but I’ve seen it in various places, so in some scenarios you might need to check it
  10. Make sure the application or service that is reading the logs remotely has sufficient permissions – it can usually run with admin privileges, because it’s on a separate, dedicated machine.
  11. Restart services – that is optional, but can be done just in case: Restart “Windows Remote Management (WS-Management)” and “Windows Event Log” on the target machine
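
Once everything above is configured, a quick way to sanity-check the setup from the collector machine is the built-in wevtutil tool (again just a sketch – the host and account names are placeholders; Get-WinEvent, shown earlier, works just as well):

wevtutil qe Security /c:5 /f:text /r:target-server /u:CORP\svc-logreader /p:*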

As you can see, there are many things that you can miss, and there isn’t a single place in any documentation that lists those steps (though there are good guides, like this one, that go in a slightly different direction).

I can’t help but make a high-level observation here – the need to do everything above is an example of how security measures can “explode” and become really hard to manage. There are many services, groups, privileges, policies, inbound rules and whatnot, instead of just “Allow remote log reading for this user”. I know it’s inherently complex, but maybe security products should make things simpler by providing recipes for typical scenarios. Following guides in some blog is definitely worse than running a predefined set of commands. And running the “Allow remote access to event log” recipe would do just what you need. Of course, knowing which recipe to run and how to parameterize it would require specific knowledge, but you can’t do security without trained experts.

The post Remote Log Collection on Windows appeared first on Bozho's tech blog.

Massive iPhone Hack Targets Uyghurs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/massive_iphone_.html

China is being blamed for a massive surveillance operation that targeted Uyghur Muslims. This story broke in waves, the first wave being about the iPhone.

Earlier this year, Google’s Project Zero found a series of websites that have been using zero-day vulnerabilities to indiscriminately install malware on any iPhone that visited them. (The vulnerabilities were patched in iOS 12.1.4, released on February 7.)

Earlier this year Google’s Threat Analysis Group (TAG) discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day.

There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant. We estimate that these sites receive thousands of visitors per week.

TAG was able to collect five separate, complete and unique iPhone exploit chains, covering almost every version from iOS 10 through to the latest version of iOS 12. This indicated a group making a sustained effort to hack the users of iPhones in certain communities over a period of at least two years.

Four more news stories.

This upends pretty much everything we know about iPhone hacking. We believed that it was hard. We believed that effective zero-day exploits cost $2M or $3M, and were used sparingly by governments only against high-value targets. We believed that if an exploit was used too frequently, it would be quickly discovered and patched.

None of that is true here. This operation used fourteen zero-day exploits. It used them indiscriminately. And it remained undetected for two years. (I waited before posting this because I wanted to see if someone would rebut this story, or explain it somehow.)

Google’s announcement left out details, like the URLs of the sites delivering the malware. That omission meant that we had no idea who was behind the attack, although the speculation was that it was a nation-state.

Subsequent reporting added that malware against Android phones and the Windows operating system was also delivered by those websites. And then that the websites were targeted at Uyghurs. Which leads us all to blame China.

So now this is a story of a large, expensive, indiscriminate, Chinese-run surveillance operation against an ethnic minority in their country. And the politics will overshadow the tech. But the tech is still really impressive.

EDITED TO ADD: New data on the value of smartphone exploits:

According to the company, starting today, a zero-click (no user interaction) exploit chain for Android can get hackers and security researchers up to $2.5 million in rewards. A similar exploit chain impacting iOS is worth only $2 million.

EDITED TO ADD (9/6): Apple disputes some of the claims Google made about the extent of the vulnerabilities and the attack.

EDITED TO ADD (9/7): More on Apple’s pushbacks.

Applying Netflix DevOps Patterns to Windows

Post Syndicated from Netflix Technology Blog original https://medium.com/netflix-techblog/applying-netflix-devops-patterns-to-windows-2a57f2dbbf79?source=rss----2615bd06b42e---4

Baking Windows with Packer

By Justin Phelps and Manuel Correa

Customizing Windows images at Netflix was a manual, error-prone, and time consuming process. In this blog post, we describe how we improved the methodology, which technologies we leveraged, and how this has improved service deployment and consistency.

Artisan Crafted Images

In the Netflix full cycle DevOps culture the team responsible for building a service is also responsible for deploying, testing, infrastructure, and operation of that service. A key responsibility of Netflix engineers is identifying gaps and pain points in the development and operation of services. Though the majority of our services run on Linux Amazon Machine Images (AMIs), there are still many services critical to the Netflix Playback Experience running on Windows Elastic Compute Cloud (EC2) instances at scale.

We looked at our process for creating a Windows AMI and discovered it was error-prone and full of toil. First, an engineer would launch an EC2 instance and wait for the instance to come online. Once the instance was available, the engineer would use a remote administration tool like RDP to login to the instance to install software and customize settings. This image was then saved as an AMI and used in an Auto Scale Group to deploy a cluster of instances. Because this process was time consuming and painful, our Windows instances were usually missing the latest security updates from Microsoft.

Last year, we decided to improve the AMI baking process. The challenges with service management included:

  • Stale documentation
  • OS Updates
  • High cognitive overhead
  • A lack of continuous testing

Scaling Image Creation

Our existing AMI baking tool Aminator does not support Windows so we had to leverage other tools. We had several goals in mind when trying to improve the baking methodology:

Configuration as Code

The first part of our new Windows baking solution is Packer. Packer allows you to describe your image customization process as a JSON file. We make use of the amazon-ebs Packer builder to launch an EC2 instance. Once online, Packer uses WinRM to copy files and run PowerShell scripts against the instance. If all of the configuration steps are successful then Packer saves a new AMI. The configuration file, referenced scripts, and artifact dependency definitions all live in an internal git repository. We now have the software and instance configuration as code. This means changes can be tracked and reviewed like any other code change.
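
This is not Netflix’s actual template, but a minimal sketch of the idea might look roughly like the following (the region, instance type, source AMI and script names are all placeholders):

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "instance_type": "m5.large",
    "source_ami": "ami-0123456789abcdef0",
    "ami_name": "windows-base-{{timestamp}}",
    "communicator": "winrm",
    "winrm_username": "Administrator",
    "user_data_file": "scripts/enable-winrm.ps1"
  }],
  "provisioners": [{
    "type": "powershell",
    "scripts": ["scripts/install-software.ps1", "scripts/configure-settings.ps1"]
  }]
}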

Packer requires specific information for your baking environment and extensive AWS IAM permissions. In order to simplify the use of Packer for our software developers, we bundled Netflix-specific AWS environment information and helper scripts. Initially, we did this with a git repository and Packer variable files. There was also a special EC2 instance where Packer was executed as Jenkins jobs. This setup was better than manually baking images but we still had some ergonomic challenges. For example, it became cumbersome to ensure users of Packer received updates.

The last piece of the puzzle was finding a way to package our software for installation on Windows. This would allow for reuse of helper scripts and infrastructure tools without requiring every user to copy that solution into their Packer scripts. Ideally, this would work similarly to how applications are packaged in the Aminator process. We solved this by leveraging Chocolatey, the package manager for Windows. Chocolatey packages are created and then stored in an internal artifact repository. This repository is added as a source for the choco install command. This means we can create and reuse packages that help integrate Windows into the Netflix ecosystem.
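
As an illustration (the repository URL and package name are placeholders, not Netflix’s internal ones), wiring an internal feed into Chocolatey and installing from it looks roughly like this:

choco source add -n=internal -s="https://artifacts.example.net/chocolatey/"
choco install my-internal-package -y --source=internal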

Leverage Spinnaker for Continuous Delivery

Flow chart showing how Docker image inheritance is used in the creation of a Windows AMI.
The Base Dockerfile allows updates of Packer, helper scripts, and environment configuration to propagate through the entire Windows Baking process.

To make the baking process more robust we decided to create a Docker image that contains Packer, our environment configuration, and helper scripts. Downstream users create their own Docker images based on this base image. This means we can update the base image with new environment information and helper scripts, and users get these updates automatically. With their new Docker image, users launch their Packer baking jobs using Titus, our container management system. The Titus job produces a property file as part of a Spinnaker pipeline. The resulting property file contains the AMI ID and is consumed by later pipeline stages for deployment. Running the bake in Titus removed the single EC2 instance limitation, allowing for parallel execution of the jobs.

Now each change in the infrastructure is tested, canaried, and deployed like any other code change. This process is automated via a Spinnaker pipeline:

Screenshot of an example Spinnaker pipeline showing Docker image, Windows AMI, Canary Analysis, and Deployment stages.
Example Spinnaker pipeline showing the bake, canary, and deployment stages.

In the canary stage, Kayenta is used to compare metrics between a baseline (current AMI) and the canary (new AMI). The canary stage will determine a score based on metrics such as CPU, threads, latency, and GC pauses. If this score is within a healthy threshold, the AMI is deployed to each environment. Running a canary for each change and testing the AMI in production allow us to capture insights into the impact of Windows updates, script changes, web server configuration tuning, and more.

Eliminate Toil

Automating these tedious operational tasks allows teams to move faster. Our engineers no longer have to manually update Windows, Java, Tomcat, IIS, and other services. We can easily test server tuning changes, software upgrades, and other modifications to the runtime environment. Every code and infrastructure change goes through the same testing and deployment pipeline.

Reaping the Benefits

Changes that used to require hours of manual work are now easy to modify, test, and deploy. Other teams can quickly deploy secure and reproducible instances in an automated fashion. Services are more reliable, testable, and documented. Changes to the infrastructure are now reviewed like any other code change. This removes unnecessary cognitive load and documents tribal knowledge. Removing toil has allowed the team to focus on other features and bug fixes. All of these benefits reduce the risk of a customer-affecting outage. Adopting the Immutable Server pattern for Windows using Packer and Chocolatey has paid big dividends.


Applying Netflix DevOps Patterns to Windows was originally published in the Netflix TechBlog on Medium.

New – Trigger a Kernel Panic to Diagnose Unresponsive EC2 Instances

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-trigger-a-kernel-panic-to-diagnose-unresponsive-ec2-instances/

When I was working on systems deployed in on-premises data centers, it sometimes happened that I had to debug an unresponsive server. It usually involved asking someone to physically press a non-maskable interrupt (NMI) button on the frozen server or to send a signal to a command controller over a serial interface (yes, serial, such as RS-232). This command triggered the system to dump the state of the frozen kernel to a file for further analysis. Such a file is usually called a core dump or a crash dump. The crash dump includes an image of the memory of the crashed process, the system registers, program counter, and other information useful in determining the root cause of the freeze.

Today, we are announcing a new Amazon Elastic Compute Cloud (EC2) API allowing you to remotely trigger the generation of a kernel panic on EC2 instances. The EC2:SendDiagnosticInterrupt API sends a diagnostic interrupt, similar to pressing an NMI button on a physical machine, to a running EC2 instance. It causes the instance’s hypervisor to send a non-maskable interrupt (NMI) to the operating system. The behaviour of your operating system when an NMI is received depends on its configuration. Typically, it involves entering a kernel panic. The kernel panic behaviour also depends on the operating system configuration; it might trigger the generation of a crash dump data file, obtain a backtrace, load a replacement kernel, or restart the system.

You can control who in your organisation is authorized to use that API through IAM policies; I will give an example below.

Cloud and system engineers, or specialists in kernel diagnosis and debugging, find invaluable information in the crash dump to analyse the causes of a kernel freeze. Tools like WinDbg (on Windows) and crash (on Linux) can be used to inspect the dump.

Using Diagnostic Interrupt
Using this API is a three-step process. First, you need to configure the behavior of your OS when it receives the interrupt.

By default, our Windows Server AMIs have memory dump already turned on. Automatic restart after the memory dump has been saved is also selected. The default location for the memory dump file is %SystemRoot% which is equivalent to C:\Windows.

You can access these options by going to:
Start > Control Panel > System > Advanced System Settings > Startup and Recovery
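
You can also inspect the current crash dump configuration from PowerShell by reading the standard CrashControl registry key (just a convenience sketch; the Control Panel route above works as well):

Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\CrashControl"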

On Amazon Linux 2, you need to install and configure kdump & kexec. This is a one-time setup.

$ sudo yum install kexec-tools

Then edit the file /etc/default/grub to allocate the amount of memory to be reserved for the crash kernel. In this example, we reserve 160M by adding crashkernel=160M. The amount of memory to allocate depends on your instance’s memory size. The general recommendation is to test kdump to see if the allocated memory is sufficient. The kernel doc has the full syntax of the crashkernel kernel parameter.

GRUB_CMDLINE_LINUX_DEFAULT="crashkernel=160M console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 rd.emergency=poweroff rd.shell=0"

And rebuild the grub configuration:

$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg

Finally, edit /etc/sysctl.conf and add the line kernel.unknown_nmi_panic=1. This tells the kernel to trigger a kernel panic upon receiving the interrupt.
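
For example, from a shell (editing the file in an editor works just as well):

$ echo "kernel.unknown_nmi_panic=1" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p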

You are now ready to reboot your instance. Be sure to include these commands in your user data script or in your AMI to automatically configure this on all your instances. Once the instance is rebooted, verify that kdump is correctly started.

$ systemctl status kdump.service
● kdump.service - Crash recovery kernel arming
   Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: enabled)
   Active: active (exited) since Fri 2019-07-05 15:09:04 UTC; 3h 13min ago
  Process: 2494 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS)
 Main PID: 2494 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/kdump.service

Jul 05 15:09:02 ip-172-31-15-244.ec2.internal systemd[1]: Starting Crash recovery kernel arming...
Jul 05 15:09:04 ip-172-31-15-244.ec2.internal kdumpctl[2494]: kexec: loaded kdump kernel
Jul 05 15:09:04 ip-172-31-15-244.ec2.internal kdumpctl[2494]: Starting kdump: [OK]
Jul 05 15:09:04 ip-172-31-15-244.ec2.internal systemd[1]: Started Crash recovery kernel arming.

Our documentation contains the instructions for other operating systems.

Once this one-time configuration is done, you’re ready for the second step: triggering the API. You can do this from any machine where the AWS CLI or SDK is configured. For example:

$ aws ec2 send-diagnostic-interrupt --region us-east-1 --instance-id <value>

There is no return value from the CLI; this is expected. If you have a terminal session open on that instance, it disconnects and the instance reboots. Once you reconnect to your instance, you will find the crash dump in /var/crash.

The third and last step is to analyse the content of the crash dump. On Linux systems, you need to install the crash utility and the debugging symbols for your version of the kernel. Note that the kernel version should be the same as the one that was captured by kdump. To find out which kernel you are currently running, use the uname -r command.

$ sudo yum install crash
$ sudo debuginfo-install kernel
$ sudo crash /usr/lib/debug/lib/modules/4.14.128-112.105.amzn2.x86_64/vmlinux /var/crash/127.0.0.1-2019-07-05-15\:08\:43/vmcore

crash 7.2.6-1.amzn2.0.1
... output suppressed for brevity ...

      KERNEL: /usr/lib/debug/lib/modules/4.14.128-112.105.amzn2.x86_64/vmlinux
    DUMPFILE: /var/crash/127.0.0.1-2019-07-05-15:08:43/vmcore  [PARTIAL DUMP]
        CPUS: 2
        DATE: Fri Jul  5 15:08:38 2019
      UPTIME: 00:07:23
LOAD AVERAGE: 0.00, 0.00, 0.00
       TASKS: 104
    NODENAME: ip-172-31-15-244.ec2.internal
     RELEASE: 4.14.128-112.105.amzn2.x86_64
     VERSION: #1 SMP Wed Jun 19 16:53:40 UTC 2019
     MACHINE: x86_64  (2500 Mhz)
      MEMORY: 7.9 GB
       PANIC: "Kernel panic - not syncing: NMI: Not continuing"
         PID: 0
     COMMAND: "swapper/0"
        TASK: ffffffff82013480  (1 of 2)  [THREAD_INFO: ffffffff82013480]
         CPU: 0
       STATE: TASK_RUNNING (PANIC)

Collecting kernel crash dumps is often the only way to collect kernel debugging information, so be sure to test this procedure frequently, in particular after updating your operating system or when you create new AMIs.

Control Who Is Authorized to Send Diagnostic Interrupt
You can control who in your organisation is authorized to send the Diagnostic Interrupt, and to which instances, through IAM policies with resource-level permissions, like in the example below.

{
   "Version": "2012-10-17",
   "Statement": [
      {
      "Effect": "Allow",
      "Action": "ec2:SendDiagnosticInterrupt",
      "Resource": "arn:aws:ec2:region:account-id:instance/instance-id"
      }
   ]
}

Pricing
There are no additional charges for using this feature. However, as your instance continues to be in a ‘running’ state after it receives the diagnostic interrupt, instance billing will continue as usual.

Availability
You can send Diagnostic Interrupts to all EC2 instances powered by the AWS Nitro System, except A1 (Arm-based). This is C5, C5d, C5n, i3.metal, I3en, M5, M5a, M5ad, M5d, p3dn.24xlarge, R5, R5a, R5ad, R5d, T3, T3a, and Z1d as I write this.

The Diagnostic Interrupt API is now available in all public AWS Regions and GovCloud (US), and you can start using it today.

— seb

Chinese Military Wants to Develop Custom OS

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/chinese_militar.html

Citing security concerns, the Chinese military wants to replace Windows with its own custom operating system:

Thanks to the Snowden, Shadow Brokers, and Vault7 leaks, Beijing officials are well aware of the US’ hefty arsenal of hacking tools, available for anything from smart TVs to Linux servers, and from routers to common desktop operating systems, such as Windows and Mac.

Since these leaks have revealed that the US can hack into almost anything, the Chinese government’s plan is to adopt a “security by obscurity” approach and run a custom operating system that will make it harder for foreign threat actors — mainly the US — to spy on Chinese military operations.

It’s unclear exactly how custom this new OS will be. It could be a Linux variant, like North Korea’s Red Star OS. Or it could be something completely new. Normally, I would be highly skeptical of a country being able to write and field its own custom operating system, but China is one of the few that is large enough to actually be able to do it. So I’m just moderately skeptical.

EDITED TO ADD (6/12): Russia also wants to develop its own flavor of Linux.

Running the most reliable choice for Windows workloads: Windows on AWS

Post Syndicated from Sandy Carter original https://aws.amazon.com/blogs/compute/running-the-most-reliable-choice-for-windows-workloads-windows-on-aws/

Some of you may not know, but AWS began supporting Microsoft Windows workloads on AWS in 2008—over 11 years ago. Year over year, we have released exciting new services and enhancements based on feedback from customers like you. AWS License Manager and Amazon CloudWatch Application Insights for .NET and SQL Server are just some of the recent examples. The rate and pace of innovation is eye-popping.

In addition to innovation, one of the key areas that companies value is the reliability of the cloud platform. I recently chatted with David Sheehan, DevOps engineer at eMarketer. He told me, “Our move from Azure to AWS improved the performance and reliability of our microservices in addition to significant cost savings.” If a healthcare clinic can’t connect to the internet, then it’s possible that they can’t deliver care to their patients. If a bank can’t process transactions because of an outage, they could lose business.

In 2018, the next-largest cloud provider had almost 7x more downtime hours than AWS per data pulled directly from the public service health dashboards of the major cloud providers. It is the reason companies like Edwards Lifesciences chose AWS. They are a global leader in patient-focused medical innovations for structural heart disease, as well as critical care and surgical monitoring. Rajeev Bhardwaj, the senior director for Enterprise Technology, recently told me, “We chose AWS for our data center workloads, including Windows, based on our assessment of the security, reliability, and performance of the platform.”

There are several reasons why AWS delivers a more reliable platform for Microsoft workloads, but I would like to focus on two here: designing for reliability and scaling within a Region.

Reason #1—It’s designed for reliability

AWS has significantly better reliability than the next largest cloud provider, due to our fundamentally better global infrastructure design based on Regions and Availability Zones. The AWS Cloud spans 64 zones within 21 geographic Regions around the world. We’ve announced plans for 12 more zones and four more Regions in Bahrain, Cape Town, Jakarta, and Milan.

Look at networking capabilities across five key areas: security, global coverage, performance, manageability, and availability. AWS has made deep investments in each of these areas over the past 12 years. We want to ensure that AWS has the networking capabilities required to run the world’s most demanding workloads.

There is no compression algorithm for experience. From running the most extensive, reliable, and secure global cloud infrastructure technology platform, we’ve learned that you care about the availability and performance of your applications. You want to deploy applications across multiple zones in the same Region for fault tolerance and latency.

I want to take a moment to emphasize that our approach to building our network is fundamentally different from our competitors’, and that difference matters. Each of our Regions is fully isolated from all other Regions. Unlike virtually every other cloud provider, each AWS Region has multiple zones and data centers. These zones are fully isolated partitions of our infrastructure that contain up to eight separate data centers.

The zones are connected to each other with fast, private fiber-optic networking, enabling you to easily architect applications that automatically fail over between zones without interruption. With their own power infrastructure, the zones are physically separated by a meaningful distance, many kilometers, from any other zone. You can partition applications across multiple zones in the same Region to better isolate any issues and achieve high availability.

The AWS control plane (including APIs) and AWS Management Console are distributed across AWS Regions. They use a Multi-AZ architecture within each Region to deliver resilience and ensure continuous availability. This ensures that you avoid having a critical service dependency on a single data center.

While other cloud vendors claim to have Availability Zones, they do not have the same stringent requirements for isolation between zones, leading to impact across multiple zones. Furthermore, AWS has more zones and more Regions with support for multiple zones than any other cloud provider. This design is why the next largest cloud provider had almost 7x more downtime hours in 2018 than AWS.

Reason #2—Scale within a Region

We also designed our services into smaller cells that scale out within a Region, as opposed to a single-Region instance that scales up. This approach reduces the blast radius when there is a cell-level failure. It is why AWS—unlike other providers—has never experienced a network event spanning multiple Regions.

AWS also provides the most detailed information on service availability via the Service Health Dashboard, including Regions affected, services impacted, and downtime duration. AWS keeps a running log of all service interruptions for the past year. Finally, you can subscribe to an RSS feed to be notified of interruptions to each individual service.

Reliability matters

Running Windows workloads on AWS means that you not only get the most innovative cloud, but you also have the most reliable cloud as well.

For example, Mary Kay is one of the world’s leading direct sellers of skin care products and cosmetics. They have tens of thousands of employees and beauty consultants working outside the office, so the IT system is fundamental for the success of their company.

Mary Kay used Availability Zones and Microsoft Active Directory to architect their applications on AWS. AWS Microsoft Managed AD provides Mary Kay the features that enabled them to deploy SQL Server Always On availability groups on Amazon EC2 Windows. This configuration gave Mary Kay the control to scale their deployment out to meet their performance requirements. They were able to deploy the service in multiple Regions to support users worldwide. Their on-premises users get the same experience when using Active Directory–aware services, either on-premises or in the AWS Cloud.

Now, with our cross-account and cross-VPC support, Mary Kay is looking at reducing their managed Active Directory infrastructure footprint, saving money and reducing complexity. But this identity management system must be reliable and scalable as well as innovative.

Fugro is a Dutch multinational public company headquartered in the Netherlands. They provide geotechnical, survey, subsea, and geoscience services for clients, typically oil and gas, telecommunications cable, and infrastructure companies. Fugro leverages the cloud to support the delivery of geo-intelligence and asset management services for clients globally in industries including onshore and offshore energy, renewables, power, and construction.

As I was chatting with Scott Carpenter, the global cloud architect for Fugro, he said, “Fugro is also now in the process of migrating a complex ESRI ArcGIS environment from an existing cloud provider to AWS. It is going to centralize and accelerate access from existing AWS hosted datasets, while still providing flexibility and interoperability to external and third-party data sources. The ArcGIS migration is driven by a focus on providing the highest level of operational excellence.”

With AWS, you don’t have to be concerned about reliability. AWS has the reliability and scale that drives innovation for Windows applications running in the cloud. And the same reliability that makes it best for your Windows applications is the same reliability that makes AWS the best cloud for all your applications.

Let AWS help you assess how your company can get the most out of cloud. Join all the AWS customers that trust us to run their most important applications in the best cloud. To have us create an assessment for your Windows applications or all your applications, email us at [email protected].

Fact-checking the truth on TCO for running Windows workloads in the cloud

Post Syndicated from Sandy Carter original https://aws.amazon.com/blogs/compute/fact-checking-the-truth-on-tco-for-running-windows-workloads-in-the-cloud/

We’ve been talking to many customers over the last 3–4 months who are concerned about the total cost of ownership (TCO) for running Microsoft Windows workloads in the cloud.

For example, Infor is a global leader in enterprise resource planning (ERP) for manufacturing, healthcare, and retail. They’ve moved thousands of their existing Microsoft SQL Server workloads to Amazon EC2 instances. As a result, they are saving 75% on monthly backup costs. With these tremendous cost savings, Infor can now focus their resources on exponential business growth, with initiatives around AI and optimization.

We also love the story of Just Eat, a UK-based company that has migrated their SQL Server workloads to AWS. They’re now focused on using that data to train Alexa skills for ordering take out!

Here are three fact checks that you should review to ensure that you are getting the best TCO!

Fact check #1: Microsoft’s cost comparisons are misleading for running Windows workloads in the cloud

Customers have shared with us over and over why they continue to trust AWS to run their most important Windows workloads. Still, some of those customers tell us that Microsoft claims Azure is cheaper for running Windows workloads. But can this really be true?

When looking at Microsoft’s cost comparisons, we can see that their analysis is misleading because of some false assumptions. For example, Microsoft only compares the costs of the compute service and licenses. But every workload needs storage and networking! By leaving out these necessary services, Microsoft is not comparing real-world workloads.

The comparison also assumes that the AWS and Azure offerings are at performance parity, which isn’t true. While the comparison uses equivalent virtual instance configurations, Microsoft ignores the significantly higher performance of AWS compute. We hear that customers must run two to three times as many Azure instances to get the same performance as they do on AWS (see Fact check #2).

And the list goes on. Microsoft’s analysis only looks at 2008 versions of Windows Server or SQL Server. Then, it adds in the cost for expensive Extended Support to the AWS calculation (extended support costs 75% of the current license cost per year). This addition makes up more than half of the claimed cost difference.

Microsoft assumes that in the next three years, customers won’t move off software that’s more than 10 years old. What we hear from customers is that they plan to use their upgrade rights from Software Assurance (SA) to move to newer versions, such as SQL Server 2016. They’ll use our new automated upgrade tool to eliminate the need for these expensive fees.

Finally, the comparison assumes the use of Azure Hybrid Benefit to reduce the cost of the Azure virtual instance. It does not factor in the cost of the required Microsoft SA on each license. The required SA adds significant cost to the Microsoft side of the example and further demonstrates that their example was misleading.

These assumptions result in a comparison that does not factor in all the costs needed to run SQL Server in Azure. Nor does it account for the performance gains that you get from running on AWS.

At AWS, we are committed to helping you make the most efficient use of cloud resources and lower your Microsoft bill. It appears that Microsoft is focused on keeping those line items flat or growing them over time by adding more and more licensing complexity.

Fact check #2: Price-performance matters to your business for running SQL Server in the cloud

When deciding what cloud is best for your Windows workloads, you should consider both price and performance to find the right operational combination to maximize value for your business. It is also important to think about the future and not make important platform decisions based on technology that was designed before the rise of the cloud.

We know that getting better application performance for your apps is critical to your customers’ satisfaction. In fact, excellent application performance leads to 39% higher customer satisfaction. For more information, see the Netmagic Solutions whitepaper, Application Performance Management: How End-User Experience Affects Your Bottom Line. Poor performance may lead to damaged reputations or even worse, customer attrition.

To make sure that you have the best possible experience for your customers, we focused on pushing the boundaries around performance.

With that in mind, here are some comparisons done between Azure and AWS:

  • DB Best, an enterprise database consulting company, wrote two blog posts—one each for Azure and AWS. They showed how to get the best price-performance ratio for running current versions of SQL Server in the cloud.
  • ZK Research took these posts and compared the results from DB Best to show an apples-to-apples comparison. The testing from DB Best found that SQL Server on AWS consistently shows 2–3x better performance compared to Azure, using a TPC-C-like benchmark tool called HammerDB.
  • ZK Research then used the DB Best data to calculate the comparison cost for running 1 billion transactions per month. ZK Research found that SQL Server running on Azure would cost twice as much as running on AWS when comparing price-performance, including storage, compute, and networking.

As you can see from this data, running on AWS gives you the best price-performance ratio for Windows workloads.

Fact check #3: What does an optimized TCO for Windows workloads in the cloud look like?

When assessing which cloud to run your Windows workloads, your comparison must go well beyond just the compute and support costs. Look at the TCO of your workloads and include everything necessary to run and support these workloads, like storage, networking, and the cost benefits of better reliability. Then, see how you can use the cloud to lower your overall TCO.

So how do you lower your costs to run Windows workloads like Windows Server and SQL Server in the cloud? Optimize those workloads for the scalability and flexibility of cloud. When companies plan cloud migrations on their own, they often use a spreadsheet inventory of their on-premises servers and try to map them, one-to-one, to new cloud-based servers. But these inventories don’t account for the capabilities of cloud-based systems.

On-premises servers are not optimized, with 84% of workloads currently over-provisioned. Many Windows and SQL Server 2008 workloads are running on older, slower server hardware. By sizing your workloads for performance and capability, not by physical servers, you can optimize your cloud migration.

Reducing the number of licenses that you use, both by server and core counts, can also drive significant cost savings. See which on-premises workloads are fault-tolerant, and then use Amazon EC2 Spot Instances to save up to 90% on your compute costs vs. On-Demand pricing.

To get the most out of moving your Windows workloads into the cloud, review and optimize each workload to take best advantage of cloud scalability and flexibility. Our customers have made the most efficient use of cloud resources by working with assessment partners like Movere or TSO Logic, which is now part of AWS.

By running detailed assessments of their environments before migration, customers can yield up to 36% savings using AWS over three years. Customers with optimized environments often find that their AWS solutions are price-competitive with Microsoft even before taking into account the AWS price-performance advantage.

In addition, you can optimize utilization with AWS Trusted Advisor. In fact, over the last couple years, we’ve used AWS Trusted Advisor to tell customers how to spend less money with us, leading to hundreds of millions of dollars in savings for our customers every year.

Why run Windows Server and SQL Server anywhere else but AWS?

For the past 10 years, many companies, such as Adobe and Salesforce, have trusted AWS to run Windows-based workloads such as Windows Server and SQL Server. Many customers tell us that the reasons they choose AWS are TCO and reliability. Customers have been able to run their Windows workloads with lower costs and higher performance than on any other cloud. To learn more about our story and why customers trust AWS for their Windows workloads, check out Windows on AWS.

After the workloads are optimized for cloud, you can save even more money by efficiently managing your Windows Server and SQL Server licenses with AWS License Manager. By the way, License Manager lets you manage licenses on-premises and in the cloud, as well as other software like SAP, Oracle, and IBM.

Dedicated Hosts allow customers to bring Windows Server and SQL Server licenses with or without Software Assurance. Licenses without Software Assurance cannot be taken to Azure. Furthermore, Dedicated Hosts allow customers to license Windows Server at the physical level and achieve a greater number of instances at a lower price than they would get through Azure Hybrid Use Benefits.

Summary

The answer is clear: AWS is the best cloud to run your Windows workloads. AWS offers the best experience for Windows workloads in the cloud, which is why we run almost 2x the number of Windows workloads compared to the next largest cloud.

Our customers have found that migrating their Windows workloads to AWS can yield significant savings and better performance. Customers like Sysco, Hess, Sony DADC New Media Solutions, Ancestry, and Expedia have chosen AWS to upgrade, migrate, and modernize their Windows workloads in the cloud.

Don’t let misleading cost comparisons prevent you from getting the most out of cloud. Let AWS help you assess how you can get the most out of cloud. Join all the AWS customers that trust us to run their most important applications in the best cloud for Windows workloads. If you want us to do an assessment for you, email us at [email protected].

New Whitepaper: Active Directory Domain Services on AWS

Post Syndicated from Vinod Madabushi original https://aws.amazon.com/blogs/architecture/new-whitepaper-active-directory-domain-services-on-aws/

The cloud is now at the center of most enterprise IT strategies. As such, a well-planned move to the cloud can result in immediate business payoff. To achieve such success, it’s important that you adopt Microsoft Active Directory (AD) – the foundation of many large enterprise Windows and .NET applications – in a secure, scalable, and highly available manner within the AWS Cloud.

AWS offers flexible options for running AD, so as a customer it’s essential to select an architecture well-suited to support your applications. AWS offers a fully managed option called AWS Managed Active Directory, which enables your directory-aware workloads to use Managed Active Directory in AWS. You can also run Active Directory on Amazon Elastic Compute Cloud (Amazon EC2) and manage both the EC2 Instances and Active Directory, which provides the flexibility needed to extend an existing Active Directory domain to the AWS infrastructure.

In this regard, we are very excited to release the Active Directory Domain Services on AWS whitepaper. This Active Directory whitepaper describes best practices for running Active Directory on AWS, including different architectural approaches for running AWS Managed AD and Active Directory on EC2 Instances. In addition, this document discusses the design considerations, security, network connectivity, and multi-region deployment of Active Directory for both scenarios.

Read the whitepaper: Active Directory on AWS.

About the author

Vinod Madabushi is an Enterprise Solutions Architect and subject matter expert in Microsoft technologies, including Active Directory. He works with customers on building highly available, scalable, and resilient applications on AWS Cloud. He’s passionate about solving technology challenges and helping customers with their cloud journey.

 

Windows @ AWS re:Invent 2018

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/windows-aws-reinvent-2018/

This post is courtesy of Rodney Bozo, Senior Solutions Architect – Microsoft Technologies – AWS

Windows has been a first-class citizen at AWS for over a decade. More enterprises run Windows workloads on AWS today than on any other cloud—according to IDC, it’s over 57%, 2x the next provider. Over this period, we’ve worked with customers across the globe and taken their feedback to build the solutions that best support their Microsoft workloads.

Since 2008, the Microsoft ecosystem on AWS has grown to much more than just running virtual machines. We have solutions for SQL, Active Directory, .NET developers and more, as well as options to bring your licenses to extend the value of your existing investments.

Over the course of the week at AWS re:Invent, we are offering over 75 sessions covering Microsoft technologies in AWS, with a combination of breakout sessions, workshops, chalk talks, and builder sessions.

Find the entire list of Windows and .NET sessions on the session catalog. Here are some you should try not to miss:

Leadership and Management

Windows and Active Directory

SQL

.NET

Looking to get hands-on with Microsoft?

Still looking for more?

We have an extensive list of curated content on the AWS for Microsoft Workloads Self-Study Guide, including case studies, whitepapers, previous re:Invent presentations, reference architectures, and how-to instructional videos. Check it out!

EC2 Instance Update – M5 Instances with Local NVMe Storage (M5d)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-instance-update-m5-instances-with-local-nvme-storage-m5d/

Earlier this month we launched the C5 Instances with Local NVMe Storage and I told you that we would be doing the same for additional instance types in the near future!

Today we are introducing M5 instances equipped with local NVMe storage. Available for immediate use in 5 regions, these instances are a great fit for workloads that require a balance of compute and memory resources. Here are the specs:

Instance Name | vCPUs | RAM     | Local Storage       | EBS-Optimized Bandwidth | Network Bandwidth
m5d.large     | 2     | 8 GiB   | 1 x 75 GB NVMe SSD  | Up to 2.120 Gbps        | Up to 10 Gbps
m5d.xlarge    | 4     | 16 GiB  | 1 x 150 GB NVMe SSD | Up to 2.120 Gbps        | Up to 10 Gbps
m5d.2xlarge   | 8     | 32 GiB  | 1 x 300 GB NVMe SSD | Up to 2.120 Gbps        | Up to 10 Gbps
m5d.4xlarge   | 16    | 64 GiB  | 1 x 600 GB NVMe SSD | 2.210 Gbps              | Up to 10 Gbps
m5d.12xlarge  | 48    | 192 GiB | 2 x 900 GB NVMe SSD | 5.0 Gbps                | 10 Gbps
m5d.24xlarge  | 96    | 384 GiB | 4 x 900 GB NVMe SSD | 10.0 Gbps               | 25 Gbps

The M5d instances are powered by Custom Intel® Xeon® Platinum 8175M series processors running at 2.5 GHz, including support for AVX-512.

You can use any AMI that includes drivers for the Elastic Network Adapter (ENA) and NVMe; this includes the latest Amazon Linux, Microsoft Windows (Server 2008 R2, Server 2012, Server 2012 R2 and Server 2016), Ubuntu, RHEL, SUSE, and CentOS AMIs.

Here are a couple of things to keep in mind about the local NVMe storage on the M5d instances:

Naming – You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted.

Encryption – Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated.

Lifetime – Local NVMe devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.
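
On Linux, for example, you might locate, format, and mount a local NVMe device like this (a sketch – the device name and file system are up to you, and remember the data does not survive a stop or terminate):

$ lsblk                                    # identify the instance store device (nvme1n1 in this example)
$ sudo mkfs -t xfs /dev/nvme1n1
$ sudo mkdir /mnt/local-nvme
$ sudo mount /dev/nvme1n1 /mnt/local-nvme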

Available Now
M5d instances are available in On-Demand, Reserved Instance, and Spot form in the US East (N. Virginia), US West (Oregon), EU (Ireland), US East (Ohio), and Canada (Central) Regions. Prices vary by Region, and are just a bit higher than for the equivalent M5 instances.

Jeff;

 

AWS Online Tech Talks – June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2018/


Join us this month to learn about AWS services and solutions. New this month, we have a fireside chat with the GM of Amazon WorkSpaces and our 2nd episode of the “How to re:Invent” series. We’ll also cover best practices, deep dives, use cases and more! Join us and register today!

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

 

Analytics & Big Data

June 18, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Real-Time Streaming Data in Under 5 Minutes – Learn how to use Amazon Kinesis to capture, store, and analyze streaming data in real-time including IoT device data, VPC flow logs, and clickstream data.
June 20, 2018 | 11:00 AM – 11:45 AM PT – Insights For Everyone – Deploying Data across your Organization – Learn how to deploy data at scale using AWS Analytics and QuickSight’s new reader role and usage based pricing.

 

AWS re:Invent
June 13, 2018 | 05:00 PM – 05:30 PM PT – Episode 2: AWS re:Invent Breakout Content Secret Sauce – Hear from one of our own AWS content experts as we dive deep into the re:Invent content strategy and how we maintain a high bar.
Compute

June 25, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Containerized Workloads with Amazon EC2 Spot Instances – Learn how to efficiently deploy containerized workloads and easily manage clusters at any scale at a fraction of the cost with Spot Instances.

June 26, 2018 | 01:00 PM – 01:45 PM PT – Ensuring Your Windows Server Workloads Are Well-Architected – Get the benefits, best practices, and tools for running your Microsoft workloads on AWS using a well-architected approach.

 

Containers
June 25, 2018 | 09:00 AM – 09:45 AM PT – Running Kubernetes on AWS – Learn the basics of running Kubernetes on AWS, including how to set up masters, networking, and security, and how to add auto-scaling to your cluster.

 

Databases

June 18, 2018 | 01:00 PM – 01:45 PM PT – Oracle to Amazon Aurora Migration, Step by Step – Learn how to migrate your Oracle database to Amazon Aurora.
DevOps

June 20, 2018 | 09:00 AM – 09:45 AM PT – Set Up a CI/CD Pipeline for Deploying Containers Using the AWS Developer Tools – Learn how to set up a CI/CD pipeline for deploying containers using the AWS Developer Tools.

 

Enterprise & Hybrid
June 18, 2018 | 09:00 AM – 09:45 AM PT – De-risking Enterprise Migration with AWS Managed Services – Learn how enterprise customers are de-risking cloud adoption with AWS Managed Services.

June 19, 2018 | 11:00 AM – 11:45 AM PT – Launch AWS Faster using Automated Landing Zones – Learn how the AWS Landing Zone can automate the setup of best practice baselines when setting up new AWS environments.

June 21, 2018 | 11:00 AM – 11:45 AM PT – Leading Your Team Through a Cloud Transformation – Learn how you can help lead your organization through a cloud transformation.

June 21, 2018 | 01:00 PM – 01:45 PM PT – Enabling New Retail Customer Experiences with Big Data – Learn how AWS can help retailers realize actual value from their big data and deliver on differentiated retail customer experiences.

June 28, 2018 | 01:00 PM – 01:45 PM PT – Fireside Chat: End User Collaboration on AWS – Learn how End User Compute services can help you deliver access to desktops and applications anywhere, anytime, using any device.
IoT

June 27, 2018 | 11:00 AM – 11:45 AM PT – AWS IoT in the Connected Home – Learn how to use AWS IoT to build innovative Connected Home products.

 

Machine Learning

June 19, 2018 | 09:00 AM – 09:45 AM PT – Integrating Amazon SageMaker into your Enterprise – Learn how to integrate Amazon SageMaker and other AWS Services within an Enterprise environment.

June 21, 2018 | 09:00 AM – 09:45 AM PT – Building Text Analytics Applications on AWS using Amazon Comprehend – Learn how you can unlock the value of your unstructured data with NLP-based text analytics.

 

Management Tools

June 20, 2018 | 01:00 PM – 01:45 PM PT – Optimizing Application Performance and Costs with Auto Scaling – Learn how selecting the right scaling option can help optimize application performance and costs.

 

Mobile
June 25, 2018 | 11:00 AM – 11:45 AM PT – Drive User Engagement with Amazon Pinpoint – Learn how Amazon Pinpoint simplifies and streamlines effective user engagement.

 

Security, Identity & Compliance

June 26, 2018 | 09:00 AM – 09:45 AM PT – Understanding AWS Secrets Manager – Learn how AWS Secrets Manager helps you rotate and manage access to secrets centrally.
June 28, 2018 | 09:00 AM – 09:45 AM PT – Using Amazon Inspector to Discover Potential Security Issues – See how Amazon Inspector can be used to discover security issues of your instances.

 

Serverless

June 19, 2018 | 01:00 PM – 01:45 PM PT – Productionize Serverless Application Building and Deployments with AWS SAM – Learn expert tips and techniques for building and deploying serverless applications at scale with AWS SAM.

 

Storage

June 26, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway – Learn how you can reduce your on-premises infrastructure by using the AWS Storage Gateway to connect your applications to scalable and reliable AWS storage services.
June 27, 2018 | 01:00 PM – 01:45 PM PT – Changing the Game: Extending Compute Capabilities to the Edge – Discover how to change the game for IIoT and edge analytics applications with AWS Snowball Edge plus enhanced Compute instances.
June 28, 2018 | 11:00 AM – 11:45 AM PT – Big Data and Analytics Workloads on Amazon EFS – Get best practices and deployment advice for running big data and analytics workloads on Amazon EFS.

Flight Sim Company Threatens Reddit Mods Over “Libelous” DRM Posts

Post Syndicated from Andy original https://torrentfreak.com/flight-sim-company-threatens-reddit-mods-over-libellous-drm-posts-180604/

Earlier this year, in an effort to deal with piracy of their products, flight simulator company FlightSimLabs took drastic action by installing malware on customers’ machines.

The story began when a Reddit user reported something unusual in his download of FlightSimLabs’ A320X module. A file – test.exe – was being flagged up as a ‘Chrome Password Dump’ tool, something which rang alarm bells among flight sim fans.

As additional information was made available, the story became even more sensational. After first dodging the issue with carefully worded statements, FlightSimLabs admitted that it had installed a password dumper onto ALL users’ machines – whether they were pirates or not – in an effort to catch a particular software cracker and launch legal action.

It was an incredible story that no doubt did damage to FlightSimLabs’ reputation. But now the company is at the center of a new storm, again involving anti-piracy measures and again focused on Reddit.

Just before the weekend, Reddit user /u/walkday reported finding something unusual in his A320X module, the same module that caused the earlier controversy.

“The latest installer of FSLabs’ A320X puts two cmdhost.exe files under ‘system32\’ and ‘SysWOW64\’ of my Windows directory. Despite the name, they don’t open a command-line window,” he reported.

“They’re a part of the authentication because, if you remove them, the A320X won’t get loaded. Does someone here know more about cmdhost.exe? Why does FSLabs give them such a deceptive name and put them in the system folders? I hate them for polluting my system folder unless, of course, it is a dll used by different applications.”

Needless to say, the news that FSLabs were putting files into system folders named to make them look like system files was not well received.

“Hiding something named to resemble Window’s “Console Window Host” process in system folders is a huge red flag,” one user wrote.

“It’s a malware tactic used to deceive users into thinking the executable is a part of the OS, thus being trusted and not deleted. Really dodgy tactic, don’t trust it and don’t trust them,” opined another.

With a disenchanted Reddit userbase simmering away in the background, FSLabs took to Facebook with a statement to quieten down the masses.

“Over the past few hours we have become aware of rumors circulating on social media about the cmdhost file installed by the A320-X and wanted to clear up any confusion or misunderstanding,” the company wrote.

“cmdhost is part of our eSellerate infrastructure – which communicates between the eSellerate server and our product activation interface. It was designed to reduce the number of product activation issues people were having after the FSX release – which have since been resolved.”

The company noted that the file had been checked by all major anti-virus companies and everything had come back clean, which does indeed appear to be the case. Nevertheless, the critical Reddit thread remained, bemoaning the actions of a company which probably should have known better than to irritate fans after February’s debacle. In response, however, FSLabs did just that once again.

In private messages to the moderators of the /r/flightsim sub-Reddit, FSLabs’ Marketing and PR Manager Simon Kelsey suggested that the mods should do something about the thread in question or face possible legal action.

“Just a gentle reminder of Reddit’s obligations as a publisher in order to ensure that any libelous content is taken down as soon as you become aware of it,” Kelsey wrote.

Noting that FSLabs welcomes “robust fair comment and opinion”, Kelsey gave the following advice.

“The ‘cmdhost.exe’ file in question is an entirely above board part of our anti-piracy protection and has been submitted to numerous anti-virus providers in order to verify that it poses no threat. Therefore, ANY suggestion that current or future products pose any threat to users is absolutely false and libelous,” he wrote, adding:

“As we have already outlined in the past, ANY suggestion that any user’s data was compromised during the events of February is entirely false and therefore libelous.”

Noting that FSLabs would “hate for lawyers to have to get involved in this”, Kelsey advised the /r/flightsim mods to ensure that no such claims were allowed to remain on the sub-Reddit.

But after not receiving the response he would’ve liked, Kelsey wrote once again to the mods. He noted that “a number of unsubstantiated and highly defamatory comments” remained online and warned that if something wasn’t done to clean them up, he would have “no option” than to pass the matter to FSLabs’ legal team.

Like the first message, this second effort also failed to have the desired effect. In fact, the moderators’ response was to post an open letter to Kelsey and FSLabs instead.

“We sincerely disagree that you ‘welcome robust fair comment and opinion’, demonstrated by the censorship on your forums and the attempted censorship on our subreddit,” the mods wrote.

“While what you do on your forum is certainly your prerogative, your rules do not extend to Reddit nor the r/flightsim subreddit. Removing content you disagree with is simply not within our purview.”

The letter, which is worth reading in full, refutes Kelsey’s claims and also suggests that critics of FSLabs may have been subjected to Reddit vote manipulation and coordinated efforts to discredit them.

What will happen next is unclear, but the matter has now been placed in the hands of Reddit’s administrators, who have agreed to deal with Kelsey and FSLabs personally.

It’s a little early to say for sure but it seems unlikely that this will end in a net positive for FSLabs, no matter what decision Reddit’s admins take.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Storing Encrypted Credentials In Git

Post Syndicated from Bozho original https://techblog.bozho.net/storing-encrypted-credentials-in-git/

We all know that we should not commit any passwords or keys to the repo with our code (no matter if public or private). Yet, thousands of production passwords can be found on GitHub (and probably thousands more in internal company repositories). Some have tried to fix that by removing the passwords (once they learned it’s not a good idea to store them publicly), but passwords have remained in the git history.

Knowing what not to do is the first and very important step. But how do we store production credentials? Database credentials, system secrets (e.g. for HMACs), access keys for 3rd-party services like payment providers or social networks: there doesn’t seem to be an agreed-upon solution.

I’ve previously argued against the 12-factor app recommendation to use environment variables – if you have a few, that might be okay, but when the number of variables grows (as in any real application), it becomes impractical. And you can set environment variables via a bash script, but you’d have to store that script somewhere. In fact, even separate environment variables have to be stored somewhere.

This somewhere could be a local directory (risky), shared storage such as an FTP server or an S3 bucket with limited access, or a separate git repository. I think I prefer the git repository, as it allows versioning (note: S3 does too, but in a provider-specific way). So you can store all your environment-specific properties files, with all their credentials and environment-specific configurations, in a git repo with limited access (Ops people only). And that’s not bad, as long as it’s not the same repo as the source code.

Such a repo would look like this:

project
└─── production
|   |   application.properties
|   |   keystore.jks
└─── staging
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client1
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client2
|   |   application.properties
|   |   keystore.jks

Since many companies are using GitHub or BitBucket for their repositories, storing production credentials with a public provider may still be risky. That’s why it’s a good idea to encrypt the files in the repository. A good way to do that is via git-crypt. The encryption is “transparent”: files are encrypted and decrypted on the fly and diffs still work, so once you set it up, you continue working with the repo as if it weren’t encrypted. There’s even a fork that works on Windows.

You simply run git-crypt init (after you’ve put the git-crypt binary on your OS path), which generates a key. Then you specify your .gitattributes, e.g. like this:

secretfile filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
*.properties filter=git-crypt diff=git-crypt
*.jks filter=git-crypt diff=git-crypt

And you’re done. Well, almost. If this is a fresh repo, everything is good. If it is an existing repo, you’d have to clean up your history which contains the unencrypted files. Following these steps will get you there, with one addition – before calling git commit, you should call git-crypt status -f so that the existing files are actually encrypted.
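For reference, here is a minimal sketch of that flow for a fresh repo; the file names and commit message are illustrative, so adjust them to your own layout:

git-crypt init                        # generates a symmetric key under .git/git-crypt/
# edit .gitattributes as shown above, then stage it together with the sensitive files
git add .gitattributes production/application.properties production/keystore.jks
git-crypt status                      # confirm the sensitive files are reported as encrypted
git commit -m "Add encrypted environment configuration"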

You’re almost done. We should somehow share and back up the keys. For the sharing part, it’s not a big issue to have a team of 2-3 Ops people share the same key, but you could also use the GPG option of git-crypt (as documented in the README). What’s left is to back up your secret key (which is generated in the .git/git-crypt directory). You can store it (password-protected) in some other storage, be it a company shared folder, Dropbox/Google Drive, or even your email. Just make sure your computer is not the only place where it’s present and that it’s protected. I don’t think key rotation is necessary, but you can devise some rotation procedure.
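To make the sharing and backup part concrete, here is a rough sketch using git-crypt’s own commands; the key path, repository URL, and GPG identity below are made up for illustration:

git-crypt export-key /secure/backup/configs-repo.key      # export the symmetric key so it can be backed up or handed over

git clone git@example.com:ops/environment-configs.git     # a teammate clones the config repo...
cd environment-configs
git-crypt unlock /secure/backup/configs-repo.key           # ...and decrypts it with the shared key

git-crypt add-gpg-user ops-teammate@example.com            # or grant access via the GPG option instead of sharing the raw key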

The git-crypt author says the tool shines when it comes to encrypting just a few files in an otherwise public repo, and recommends looking at git-remote-gcrypt otherwise. But since environment-specific configurations often have non-sensitive parts, you may not want to encrypt everything, and I think it’s perfectly fine to use git-crypt even in the separate-repo scenario. And even though encryption is an okay approach to protecting credentials in your source code repo, it’s still not necessarily a good idea to keep the environment configurations in the same repo, especially given that different people/teams manage these credentials. Even in small companies, not all members may have production access.

The outstanding question in this case is how to keep the properties in sync with code changes. Sometimes the code adds new properties that should be reflected in the environment configurations. There are two scenarios here: first, properties that can vary across environments but have sensible default values (e.g. scheduled job periods), and second, properties that require explicit configuration (e.g. database credentials). The former can have their default values bundled in the code repo and therefore in the release artifact, with the external files overriding them. The latter should be announced to the people who do the deployment so that they can set the proper values.
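As a purely hypothetical illustration of those two kinds of properties (the names below are invented), the defaults live in the properties file bundled with the code, while the environment repo only contains what Ops must set explicitly:

# bundled in the code repo / release artifact (safe default, overridable)
cleanup.job.period.seconds=3600

# production/application.properties in the environment config repo (must be set explicitly)
db.url=jdbc:postgresql://prod-db.internal:5432/app
db.username=app_user
db.password=set-by-ops-at-deployment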

The whole process of having versioned environment-specific configurations is actually quite simple and logical, even with the encryption added to the picture. And I think it’s a good security practice we should try to follow.

The post Storing Encrypted Credentials In Git appeared first on Bozho's tech blog.

Raspberry Jam Cameroon #PiParty

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/raspberry-jam-cameroon-piparty/

Earlier this year on 3 and 4 March, communities around the world held Raspberry Jam events to celebrate Raspberry Pi’s sixth birthday. We sent out special birthday kits to participating Jams — it was amazing to know the kits would end up in the hands of people in parts of the world very far from Raspberry Pi HQ in Cambridge, UK.

The Raspberry Jam Camer team: Damien Doumer, Eyong Etta, Loïc Dessap and Lionel Sichom, aka Lionel Tellem

Preparing for the #PiParty

One birthday kit went to Yaoundé, the capital of Cameroon. There, a team of four students in their twenties — Lionel Sichom (aka Lionel Tellem), Eyong Etta, Loïc Dessap, and Damien Doumer — were organising Yaoundé’s first Jam, called Raspberry Jam Camer, as part of the Raspberry Jam Big Birthday Weekend. The team knew one another through their shared interests and skills in electronics, robotics, and programming. Damien explains in his blog post about the Jam that they planned several activities in advance, based on their own projects, so they could be confident of having a few things that attendees would definitely enjoy doing and seeing.

Show-and-tell at Raspberry Jam Cameroon

Loïc presented a Raspberry Pi–based, Android app–controlled robot arm that he had built, and Lionel coded a small video game using Scratch on Raspberry Pi while the audience watched. Damien demonstrated the possibilities of Windows 10 IoT Core on Raspberry Pi, showing how to install it, how to use it remotely, and what you can do with it, including building a simple application.

Loïc Dessap, wearing a Raspberry Jam Big Birthday Weekend T-shirt, sits at a table with a robot arm, a laptop with a Pi sticker and other components. He is making an adjustment to his set-up.

Loïc showcases the prototype robot arm he built

There was lots more too, with others discussing their own Pi projects and talking about the possibilities Raspberry Pi offers, including a Pi-controlled drone and car. Cake was a prevailing theme of the Raspberry Jam Big Birthday Weekend around the world, and Raspberry Jam Camer made sure they didn’t miss out.

A round pink-iced cake decorated with the words "Happy Birthday RBP" and six candles, on a table beside Raspberry Pi stickers, Raspberry Jam stickers and Raspberry Jam fliers

Yay, birthday cake!!

A big success

Most visitors to the Jam were secondary school students, while others were university students and graduates. The majority were unfamiliar with Raspberry Pi, but all wanted to learn about Raspberry Pi and what they could do with it. Damien comments that the fact most people were new to Raspberry Pi made the event more interactive rather than creating any challenges, because the visitors were all interested in finding out about the little computer. The Jam was an all-round success, and the team was pleased with how it went:

What I liked the most was that we sensitized several people about the Raspberry Pi and what one can be capable of with such a small but powerful device. — Damien Doumer

The Jam team rounded off the event by announcing that this was the start of a Raspberry Pi community in Yaoundé. They hope that they and others will be able to organise more Jams and similar events in the area to spread the word about what people can do with Raspberry Pi, and to help them realise their ideas.

The Raspberry Jam Camer team, wearing Raspberry Jam Big Birthday Weekend T-shirts, pose with young Jam attendees outside their venue

Raspberry Jam Camer gets the thumbs-up

The Raspberry Pi community in Cameroon

In a French-language interview about their Jam, the team behind Raspberry Jam Camer said they’d like programming to become the third official language of Cameroon, after French and English; their aim is to popularise programming and digital making across Cameroonian society. Neither of these fields is very familiar to most people in Cameroon, but both are very well aligned with the country’s ambitions for development. The team is conscious of the difficulties around the emergence of information and communication technologies in the Cameroonian context; in response, they are seizing the opportunities Raspberry Pi offers to give children and young people access to modern and constantly evolving technology at low cost.

Thanks to Lionel, Eyong, Damien, and Loïc, and to everyone who helped put on a Jam for the Big Birthday Weekend! Remember, anyone can start a Jam at any time — and we provide plenty of resources to get you started. Check out the Guidebook, the Jam branding pack, our specially-made Jam activities online (in multiple languages), printable worksheets, and more.

The post Raspberry Jam Cameroon #PiParty appeared first on Raspberry Pi.

EC2 Instance Update – C5 Instances with Local NVMe Storage (C5d)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-instance-update-c5-instances-with-local-nvme-storage-c5d/

As you can see from my EC2 Instance History post, we add new instance types on a regular and frequent basis. Driven by increasingly powerful processors and designed to address an ever-widening set of use cases, the size and diversity of this list reflects the equally diverse group of EC2 customers!

Near the bottom of that list you will find the new compute-intensive C5 instances. With a 25% to 50% improvement in price-performance over the C4 instances, the C5 instances are designed for applications like batch and log processing, distributed and/or real-time analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding. Some of these applications can benefit from access to high-speed, ultra-low-latency local storage. For example, video encoding, image manipulation, and other forms of media processing often necessitate large amounts of I/O to temporary storage. While the input and output files are valuable assets and are typically stored as Amazon Simple Storage Service (S3) objects, the intermediate files are expendable. Similarly, batch and log processing run in a race-to-idle model, flushing volatile data to disk as fast as possible in order to make full use of compute resources.

New C5d Instances with Local Storage
In order to meet this need, we are introducing C5 instances equipped with local NVMe storage. Available for immediate use in 5 regions, these instances are a great fit for the applications that I described above, as well as others that you will undoubtedly dream up! Here are the specs:

Instance Name    vCPUs    RAM        Local Storage            EBS Bandwidth      Network Bandwidth
c5d.large        2        4 GiB      1 x 50 GB NVMe SSD       Up to 2.25 Gbps    Up to 10 Gbps
c5d.xlarge       4        8 GiB      1 x 100 GB NVMe SSD      Up to 2.25 Gbps    Up to 10 Gbps
c5d.2xlarge      8        16 GiB     1 x 225 GB NVMe SSD      Up to 2.25 Gbps    Up to 10 Gbps
c5d.4xlarge      16       32 GiB     1 x 450 GB NVMe SSD      2.25 Gbps          Up to 10 Gbps
c5d.9xlarge      36       72 GiB     1 x 900 GB NVMe SSD      4.5 Gbps           10 Gbps
c5d.18xlarge     72       144 GiB    2 x 900 GB NVMe SSD      9 Gbps             25 Gbps

Other than the addition of local storage, the C5 and C5d share the same specs. Both are powered by 3.0 GHz Intel Xeon Platinum 8000-series processors, optimized for EC2 and with full control over C-states on the two largest sizes, giving you the ability to run two cores at up to 3.5 GHz using Intel Turbo Boost Technology.

You can use any AMI that includes drivers for the Elastic Network Adapter (ENA) and NVMe; this includes the latest Amazon Linux, Microsoft Windows (Server 2008 R2, Server 2012, Server 2012 R2 and Server 2016), Ubuntu, RHEL, SUSE, and CentOS AMIs.

Here are a couple of things to keep in mind about the local NVMe storage:

Naming – You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted (see the short sketch after these notes).

Encryption – Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated.

Lifetime – Local NVMe devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.
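To put the naming note into practice, here is a minimal sketch of preparing the local volume on a Linux instance; it assumes the ephemeral device shows up as /dev/nvme1n1 and that you want an XFS scratch filesystem, both of which may differ in your setup:

lsblk                             # list block devices; the local NVMe SSD appears alongside the EBS root volume
sudo mkfs -t xfs /dev/nvme1n1     # create a filesystem on the ephemeral device (its data is lost on stop/terminate)
sudo mkdir -p /scratch
sudo mount /dev/nvme1n1 /scratch  # mount it as fast temporary space for intermediate files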

Available Now
C5d instances are available in On-Demand, Reserved Instance, and Spot form in the US East (N. Virginia), US West (Oregon), EU (Ireland), US East (Ohio), and Canada (Central) Regions. Prices vary by Region, and are just a bit higher than for the equivalent C5 instances.

Jeff;

PS – We will be adding local NVMe storage to other EC2 instance types in the months to come, so stay tuned!

Connect Veeam to the B2 Cloud: Episode 3 — Using OpenDedup

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/opendedup-for-cloud-storage/

Veeam backup to Backblaze B2 logo

In this, the third post in our series on connecting Veeam with Backblaze B2 Cloud Storage, we discuss how to back up your VMs to B2 using Veeam and OpenDedup. In our previous posts, we covered how to connect Veeam to the B2 cloud using Synology, and how to connect Veeam with B2 using StarWind VTL.

Deduplication and OpenDedup

Deduplication is simply the process of eliminating redundant data on disk. Deduplication reduces storage space requirements, improves backup speed, and lowers backup storage costs. The dedup field used to be dominated by a few big-name vendors who sold dedup systems that were too expensive for most of the SMB market. Then an open-source challenger came along in OpenDedup, a project that produced the Space Deduplication File System (SDFS). SDFS provides many of the features of commercial dedup products without their cost.
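To put rough, hypothetical numbers on it: if 10 TB of nightly VM backups contain only 2 TB of unique blocks, a 5:1 deduplication ratio means you transfer and pay for roughly 2 TB of cloud storage instead of 10 TB, and backup windows shrink accordingly.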

OpenDedup provides inline deduplication that can be used with applications such as Veeam, Veritas Backup Exec, and Veritas NetBackup.

Features Supported by OpenDedup:

  • Variable Block Deduplication to cloud storage
  • Local Data Caching
  • Encryption
  • Bandwidth Throttling
  • Fast Cloud Recovery
  • Windows and Linux Support

Why use Veeam with OpenDedup to Backblaze B2?

With your VMs backed up to B2, you have a number of options to recover from a disaster. If the unexpected occurs, you can quickly restore your VMs from B2 to the location of your choosing. You also have the option to bring up cloud compute through B2’s compute partners, thereby minimizing any loss of service and ensuring business continuity.

Veeam logo + OpenDedup logo + Backblaze B2 logo

Backblaze’s B2 is an ideal solution for backing up Veeam’s backup repository due to B2’s combination of low cost and high availability. Users of B2 save up to 75% compared to other cloud solutions such as Microsoft Azure, Amazon AWS, or Google Cloud Storage. When combined with OpenDedup’s no-cost deduplication, you’ve got an efficient and economical solution for backing up VMs to the cloud.

How to Use OpenDedup with B2

For step-by-step instructions for how to set up OpenDedup for use with B2 on Windows or Linux, see Backblaze B2 Enabled on the OpenDedup website.

Are you backing up Veeam to B2 using one of the solutions we’ve written about in this series? If you have, we’d love to hear from you in the comments.

View all posts in the Veeam series.

The post Connect Veeam to the B2 Cloud: Episode 3 — Using OpenDedup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Brutus 2: the gaming PC case of your dreams

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/brutus-2-gaming-pc-case/

Attention, case modders: take a look at the Brutus 2, an extremely snazzy computer case with a partly transparent, animated side panel that’s powered by a Pi. Daniel Otto and Carsten Lehman have a current crowdfunder for the case; their video is in German, but the looks of the build speak for themselves. There are some truly gorgeous effects here.

der BRUTUS 2 by 3nb Gaming

Pre-orders are open now at https://www.startnext.com/brutus2 More info about us at: https://3nb.de https://www.facebook.com/3nb.de https://www.instagram.com/3nb.de About 3nb: a GbR based in Leipzig, founded in 2017; our backgrounds are in electronics and computer science; our first product was the Brutus One, a gaming PC with a transparent display in its side panel. Brutus 2 in brief: a branded computer case for the gaming and case-modding scene; its special feature is an animated side window driven by a Raspberry Pi. Advantages of our case: the case is available on its own, not only as a complete PC; it uses none of the graphics card’s performance, thanks to the integrated Raspberry Pi; better display of text and graphics thanks to the blurred background.

What’s case modding?

Case modding just means modifying your computer or gaming console’s case, and it’s very popular in the gaming community. Some mods are functional, while others improve the way the case looks. Lots of dedicated gamers don’t only want a powerful computer, they also want it to look amazing — at home, or at LAN parties and games tournaments.

The Brutus 2 case

The Brutus 2 case is made by Daniel and Carsten’s startup, 3nb electronics, and it’s a product that is officially Powered by Raspberry Pi. Its standout feature is the semi-transparent TFT screen, which lets you play any video clip you choose while keeping your gaming hardware on display. It looks incredibly cool. All the graphics for the case’s screen are handled by a Raspberry Pi, so it doesn’t use any of your main PC’s GPU power and your gaming won’t suffer.

Brutus 2 PC case powered by Raspberry Pi

The software

To use Brutus 2, you just need to run a small desktop application on your PC to choose what you want to display on the case. A number of neat animations are included, and you can upload your own if you want.

So far, the app only runs on Windows, but 3nb electronics are planning to make the code open-source, so you can modify it for other operating systems, or to display other file types. This is true to the spirit of the case modding and Raspberry Pi communities, who love adapting, retrofitting, and overhauling projects and code to fit their needs.

Brutus 2 PC case powered by Raspberry Pi

Daniel and Carsten say that one of their campaign’s stretch goals is to implement more functionality in the Brutus 2 app. So in the future, the case could also show things like CPU temperature, gaming stats, and in-game messages. Of course, there’s nothing stopping you from integrating features like that yourself.

If you have any questions about the case, you can post them directly to Daniel and Carsten here.

The crowdfunding campaign

The Brutus 2 campaign on Startnext is currently halfway to its first funding goal of €10,000, with over three weeks to go until it closes. If you’re quick, you may still be able to snatch one of the early-bird offers. And if your whole guild NEEDS this, that’s OK — there are discounts for bulk orders.

The post Brutus 2: the gaming PC case of your dreams appeared first on Raspberry Pi.