Tag Archives: server

An Invitation for CrashPlan Customers: Try Backblaze

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/crashplan-alternative-backup-solution/

Welcome CrashPlan Users
With news coming out this morning of CrashPlan exiting the consumer market, we know some of you may be considering which backup provider to call home. We welcome you to try us.

For over a decade, Backblaze has provided unlimited cloud backup for Windows and Macintosh computers at $5 per month (or $50 per year).

“Backblaze is excellent if you’re looking for the cheapest online backup option that still offers serious file protection.” — Dann Berg, Tom’s Guide.

That’s it. Ready to make sure your data is safe? Try Backblaze for free — it’ll take you less than a minute and you don’t need a credit card to start protecting your data.

Our customers don’t have to choose between competing feature sets or hard to understand fine print. There are no extra charges and no limits on the size of your files — no matter how many videos you want to back up. And when we say unlimited, we mean unlimited; there are no restrictions on files, gigabytes, or restores. Customers also love the choices they have for getting their data back — web, mobile apps, and our free Restore by Mail option. We’re also the fastest to back up your data. While other services throttle your upload speeds, we want to get you protected as quickly as possible.

Backblaze vs. Carbonite

We know that CrashPlan is encouraging customers to look at Carbonite as an alternative. We would like to offer you another option: Backblaze. We cost less, we offer more, we store over 350 Petabytes of data, we have restored over 20 billion files, and customers in over 120 countries around the world trust us with their data.

Feature | Backblaze | Carbonite Basic | Carbonite Prime
Price per Computer | $50/year | $59.99/year | $149.99/year
Back Up All User Data By Default – No Picking And Choosing | Yes | No | No
Automatically Back Up Files Of Any Size, Including Videos | Yes | No | Yes (1)
Back Up Multiple USB External Hard Drives | Yes | No | No
Restore by Mail for Free | Yes | No | No
Locate Computer | Yes | No | No
Manage Families & Teams | Yes | No | No
Protect Accounts Via Two-Factor Verification (SMS & Authenticator Apps) | Yes | No | No
Protect Data Via Private Encryption Key | Yes | No | No (2)
(1) All videos and files over 4GB require manual selection.  (2) Available on Windows only.

To get just some of the features offered by Backblaze for $50/year, you would need to purchase Carbonite Prime at $149.99/year.

Reminder: Sync is Not Backup

“Backblaze is my favorite online backup service, mostly because everything about it is so simple, especially its pricing and software.” — Tim Fisher, Lifewire: 22 Online Backup Services Reviewed

Of course, there are plenty of options in the marketplace. We encourage you to choose one to make sure you stay backed up. One thing we tell our own friends and family: sync is not backup.

If you’re considering using a sync service — Dropbox, Google Drive, OneDrive, iCloud, etc. — you should know that these services are not designed to back up all your data. Typically, they only sync data from a specific directory or folder. If the service detects that a file was deleted from your sync folder, it will also delete it from the server, and you’re out of luck. In addition, most don’t support external drives and have tiered pricing that gets quite expensive.

Backblaze is the Simple, Reliable, and Affordable Choice for Unlimited Backup of All Your Data
People have trusted Backblaze to protect their digital photos, music, movies, and documents for the past 10 years. We look forward to doing the same for your valuable data.

Your CrashPlan service may not be getting shut off today. But there’s no reason to wait until your data is at risk. Try Backblaze for FREE today — all you need to do is pick an email/password and click download.

The post An Invitation for CrashPlan Customers: Try Backblaze appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Announcement: IPS code

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/08/announcement-ips-code.html

So after 20 years, IBM is killing off my BlackICE code created in April 1998. So it’s time that I rewrite it.

BlackICE was the first “inline” intrusion-detection system, aka. an “intrusion prevention system” or IPS. ISS purchased my company in 2001 and replaced their RealSecure engine with it, and later renamed it Proventia. Then IBM purchased ISS in 2006. Now, they are formally canceling the project and moving customers onto Cisco’s products, which are based on Snort.

So now is a good time to write a replacement. The reason is that BlackICE worked fundamentally differently than Snort, using protocol analysis rather than pattern-matching. In this way, it worked more like Bro than Snort. The biggest benefit of protocol-analysis is speed, making it many times faster than Snort. The second benefit is better detection ability, as I describe in this post on Heartbleed.

So my plan is to create a new project. I’ll be checking the starter bits into GitHub starting a couple of weeks from now. I need to figure out a new name for the project, so I don’t have to rip off a name from William Gibson like I did last time :).

Some notes:

  • Yes, it’ll be GNU open source. I’m a capitalist, so I’ll earn money the way Snort/Nmap do, by dual-licensing it and charging companies who don’t want to open-source their addons. All capitalists GNU license their code.
  • C, not Rust. Sorry, I’m going for extreme scalability. We’ll re-visit this decision later when looking at building protocol parsers.
  • It’ll be 95% compatible with Snort signatures. Their language definition leaves so much ambiguous it’ll be hard to be 100% compatible.
  • It’ll support Snort output as well, though really, Snort’s events suck.
  • Protocol parsers in Lua, so you can use it as a replacement for Bro, writing parsers to extract data you are interested in.
  • Protocol state machine parsers in C, like you see in my Masscan project for X.509.
  • First version will be IDS only. These days, “inline” means also being able to MitM the SSL stack, so I’m going to have to think harder on that.
  • Multi-core worker threads off PF_RING/DPDK/netmap receive queues. Should handle 10 Gbps, tracking 10 million concurrent connections, with a quad-core CPU.
So if you want to contribute to the project, here’s what I need:
  • Requirements from people who work daily with IDS/IPS today. I need you to write up what your products do well that you really like. I need you to write up what they suck at that needs to be fixed. These need to be in some detail.
  • Testing environment to play with. This means having a small server plugged into a real-world link running at a minimum of several gigabits-per-second available for the next year. I’ll sign NDAs related to the data I might see on the network.
  • Coders. I’ll be doing the basic architecture, but protocol parsers, output plugins, etc. will need work. Code will be in C and Lua for the near term. Unfortunately, since I’m going to dual-license, I’ll need waivers before accepting pull requests.
Anyway, follow me on Twitter @erratarob if you want to contribute.

Security updates for Thursday

Post Syndicated from jake original https://lwn.net/Articles/731309/rss

Security updates have been issued by CentOS (git), Debian (firefox-esr and mariadb-10.0), Gentoo (bind and tnef), Mageia (kauth, kdelibs4, poppler, subversion, and vim), openSUSE (fossil, git, libheimdal, libxml2, minicom, nodejs4, nodejs6, openjpeg2, openldap2, potrace, subversion, and taglib), Oracle (git and kernel), Red Hat (git, groovy, httpd24-httpd, and mercurial), Scientific Linux (git), and SUSE (freeradius-server, ImageMagick, and subversion).

What You Need To Know About Server Side Request Forgery (SSRF)

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/jiE0TjlsGI4/

SSRF or Server Side Request Forgery is an attack vector that has been around for a long time, but do you actually know what it is? Server Side Request Forgery (SSRF) refers to an attack wherein an attacker is able to send a crafted request from a vulnerable web application. SSRF is usually used […]

The post What You Need To Know About…

Read the full post at darknet.org.uk

AWS Online Tech Talks – August 2017

Post Syndicated from Sara Rodas original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-august-2017/

Welcome to mid-August, everyone–the season of beach days, family road trips, and an inbox full of “out of office” emails from your coworkers. Just in case spending time indoors has you feeling a bit blue, we’ve got a piping hot batch of AWS Online Tech Talks for you to check out. Kick up your feet, grab a glass of ice cold lemonade, and dive into our latest Tech Talks on Compute and DevOps.

August 2017 – Schedule

Noted below are the upcoming scheduled live, online technical sessions being held during the month of August. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.

Webinars featured this month are:

Thursday, August 17 – Compute

9:00 – 9:40 AM PDT: Deep Dive on Lambda@Edge.

Monday, August 28 – DevOps

10:30 – 11:10 AM PDT: Building Python Serverless Applications with AWS Chalice.

12:00 – 12:40 PM PDT: How to Deploy .NET Code to AWS from Within Visual Studio.

The AWS Online Tech Talks series covers a broad range of topics at varying technical levels. These sessions feature live demonstrations & customer examples led by AWS engineers and Solution Architects. Check out the AWS YouTube channel for more on-demand webinars on AWS technologies.

– Sara (Hello everyone, I’m a co-op from Northeastern University joining the team until December.)

What’s the Diff: Programs, Processes, and Threads

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/whats-the-diff-programs-processes-and-threads/

let's talk about Threads

How often have you heard the term threading in relation to a computer program, but you weren’t exactly sure what it meant? How about processes? You likely understand that a thread is somehow closely related to a program and a process, but if you’re not a computer science major, maybe that’s as far as your understanding goes.

Knowing what these terms mean is absolutely essential if you are a programmer, but an understanding of them also can be useful to the average computer user. Being able to look at and understand the Activity Monitor on the Macintosh, the Task Manager on Windows, or Top on Linux can help you troubleshoot which programs are causing problems on your computer, or whether you might need to install more memory to make your system run better.

Let’s take a few minutes to delve into the world of computer programs and sort out what these terms mean. We’ll simplify and generalize some of the ideas, but the general concepts we cover should help clarify the difference between the terms.

Programs

First of all, you probably are aware that a program is the code stored on your computer that is intended to fulfill a certain task. There are many types of programs, including those that help your computer function and are part of the operating system, and others that fulfill a particular job. These task-specific programs are also known as “applications,” and include programs for word processing, web browsing, or sending email.

Program

Programs are typically stored on disk or in non-volatile memory in a form that can be executed by your computer. Prior to that, they are created using a programming language such as C, Lisp, Pascal, or many others using instructions that involve logic, data and device manipulation, recurrence, and user interaction. The end result is a text file of code that is compiled into binary form (1’s and 0’s) in order to run on the computer. Another type of program is called “interpreted,” and instead of being compiled in advance in order to run, is interpreted into executable code at the time it is run. Some common, typically interpreted programming languages are Python, PHP, JavaScript, and Ruby.

The end result is the same, however, in that when a program is run, it is loaded into memory in binary form. The computer’s CPU (Central Processing Unit) understands only binary instructions, so that’s the form the program needs to be in when it runs.

Perhaps you’ve heard the programmer’s joke, “There are only 10 types of people in the world, those who understand binary, and those who don’t.”

Binary is the native language of computers because an electrical circuit at its basic level has two states, on or off, represented by a one or a zero. In the common numbering system we use every day, base 10, each digit position can be anything from 0 to 9. In base 2 (or binary), each position is either a 0 or a 1. (In a future blog post we might cover quantum computing, which goes beyond the concept of just 1’s and 0’s in computing.)

Decimal—Base 10 Binary—Base 2
0 0000
1 0001
2 0010
3 0011
4 0100
5 0101
6 0110
7 0111
8 1000
9 1001
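
If you want to see that conversion yourself, here is a quick Python sketch of ours (not from the original post) that reproduces the table above by asking the language to format each digit in base 2:

for n in range(10):
    # :04b formats the number as a 4-digit binary string, e.g. 5 -> 0101
    print(f"{n}  {n:04b}")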

How Processes Work

The program has been loaded into the computer’s memory in binary form. Now what?

An executing program needs more than just the binary code that tells the computer what to do. It also needs memory and various operating system resources in order to run. A “process” is what we call a program that has been loaded into memory along with all the resources it needs to operate. The “operating system” is the brains behind allocating all these resources, and comes in different flavors such as macOS, iOS, Microsoft Windows, Linux, and Android. The OS handles the task of managing the resources needed to turn your program into a running process.

Some essential resources every process needs are registers, a program counter, and a stack. The “registers” are data holding places that are part of the computer processor (CPU). A register may hold an instruction, a storage address, or other kind of data needed by the process. The “program counter,” also called the “instruction pointer,” keeps track of where a computer is in its program sequence. The “stack” is a data structure that stores information about the active subroutines of a computer program and is used as scratch space for the process. It is distinguished from dynamically allocated memory for the process that is known as “the heap.”

diagram of how processes work

There can be multiple instances of a single program, and each instance of that running program is a process. Each process has a separate memory address space, which means that a process runs independently and is isolated from other processes. It cannot directly access shared data in other processes. Switching from one process to another requires some time (relatively) for saving and loading registers, memory maps, and other resources.

This independence of processes is valuable because the operating system tries its best to isolate processes so that a problem with one process doesn’t corrupt or cause havoc with another process. You’ve undoubtedly run into the situation in which one application on your computer freezes or has a problem and you’ve been able to quit that program without affecting others.
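
To make that isolation concrete, here is a minimal Python sketch of our own (not from the original post): a variable changed inside a child process stays unchanged in the parent, because each process has its own memory.

import multiprocessing

counter = 0  # this copy lives in the parent process's memory

def worker():
    global counter
    counter += 100  # changes only the child process's own copy
    print("in the child process, counter =", counter)   # prints 100

if __name__ == "__main__":
    child = multiprocessing.Process(target=worker)
    child.start()
    child.join()
    # The parent's copy is untouched; processes do not share memory.
    print("in the parent process, counter =", counter)  # still prints 0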

How Threads Work

So, are you still with us? We finally made it to threads!

A thread is the unit of execution within a process. A process can have anywhere from just one thread to many threads.

Process vs. Thread

diagram of threads in a process over time

When a process starts, it is assigned memory and resources. Each thread in the process shares that memory and resources. In single-threaded processes, the process contains one thread. The process and the thread are one and the same, and there is only one thing happening.

In multithreaded processes, the process contains more than one thread, and the process is accomplishing a number of things at the same time (technically, it’s almost at the same time—read more on that in the “What about Parallelism and Concurrency?” section below).

diagram of single-threaded and multi-threaded processes

We talked about the two types of memory available to a process or a thread, the stack and the heap. It is important to distinguish between these two types of process memory because each thread will have its own stack, but all the threads in a process will share the heap.

Threads are sometimes called lightweight processes because they have their own stack but can access shared data. Because threads share the same address space as the process and other threads within the process, the operational cost of communication between the threads is low, which is an advantage. The disadvantage is that a problem with one thread in a process will certainly affect other threads and the viability of the process itself.
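
For contrast, here is an equally minimal sketch (again ours, with the same caveat) in which several threads inside one process update the same variable, because they all share the process’s memory:

import threading

counter = 0  # shared by every thread in this process

def worker():
    global counter
    for _ in range(100_000):
        counter += 1  # every thread updates the same variable

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All four threads saw and modified the same memory. Without a lock the
# final value can come up short on some interpreters, which is the classic
# race-condition downside of shared memory.
print("counter =", counter)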

Threads vs. Processes

So to review:

  1. The program starts out as a text file of programming code.
  2. The program is compiled or interpreted into binary form.
  3. The program is loaded into memory.
  4. The program becomes one or more running processes.
  5. Processes are typically independent of each other.
  6. Threads exist as a subset of a process.
  7. Threads can communicate with each other more easily than processes can.
  8. Threads are more vulnerable to problems caused by other threads in the same process.

Processes vs. Threads — Advantages and Disadvantages

Process | Thread
Processes are heavyweight operations | Threads are lighter-weight operations
Each process has its own memory space | Threads use the memory of the process they belong to
Inter-process communication is slow because processes have separate memory addresses | Inter-thread communication can be faster because threads of the same process share memory with the process they belong to
Context switching between processes is more expensive | Context switching between threads of the same process is less expensive
Processes don’t share memory with other processes | Threads share memory with other threads of the same process

What about Concurrency and Parallelism?

A question you might ask is whether processes or threads can run at the same time. The answer is: it depends. On a system with multiple processors or CPU cores (as is common with modern processors), multiple processes or threads can be executed in parallel. On a single processor, though, it is not possible to have processes or threads truly executing at the same time. In this case, the CPU is shared among running processes or threads using a process scheduling algorithm that divides the CPU’s time and yields the illusion of parallel execution. The time given to each task is called a “time slice.” The switching back and forth between tasks happens so fast it is usually not perceptible. The terms parallelism (true operation at the same time) and concurrency (simulated operation at the same time) distinguish between the two types of real or approximate simultaneous operation.

diagram of concurrency and parallelism
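
If you want to see the difference on your own machine, here is a small sketch of ours (not part of the original post) that runs the same CPU-bound work with a thread pool and with a process pool. On a multi-core machine running CPython, the process pool usually finishes faster because it gets true parallelism, while the threads take turns on one core (concurrency).

import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def busy_work(n):
    # A purely CPU-bound task: add up the first n integers in a loop.
    total = 0
    for i in range(n):
        total += i
    return total

def timed(executor_cls, label):
    start = time.perf_counter()
    with executor_cls(max_workers=4) as pool:
        list(pool.map(busy_work, [5_000_000] * 4))
    print(f"{label}: {time.perf_counter() - start:.2f} seconds")

if __name__ == "__main__":
    timed(ThreadPoolExecutor, "4 threads (concurrency)")
    timed(ProcessPoolExecutor, "4 processes (parallelism)")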

Why Choose Process over Thread, or Thread over Process?

So, how would a programmer choose between a process and a thread when creating a program in which she wants to execute multiple tasks at the same time? We’ve covered some of the differences above, but let’s look at a real world example with a program that many of us use, Google Chrome.

When Google was designing the Chrome browser, they needed to decide how to handle the many different tasks that needed computer, communications, and network resources at the same time. Each browser window or tab communicates with multiple servers on the internet to retrieve text, programs, graphics, audio, video, and other resources, and renders that data for display and interaction with the user. In addition, the browser can open many windows, each with many tasks.

Google had to decide how to handle that separation of tasks. They chose to run each browser window in Chrome as a separate process rather than a thread or many threads, as is common with other browsers. Doing that brought Google a number of benefits. Running each window as a process protects the overall application from bugs and glitches in the rendering engine and restricts access from each rendering engine process to others and to the rest of the system. Isolating JavaScript programs in a process prevents them from running away with too much CPU time and memory, and from making the entire browser non-responsive.

Google made a calculated trade-off with its multi-process design: starting a new process for each browser window has a higher fixed cost in memory and resources than using threads, but Google was betting that the approach would result in less memory bloat overall.

Using processes instead of threads provides better memory usage when memory gets low. An inactive window is treated as a lower priority by the operating system and becomes eligible to be swapped to disk when memory is needed for other processes, helping to keep the user-visible windows more responsive. If the windows were threaded, it would be more difficult to separate the used and unused memory as cleanly, wasting both memory and performance.

You can read more about Google’s design decisions on Google’s Chromium Blog or on the Chrome Introduction Comic.

The screen capture below shows the Google Chrome processes running on a MacBook Air with many tabs open. Some Chrome processes are using a fair amount of CPU time and resources, and some are using very little. You can see that each process also has many threads running as well.

activity monitor of Google Chrome

The Activity Monitor or Task Manager on your system can be a valuable ally in helping fine-tune your computer or troubleshooting problems. If your computer is running slowly, or a program or browser window isn’t responding for a while, you can check its status using the system monitor. Sometimes you’ll see a process marked as “Not Responding.” Try quitting that process and see if your system runs better. If an application is a memory hog, you might consider choosing a different application that will accomplish the same task.

Windows Task Manager view

Made it This Far?

We hope this Tron-like dive into the fascinating world of computer programs, processes, and threads has helped clear up some questions you might have had.

The next time your computer is running slowly or an application is acting up, you know your assignment. Fire up the system monitor and take a look under the hood to see what’s going on. You’re in charge now.

We love to hear from you

Are you still confused? Have questions? If so, please let us know in the comments. And feel free to suggest topics for future blog posts.

The post What’s the Diff: Programs, Processes, and Threads appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Spinrilla Refuses to Share Its Source Code With the RIAA

Post Syndicated from Ernesto original https://torrentfreak.com/spinrilla-refuses-to-share-its-source-code-with-the-riaa-170815/

Earlier this year, a group of well-known labels targeted Spinrilla, a popular hip-hop mixtape site and accompanying app with millions of users.

The coalition of record labels including Sony Music, Warner Bros. Records, and Universal Music Group, filed a lawsuit accusing the service of alleged copyright infringements.

Both sides have started the discovery process and recently asked the court to rule on several unresolved matters. The parties begin with their statements of facts, clearly from opposite angles.

The RIAA remains confident that the mixtape site is ripping off music creators and wants its operators to be held accountable.

“Since Spinrilla launched, Defendants have facilitated millions of unauthorized downloads and streams of thousands of Plaintiffs’ sound recordings without Plaintiffs’ permission,” RIAA writes, complaining about “rampant” infringement on the site.

However, Spinrilla itself believes that the claims are overblown. The company points out that the RIAA’s complaint only lists a tiny fraction of all the songs uploaded by its users. These somehow slipped through its Audible Magic anti-piracy filter.

Where the RIAA paints a picture of rampant copyright infringement, the mixtape site stresses that the record labels are complaining about less than 0.001% of all the tracks they ever published.

“From 2013 to the present, Spinrilla users have uploaded about 1 million songs to Spinrilla’s servers and Spinrilla published about 850,000 of those. Plaintiffs are complaining that 210 of those songs are owned by them and published on Spinrilla without permission,” Spinrilla’s lawyers write.

“That means that Plaintiffs make no claim to 99.9998% of the songs on Spinrilla. Plaintiffs’ shouting of ‘rampant infringement on Spinrilla’, an accusation that Spinrilla was designed to allow easy and open access to infringing material, and assertion that ‘Defendants have facilitated millions of unauthorized downloads’ of those 210 songs is untrue – it is nothing more than a wish and a dream.”

The company reiterates that it’s a platform for independent musicians and that it doesn’t want to feature the Eminems and Biebers of this world, especially not without permission.

As for the discovery process, there are still several outstanding issues they need the Court’s advice on. Spinrilla has thus far produced 12,000 pages of documents and answered all RIAA interrogatories, but refuses to hand over certain information, including its source code.

According to Spinrilla, there is no reason for the RIAA to have access to its “crown jewel.”

“The source code is the crown jewel of any software based business, including Spinrilla. Even worse, Plaintiffs want an ‘executable’ version of Spinrilla’s source code, which would literally enable them to replicate Spinrilla’s entire website. Any Plaintiff could, in hours, delete all references to ‘Spinrilla,’ add its own brand and launch Spinrilla’s exact website.

“If we sued YouTube for hosting 210 infringing videos, would I be entitled to the source code for YouTube? There is simply no justification for Spinrilla sharing its source code with Plaintiffs,” Spinrilla adds.

The RIAA, on the other hand, argues that the source code will provide insight into several critical issues, including Spinrilla’s knowledge about infringing activity and its ability to terminate repeat copyright infringers.

In addition to the source code, the RIAA has also requested detailed information about the site’s users, including their download and streaming history. This request is too broad, the mixtape site argues, and has offered to provide information on the uploaders of the 210 infringing tracks instead.

It’s clear that the RIAA and Spinrilla disagree on various fronts and it will be up to the court to decide what information must be handed over. So far, however, the language used clearly shows that both parties are far from reaching some kind of compromise.

The first joint discovery statement is available in full here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Security updates for Tuesday

Post Syndicated from ris original https://lwn.net/Articles/731035/rss

Security updates have been issued by Arch Linux (audiofile, git, jdk7-openjdk, libytnef, mercurial, spice, strongswan, subversion, and xorg-server), Debian (gajim, krb5, and libraw), Fedora (kernel, postgresql, sscep, subversion, and varnish), Mageia (firefox, phpldapadmin, and x11-server), Red Hat (kernel and spice), SUSE (subversion), and Ubuntu (libgd2).

Hacking a Gene Sequencer by Encoding Malware in a DNA Strand

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/hacking_a_gene_.html

One of the common ways to hack a computer is to mess with its input data. That is, if you can feed the computer data that it interprets — or misinterprets — in a particular way, you can trick the computer into doing things that it wasn’t intended to do. This is basically what a buffer overflow attack is: the data input overflows a buffer and ends up being executed by the computer process.

Well, some researchers did this with a computer that processes DNA, and they encoded their malware in the DNA strands themselves:

To make the malware, the team translated a simple computer command into a short stretch of 176 DNA letters, denoted as A, G, C, and T. After ordering copies of the DNA from a vendor for $89, they fed the strands to a sequencing machine, which read off the gene letters, storing them as binary digits, 0s and 1s.

Erlich says the attack took advantage of a spill-over effect, when data that exceeds a storage buffer can be interpreted as a computer command. In this case, the command contacted a server controlled by Kohno’s team, from which they took control of a computer in their lab they were using to analyze the DNA file.

News articles. Research paper.

BREIN is Taking Infamous ‘Piracy’ Hosting Provider Ecatel to Court

Post Syndicated from Andy original https://torrentfreak.com/brein-is-taking-infamous-piracy-hosting-provider-ecatel-to-court-170815/

A regular website can be easily hosted in most countries of the world but when the nature of the project begins to step on toes, opportunities begin to reduce. Openly hosting The Pirate Bay, for example, is something few providers want to get involved with.

There are, however, providers out there who specialize in hosting services that others won’t touch. They develop a reputation of turning a blind eye to their customers’ activities, only reacting when a crisis looms on the horizon. Despite the problems, there are a few that are surprisingly resilient.

One such host is Netherlands-based Ecatel, which has hit the headlines many times over the years for allegedly having customers involved in warez, torrents, and streaming, not to mention spam and malware. For hosting the former group, it’s now in the crosshairs of Dutch anti-piracy group BREIN.

According to an application for a witness hearing filed with The Court of the Hague by BREIN, Ecatel has repeatedly hosted websites dealing in infringing content over recent years. While this is nothing particularly out of the ordinary, BREIN claims that complaints filed against the sites were dealt with slowly by Ecatel or not at all.

Ecatel Ltd is a company incorporated in the UK with servers in the Netherlands but since 2015, another hosting company called Novogara has appeared in tandem. Court documents suggest that Novogara is associated with Ecatel, something that was confirmed early 2016 in an email sent out by Ecatel itself.

“We’d like to inform you that all services of Ecatel Ltd are taken over by a new brand called Novogara Ltd with immediate effect. The take-over includes Ecatel and all her subsidiaries,” the email read.

Muddying the waters a little more, in 2015 Ecatel’s IP addresses were apparently taken over by Quasi Networks Ltd, a Seychelles-based company whose business is described locally as being conducted entirely overseas.

“Stichting BREIN has found several websites in the network of Quasi Networks with obviously infringing content. Quasi Networks, however, does not respond structurally to requests for closing those websites. This involves unlawful acts against the parties associated with the BREIN Foundation,” a ruling from the Court reads.

As a result, BREIN wants a witness hearing with three defendants connected to the Ecatel/Novogara/Quasi group of companies in order to establish the relationship between the businesses, where their servers are, and who is behind Quasi Networks.

“Stichting BREIN is interested in this information in order to be able to judge who it can appeal to and whether it is useful to start a legal procedure,” the Court adds.

Two of the defendants failed to lodge a defense against BREIN’s application but one objected to the request for a hearing. He said that since Quasi Networks, Ecatel and Novogara are all incorporated outside the Netherlands, a trial must also be conducted abroad and therefore a Dutch judge would not have jurisdiction.

He also argued that BREIN would use the witness hearing as a “fishing expedition” in order to gather information it currently does not have, in order to formulate some kind of case against the defendants, in one way or another.

In a decision published this week, The Court of the Hague rejected that argument, noting that the basis for the claim is copyright infringement through Netherlands-hosted websites. Furthermore, the majority of the witnesses are resident in the district of The Hague. It also underlined the importance of a hearing.

“The request for holding a preliminary witness hearing opens an independent petition procedure, which does not address the eligibility of any claim that may be lodged. An investigation must be made by the judge who has to deal with and decide the main case – if it comes.

“The court points out that a preliminary witness hearing is now (partly) necessary to clarify whether and to what extent a claim has any chance of success,” the decision reads.

According to documents published by Companies House in the UK, Ecatel Ltd ceased to exist this morning, having been dissolved at the request of its directors.

The hearing of the witnesses is set to take place on Tuesday, September 26, 2017 at 9.30 in the Palace of Justice at Prince Claus 60 in The Hague.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Why that "file-copy" forensics of DNC hack is wrong

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/08/why-that-file-copy-forensics-of-dnc.html

People keep asking me about this story about how forensics “experts” have found proof the DNC hack was an inside job, because files were copied at 22-megabytes-per-second, faster than is reasonable for Internet connections.

This story is bogus.

Yes, the forensics is correct that at some point, files were copied at 22 MBps. But there’s no evidence this was the point at which the data was transferred over the Internet out of the DNC.

One possibility is a copy from one computer to another within the DNC. Indeed, as someone experienced doing this sort of hack, I can say it’s almost certain that at some point such a copy happened. The computers you are able to hack into are rarely the computers that have the data you want. Instead, you have to copy the data from other computers to the hacked computer, and then exfiltrate the data out of the hacked computer.

Another possibility is a copy from one computer to another within the hacker’s own network, after the data was stolen. As a hacker, I can tell you that I frequently do this. Indeed, as this story points out, the timestamps of the files show that the 22 MBps copy happened months after the hack was detected.

If the 22 MBps copy was the exfiltration itself, it might not have been from inside the DNC building, but from some cloud service, as this tweet points out. Hackers usually have “staging” servers in the cloud that can talk to other cloud servers at easily 10 times that rate, even around the world. I have staging servers that will do this, and indeed, have copied files at this data rate. If the DNC had that data or backups in the cloud, this would explain it.

My point is that while the forensic data point is good, there are a zillion ways of explaining it. It’s silly to insist on only the one explanation that fits your pet theory.

As a side note, you can tell this already from the way the story is told. For example, rather than explain the evidence and let it stand on its own, the stories hype the credentials of those who believe the story, using the “appeal to authority” fallacy.

AWS Summit New York – Summary of Announcements

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-summit-new-york-summary-of-announcements/

Whew – what a week! Tara, Randall, Ana, and I have been working around the clock to create blog posts for the announcements that we made at the AWS Summit in New York. Here’s a summary to help you to get started:

Amazon Macie – This new service helps you to discover, classify, and secure content at scale. Powered by machine learning and making use of Natural Language Processing (NLP), Macie looks for patterns and alerts you to suspicious behavior, and can help you with governance, compliance, and auditing. You can read Tara’s post to see how to put Macie to work; you select the buckets of interest, customize the classification settings, and review the results in the Macie Dashboard.

AWS Glue – Randall’s post (with deluxe animated GIFs) introduces you to this new extract, transform, and load (ETL) service. Glue is serverless and fully managed. As you can see from the post, Glue crawls your data, infers schemas, and generates ETL scripts in Python. You define jobs that move data from place to place, with a wide selection of transforms, each expressed as code and stored in human-readable form. Glue uses Development Endpoints and notebooks to provide you with a testing environment for the scripts you build. We also announced that Amazon Athena now integrates with AWS Glue, as do Apache Spark and Hive on Amazon EMR.

AWS Migration Hub – This new service will help you to migrate your application portfolio to AWS. My post outlines the major steps and shows you how the Migration Hub accelerates, tracks, and simplifies your migration effort. You can begin with a discovery step, or you can jump right in and migrate directly. Migration Hub integrates with tools from our migration partners and builds upon the Server Migration Service and the Database Migration Service.

CloudHSM Update – We made a major upgrade to AWS CloudHSM, making the benefits of hardware-based key management available to a wider audience. The service is offered on a pay-as-you-go basis, and is fully managed. It is open and standards compliant, with support for multiple APIs, programming languages, and cryptography extensions. CloudHSM is an integral part of AWS and can be accessed from the AWS Management Console, AWS Command Line Interface (CLI), and through API calls. Read my post to learn more and to see how to set up a CloudHSM cluster.

Managed Rules to Secure S3 Buckets – We added two new rules to AWS Config that will help you to secure your S3 buckets. The s3-bucket-public-write-prohibited rule identifies buckets that have public write access and the s3-bucket-public-read-prohibited rule identifies buckets that have global read access. As I noted in my post, you can run these rules in response to configuration changes or on a schedule. The rules make use of some leading-edge constraint solving techniques, as part of a larger effort to use automated formal reasoning about AWS.

CloudTrail for All Customers – Tara’s post revealed that AWS CloudTrail is now available and enabled by default for all AWS customers. As a bonus, Tara reviewed the principal benefits of CloudTrail and showed you how to review your event history and to deep-dive on a single event. She also showed you how to create a second trail, for use with CloudWatch Events.

Encryption of Data at Rest for EFS – When you create a new file system, you now have the option to select a key that will be used to encrypt the contents of the files on the file system. The encryption is done using an industry-standard AES-256 algorithm. My post shows you how to select a key and to verify that it is being used.

Watch the Keynote
My colleagues Adrian Cockcroft and Matt Wood talked about these services and others on the stage, and also invited some AWS customers to share their stories. Here’s the video:

Jeff;

 

Launch – AWS Glue Now Generally Available

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/launch-aws-glue-now-generally-available/

Today we’re excited to announce the general availability of AWS Glue. Glue is a fully managed, serverless, and cloud-optimized extract, transform and load (ETL) service. Glue is different from other ETL services and platforms in a few very important ways.

First, Glue is “serverless” – you don’t need to provision or manage any resources and you only pay for resources when Glue is actively running. Second, Glue provides crawlers that can automatically detect and infer schemas from many data sources, data types, and across various types of partitions. It stores these generated schemas in a centralized Data Catalog for editing, versioning, querying, and analysis. Third, Glue can automatically generate ETL scripts (in Python!) to translate your data from your source formats to your target formats. Finally, Glue allows you to create development endpoints that allow your developers to use their favorite toolchains to construct their ETL scripts. Ok, let’s dive deep with an example.

In my job as a Developer Evangelist I spend a lot of time traveling and I thought it would be cool to play with some flight data. The Bureau of Transportation Statistics is kind enough to share all of this data for anyone to use here. We can easily download this data and put it in an Amazon Simple Storage Service (S3) bucket. This data will be the basis of our work today.

Crawlers

First, we need to create a Crawler for our flights data from S3. We’ll select Crawlers in the Glue console and follow the on-screen prompts from there. I’ll specify s3://crawler-public-us-east-1/flight/2016/csv/ as my first data source (we can add more later if needed). Next, we’ll create a database called flights and give our tables a prefix of flights as well.

The Crawler will go over our dataset, detect partitions through various folders – in this case months of the year – detect the schema, and build a table. We could add additional data sources and jobs into our crawler or create separate crawlers that push data into the same database, but for now let’s look at the autogenerated schema.

I’m going to make a quick schema change to year, moving it from BIGINT to INT. Then I can compare the two versions of the schema if needed.

Now that we know how to correctly parse this data let’s go ahead and do some transforms.

ETL Jobs

Now we’ll navigate to the Jobs subconsole and click Add Job. We’ll follow the prompts from there, giving our job a name, selecting a datasource, and an S3 location for temporary files. Next we add our target by specifying “Create tables in your data target” and we’ll specify an S3 location in Parquet format as our target.

After clicking next, we’re at a screen showing the various mappings proposed by Glue. Now we can make manual column adjustments as needed – in this case we’re just going to use the X button to remove a few columns that we don’t need.

This brings us to my favorite part. This is what I absolutely love about Glue.

Glue generated a PySpark script to transform our data based on the information we’ve given it so far. On the left hand side we can see a diagram documenting the flow of the ETL job. On the top right we see a series of buttons that we can use to add annotated data sources and targets, transforms, spigots, and other features. This is the interface I get if I click on transform.

If we add any of these transforms or additional data sources, Glue will update the diagram on the left giving us a useful visualization of the flow of our data. We can also just write our own code into the console and have it run. We can add triggers to this job that fire on completion of another job, a schedule, or on demand. That way if we add more flight data we can reload this same data back into S3 in the format we need.
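
To give a feel for what these generated scripts look like, here is a rough, hand-written sketch in the same style. It is not the exact script Glue produced for this walkthrough, and the database, table, column, and bucket names are hypothetical placeholders:

import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Glue passes the job name in as an argument when the job runs
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read the crawled table from the Data Catalog (names are placeholders)
flights = glueContext.create_dynamic_frame.from_catalog(
    database="flights", table_name="flightscsv")

# Keep, rename, and retype only the columns we care about
mapped = ApplyMapping.apply(
    frame=flights,
    mappings=[("year", "bigint", "year", "int"),
              ("carrier", "string", "carrier", "string"),
              ("dep_delay", "string", "dep_delay", "double")])

# Write the result back to S3 as Parquet (bucket name is a placeholder)
glueContext.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://my-example-bucket/flights-parquet/"},
    format="parquet")

job.commit()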

I could spend all day writing about the power and versatility of the jobs console but Glue still has more features I want to cover. So, while I might love the script editing console, I know many people prefer their own development environments, tools, and IDEs. Let’s figure out how we can use those with Glue.

Development Endpoints and Notebooks

A Development Endpoint is an environment used to develop and test our Glue scripts. If we navigate to “Dev endpoints” in the Glue console we can click “Add endpoint” in the top right to get started. Next we’ll select a VPC, a security group that references itself and then we wait for it to provision.


Once it’s provisioned we can create an Apache Zeppelin notebook server by going to actions and clicking create notebook server. We give our instance an IAM role and make sure it has permissions to talk to our data sources. Then we can either SSH into the server or connect to the notebook to interactively develop our script.

Pricing and Documentation

You can see detailed pricing information here. Glue crawlers, ETL jobs, and development endpoints are all billed in Data Processing Unit hours (DPUs), billed by the minute. Each DPU-hour costs $0.44 in us-east-1. A single DPU provides 4 vCPUs and 16 GB of memory.

We’ve only covered about half of the features that Glue has, so I want to encourage everyone who made it this far into the post to go read the documentation and service FAQs. Glue also has a rich and powerful API that allows you to do anything the console can do, and more.

We’re also releasing two new projects today. The aws-glue-libs provide a set of utilities for connecting to and talking with Glue. The aws-glue-samples repo contains a set of example jobs.

I hope you find that using Glue reduces the time it takes to start doing things with your data. Look for another post from me on AWS Glue soon because I can’t stop playing with this new service.
Randall

AWS CloudHSM Update – Cost Effective Hardware Key Management at Cloud Scale for Sensitive & Regulated Workloads

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-cloudhsm-update-cost-effective-hardware-key-management/

Our customers run an incredible variety of mission-critical workloads on AWS, many of which process and store sensitive data. As detailed in our Overview of Security Processes document, AWS customers have access to an ever-growing set of options for encrypting and protecting this data. For example, Amazon Relational Database Service (RDS) supports encryption of data at rest and in transit, with options tailored for each supported database engine (MySQL, SQL Server, Oracle, MariaDB, PostgreSQL, and Aurora).

Many customers use AWS Key Management Service (KMS) to centralize their key management, with others taking advantage of the hardware-based key management, encryption, and decryption provided by AWS CloudHSM to meet stringent security and compliance requirements for their most sensitive data and regulated workloads (you can read my post, AWS CloudHSM – Secure Key Storage and Cryptographic Operations, to learn more about Hardware Security Modules, also known as HSMs).

Major CloudHSM Update
Today, building on what we have learned from our first-generation product, we are making a major update to CloudHSM, with a set of improvements designed to make the benefits of hardware-based key management available to a much wider audience while reducing the need for specialized operating expertise. Here’s a summary of the improvements:

Pay As You Go – CloudHSM is now offered under a pay-as-you-go model that is simpler and more cost-effective, with no up-front fees.

Fully Managed – CloudHSM is now a scalable managed service; provisioning, patching, high availability, and backups are all built-in and taken care of for you. Scheduled backups extract an encrypted image of your HSM from the hardware (using keys that only the HSM hardware itself knows) that can be restored only to identical HSM hardware owned by AWS. For durability, those backups are stored in Amazon Simple Storage Service (S3), and for an additional layer of security, encrypted again with server-side S3 encryption using an AWS KMS master key.

Open & Compatible  – CloudHSM is open and standards-compliant, with support for multiple APIs, programming languages, and cryptography extensions such as PKCS #11, Java Cryptography Extension (JCE), and Microsoft CryptoNG (CNG). The open nature of CloudHSM gives you more control and simplifies the process of moving keys (in encrypted form) from one CloudHSM to another, and also allows migration to and from other commercially available HSMs.

More Secure – CloudHSM Classic (the original model) supports the generation and use of keys that comply with FIPS 140-2 Level 2. We’re stepping that up a notch today with support for FIPS 140-2 Level 3, with security mechanisms that are designed to detect and respond to physical attempts to access or modify the HSM. Your keys are protected with exclusive, single-tenant access to tamper-resistant HSMs that appear within your Virtual Private Clouds (VPCs). CloudHSM supports quorum authentication for critical administrative and key management functions. This feature allows you to define a list of N possible identities that can access the functions, and then require at least M of them to authorize the action. It also supports multi-factor authentication using tokens that you provide.

AWS-Native – The updated CloudHSM is an integral part of AWS and plays well with other tools and services. You can create and manage a cluster of HSMs using the AWS Management Console, AWS Command Line Interface (CLI), or API calls.

Diving In
You can create CloudHSM clusters that contain 1 to 32 HSMs, each in a separate Availability Zone in a particular AWS Region. Spreading HSMs across AZs gives you high availability (including built-in load balancing); adding more HSMs gives you additional throughput. The HSMs within a cluster are kept in sync: performing a task or operation on one HSM in a cluster automatically updates the others. Each HSM in a cluster has its own Elastic Network Interface (ENI).

All interaction with an HSM takes place via the AWS CloudHSM client. It runs on an EC2 instance and uses certificate-based mutual authentication to create secure (TLS) connections to the HSMs.

At the hardware level, each HSM includes hardware-enforced isolation of crypto operations and key storage. Each customer HSM runs on dedicated processor cores.

Setting Up a Cluster
Let’s set up a cluster using the CloudHSM Console:

I click on Create cluster to get started, select my desired VPC and the subnets within it (I can also create a new VPC and/or subnets if needed):

Then I review my settings and click on Create:

After a few minutes, my cluster exists, but is uninitialized:

Initialization simply means retrieving a certificate signing request (the Cluster CSR):

And then creating a private key and using it to sign the request (these commands were copied from the Initialize Cluster docs and I have omitted the output. Note that ID identifies the cluster):

$ openssl genrsa -out CustomerRoot.key 2048
$ openssl req -new -x509 -days 365 -key CustomerRoot.key -out CustomerRoot.crt
$ openssl x509 -req -days 365 -in ID_ClusterCsr.csr   \
                              -CA CustomerRoot.crt    \
                              -CAkey CustomerRoot.key \
                              -CAcreateserial         \
                              -out ID_CustomerHsmCertificate.crt

The next step is to apply the signed certificate to the cluster using the console or the CLI. After this has been done, the cluster can be activated by changing the password for the HSM’s administrative user, otherwise known as the Crypto Officer (CO).
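
As a rough example, the CLI version of that certificate step looks something like this (the cluster ID below is a placeholder):

$ aws cloudhsmv2 initialize-cluster --cluster-id cluster-EXAMPLE1234 \
      --signed-cert file://ID_CustomerHsmCertificate.crt \
      --trust-anchor file://CustomerRoot.crt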

Once the cluster has been created, initialized and activated, it can be used to protect data. Applications can use the APIs in AWS CloudHSM SDKs to manage keys, encrypt & decrypt objects, and more. The SDKs provide access to the CloudHSM client (running on the same instance as the application). The client, in turn, connects to the cluster across an encrypted connection.

Available Today
The new HSM is available today in the US East (Northern Virginia), US West (Oregon), US East (Ohio), and EU (Ireland) Regions, with more in the works. Pricing starts at $1.45 per HSM per hour.

Jeff;

AWS Migration Hub – Plan & Track Enterprise Application Migration

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-migration-hub-plan-track-enterprise-application-migration/

About once a week, I speak to current and potential AWS customers in our Seattle Executive Briefing Center. While I generally focus on our innovation process, we sometimes discuss other topics, including application migration. When enterprises decide to migrate their application portfolios they want to do it in a structured, orderly fashion. These portfolios typically consist of hundreds of complex Windows and Linux applications, relational databases, and more. Customers find themselves eager yet uncertain as to how to proceed. After spending time working with these customers, we have learned that their challenges generally fall into three major categories:

Discovery – They want to make sure that they have a deep and complete understanding of all of the moving parts that power each application.

Server & Database Migration – They need to transfer on-premises workloads and database tables to the cloud.

Tracking / Management – With large application portfolios and multiple migrations happening in parallel, they need to track and manage progress in an application-centric fashion.

Over the last couple of years we have launched a set of tools that address the first two challenges. The AWS Application Discovery Service automates the process of discovering and collecting system information, the AWS Server Migration Service takes care of moving workloads to the cloud, and the AWS Database Migration Service moves relational databases, NoSQL databases, and data warehouses with minimal downtime. Partners like Racemi and CloudEndure also offer migration tools of their own.

New AWS Migration Hub
Today we are bringing this collection of AWS and partner migration tools together in the AWS Migration Hub. The hub provides access to the tools that I mentioned above, guides you through the migration process, and tracks the status of each migration, all in accord with the methodology and tenets described in our Migration Acceleration Program (MAP).

Here’s the main screen. It outlines the migration process (discovery, migration, and tracking):

Clicking on Start discovery reveals the flow of the migration process:

It is also possible to skip the Discovery step and begin the migration immediately:

The Servers list is populated using data from an AWS migration service (Server Migration Service or Database Migration Service), partner tools, or using data collected by the AWS Application Discovery Service:

I can click on Group as application to create my first application:

Once I identify some applications to migrate, I can track them in the Migrations section of the Hub:

The migration tools, if authorized, automatically send status updates and results back to Migration Hub, for display on the migration status page for the application. Here you can see that Racemi DynaCenter and CloudEndure Migration have played their parts in the migration:

I can track the status of my migrations by checking the Migration Hub Dashboard:

Migration Hub works with migration tools from AWS and our Migration Partners; see the list of integrated partner tools to learn more:

Available Now
AWS Migration Hub can manage migrations in any AWS Region that has the necessary migration tools available; the hub itself runs in the US West (Oregon) Region. There is no charge for the Hub; you pay only for the AWS services that you consume in the course of the migration.

If you are ready to begin your migration to the cloud and are in need of some assistance, please take advantage of the services offered by our Migration Acceleration Partners. These organizations have earned their migration competency by repeatedly demonstrating their ability to deliver large-scale migration.

Jeff;