Tag Archives: Staff

Looking Forward to 2018

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org//2017/12/07/looking-forward-to-2018.html

Let’s Encrypt had a great year in 2017. We more than doubled the number of active (unexpired) certificates we service to 46 million, we just about tripled the number of unique domains we service to 61 million, and we did it all while maintaining a stellar security and compliance track record. Most importantly though, the Web went from 46% encrypted page loads to 67%, according to statistics from Mozilla – a gain of 21 percentage points in a single year – incredible. We’re proud to have contributed to that, and we’d like to thank all of the other people and organizations who also worked hard to create a more secure and privacy-respecting Web.

While we’re proud of what we accomplished in 2017, we are spending most of the final quarter of the year looking forward rather than back. As we wrap up our own planning process for 2018, I’d like to share some of our plans with you, including both the things we’re excited about and the challenges we’ll face. We’ll cover service growth, new features, infrastructure, and finances.

Service Growth

We are planning to double the number of active certificates and unique domains we service in 2018, to 90 million and 120 million, respectively. This anticipated growth reflects our expectation that HTTPS adoption in general will continue to grow strongly in 2018.

Let’s Encrypt helps to drive HTTPS adoption by offering a free, easy to use, and globally available option for obtaining the certificates required to enable HTTPS. HTTPS adoption on the Web took off at an unprecedented rate from the day Let’s Encrypt launched to the public.

One of the reasons Let’s Encrypt is so easy to use is that our community has done great work making client software that works well for a wide variety of platforms. We’d like to thank everyone involved in the development of over 60 client software options for Let’s Encrypt. We’re particularly excited that support for the ACME protocol and Let’s Encrypt is being added to the Apache httpd server.

Other organizations and communities are also doing great work to promote HTTPS adoption, and thus stimulate demand for our services. For example, browsers are starting to make their users more aware of the risks associated with unencrypted HTTP (e.g. Firefox, Chrome). Many hosting providers and CDNs are making it easier than ever for all of their customers to use HTTPS. Government agencies are waking up to the need for stronger security to protect constituents. The media community is working to Secure the News.

New Features

We’ve got some exciting features planned for 2018.

First, we’re planning to introduce an ACME v2 protocol API endpoint and support for wildcard certificates along with it. Wildcard certificates will be free and available globally just like our other certificates. We are planning to have a public test API endpoint up by January 4, and we’ve set a date for the full launch: Tuesday, February 27.

Later in 2018 we plan to introduce ECDSA root and intermediate certificates. ECDSA is generally considered to be the future of digital signature algorithms on the Web because it is more efficient than RSA. Let’s Encrypt will currently sign ECDSA keys from subscribers, but we do so with the RSA key from one of our intermediate certificates. Once we have an ECDSA root and intermediates, our subscribers will be able to deploy certificate chains which are entirely ECDSA.
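
To make the subscriber side of this concrete, here is a minimal sketch (standard java.security APIs only; the P-256 curve choice is just an illustrative assumption) of generating the kind of ECDSA key pair that a certificate request could be built around:

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.spec.ECGenParameterSpec;
    import java.util.Base64;

    public class EcdsaKeyExample {
        public static void main(String[] args) throws Exception {
            // Generate a P-256 (secp256r1) key pair; this is the kind of
            // subscriber key that can already be signed today.
            KeyPairGenerator generator = KeyPairGenerator.getInstance("EC");
            generator.initialize(new ECGenParameterSpec("secp256r1"));
            KeyPair keyPair = generator.generateKeyPair();

            // A real deployment would build a CSR around this key pair and
            // submit it through an ACME client; here we just print the public key.
            System.out.println(Base64.getEncoder()
                    .encodeToString(keyPair.getPublic().getEncoded()));
        }
    }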

Infrastructure

Our CA infrastructure is capable of issuing millions of certificates per day, with multiple levels of redundancy for stability and a wide variety of security safeguards, both physical and logical. Our infrastructure also generates and signs nearly 20 million OCSP responses daily, and serves those responses nearly 2 billion times per day. We expect issuance and OCSP numbers to double in 2018.

Our physical CA infrastructure currently occupies approximately 70 units of rack space, split between two datacenters, consisting primarily of compute servers, storage, HSMs, switches, and firewalls.

When we issue more certificates it puts the most stress on storage for our databases. We regularly invest in more and faster storage for our database servers, and that will continue in 2018.

We’ll need to add a few additional compute servers in 2018, and we’ll also start aging out hardware in 2018 for the first time since we launched. We’ll age out about ten 2u compute servers and replace them with new 1u servers, which will save space and be more energy efficient while providing better reliability and performance.

We’ll also add another infrastructure operations staff member, bringing that team to a total of six people. This is necessary in order to make sure we can keep up with demand while maintaining a high standard for security and compliance. Infrastructure operations staff are systems administrators responsible for building and maintaining all physical and logical CA infrastructure. The team also manages a 24/7/365 on-call schedule and they are primary participants in both security and compliance audits.

Finances

We pride ourselves on being an efficient organization. In 2018 Let’s Encrypt will secure a large portion of the Web with a budget of only $3.0M. For an overall increase in our budget of only 13%, we will be able to issue and service twice as many certificates as we did in 2017. We believe this represents an incredible value and that contributing to Let’s Encrypt is one of the most effective ways to help create a more secure and privacy-respecting Web.

Our 2018 fundraising efforts are off to a strong start with Platinum sponsorships from Mozilla, Akamai, OVH, Cisco, Google Chrome and the Electronic Frontier Foundation. The Ford Foundation has renewed their grant to Let’s Encrypt as well. We are seeking additional sponsorship and grant assistance to meet our full needs for 2018.

We had originally budgeted $2.91M for 2017 but we’ll likely come in under budget for the year at around $2.65M. The difference between our 2017 expenses of $2.65M and the 2018 budget of $3.0M consists primarily of the additional infrastructure operations costs previously mentioned.

Support Let’s Encrypt

We depend on contributions from our community of users and supporters in order to provide our services. If your company or organization would like to sponsor Let’s Encrypt please email us at [email protected]. We ask that you make an individual contribution if it is within your means.

We’re grateful for the industry and community support that we receive, and we look forward to continuing to create a more secure and privacy-respecting Web!

Zach Joins The Support Team

Post Syndicated from Yev original https://www.backblaze.com/blog/zach-joins-support-team/

As Backblaze continues to grow, one thing that scales linearly with our growth is the number of folks we need in support. We believe strongly that people writing in for a helping hand should be quickly and kindly taken care of. To help us with that, we’d like to welcome Zach, our latest Support Tech, to the Backblaze team! Let’s take a minute to learn a bit more about Zach, shall we?

What is your Backblaze Title?
Jr. Support Technician

Where are you originally from?
I was born in Pasadena, CA, but I’ve spent most of my life in the Bay Area.

What attracted you to Backblaze?
I have a few friends that have been with the company for some time who would do nothing but gush about the respect that Backblaze has for its employees. More than anything I was drawn to the loyalty and faith the company has for its staff.

Where else have you worked?
Previously I have worked support roles for other tech companies as well as general IT and computer hardware repair.

What’s your dream job?
Somewhere that I feel I can grow within the company and find success in a role that makes me feel satisfied. Or a touring musician. That would be cool, too.

Favorite place you’ve traveled?
Canada! Everyone was so nice!

Favorite hobby?
In my spare time I like to write sad songs.

Of what achievement are you most proud?
One of my favorite singers told me that I have a really nice voice. So I suppose my proudest achievement is being born with a nice voice.

Star Trek or Star Wars?
I cried during Episode VII.

Coke or Pepsi?
Coke, obviously.

Favorite food?
Is bread an acceptable answer?

Anything else you’d like to tell us?
I’m also a big Disney fan like so many other people who work here. ¯\_(ツ)_/¯

We certainly do have a lot of Disney fans on staff — there must be something in the air. Welcome aboard Zach!

The post Zach Joins The Support Team appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Glenn’s Take on re:Invent 2017 Part 1

Post Syndicated from Glenn Gore original https://aws.amazon.com/blogs/architecture/glenns-take-on-reinvent-2017-part-1/

GREETINGS FROM LAS VEGAS

Glenn Gore here, Chief Architect for AWS. I’m in Las Vegas this week — with 43K others — for re:Invent 2017. We have a lot of exciting announcements this week. I’m going to post to the AWS Architecture blog each day with my take on what’s interesting about some of the announcements from a cloud architectural perspective.

Why not start at the beginning? At the Midnight Madness launch on Sunday night, we announced Amazon Sumerian, our platform for VR, AR, and mixed reality. The hype around VR/AR has existed for many years, though for me, it is a perfect example of how a working end-to-end solution often requires innovation from multiple sources. For AR/VR to be successful, we need many components to come together in a coherent manner to provide a great experience.

First, we need lightweight, high-definition goggles with motion tracking that are comfortable to wear. Second, we need to track movement of our body and hands in a 3-D space so that we can interact with virtual objects in the virtual world. Third, we need to build the virtual world itself and populate it with assets and define how the interactions will work and connect with various other systems.

There has been rapid development of the physical devices for AR/VR, ranging from iOS devices to Oculus Rift and HTC Vive, which provide excellent capabilities for the first and second components defined above. With the launch of Amazon Sumerian we are solving for the third area, which will help developers easily build their own virtual worlds and start experimenting and innovating with how to apply AR/VR in new ways.

Already, within 48 hours of Amazon Sumerian being announced, I have had multiple discussions with customers and partners around some cool use cases where VR can help in training simulations, remote-operator controls, or with new ideas around interacting with complex visual data sets, which starts bringing concepts straight out of sci-fi movies into the real (virtual) world. I am really excited to see how Sumerian will unlock the creative potential of developers and where this will lead.

Amazon MQ
I am a huge fan of distributed architectures where asynchronous messaging is the backbone of connecting the discrete components together. Amazon Simple Queue Service (Amazon SQS) is one of my favorite services due to its simplicity, scalability, performance, and the incredible flexibility of how you can use Amazon SQS in so many different ways to solve complex queuing scenarios.
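
As a small illustration of that flexibility, here is a minimal sketch (AWS SDK for Java v1; the queue name and message body are placeholders) of the send-and-receive pattern at the heart of most SQS-backed designs:

    import com.amazonaws.services.sqs.AmazonSQS;
    import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
    import com.amazonaws.services.sqs.model.Message;
    import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

    public class SqsExample {
        public static void main(String[] args) {
            AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

            // Placeholder queue; create it (or look it up) by name.
            String queueUrl = sqs.createQueue("demo-queue").getQueueUrl();

            // Producer side: enqueue a message.
            sqs.sendMessage(queueUrl, "order-created:12345");

            // Consumer side: long-poll for work, process it, then delete it.
            ReceiveMessageRequest receive = new ReceiveMessageRequest(queueUrl)
                    .withWaitTimeSeconds(10)
                    .withMaxNumberOfMessages(5);
            for (Message message : sqs.receiveMessage(receive).getMessages()) {
                System.out.println("Processing: " + message.getBody());
                sqs.deleteMessage(queueUrl, message.getReceiptHandle());
            }
        }
    }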

While Amazon SQS is easy to use when building cloud-native applications on AWS, many of our customers running existing applications on-premises required support for different messaging protocols such as Java Message Service (JMS), .Net Messaging Service (NMS), Advanced Message Queuing Protocol (AMQP), MQ Telemetry Transport (MQTT), Simple (or Streaming) Text Oriented Messaging Protocol (STOMP), and WebSockets. One of the most popular on-premises message brokers is Apache ActiveMQ. With the release of Amazon MQ, you can now run Apache ActiveMQ on AWS as a managed service, similar to what we did with Amazon ElastiCache back in 2012. For me, there are two compelling, major benefits that Amazon MQ provides:

  • Integrate existing applications with cloud-native applications without having to change a line of application code if using one of the supported messaging protocols. This removes one of the biggest blockers for integration between the old and the new.
  • Remove the complexity of configuring Multi-AZ resilient message broker services as Amazon MQ provides out-of-the-box redundancy by always storing messages redundantly across Availability Zones. Protection is provided against failure of a broker through to complete failure of an Availability Zone.

I believe that Amazon MQ is a major component in the tools required to help you migrate your existing applications to AWS. Having set up cross-data-center Apache ActiveMQ clusters myself in the past, and having tested them to ensure they worked as expected during critical failure scenarios, I know how much technical staff working on migrations to AWS will benefit from being able to deploy a fully redundant, managed Apache ActiveMQ cluster within minutes.

Who would have thought I would have been so excited to revisit Apache ActiveMQ in 2017 after using SQS for many, many years? Choice is a wonderful thing.

Amazon GuardDuty
Maintaining application and information security in the modern world is increasingly complex and is constantly evolving and changing as new threats emerge. This is due to the scale, variety, and distribution of services required in a competitive online world.

At Amazon, security is our number one priority. Thus, we are always looking at how we can increase security detection and protection while simplifying the implementation of advanced security practices for our customers. As a result, we released Amazon GuardDuty, which provides intelligent threat detection by using a combination of multiple information sources, transactional telemetry, and the application of machine learning models developed by AWS. One of the biggest benefits of Amazon GuardDuty that I appreciate is that enabling this service requires zero software, agents, sensors, or network choke points, all of which can impact the performance or reliability of the service you are trying to protect. Amazon GuardDuty works by monitoring your VPC flow logs, AWS CloudTrail events, and DNS logs, as well as combining other sources of threat intelligence that AWS aggregates from our own internal and external sources.

The use of machine learning in Amazon GuardDuty allows it to identify changes in behavior, which could be suspicious and require additional investigation. Amazon GuardDuty works across all of your AWS accounts allowing for an aggregated analysis and ensuring centralized management of detected threats across accounts. This is important for our larger customers who can be running many hundreds of AWS accounts across their organization, as providing a single common threat detection of their organizational use of AWS is critical to ensuring they are protecting themselves.

Detection, though, is only the beginning of what Amazon GuardDuty enables. When a threat is identified in Amazon GuardDuty, you can configure remediation scripts or trigger Lambda functions where you have custom responses that enable you to start building automated responses to a variety of different common threats. Speed of response is required when a security incident may be taking place. For example, Amazon GuardDuty detects that an Amazon Elastic Compute Cloud (Amazon EC2) instance might be compromised due to traffic from a known set of malicious IP addresses. Upon detection of a compromised EC2 instance, we could apply an access control entry restricting outbound traffic for that instance, which stops loss of data until a security engineer can assess what has occurred.
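
To make the automated-response idea concrete, here is a minimal sketch of a Lambda handler that could be triggered by a CloudWatch Events rule for GuardDuty findings and that moves the affected instance into a restrictive “quarantine” security group. The security group ID is a placeholder, and swapping security groups is just one illustrative containment tactic under these assumptions, not an AWS-prescribed remediation:

    import java.util.Map;

    import com.amazonaws.services.ec2.AmazonEC2;
    import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
    import com.amazonaws.services.ec2.model.ModifyInstanceAttributeRequest;
    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    // Hypothetical handler wired to a CloudWatch Events rule for GuardDuty findings.
    public class QuarantineHandler implements RequestHandler<Map<String, Object>, String> {

        // Placeholder: a security group with no outbound rules, created in advance.
        private static final String QUARANTINE_SG = "sg-0123456789abcdef0";

        @Override
        @SuppressWarnings("unchecked")
        public String handleRequest(Map<String, Object> event, Context context) {
            // Walk the GuardDuty finding structure to find the instance ID.
            Map<String, Object> detail = (Map<String, Object>) event.get("detail");
            Map<String, Object> resource = (Map<String, Object>) detail.get("resource");
            Map<String, Object> instance = (Map<String, Object>) resource.get("instanceDetails");
            String instanceId = (String) instance.get("instanceId");

            // Replace the instance's security groups with the quarantine group,
            // cutting off outbound traffic until an engineer investigates.
            AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
            ec2.modifyInstanceAttribute(new ModifyInstanceAttributeRequest()
                    .withInstanceId(instanceId)
                    .withGroups(QUARANTINE_SG));

            return "Quarantined " + instanceId;
        }
    }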

Whether you are a customer running a single service in a single account, a global customer with hundreds of accounts and thousands of applications, or a startup with hundreds of microservices and an hourly release cycle in a DevOps world, I recommend enabling Amazon GuardDuty. We have a 30-day free trial available for all new customers of this service. As it is a monitor of events, there is no change required to your architecture within AWS.

Stay tuned for tomorrow’s post on AWS Media Services and Amazon Neptune.

 

Glenn during the Tour du Mont Blanc

GDPR – A Practical Guide For Developers

Post Syndicated from Bozho original https://techblog.bozho.net/gdpr-practical-guide-developers/

You’ve probably heard about GDPR, the new European data protection regulation that applies to practically everyone. Especially if you are working in a big company, it’s most likely that there’s already a process for getting your systems in compliance with the regulation.

The regulation is basically a law that must be followed in all European countries (but it also applies to non-EU companies that have users in the EU). In particular, it applies to companies that are not registered in Europe but have European customers. So that’s most companies. I will not go into yet another “12 facts about GDPR” or “7 myths about GDPR” post/whitepaper, as those are often aimed at managers or legal people. Instead, I’ll focus on what GDPR means for developers.

Why am I qualified to do that? A few reasons – I was an advisor to the deputy prime minister of an EU country, and because of that I’ve both been exposed to legislation and written some myself. I’m familiar with the “legalese” and how the regulatory framework operates in general. I’m also a privacy advocate and I’ve been writing about GDPR-related stuff in the past, i.e. “before it was cool” (protecting sensitive data, the right to be forgotten). And finally, I’m currently working on a project that (among other things) aims to help with covering some GDPR aspects.

I’ll try to be a bit more comprehensive this time and cover as many aspects of the regulation that concern developers as I can. And while developers will mostly be concerned about how the systems they are working on have to change, it’s not unlikely that a less informed manager storms in in late spring, realizing GDPR is going to be in force tomorrow, asking “what should we do to get our system/website compliant”.

The rights of the user/client (referred to as “data subject” in the regulation) that I think are relevant for developers are: the right to erasure (the right to be forgotten/deleted from the system), the right to restriction of processing (you still keep the data, but mark it as “restricted” and don’t touch it without further consent by the user), the right to data portability (the user should be able to get a machine-readable export of their data), the right to rectification (the ability to get personal data fixed), the right to be informed (getting human-readable information, rather than long terms and conditions), and the right of access (the user should be able to see all the data you have about them).

Additionally, the relevant basic principles are: data minimization (one should not collect more data than necessary), integrity and confidentiality (all security measures to protect data that you can think of + measures to guarantee that the data has not been inappropriately modified).

Even further, the regulation requires certain processes to be in place within an organization (of more than 250 employees or if a significant amount of data is processed), and those include keeping a record of all types of processing activities carried out, including transfers to processors (3rd parties), which includes cloud service providers. None of the other requirements of the regulation have an exception depending on the organization size, so “I’m small, GDPR does not concern me” is a myth.

It is important to know what “personal data” is. Basically, it’s every piece of data that can be used to uniquely identify a person or data that is about an already identified person. It’s data that the user has explicitly provided, but also data that you have collected about them from either 3rd parties or based on their activities on the site (what they’ve been looking at, what they’ve purchased, etc.)

Having said that, I’ll list a number of features that will have to be implemented and some hints on how to do that, followed by some do’s and don’ts.

  • “Forget me” – you should have a method that takes a userId and deletes all personal data about that user (in case it has been collected on the basis of consent, and not due to contract enforcement or legal obligation). It is actually useful for integration tests to have that feature (to clean up after the test), but it may be hard to implement depending on the data model. In a regular data model, deleting a record may be easy, but some foreign keys may be violated. That means you have two options – either make sure you allow nullable foreign keys (for example an order usually has a reference to the user that made it, but when the user requests their data be deleted, you can set the userId to null), or make sure you delete all related data (e.g. via cascades). This may not be desirable, e.g. if the order is used to track available quantities or for accounting purposes. It’s a bit trickier for event-sourcing data models, or in extreme cases, ones that include some sort of blockchain/hash chain/tamper-evident data structure. With event sourcing you should be able to remove a past event and re-generate intermediate snapshots. For blockchain-like structures – be careful what you put in there and avoid putting personal data of users. There is an option to use a chameleon hash function, but that’s suboptimal. Overall, you must constantly think of how you can delete the personal data. And “our data model doesn’t allow it” isn’t an excuse. (A minimal sketch of such a deletion method appears after this list.)
  • Notify 3rd parties for erasure – deleting things from your system may be one thing, but you are also obligated to inform all third parties that you have pushed that data to. So if you have sent personal data to, say, Salesforce, Hubspot, twitter, or any cloud service provider, you should call an API of theirs that allows for the deletion of personal data. If you are such a provider, obviously, your “forget me” endpoint should be exposed. Calling the 3rd party APIs to remove data is not the full story, though. You also have to make sure the information does not appear in search results. Now, that’s tricky, as Google doesn’t have an API for removal, only a manual process. Fortunately, it’s only about public profile pages that are crawlable by Google (and other search engines, okay…), but you still have to take measures. Ideally, you should make the personal data page return a 404 HTTP status, so that it can be removed.
  • Restrict processing – in your admin panel where there’s a list of users, there should be a button “restrict processing”. The user settings page should also have that button. When clicked (after reading the appropriate information), it should mark the profile as restricted. That means it should no longer be visible to the backoffice staff, or publicly. You can implement that with a simple “restricted” flag in the users table and a few if-clauses here and there.
  • Export data – there should be another button – “export data”. When clicked, the user should receive all the data that you hold about them. What exactly that data is depends on the particular use case. Usually it’s at least the data that you delete with the “forget me” functionality, but it may include additional data (e.g. the orders the user has made may not be deleted, but should be included in the dump). The structure of the dump is not strictly defined, but my recommendation would be to reuse schema.org definitions as much as possible, for either JSON or XML. If the data is simple enough, a CSV/XLS export would also be fine. Sometimes data export can take a long time, so the button can trigger a background process, which would then notify the user via email when their data is ready (twitter, for example, does that already – you can request all your tweets and you get them after a while).
  • Allow users to edit their profile – this seems an obvious rule, but it isn’t always followed. Users must be able to fix all data about them, including data that you have collected from other sources (e.g. using a “login with facebook” flow you may have fetched their name and address). Rule of thumb – all the fields in your “users” table should be editable via the UI. Technically, rectification can be done via a manual support process, but that’s normally more expensive for a business than just having the form to do it. There is one other scenario, however, when you’ve obtained the data from other sources (i.e. the user hasn’t provided their details to you directly). In that case there should still be a page where they can identify themselves somehow (via email and/or SMS confirmation) and get access to the data about them.
  • Consent checkboxes – this is in my opinion the biggest change that the regulation brings. “I accept the terms and conditions” would no longer be sufficient to claim that the user has given their consent for processing their data. So, for each particular processing activity there should be a separate checkbox on the registration (or user profile) screen. You should keep these consent checkboxes in separate columns in the database, and let the users withdraw their consent (by unchecking these checkboxes from their profile page – see the previous point). Ideally, these checkboxes should come directly from the register of processing activities (if you keep one). Note that the checkboxes should not be preselected, as this does not count as “consent”.
  • Re-request consent – if the consent users have given was not clear (e.g. if they simply agreed to terms & conditions), you’d have to re-obtain that consent. So prepare a functionality for mass-emailing your users to ask them to go to their profile page and check all the checkboxes for the personal data processing activities that you have.
  • “See all my data” – this is very similar to the “Export” button, except data should be displayed in the regular UI of the application rather than an XML/JSON format. For example, Google Maps shows you your location history – all the places that you’ve been to. It is a good implementation of the right to access. (Though Google is very far from perfect when privacy is concerned)
  • Age checks – you should ask for the user’s age, and if the user is a child (below 16), you should ask for parent permission. There’s no clear way to do that, but my suggestion is to introduce a flow where the child specifies the email of a parent, who can then confirm. Obviously, children will just cheat with their birthdate, or provide a fake parent email, but you will most likely have done your job according to the regulation (this is one of the “wishful thinking” aspects of the regulation).
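
As referenced in the “forget me” item above, here is a minimal sketch of such a deletion method, assuming an illustrative schema with users and orders tables and a nullable user_id foreign key (the names are hypothetical, not a GDPR-mandated design):

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    // Minimal sketch of a "forget me" operation: detach orders from the user
    // (nullable foreign key) and then delete the user's personal data row.
    public class ForgetMeService {

        private final Connection connection;

        public ForgetMeService(Connection connection) {
            this.connection = connection;
        }

        public void forgetUser(long userId) throws Exception {
            connection.setAutoCommit(false);
            try (PreparedStatement detachOrders = connection.prepareStatement(
                         "UPDATE orders SET user_id = NULL WHERE user_id = ?");
                 PreparedStatement deleteUser = connection.prepareStatement(
                         "DELETE FROM users WHERE id = ?")) {
                // Keep the orders for accounting, but sever the link to the person.
                detachOrders.setLong(1, userId);
                detachOrders.executeUpdate();

                // Remove the personal data itself.
                deleteUser.setLong(1, userId);
                deleteUser.executeUpdate();

                connection.commit();
            } catch (Exception e) {
                connection.rollback();
                throw e;
            }
        }
    }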

Now some “do’s”, which are mostly about the technical measures needed to protect personal data. They may be more “ops” than “dev”, but often the application also has to be extended to support them. I’ve listed most of what I could think of in a previous post.

  • Encrypt the data in transit. That means that communication between your application layer and your database (or your message queue, or whatever component you have) should be over TLS. The certificates could be self-signed (and possibly pinned), or you could have an internal CA. Different databases have different configurations – just google “X encrypted connections”. Some databases need gossiping among the nodes – that should also be configured to use encryption.
  • Encrypt the data at rest – this again depends on the database (some offer table-level encryption), but it can also be done at the machine level, e.g. using LUKS. The private key can be stored in your infrastructure, or in some cloud service like AWS KMS.
  • Encrypt your backups – kind of obvious
  • Implement pseudonymisation – the most obvious use-case is when you want to use production data for the test/staging servers. You should change the personal data to some “pseudonym”, so that the people cannot be identified. When you push data for machine learning purposes (to third parties or not), you can also do that. Technically, that could mean that your User object has a “pseudonymize” method which applies hash+salt/bcrypt/PBKDF2 to the fields that can be used to identify a person. (A short sketch appears after this list.)
  • Protect data integrity – this is a very broad thing, and could simply mean “have authentication mechanisms for modifying data”. But you can do something more, even as simple as a checksum, or a more complicated solution (like the one I’m working on). It depends on the stakes, on the way data is accessed, on the particular system, etc. The checksum can be in the form of a hash of all the data in a given database record, which should be updated each time the record is updated through the application. It isn’t a strong guarantee, but it is at least something.
  • Have your GDPR register of processing activities in something other than Excel – Article 30 says that you should keep a record of all the types of activities that you use personal data for. That sounds like bureaucracy, but it may be useful – you will be able to link certain aspects of your application with that register (e.g. the consent checkboxes, or your audit trail records). It wouldn’t take much time to implement a simple register, but the business requirements for that should come from whoever is responsible for the GDPR compliance. But you can advise them that having it in Excel won’t make it easy for you as a developer (imagine having to fetch the excel file internally, so that you can parse it and implement a feature). Such a register could be a microservice/small application deployed separately in your infrastructure.
  • Log access to personal data – every read operation on a personal data record should be logged, so that you know who accessed what and for what purpose
  • Register all API consumers – you shouldn’t allow anonymous API access to personal data. I’d say you should request the organization name and contact person for each API user upon registration, and add those to the data processing register. Note: some have treated Article 30 as a requirement to keep an audit log. I don’t think it is saying that – instead it requires organizations with more than 250 employees to keep a register of the types of processing activities (i.e. what you use the data for). There are other articles in the regulation that imply that keeping an audit log is a best practice (for protecting the integrity of the data as well as for making sure it hasn’t been processed without a valid reason).
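
As referenced in the pseudonymisation item above, here is a minimal sketch of a pseudonymize helper built on PBKDF2 from the standard javax.crypto APIs; the field choice, salt handling, and iteration count are illustrative assumptions, not regulatory requirements:

    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    public final class Pseudonymizer {

        // Derive a stable, hard-to-reverse pseudonym for an identifying value
        // (e.g. an email address) using PBKDF2 with the supplied salt.
        public static String pseudonymize(String identifyingValue, byte[] salt) throws Exception {
            PBEKeySpec spec = new PBEKeySpec(
                    identifyingValue.toCharArray(), salt, 100_000, 256);
            byte[] derived = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                    .generateSecret(spec)
                    .getEncoded();
            return Base64.getEncoder().encodeToString(derived);
        }

        // Example: replace personal fields before copying data to a staging database.
        public static void main(String[] args) throws Exception {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);
            System.out.println(pseudonymize("alice@example.com", salt));
        }
    }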

Finally, some “don’ts”.

  • Don’t use data for purposes that the user hasn’t agreed with – that’s supposed to be the spirit of the regulation. If you want to expose a new API to a new type of clients, or you want to use the data for some machine learning, or you decide to add ads to your site based on users’ behaviour, or sell your database to a 3rd party – think twice. I would imagine your register of processing activities could have a button to send notification emails to users to ask them for permission when a new processing activity is added (or if you use a 3rd party register, it should probably give you an API). So upon adding a new processing activity (and adding that to your register), mass email all users from whom you’d like consent.
  • Don’t log personal data – getting rid of the personal data from log files (especially if they are shipped to a 3rd party service) can be tedious or even impossible. So log just identifiers if needed. And make sure old log files are cleaned up, just in case.
  • Don’t put fields on the registration/profile form that you don’t need – it’s always tempting to throw in as many fields as the usability person/designer agrees on, but unless you absolutely need the data for delivering your service, you shouldn’t collect it. Names you should probably always collect, but unless you are delivering something, a home address or phone number is unnecessary.
  • Don’t assume 3rd parties are compliant – you are responsible if there’s a data breach in one of the 3rd parties (e.g. “processors”) to which you send personal data. So before you send data via an API to another service, make sure they have at least a basic level of data protection. If they don’t, raise a flag with management.
  • Don’t assume having ISO XXX makes you compliant – information security standards and even personal data standards are a good start and they will probably cover 70% of what the regulation requires, but they are not sufficient – most of the things listed above are not covered in any of those standards.

Overall, the purpose of the regulation is to make you take conscious decisions when processing personal data. It imposes best practices in a legal way. If you follow the above advice and design your data model, storage, data flow, and API calls with data protection in mind, then you shouldn’t worry about the huge fines that the regulation prescribes – they are for extreme cases, like Equifax for example. Regulators (data protection authorities) will most likely have some checklists into which you’d have to somehow fit, but if you follow best practices, that shouldn’t be an issue.

I think all of the above features can be implemented in a few weeks by a small team. Be suspicious when a big vendor offers you a generic plug-and-play “GDPR compliance” solution. GDPR is not just about the technical aspects listed above – it does have organizational/process implications. But also be suspicious if a consultant claims GDPR is complicated. It’s not – it relies on a few basic principles that are in fact best practices anyway. Just don’t ignore them.

The post GDPR – A Practical Guide For Developers appeared first on Bozho's tech blog.

Amazon MQ – Managed Message Broker Service for ActiveMQ

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-mq-managed-message-broker-service-for-activemq/

Messaging holds the parts of a distributed application together, while also adding resiliency and enabling the implementation of highly scalable architectures. For example, earlier this year, Amazon Simple Queue Service (SQS) and Amazon Simple Notification Service (SNS) supported the processing of customer orders on Prime Day, collectively processing 40 billion messages at a rate of 10 million per second, with no customer-visible issues.

SQS and SNS have been used extensively for applications that were born in the cloud. However, many of our larger customers are already making use of open-source or commercially licensed message brokers. Their applications are mission-critical, and so is the messaging that powers them. Our customers describe the setup and ongoing maintenance of their messaging infrastructure as “painful” and report that they spend at least 10 staff-hours per week on this chore.

New Amazon MQ
Today we are launching Amazon MQ – a managed message broker service for Apache ActiveMQ that lets you get started in minutes with just three clicks! As you may know, ActiveMQ is a popular open-source message broker that is fast & feature-rich. It offers queues and topics, durable and non-durable subscriptions, push-based and poll-based messaging, and filtering.

As a managed service, Amazon MQ takes care of the administration and maintenance of ActiveMQ. This includes responsibility for broker provisioning, patching, failure detection & recovery for high availability, and message durability. With Amazon MQ, you get direct access to the ActiveMQ console and industry standard APIs and protocols for messaging, including JMS, NMS, AMQP, STOMP, MQTT, and WebSocket. This allows you to move from any message broker that uses these standards to Amazon MQ–along with the supported applications–without rewriting code.
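
To illustrate how little application code is involved, here is a minimal sketch that sends a message over the standard JMS API using the Apache ActiveMQ client library; the broker endpoint, credentials, and queue name are placeholders, and the same code would work unchanged against a self-managed ActiveMQ broker:

    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class AmazonMqProducer {
        public static void main(String[] args) throws Exception {
            // Placeholder SSL endpoint copied from the broker's connections panel.
            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                    "ssl://b-example-1234.mq.us-east-1.amazonaws.com:61617");

            // Placeholder credentials configured when the broker was created.
            Connection connection = factory.createConnection("mqUser", "mqPassword");
            connection.start();

            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("orders");
                MessageProducer producer = session.createProducer(queue);

                TextMessage message = session.createTextMessage("order-created:12345");
                producer.send(message);
            } finally {
                connection.close();
            }
        }
    }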

You can create a single-instance Amazon MQ broker for development and testing, or an active/standby pair that spans AZs, with quick, automatic failover. Either way, you get data replication across AZs and a pay-as-you-go model for the broker instance and message storage.

Amazon MQ is a full-fledged part of the AWS family, including the use of AWS Identity and Access Management (IAM) for authentication and authorization to use the service API. You can use Amazon CloudWatch metrics to keep a watchful eye on metrics such as queue depth and initiate Auto Scaling of your consumer fleet as needed.

Launching an Amazon MQ Broker
To get started, I open up the Amazon MQ Console, select the desired AWS Region, enter a name for my broker, and click on Next step:

Then I choose the instance type, indicate that I want to create a standby, and click on Create broker (I can select a VPC and fine-tune other settings in the Advanced settings section):

My broker will be created and ready to use in 5-10 minutes:

The URLs and endpoints that I use to access my broker are all available at a click:

I can access the ActiveMQ Web Console at the link provided:

The broker publishes instance, topic, and queue metrics to CloudWatch. Here are the instance metrics:

Available Now
Amazon MQ is available now and you can start using it today in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), and Asia Pacific (Sydney) Regions.

The AWS Free Tier lets you use a single-AZ micro instance for up to 750 hours and to store up to 1 gigabyte each month, for one year. After that, billing is based on instance-hours and message storage, plus charges for Internet data transfer if the broker is accessed from outside of AWS.

Jeff;

Eben Moglen is no longer a friend of the free software community

Post Syndicated from Matthew Garrett original https://mjg59.dreamwidth.org/49370.html

(Note: While the majority of the events described below occurred while I was a member of the board of directors of the Free Software Foundation, I am no longer. This is my personal position and should not be interpreted as the opinion of any other organisation or company I have been affiliated with in any way)

Eben Moglen has done an amazing amount of work for the free software community, serving on the board of the Free Software Foundation and acting as its general counsel for many years, leading the drafting of GPLv3 and giving many forceful speeches on the importance of free software. However, his recent behaviour demonstrates that he is no longer willing to work with other members of the community, and we should reciprocate that.

In early 2016, the FSF board became aware that Eben was briefing clients on an interpretation of the GPL that was incompatible with that held by the FSF. He later released this position publicly with little coordination with the FSF, which was used by Canonical to justify their shipping ZFS in a GPL-violating way. He had provided similar advice to Debian, who were confused about the apparent conflict between the FSF’s position and Eben’s.

This situation was obviously problematic – Eben is clearly free to provide whatever legal opinion he holds to his clients, but his very public association with the FSF caused many people to assume that these positions were held by the FSF and the FSF were forced into the position of publicly stating that they disagreed with legal positions held by their general counsel. Attempts to mediate this failed, and Eben refused to commit to working with the FSF on avoiding this sort of situation in future[1].

Around the same time, Eben made legal threats towards another project with ties to FSF. These threats were based on a license interpretation that ran contrary to how free software licenses had been interpreted by the community for decades, and was made without any prior discussion with the FSF. This, in conjunction with his behaviour over the ZFS issue, led to him stepping down as the FSF’s general counsel.

Throughout this period, Eben disparaged FSF staff and other free software community members in various semi-public settings. In doing so he harmed the credibility of many people who have devoted significant portions of their lives to aiding the free software community. At Libreplanet earlier this year he made direct threats against an attendee – this was reported as a violation of the conference’s anti-harassment policy.

Eben has acted against the best interests of an organisation he publicly represented. He has threatened organisations and individuals who work to further free software. His actions are no longer to the benefit of the free software community and the free software community should cease associating with him.

[1] Contrary to the claim provided here, Bradley was not involved in this process.


Sci-Hub Won’t Be Blocked by US ISPs Anytime Soon

Post Syndicated from Ernesto original https://torrentfreak.com/sci-hub-wont-be-blocked-by-us-isps-anytime-soon-171111/

Sci-Hub, often referred to as the “Pirate Bay of Science,” hasn’t had a particularly good run in US courts so far.

Following a $15 million defeat against Elsevier in June, the American Chemical Society won a default judgment of $4.8 million in copyright damages late last week.

In addition, the publisher was granted an unprecedented injunction, requiring various third-party services to stop providing access to the site.

The order specifically mentions domain registrars and hosting companies, but also search engines and ISPs, although only those who are in “active concert or participation” with the site. This order sparked fears that Google, Comcast, and others would be ordered to take action, but that’s not the case.

After the news broke ACS issued a press release clarifying that it would not go after search engines and ISPs when they are not in “active participation” with Sci-Hub. The problem is that this can be interpreted quite broadly, leaving plenty of room for uncertainty.

Luckily, ACS Director Glenn Ruskin was willing to provide more clarity. He stressed that search engines and ISPs won’t be targeted for simply linking users to Sci-Hub. Companies that host the content are a target though.

“The court’s affirmative ruling does not apply to search engines writ large, but only to those entities who have been in active concert or participation with Sci-Hub, such as websites that host ACS content stolen by Sci-Hub,” Ruskin said.

When we asked whether this means that ISPs such as Comcast are not likely to be targeted, the answer was affirmative.

“That is correct, unless the internet service provider has been in active concert or participation with SciHub. Simply linking to SciHub does not rise to be in active concert or participation,” Ruskin clarified.

The above suggests that ACS will go after domain name registrars, hosting companies, and perhaps Cloudflare, but not any further. Still, even if that’s the case there is cause for worry among several digital rights activists.

The Electronic Frontier Foundation believes that these types of orders set a dangerous precedent. The concept of “active concert or participation” should only cover close associates and co-conspirators, not everyone who provides a service to the defendant. Domain registrars and registries have often been compelled to take action in similar cases, but EFF says this goes too far.

“The courts need to limit who can be bound by orders like this one, to prevent them from being abused,” EFF Senior Staff Attorney Mitch Stoltz informs TorrentFreak.

“In particular, domain name registrars and registries shouldn’t be ordered to help take down a website because of a dispute over the site’s contents. That invites others to use the domain name system as a tool for censorship.”

News of the Sci-Hub injunction has sparked controversy and confusion in recent days, not least because Sci-hub.cc became unavailable soon after. Instead of showing the usual search box, visitors now see a “403 Forbidden” error message. On top of that, the bulletproof Tor version of the site also went offline.

The error message indicates that there’s a hosting issue. While it’s easy to conclude that the court’s injunction has something to do with this, that might not necessarily be the case. Sci-Hub’s hosting company isn’t tied to the US and has a history of protecting sites from takedown efforts.

We reached out to Sci-Hub founder Alexandra Elbakyan for comment but have yet to receive a response. The site hasn’t posted any relevant updates on its social media pages either.

That said, the site is far from done. In addition to the Tor domain, Sci-Hub has several other backups in place such as Sci-Hub.io and Sci-Hub.ac, which are up and running as usual.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Backing Up the Modern Enterprise with Backblaze for Business

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/endpoint-backup-solutions/

Endpoint backup diagram

Organizations of all types and sizes need reliable and secure backup. Whether they have as few as 3 or as many as 300,000 computer users, an organization’s computer data is a valuable business asset that needs to be protected.

Modern organizations are changing how they work and where they work, which brings new challenges to making sure that the company’s data assets are not only available, but secure. Larger organizations have IT departments that are prepared to address these needs, but oftentimes in smaller and newer organizations the challenge falls upon office management, who might not be prepared or knowledgeable enough to face a work environment undergoing dramatic changes.

Whether small or large, local or world-wide, for-profit or non-profit, organizations need a backup strategy and solution that matches the new ways of working in the enterprise.

The Enterprise Has Changed, and So Has Data Use

More and more, organizations are working in the cloud. These days organizations can operate just fine without their own file servers, database servers, mail servers, or other IT infrastructure that used to be standard for all but the smallest organization.

The reality is that for most organizations, though, it’s a hybrid work environment, with a combination of cloud-based and PC and Macintosh-based applications. Legacy apps aren’t going away any time soon. They will be with us for a while, with their accompanying data scattered amongst all the desktops, laptops and other endpoints in corporate headquarters, home offices, hotel rooms, and airport waiting areas.

In addition, the modern workforce likely combines regular full-time employees, remote workers, contractors, and sometimes interns, volunteers, and other temporary workers who also use company IT assets.

The Modern Enterprise Brings New Challenges for IT

These changes in how enterprises work present a problem for anyone tasked with making sure that data — no matter who uses it or where it lives — is adequately backed-up. Cloud-based applications, when properly used and managed, can be adequately backed up, provided that users are connected to the internet and data transfers occur regularly — which is not always the case. But what about the data on the laptops, desktops, and devices used by remote employees, contractors, or just employees whose work keeps them on the road?

The organization’s backup solution must address all the needs of the modern organization or enterprise using both cloud and PC and Mac-based applications, and not be constrained by employee or computer location.

A Ten-Point Checklist for the Modern Enterprise for Backing Up

What should the modern enterprise look for when evaluating a backup solution?

1) Easy to deploy to workers’ computers

Whether installed by the computer user or an IT person locally or remotely, the backup solution must be easy to implement quickly with minimal demands on the user or administrator.

2) Fast and unobtrusive client software

Backups should happen in the background by efficient (native) PC and Macintosh software clients that don’t consume valuable processing power or take memory away from applications the user needs.

3) Easy to configure

The backup solutions must be easy to configure for both the user and the IT professional. Ease-of-use means less time to deploy, configure, and manage.

4) Defaults to backing up all valuable data

By default, the solution backs up commonly used files and folders or directories, including desktops. Some backup solutions are difficult and intimidating because they require that the user choose what needs to be backed up, often missing files and folders/directories that contain valuable data.

5) Works automatically in the background

Backups should happen automatically, no matter where the computer is located. The computer user, especially the remote or mobile one, shouldn’t be required to attach cables or drives, or remember to initiate backups. A working solution backs up automatically without requiring action by the user or IT administrator.

6) Data restores are fast and easy

Whether it’s a single file, directory, or an entire system that must be restored, a user or IT sysadmin needs to be able to restore backed up data as quickly as possible. In cases of large restores to remote locations, the ability to send a restore via physical media is a must.

7) No limitations on data

Throttling, caps, and data limits complicate backups and require guesses about how much storage space will be needed.

8) Safe & Secure

Organizations require that their data is secure during all phases of initial upload, storage, and restore.

9) Easy-to-manage

The backup solution needs to provide a clear and simple web management interface for all functions. Designing for ease-of-use leads to efficiency in management and operation.

10) Affordable and transparent pricing

Backup costs should be predictable, understandable, and without surprises.

Two Scenarios for the Modern Enterprise

Enterprises exist in many forms and types, but wanting to meet the above requirements is common across all of them. Below, we take a look at two common scenarios showing how enterprises face these challenges. Three case studies are available that provide more information about how Backblaze customers have succeeded in these environments.

Enterprise Profile 1

The needs of a smaller enterprise differ from those of larger, established organizations. This organization likely doesn’t have anyone who is devoted full-time to IT. The job of on-boarding new employees and getting them set up with a computer likely falls upon an executive assistant or office manager. This person might give new employees a checklist with the software and account information and let users handle setting up the computer themselves.

Organizations in this profile need solutions that are easy to install and require little to no configuration. Backblaze, by default, backs up all user data, which lets the organization be secure in knowing all the data will be backed up to the cloud — including files left on the desktop. Combined with Backblaze’s unlimited data policy, organizations have a truly “set it and forget it” platform.

Customizing Groups To Meet Teams’ Needs

The Groups feature of Backblaze for Business allows an organization to decide whether an individual client’s computer will be Unmanaged (backups and restores under the control of the worker), or Managed, in which an administrator can monitor the status and frequency of backups and handle restores should they become necessary. One group for the entire organization might be adequate at this stage, but the organization has the option to add additional groups as it grows and needs more flexibility and control.

The organization, of course, has the choice of managing and monitoring users using Groups. With Backblaze’s Groups, organizations can set user-based access rules, which allows the administrator to create restores for lost files or entire computers on an employee’s behalf, to centralize billing for all client computers in the organization, and to redeploy a recovered computer or new computer with the backed up data.

Restores

In this scenario, the decision has been made to let each user manage her own backups, including restores, if necessary, of individual files or entire systems. If a restore of a file or system is needed, the restore process is easy enough for the user to handle it by herself.

Case Study 1

Read about how PagerDuty uses Backblaze for Business in a mixed enterprise of cloud and desktop/laptop applications.

PagerDuty Case Study

In a common approach, the employee can retrieve an accidentally deleted file or an earlier version of a document on her own. The Backblaze for Business interface is easy to navigate and was designed with feedback from thousands of customers over the course of a decade.

In the event of a lost, damaged, or stolen laptop, administrators of Managed Groups can initiate the restore, which could be in the form of a download of a restore ZIP file from the web management console, or the overnight shipment of a USB drive directly to the organization or user.

Enterprise Profile 2

This profile is for an organization with a full-time IT staff. When a new worker joins the team, the IT staff is tasked with configuring the computer and delivering it to the new employee.

Backblaze for Business Groups

Case Study 2

Global charitable organization charity: water uses Backblaze for Business to back up workers’ and volunteers’ laptops as they travel to developing countries in their efforts to provide clean and safe drinking water.

charity: water Case Study

This organization can take advantage of additional capabilities in Groups. A Managed Group makes sense in an organization with a geographically dispersed work force as it lets IT ensure that workers’ data is being regularly backed up no matter where they are. Billing can be company-wide or assigned to individual departments or geographical locations. The organization has the choice of how to divide the organization into Groups (location, function, subsidiary, etc.) and whether the Group should be Managed or Unmanaged. Using Managed Groups might be suitable for most of the organization, but there are exceptions in which sensitive data might dictate using an Unmanaged Group, such as could be the case with HR, the executive team, or finance.

Deployment

By Invitation Email, Link, or Domain

Backblaze for Business allows a number of options for deploying the client software to workers’ computers. Client installation is fast and easy on both Windows and Macintosh, so sending email invitations to users or automatically enrolling users by domain or invitation link is a common approach.

By Remote Deployment

IT might choose to remotely and silently deploy Backblaze for Business across specific Groups or the entire organization. An administrator can silently deploy the Backblaze backup client via the command-line, or use common RMM (Remote Monitoring and Management) tools such as Jamf and Munki.

Restores

Case Study 3

Read about how Bright Bear Technology Solutions, an IT Managed Service Provider (MSP), uses the Groups feature of Backblaze for Business to manage customer backups and restores, deploy Backblaze licenses to their customers, and centralize billing for all their client-based backup services.

Bright Bear Case Study

Some organizations are better equipped to manage or assist workers when restores become necessary. Individual users will be pleased to discover they can roll back files to an earlier version if they wish, but IT will likely manage any complete system restore that involves reconfiguring a computer after a repair or requisitioning an entirely new system when needed.

This organization might choose to retain a client’s entire computer backup for archival purposes, using Backblaze B2 as the cloud storage solution. This is another advantage of having a cloud storage provider that combines both endpoint backup and cloud object storage among its services.

The Next Step: Server Backup & Data Archiving with B2 Cloud Storage

As organizations grow, they have increased needs for cloud storage beyond Macintosh and PC data backup. Backblaze’s object cloud storage, Backblaze B2, provides low-cost storage and archiving of records, media, and server data that can grow with the organization’s size and needs.

B2 Cloud Storage is available through the same Backblaze management console as Backblaze Computer Backup. This means that Admins have one console for billing, monitoring, deployment, and role provisioning. B2 is priced at 1/4 the cost of Amazon S3, or $0.005 per month per gigabyte (which equals $5/month per terabyte).

Why Modern Enterprises Choose Backblaze

Backblaze for Business

Businesses and organizations select Backblaze for Business for backup because Backblaze is designed to meet the needs of the modern enterprise. Backblaze customers are part of a platform that has a 10+ year track record of innovation and over 400 petabytes of customer data already under management.

Head-to-head comparisons have shown that Backblaze’s backup model captures data that other backup solutions overlook in their default configurations — including valuable files that are needed after an accidental deletion, theft, or computer failure.

Backblaze is the only enterprise-level backup company that provides two-factor verification, via both SMS and a TOTP (Time-based One-time Password) authenticator app, to all accounts at no incremental charge. At just $50/year/computer, Backblaze is affordable for enterprises of any size.

Modern Enterprises Can Meet the Challenge of the Changing Data Environment

With the right backup solution and strategy, the modern enterprise can ensure that its data is protected from accident, disaster, or theft, whether that data lives in a single office or is dispersed across many locations and among remote and mobile employees.

Backblaze for Business is an affordable solution that enables organizations to meet the evolving data demands facing the modern enterprise.

The post Backing Up the Modern Enterprise with Backblaze for Business appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

SFLC Files Bizarre Legal Action Against Its Former Client, Software Freedom Conservancy (Conservancy Blog)

Post Syndicated from jake original https://lwn.net/Articles/738046/rss

The Software Freedom Conservancy (SFC) blog reveals a recent action taken by the Software Freedom Law Center (SFLC) to try to cancel the trademark for SFC. On September 22, SFLC filed a complaint with the US Patent and Trademark Office asking that the trademark be canceled because there is a likelihood of confusion between the trademarks: “Registrant’s SOFTWARE FREEDOM CONSERVANCY Mark is confusingly similar to Petitioner’s SOFTWARE FREEDOM LAW CENTER Mark.”

On November 2, SFC filed a response that lists the defenses it plans to use. From the blog post: “We are surprised and sad that our former attorneys, who kindly helped our organization start in our earliest days and later excitedly endorsed us when we moved from a volunteer organization to a staffed one, would seek to invalidate our trademark. Conservancy and SFLC are very different organizations and sometimes publicly disagree about detailed policy issues. Yet, both non-profits are charities organized to promote the public’s interest. Thus, we are especially disappointed that SFLC would waste the precious resources of both organizations in this frivolous action.”

Tableau 10.4 Supports Amazon Redshift Spectrum with External Amazon S3 Tables

Post Syndicated from Robin Cottiss original https://aws.amazon.com/blogs/big-data/tableau-10-4-supports-amazon-redshift-spectrum-with-external-amazon-s3-tables/

This is a guest post by Robin Cottiss, strategic customer consultant, Russell Christopher, staff product manager, and Vaidy Krishnan, senior manager of product marketing, at Tableau. Tableau, in their own words, “helps anyone quickly analyze, visualize, and share information. More than 61,000 customer accounts get rapid results with Tableau in the office and on the go. Over 300,000 people use Tableau Public to share public data in their blogs and websites.”

We’re excited to announce today an update to our Amazon Redshift connector with support for Amazon Redshift Spectrum to analyze data in external Amazon S3 tables. This feature, the direct result of joint engineering and testing work performed by the teams at Tableau and AWS, was released as part of Tableau 10.3.3 and will be available broadly in Tableau 10.4.1. With this update, you can quickly and directly connect Tableau to data in Amazon Redshift and analyze it in conjunction with data in Amazon S3—all with drag-and-drop ease.

This connector is yet another in a series of market-leading integrations of Tableau with AWS’s analytics platform, with services such as Amazon Redshift, Amazon EMR, and Amazon Athena. These integrations have allowed Tableau to become the natural choice of tool for analyzing data stored on AWS. Beyond this, Tableau Server runs seamlessly in the AWS Cloud infrastructure. If you prefer to deploy all your applications inside AWS, you have a complete solution offering from Tableau.

How does support for Amazon Redshift Spectrum help you?

If you’re like many Tableau customers, you have large buckets of data stored in Amazon S3. You might need to access this data frequently and store it in a consistent, highly structured format. If so, you can provision it to a data warehouse like Amazon Redshift. You might also want to explore this S3 data on an ad hoc basis. For example, you might want to determine whether or not to provision the data, and where—options might be Hadoop, Impala, Amazon EMR, or Amazon Redshift. To do so, you can use Amazon Athena, a serverless interactive query service from AWS that requires no infrastructure setup and management.

But what if you want to analyze both the frequently accessed data stored locally in Amazon Redshift AND your full datasets stored cost-effectively in Amazon S3? What if you want the throughput of disk and sophisticated query optimization of Amazon Redshift AND a service that combines a serverless scale-out processing capability with the massively reliable and scalable S3 infrastructure? What if you want the super-fast performance of Amazon Redshift AND support for open storage formats (for example, Parquet or ORC) in S3?

To enable these ANDs and resolve the tyranny of ORs, AWS launched Amazon Redshift Spectrum earlier this year.

Amazon Redshift Spectrum gives you the freedom to store your data where you want, in the format you want, and have it available for processing when you need it. Since the Amazon Redshift Spectrum launch, Tableau has worked tirelessly to provide best-in-class support for this new service. With Tableau and Redshift Spectrum, you can extend your Amazon Redshift analyses out to the entire universe of data in your S3 data lakes.

This latest update has been tested by many customers with very positive feedback. One such customer is the world’s largest food product distributor, Sysco—you can watch their session referencing the Amazon Redshift Spectrum integration at Tableau Conference 2017. Sysco also plans to reprise its “Tableau on AWS” story in a month’s time at AWS re:Invent.

Now, I’d like to use a concrete example to demonstrate how Tableau works with Amazon Redshift Spectrum. In this example, I also show you how and why you might want to connect to your AWS data in different ways.

The setup

I use the pipeline described following to ingest, process, and analyze data with Tableau on an AWS stack. The source data is the New York City Taxi dataset, which has 9 years’ worth of taxi ride activity (including pick-up and drop-off location, amount paid, payment type, and so on) captured in 1.2 billion records.

In this pipeline, this data lands in S3, is cleansed and partitioned by using Amazon EMR, and is then converted to a columnar Parquet format that is analytically optimized. You can point Tableau to the raw data in S3 by using Amazon Athena. You can also access the cleansed data with Tableau using Presto through your Amazon EMR cluster.
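To make the “explore before you provision” idea concrete, here is a minimal sketch of querying the raw S3 data through Amazon Athena with boto3. The database, table, and bucket names are hypothetical placeholders rather than the actual names used in this pipeline; Tableau’s Athena connector issues equivalent SQL on your behalf.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Database, table, and bucket names below are hypothetical placeholders.
response = athena.start_query_execution(
    QueryString="""
        SELECT payment_type,
               COUNT(*)          AS rides,
               AVG(total_amount) AS avg_fare
        FROM   raw_taxi_trips
        WHERE  year = '2015' AND month = '12'
        GROUP  BY payment_type
    """,
    QueryExecutionContext={"Database": "nyc_taxi_raw"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)

# Athena runs queries asynchronously; poll get_query_execution and fetch rows
# with get_query_results once the query completes.
print(response["QueryExecutionId"])
```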

Why use Tableau this early in the pipeline? Because sometimes you want to understand what’s there and what questions are worth asking before you even start the analysis.

After you find out what those questions are and determine whether this sort of analysis has long-term usefulness, you can automate and optimize that pipeline. You do this to add new data as soon as it arrives and to get it to the processes and people that need it. You might also want to provision this data to a highly performant “hotter” layer (Amazon Redshift or Tableau Extract) for repeated access.

In the preceding illustration, S3 contains the raw denormalized ride data at the timestamp level of granularity. This S3 data is the fact table. Amazon Redshift has the time dimensions broken out by date, month, and year, and also has the taxi zone information.

Now imagine I want to know where and when taxi pickups happen on a certain date in a certain borough. With support for Amazon Redshift Spectrum, I can now join the S3 tables with the Amazon Redshift dimensions, as shown following.
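Tableau builds this join through its drag-and-drop interface, but under the hood it is ordinary SQL against the cluster. The following is a minimal sketch of the general shape of such a Spectrum query issued directly with psycopg2; the cluster endpoint, credentials, and the external and dimension table names are hypothetical placeholders, not the actual schema used in this demo.

```python
import psycopg2

# Endpoint, credentials, and table names are hypothetical placeholders.
conn = psycopg2.connect(
    host="my-cluster.abc123xyz456.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="taxi",
    user="analyst",
    password="REDACTED",
)

query = """
    -- spectrum.rides is an external table backed by Parquet files in S3;
    -- date_dim and taxi_zone are ordinary dimension tables inside Redshift.
    SELECT z.borough,
           COUNT(*) AS pickups
    FROM   spectrum.rides r
    JOIN   date_dim  d ON r.pickup_date    = d.calendar_date
    JOIN   taxi_zone z ON r.pickup_zone_id = z.zone_id
    WHERE  d.calendar_date = DATE '2015-12-25'
    GROUP  BY z.borough
    ORDER  BY pickups DESC;
"""

with conn, conn.cursor() as cur:
    cur.execute(query)
    for borough, pickups in cur.fetchall():
        print(borough, pickups)
```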

I can next analyze the data in Tableau to produce a borough-by-borough view of New York City ride density on Christmas Day 2015.

Or I can home in on just Manhattan and identify pickup hotspots, with ride charges way above the average!

With Amazon Redshift Spectrum, you now have a fast, cost-effective engine that minimizes data processed with dynamic partition pruning. You can further improve query performance by reducing the data scanned. You do this by partitioning and compressing data and by using a columnar format for storage.

At the end of the day, which engine you use behind Tableau is a function of what you want to optimize for. Some possible engines are Amazon Athena, Amazon Redshift, and Redshift Spectrum, or you can bring a subset of data into Tableau Extract. Factors in planning optimization include these:

  • Are you comfortable with Amazon Athena’s serverless, pay-per-query cost model and the potential for full scans, in exchange for the advantage of requiring no setup?
  • Do you want the throughput of local disk?
  • How much setup effort and time can you invest? Are you okay with the lead time of standing up an Amazon Redshift cluster, as opposed to just bringing everything into a Tableau Extract?

To meet the many needs of our customers, Tableau’s approach is simple: It’s all about choice. The choice of how you want to connect to and analyze your data. Throughout the history of our product and into the future, we have empowered, and will continue to empower, choice for customers.

For more on how to deal with choice, as you go about making architecture decisions for your enterprise, watch this big data strategy session my friend Robin Cottiss and I delivered at Tableau Conference 2017. This session includes several customer examples leveraging the Tableau on AWS platform, and also a run-through of the aforementioned demonstration.

If you’re curious to learn more about analyzing data with Tableau on Amazon Redshift, we encourage you to check out the following resources:

Hot Startups on AWS – October 2017

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/hot-startups-on-aws-october-2017/

In 2015, the Centers for Medicare and Medicaid Services (CMS) reported that healthcare spending made up 17.8% of the U.S. GDP – that’s almost $3.2 trillion or $9,990 per person. By 2025, the CMS estimates this number will increase to nearly 20%. As cloud technology evolves in the healthcare and life science industries, we are seeing how companies of all sizes are using AWS to provide powerful and innovative solutions to customers across the globe. This month we are excited to feature the following startups:

  • ClearCare – helping home care agencies operate efficiently and grow their business.
  • DNAnexus – providing a cloud-based global network for sharing and managing genomic data.

ClearCare (San Francisco, CA)

ClearCare envisions a future where home care is the only choice for aging in place. Home care agencies play a critical role in the economy and their communities by significantly lowering the overall cost of care, reducing the number of hospital admissions, and bending the cost curve of aging. Patients receiving home care typically have multiple chronic conditions and functional limitations, driving over $190 billion in healthcare spending in the U.S. each year. To offset these costs, health insurance payers are developing in-home care management programs for patients. ClearCare’s goal is to help home care agencies leverage technology to improve costs, outcomes, and quality of life for the aging population. The company’s powerful software platform is specifically designed for use by non-medical, in-home care agencies to manage their businesses.

Founder and CEO Geoff Nudd created ClearCare because of his own grandmother’s need for care. Keeping family members and caregivers up to date on a loved one’s well being can be difficult, so Geoff created what is now ClearCare’s Family Room, which enables caregivers and agency staff to check schedules and receive real-time updates about what’s happening in the home. Since then, agencies have provided feedback on other areas of their businesses that could be streamlined. ClearCare has now built over 20 modules to help home care agencies optimize operations with services including a telephony service, billing and payroll, and more. ClearCare now serves over 4,000 home care agencies, representing 500,000 caregivers and 400,000 seniors.

Using AWS, ClearCare is able to spin up reliable infrastructure for proofs of concept and iterate on those systems to quickly get value to market. The company runs many AWS services including Amazon Elasticsearch Service, Amazon RDS, and Amazon CloudFront. Amazon EMR and Amazon Athena have enabled ClearCare to build a Hadoop-based ETL and data warehousing system that processes terabytes of data each day. By utilizing these managed services, ClearCare has been able to go from concept to customer delivery in less than three months.

To learn more about ClearCare, check out their website.

DNAnexus (Mountain View, CA)

DNAnexus is accelerating the application of genomic data in precision medicine by providing a cloud-based platform for sharing and managing genomic and biomedical data and analysis tools. The company was founded in 2009 by Stanford graduate student Andreas Sundquist and two Stanford professors, Arend Sidow and Serafim Batzoglou, to address the need for scaling secondary analysis of next-generation sequencing (NGS) data in the cloud. The founders quickly learned that users needed a flexible solution to build complex analysis workflows and tools that enable them to share and manage large volumes of data. DNAnexus is optimized to address the challenges of security, scalability, and collaboration for organizations that are pursuing genomic-based approaches to health, both in clinics and research labs. DNAnexus has a global customer base – spanning North America, Europe, Asia-Pacific, South America, and Africa – that runs a million jobs each month and is doubling its storage year-over-year. The company currently stores more than 10 petabytes of biomedical and genomic data. That is equivalent to approximately 100,000 genomes, or in simpler terms, over 50 billion Facebook photos!

DNAnexus is working with its customers to help expand their translational informatics research, which includes expanding into clinical trial genomic services. This will help companies developing different medicines to better stratify clinical trial populations and develop companion tests that enable the right patient to get the right medicine. In collaboration with Janssen Human Microbiome Institute, DNAnexus is also launching Mosaic – a community platform for microbiome research.

AWS provides DNAnexus and its customers the flexibility to grow and scale research programs. Building the technology infrastructure required to manage these projects in-house is expensive and time-consuming. DNAnexus removes that barrier for labs of any size by using AWS scalable cloud resources. The company deploys its customers’ genomic pipelines on Amazon EC2, using Amazon S3 for high-performance, high-durability storage, and Amazon Glacier for low-cost data archiving. DNAnexus is also an AWS Life Sciences Competency Partner.

Learn more about DNAnexus here.

-Tina

Russian Site-Blocking Chiefs Under Investigation For Fraud

Post Syndicated from Andy original https://torrentfreak.com/russian-site-blocking-chiefs-under-investigation-for-fraud-171024/

Over the past several years, Roskomnadzor has become a highly controversial government body in Russia. With responsibility for ordering web-blockades against sites the country deems disruptive, it’s effectively Russia’s online censorship engine.

In total, Roskomnadzor has ordered the blocking of more than 82,000 sites. Within that total, at least 4,000 have been rendered inaccessible on copyright grounds, with an additional 41,000 innocent platforms blocked as collateral damage.

This massive over-blocking has been widely criticized in Russia but until now, Roskomnadzor has appeared pretty much untouchable. However, a scandal is now engulfing the organization after at least four key officials were charged with fraud offenses.

News that something was potentially amiss began leaking out two weeks ago, when Russian publication Vedomosti reported on a court process in which the initials of the defendants appeared to coincide with officials at Roskomnadzor.

The publication suspected that three men were involved; Roskomnadzor spokesman Vadim Ampelonsky, head of the legal department Boris Yedidin, and Alexander Veselchakov, who acts as an advisor to the head of the department monitoring radio frequencies.

The prosecution’s case indicated that the defendants were involved in “fraud committed by an organized group either on an especially large scale or entailing the deprivation of citizen’s rights.” However, no further details were made available, with the head of Roskomnadzor, Alexander Zharov, claiming he knew nothing about a criminal case and refusing to answer questions.

It later transpired that four employees had been charged with fraud, including Anastasiya Zvyagintseva, who acts as the general director of CRFC, an agency under the control of Roskomnadzor.

According to Kommersant, Zvyagintseva’s involvement is at the core of the matter. She claims to have been forced to put “ghost employees” on the payroll, whose salaries were then paid to existing employees in order to increase their salaries.

The investigation into the scandal certainly runs deep. It’s reported that FSB officers have been spying on Roskomnadzor officials for six months, listening to their phone conversations, monitoring their bank accounts, and even watching the ATMs they used.

Local media reports indicate that the illegal salary scheme ran from 2012 until February 2017 and involved some 20 million rubles ($347,000) of illegal payments. These were allegedly used to retain ‘valuable’ employees when their regular salaries were not lucrative enough to keep them at the site-blocking body.

While Zvyagintseva has been released pending trial, Ampelonsky, Yedidin, and Veselchakov have been placed under house arrest by the Chertanovsky Court of Moscow until November 7.

Roskomnadzor’s website is currently inaccessible.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Manufacturing Astro Pi case replicas

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/astro-pi-case-guest-post/

Tim Rowledge produces and sells wonderful replicas of the cases which our Astro Pis live in aboard the International Space Station. Here is the story of how he came to do this. Over to you, Tim!

When the Astro Pi case was first revealed a couple of years ago, the collective outpouring of ‘Squee!’ it elicited may have been heard on board the ISS itself. People wanted to buy it or build it at home, and someone wanted to know whether it would blend. (There’s always one.)

The complete Astro Pi

The Sense HAT and its Pi tucked snugly in the original Astro Pi flight case — gorgeous, isn’t it?

Replicating the Astro Pi case

Some months later the STL files for printing your own Astro Pi case were released, and people jumped at the chance to use them. Soon reports appeared saying you had to make quite a few attempts before getting a good print — normal for any complex 3D-printing project. A fellow member of my local makerspace successfully made a couple of cases, but it took a lot of time, filament, and post-print finishing work. And of course, a plastic Astro Pi case simply doesn’t look or feel like the original made of machined aluminium — or ‘aluminum’, as they tend to say over here in North America.

Batch of tops of Astro Pi case replicas by Tim Rowledge

A batch of tops designed by Tim

I wanted to build an Astro Pi case which would more closely match the original. Fortunately, someone else at my makerspace happens to have some serious CNC machining equipment at his small manufacturing company. Therefore, I focused on creating a case design that could be produced with his three-axis device. This meant simplifying some parts to avoid expensive, slow, complex multi-fixture work. It took us a while, but we ended up with a design we can efficiently make using his machine.

Lasered Astro Pi case replica by Tim Rowledge

Tim’s first lasered case

And the resulting case looks really, really like the original — in fact, upon receiving one of the final prototypes, Eben commented:

“I have to say, at first glance they look spectacular: unless you hold them side by side with the originals, it’s hard to pinpoint what’s changed. I’m looking forward to seeing one built up and then seeing them in the wild.”

Inside the Astro Pi case

Making just the bare case is nice, but there are other parts required to recreate a complete Astro Pi unit. Thus I got my local electronics company to design a small HAT to provide much the same support the mezzanine board offers: an RTC and nice, clean connections to the six buttons. We also added well-labelled, grouped pads for all the other GPIO lines, along with space for an ADC. If you’re making your own Astro Pi replica, you might like the Switchboard.
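For anyone wiring up the six buttons on a replica, reading them from Python is straightforward with the gpiozero library. A minimal sketch follows; the pin assignments are hypothetical placeholders, so match them to whatever GPIO lines your Switchboard or hand wiring actually uses.

```python
from signal import pause
from gpiozero import Button

# Pin assignments are hypothetical placeholders -- match them to your wiring.
BUTTON_PINS = {"up": 26, "down": 13, "left": 20, "right": 19, "a": 16, "b": 21}

def make_handler(name):
    def handler():
        print(f"{name} button pressed")
    return handler

buttons = {}
for name, pin in BUTTON_PINS.items():
    btn = Button(pin)                 # gpiozero uses BCM pin numbering by default
    btn.when_pressed = make_handler(name)
    buttons[name] = btn               # keep references so the callbacks stay alive

pause()  # wait for button events
```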

The electronics supply industry just loves to offer *some* of what you need, so that one supplier never has everything: we had to obtain the required stand-offs, screws, spacers, and JST wires from assorted other sources. Jeff at my nearby Industrial Paint & Plastics took on the laser engraving of our cases, leaving out copyrighted logos etcetera.

Lasering the top of an Astro Pi case replica by Tim Rowledge

Lasering the top of a case

Get your own Astro Pi case

Should you like to buy one of our Astro Pi case kits, pop over to www.astropicase.com, and we’ll get it on its way to you pronto. If you’re an institutional or corporate customer, the fully built option might make more sense for you — ordering the Pi and other components, and having a staff member assemble it all, may well be more work than is sensible.

Astro Pi case replica Tim Rowledge

Tim’s first full Astro Pi case replica, complete with shiny APEM buttons

To put the kit together yourself, all you need to do is add a Pi, Sense HAT, Camera Module, and RTC battery, and choose your buttons. An illustrated manual explains the process step by step. Our version of the Astro Pi case uses the same APEM buttons as the units in orbit, and whilst they are expensive, just clicking them is a source of great joy. It comes in a nice travel case too.

Tim Rowledge holding up a PCB

This is Tim. Thanks, Tim!

Take part in Astro Pi

If having an Astro Pi replica is not enough for you, this is your chance: the 2017-18 Astro Pi challenge is open! Do you know a teenager who might be keen to design an experiment to run on the Astro Pis in space? Are you one yourself? You have until 29 October to send us your Mission Space Lab entry and become part of the next generation of space scientists. Head over to the Astro Pi website to find out more.

The post Manufacturing Astro Pi case replicas appeared first on Raspberry Pi.

How to Automatically Revert and Receive Notifications About Changes to Your Amazon VPC Security Groups

Post Syndicated from Rob Barnes original https://aws.amazon.com/blogs/security/how-to-automatically-revert-and-receive-notifications-about-changes-to-your-amazon-vpc-security-groups/

In a previous AWS Security Blog post, Jeff Levine showed how you can monitor changes to your Amazon EC2 security groups. The methods he describes in that post are examples of detective controls, which can help you determine when changes are made to security controls on your AWS resources.

In this post, I take that approach a step further by introducing an example of a responsive control, which you can use to automatically respond to a detected security event by applying a chosen security mitigation. I demonstrate a solution that continuously monitors changes made to an Amazon VPC security group, and if a new ingress rule (the same as an inbound rule) is added to that security group, the solution removes the rule and then sends you a notification after the changes have been automatically reverted.

The scenario

Let’s say you want to reduce your infrastructure complexity by replacing your Secure Shell (SSH) bastion hosts with Amazon EC2 Systems Manager (SSM). SSM allows you to run commands on your hosts remotely, removing the need to manage bastion hosts or rely on SSH to execute commands. To support this objective, you must prevent your staff members from opening SSH ports to your web server’s Amazon VPC security group. If one of your staff members does modify the VPC security group to allow SSH access, you want the change to be automatically reverted and then receive a notification that the change to the security group was automatically reverted. If you are not yet familiar with security groups, see Security Groups for Your VPC before reading the rest of this post.

Solution overview

This solution begins with a directive control to mandate that no web server should be accessible using SSH. The directive control is enforced using a preventive control, which is implemented using a security group rule that does not allow ingress on port 22 (typically used for SSH). The detective control is a “listener” that identifies any changes made to your security group. Finally, the responsive control reverts changes made to the security group and then sends a notification of this security mitigation.

The detective control, in this case, is an Amazon CloudWatch event that detects changes to your security group and triggers the responsive control, which in this case is an AWS Lambda function. I use AWS CloudFormation to simplify the deployment.
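For readers who like to see the moving parts, here is a minimal sketch of roughly what such a CloudWatch Events rule looks like when created with boto3 instead of CloudFormation. The rule name is a hypothetical placeholder, and the template used in this post may match additional event names; the essential piece is the CloudTrail-based event pattern.

```python
import json
import boto3

events = boto3.client("events")

# Match CloudTrail-recorded calls that add ingress rules to EC2 security groups.
pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["AuthorizeSecurityGroupIngress"],
    },
}

events.put_rule(
    Name="detect-security-group-ingress-changes",  # hypothetical rule name
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)
```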

The following diagram shows the architecture of this solution.

Solution architecture diagram

Here is how the process works:

  1. Someone on your staff adds a new ingress rule to your security group.
  2. A CloudWatch event that continually monitors changes to your security groups detects the new ingress rule and invokes a designated Lambda function (with Lambda, you can run code without provisioning or managing servers).
  3. The Lambda function evaluates the event to determine whether you are monitoring this security group and reverts the new security group ingress rule (a minimal sketch of this logic follows the list).
  4. Finally, the Lambda function sends you an email to let you know what the change was, who made it, and that the change was reverted.
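The following is a minimal sketch of what a Lambda function implementing steps 3 and 4 could look like; it is not the exact code deployed by the CloudFormation template. The environment variable names are hypothetical placeholders, and a production version would need more defensive parsing of the CloudTrail event.

```python
import json
import os

import boto3

ec2 = boto3.client("ec2")
sns = boto3.client("sns")

# Hypothetical environment variable names.
MONITORED_GROUP_ID = os.environ["MONITORED_GROUP_ID"]
SNS_TOPIC_ARN = os.environ["SNS_TOPIC_ARN"]

def handler(event, context):
    detail = event["detail"]
    group_id = detail["requestParameters"]["groupId"]
    if group_id != MONITORED_GROUP_ID:
        return  # not a security group we are watching

    # CloudTrail includes the ingress rules that were just authorized.
    added = detail["requestParameters"]["ipPermissions"]["items"]
    permissions = []
    for item in added:
        perm = {"IpProtocol": item["ipProtocol"]}
        if "fromPort" in item:
            perm["FromPort"] = item["fromPort"]
            perm["ToPort"] = item["toPort"]
        ranges = item.get("ipRanges", {}).get("items", [])
        perm["IpRanges"] = [{"CidrIp": r["cidrIp"]} for r in ranges]
        permissions.append(perm)

    # Step 3: revert the change.
    ec2.revoke_security_group_ingress(GroupId=group_id, IpPermissions=permissions)

    # Step 4: notify the subscribed address via SNS.
    sns.publish(
        TopicArn=SNS_TOPIC_ARN,
        Subject="Security group ingress rule automatically reverted",
        Message=json.dumps(
            {
                "securityGroupId": group_id,
                "revertedRules": permissions,
                "changedBy": detail.get("userIdentity", {}).get("arn"),
            },
            indent=2,
        ),
    )
```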

Deploy the solution by using CloudFormation

In this section, you will click the Launch Stack button shown below to launch the CloudFormation stack and deploy the solution.

Prerequisites

  • You must have AWS CloudTrail already enabled in the AWS Region where you will be deploying the solution. CloudTrail lets you log, continuously monitor, and retain events related to API calls across your AWS infrastructure. See Getting Started with CloudTrail for more information.
  • You must have a default VPC in the region in which you will be deploying the solution. AWS accounts have one default VPC per AWS Region. If you’ve deleted your VPC, see Creating a Default VPC to recreate it.

Resources that this solution creates

When you launch the CloudFormation stack, it creates the following resources:

  • A sample VPC security group in your default VPC, which is used as the target for reverting ingress rule changes.
  • A CloudWatch event rule that monitors changes to your AWS infrastructure.
  • A Lambda function that reverts changes to the security group and sends you email notifications.
  • A permission that allows CloudWatch to invoke your Lambda function.
  • An AWS Identity and Access Management (IAM) role with limited privileges that the Lambda function assumes when it is executed.
  • An Amazon SNS topic to which the Lambda function publishes notifications.

Launch the CloudFormation stack

The link in this section uses the us-east-1 Region (the US East [N. Virginia] Region). Change the region if you want to use this solution in a different region. See Selecting a Region for more information about changing the region.

To deploy the solution, click the following Launch Stack button to launch the stack. After you click the button, you must sign in to the AWS Management Console if you have not already done so.

Click this "Launch Stack" button

Then:

  1. Choose Next to proceed to the Specify Details page.
  2. On the Specify Details page, type your email address in the Send notifications to box. This is the email address to which change notifications will be sent. (After the stack is launched, you will receive a confirmation email that you must accept before you can receive notifications.)
  3. Choose Next until you get to the Review page, and then choose the I acknowledge that AWS CloudFormation might create IAM resources check box. This confirms that you are aware that the CloudFormation template includes an IAM resource.
  4. Choose Create. CloudFormation displays the stack status, CREATE_COMPLETE, when the stack has launched completely, which should take less than two minutes.
    Screenshot showing that the stack has launched completely

Testing the solution

  1. Check your email for the SNS confirmation email. You must confirm this subscription to receive future notification emails. If you don’t confirm the subscription, your security group ingress rules still will be automatically reverted, but you will not receive notification emails.
  2. Navigate to the EC2 console and choose Security Groups in the navigation pane.
  3. Choose the security group created by CloudFormation. Its name is Web Server Security Group.
  4. Choose the Inbound tab in the bottom pane of the page. Note that only one rule allows HTTPS ingress on port 443 from 0.0.0.0/0 (from anywhere).
    Screenshot showing the "Inbound" tab in the bottom pane of the page
  5. Choose Edit to display the Edit inbound rules dialog box (again, an inbound rule and an ingress rule are the same thing).
  6. Choose Add Rule.
  7. Choose SSH from the Type drop-down list.
  8. Choose My IP from the Source drop-down list. Your IP address is populated for you. By adding this rule, you are simulating one of your staff members violating your organization’s policy (in this blog post’s hypothetical example) against allowing SSH access to your EC2 servers. You are testing the solution created when you launched the CloudFormation stack in the previous section. The solution should remove this newly created SSH rule automatically.
    Screenshot of editing inbound rules
  9. Choose Save.

Adding this rule creates an EC2 AuthorizeSecurityGroupIngress service event, which triggers the Lambda function created in the CloudFormation stack. After a few moments, choose the refresh button to see that the new SSH ingress rule that you just created has been removed by the solution you deployed earlier with the CloudFormation stack. If the rule is still there, wait a few more moments and choose the refresh button again.

Screenshot of refreshing the page to see that the SSH ingress rule has been removed

You should also receive an email to notify you that the ingress rule was added and subsequently reverted.

Screenshot of the notification email

Cleaning up

If you want to remove the resources created by this CloudFormation stack, you can delete the CloudFormation stack:

  1. Navigate to the CloudFormation console.
  2. Choose the stack that you created earlier.
  3. Choose the Actions drop-down list.
  4. Choose Delete Stack, and then choose Yes, Delete.
  5. CloudFormation will display a status of DELETE_IN_PROGRESS while it deletes the resources created with the stack. After a few moments, the stack should no longer appear in the list of completed stacks.
    Screenshot of stack "DELETE_IN_PROGRESS"

Other applications of this solution

I have shown one way to use multiple AWS services to help continuously ensure that your security controls haven’t deviated from your security baseline. However, you also could use the CIS Amazon Web Services Foundations Benchmarks, for example, to establish a governance baseline across your AWS accounts and then use the principles in this blog post to automatically mitigate changes to that baseline.

To scale this solution, you can create a framework that uses resource tags to identify particular resources for monitoring. You also can use a consolidated monitoring approach by using cross-account event delivery. See Sending and Receiving Events Between AWS Accounts for more information. You also can extend the principle of automatic mitigation to detect and revert changes to other resources such as IAM policies and Amazon S3 bucket policies.

Summary

In this blog post, I demonstrated how you can automatically revert changes to a VPC security group and have a notification sent about the changes. You can use this solution in your own AWS accounts to enforce your security requirements continuously.

If you have comments about this blog post or other ideas for ways to use this solution, submit a comment in the “Comments” section below. If you have implementation questions, start a new thread in the EC2 forum or contact AWS Support.

– Rob

Bringing Clean and Safe Drinking Water to Developing Countries

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/keeping-charity-water-data-safe/

image of a cup filling with water

If you’d like to read more about charity: water‘s use of Backblaze for Business, visit backblaze.com/charitywater/

charity: water  + Backblaze for Business

Considering that charity: water sends workers with laptop computers to rural communities in 24 countries around the world, it’s not surprising that computer backup is needed on every computer they have. It’s so essential that Matt Ward, System Administrator for charity: water, says it’s a standard part of employee on-boarding.

charity: water, based in New York City, is a non-profit organization that is working to bring clean water to the nearly one in ten people around the world who live without it — a situation that affects not only health, but education and income.

“We have people constantly traveling all over the world, so a cloud-based service makes sense whether the user is in New York or Malawi. Most of our projects and beneficiaries are in Sub Saharan Africa and Southern/Southeast Asia,” explains Matt. “Water scarcity and poor water quality are a problem here, and in so many countries around the world.”

charity: water in Rwanda

To achieve their mission, charity: water works through implementing organizations on the ground within the targeted communities. The people in these communities must spend hours every day walking to collect water for their families. It’s a losing proposition, as the time they spend walking takes away from education, earning money, and generally limits the opportunities for improving their lives.

charity: water began using Backblaze for Business before Matt came on a year ago. They started with a few licenses, but quickly decided to deploy Backblaze to every computer in the organization.

“We’ve lost computers plenty of times,” he says, “but, because of Backblaze, there’s never been a case where we lost the computer’s data.”

charity: water has about 80 staff computer users, and adds ten to twenty interns each season. Each staff member or intern has at least one computer. “Our IT department is two people, me and my director,” explains Matt, “and we have to support everyone, so being super simple to deploy is valuable to us.”

“When a new person joins us, we just send them an invitation to join the Group on Backblaze, and they’re all set. Their data is automatically backed up whenever they’re connected to the internet, and I can see their current status on the management console. [Backblaze] really nailed the user interface. You can show anyone the interface, even on their first day, and they get it because it’s simple and easy to understand.”

young girl drinking clean water

One of the frequent uses for Backblaze for Business is when Matt off-boards users, such as all the interns at the end of the season. He starts a restore through the Backblaze admin console even before he has the actual computer. “I know I have a reliable archive in the restore from Backblaze, and it’s easier than doing it directly from the laptop.”

Matt is an enthusiastic user of the features designed for business users, especially Backblaze’s Groups feature, which has enabled charity: water to centralize billing and computer management for their worldwide team. Businesses can create groups to cluster job functions, employee locations, or any other criteria.

charity: water delivering clean water to children

“It saves me time to be able to see the status of any user’s backups, such as the last time the data was backed up,” explains Matt. Before Backblaze, charity: water was writing documentation for workers, hoping they would follow backup protocols. Now, Matt knows what’s going on in real time — a valuable feature when the laptops are dispersed around the world.

“Backblaze for Business is an essential element in any organization’s IT continuity plan,” says Matt. “You need to be sure that there is a backup solution for your data should anything go wrong.”

To learn more about how charity: water uses Backblaze for Business, visit backblaze.com/charitywater/.

Matt Ward of charity: water

Matt Ward, System Administrator for charity: water

The post Bringing Clean and Safe Drinking Water to Developing Countries appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

White House Chief of Staff John Kelly’s Cell Phone was Tapped

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/10/white_house_chi.html

Politico reports that White House Chief of Staff John Kelly’s cell phone was compromised back in December.

I know this is news because of who he is, but I hope every major government official of any country assumes that their commercial off-the-shelf cell phone is compromised. Even allies spy on allies; remember the reports that the NSA tapped the cell phone of German Chancellor Angela Merkel?

Evergreen 3.0.0 released

Post Syndicated from ris original https://lwn.net/Articles/735379/rss

The Evergreen community has announced the release of Evergreen 3.0.0, software for libraries. This release includes community support of the web staff client for production use, serials and offline circulation modules for the web staff client, improvements to the display of headings in the public catalog browse list, and more.

Inside the MPAA, Netflix & Amazon Global Anti-Piracy Alliance

Post Syndicated from Andy original https://torrentfreak.com/inside-the-mpaa-netflix-amazon-global-anti-piracy-alliance-170918/

The idea of collaboration in the anti-piracy arena isn’t new but an announcement this summer heralded what is destined to become the largest project the entertainment industry has ever seen.

The Alliance for Creativity and Entertainment (ACE) is a coalition of 30 companies that reads like a who’s who of the global entertainment market. In alphabetical order its members are:

Amazon, AMC Networks, BBC Worldwide, Bell Canada and Bell Media, Canal+ Group, CBS Corporation, Constantin Film, Foxtel, Grupo Globo, HBO, Hulu, Lionsgate, Metro-Goldwyn-Mayer (MGM), Millennium Media, NBCUniversal, Netflix, Paramount Pictures, SF Studios, Sky, Sony Pictures Entertainment, Star India, Studio Babelsberg, STX Entertainment, Telemundo, Televisa, Twentieth Century Fox, Univision Communications Inc., Village Roadshow, The Walt Disney Company, and Warner Bros. Entertainment Inc.

The aim of the project is clear. Instead of each company considering its anti-piracy operations as a distinct island, ACE will bring them all together while presenting a united front to decision and lawmakers. At the core of the Alliance will be the MPAA.

“ACE, with its broad coalition of creators from around the world, is designed, specifically, to leverage the best possible resources to reduce piracy,” outgoing MPAA chief Chris Dodd said in June.

“For decades, the MPAA has been the gold standard for antipiracy enforcement. We are proud to provide the MPAA’s worldwide antipiracy resources and the deep expertise of our antipiracy unit to support ACE and all its initiatives.”

Since then, ACE and its members have been silent on the project. Today, however, TorrentFreak can pull back the curtain, revealing how the agreement between the companies will play out, who will be in control, and how much the scheme will cost.

Power structure: Founding Members & Executive Committee Members

Netflix, Inc., Amazon Studios LLC, Paramount Pictures Corporation, Sony Pictures Entertainment, Inc., Twentieth Century Fox Film Corporation, Universal City Studios LLC, Warner Bros. Entertainment Inc., and Walt Disney Studios Motion Pictures, are the ‘Founding Members’ (Governing Board) of ACE.

These companies are granted full voting rights on ACE business, including the approval of initiatives and public policy, anti-piracy strategy, budget-related matters, plus approval of legal action. Not least, they’ll have the power to admit or expel ACE members.

All actions taken by the Governing Board (never to exceed nine members) need to be approved by consensus, with each Founding Member able to vote for or against decisions. Members are also allowed to abstain but one persistent objection will be enough to stop any matter being approved.

The second tier – ‘Executive Committee Members’ – is comprised of all the other companies in the ACE project (as listed above, minus the Governing Board). These companies will not be allowed to vote on ACE initiatives but can present ideas and strategies. They’ll also be allowed to suggest targets for law enforcement action while utilizing the MPAA’s anti-piracy resources.

Rights of all members

While all members of ACE can utilize the alliance’s resources, none are barred from simultaneously ‘going it alone’ on separate anti-piracy initiatives. None of these strategies and actions need approval from the Founding Members, provided they’re carried out in a company’s own name and at its own expense.

Information obtained by TorrentFreak indicates that the MPAA also reserves the right to carry out anti-piracy actions in its own name or on behalf of its member studios. The pattern here is different, since the MPAA’s global anti-piracy resources are the same resources being made available to the ACE alliance, which members have paid to share.

Expansion of ACE

While ACE membership is already broad, the alliance is prepared to take on additional members, provided certain criteria are met. Crucially, any prospective additions must be owners or producers of movies and/or TV shows. The Governing Board will then vet applicants to ensure that they meet the criteria for acceptance as new Executive Committee Members.

ACE Operations

The nine Governing Board members will meet at least four times a year, with each nominating a senior executive to serve as its representative. The MPAA’s General Counsel will take up the position of non-voting member of the Governing Board and will chair its meetings.

Matters to be discussed include formulating and developing the alliance’s ‘Global Anti-Piracy Action Plan’ and approving and developing the budget. ACE will also form an Anti-Piracy Working Group, which is scheduled to meet at least once a month.

On a daily basis, the MPAA and its staff will attend to the business of the ACE alliance. The MPAA will carry out its own work too but when presenting to outside third parties, it will clearly state which “hat” it is currently wearing.

Much deliberation has taken place over who should be the official spokesperson for ACE. Documents obtained by TF suggest that the MPAA planned to hire a consulting firm to find a person for the role, seeking a professional with international experience who had never previously been connected with the MPAA.

They appear to have settled on Zoe Thorogood, who previously worked for British Prime Minister David Cameron.

Money, money, money

Of course, the ACE program isn’t going to fund itself, so all members are required to contribute to the operation. The MPAA has opened a dedicated bank account under its control specifically for the purpose, with members contributing depending on status.

Founding/Governing Board Members will be required to commit $5m each annually. However, none of the studios that are MPAA members will have to hand over any cash, since they already fund the MPAA, on whose anti-piracy resources ACE is built.

“Each Governing Board Member will contribute annual dues in an amount equal to $5 million USD. Payment of dues shall be made bi-annually in equal shares, payable at the beginning of each six (6) month period,” the ACE agreement reads.

“The contribution of MPAA personnel, assets and resources…will constitute and be considered as full payment of each MPAA Member Studio’s Governing Board dues.”

That leaves just Netflix and Amazon paying the full amount of $5m in cash each.

From each company’s contribution, $1m will be paid into legal trust accounts allocated to each Governing Board member. If ACE-agreed litigation and legal expenses exceed that amount for the year, members will be required to top up their accounts to cover their share of the costs.

For the remaining 21 companies on the Executive Committee, annual dues are $200,000 each, to be paid in one installment at the start of the financial year – $4.2m all in. Of all dues paid by all members from both tiers, half will be used to boost anti-piracy resources, over and above what the MPAA will spend on the same during 2017.

“Fifty percent (50%) of all dues received from Global Alliance Members other than the MPAA Member Studios…shall, as agreed by the Governing Board, be used (a) to increase the resources spent on online antipiracy over and above….the amount of MPAA’s 2017 Content Protection Department budget for online antipiracy initiatives/operations,” an internal ACE document reads.

Intellectual property

As the project moves forward, the Alliance expects to gain certain knowledge and experience. On the back of that, the MPAA hopes to grow its intellectual property portfolio.

“Absent written agreement providing otherwise, any and all data, intellectual property, copyrights, trademarks, or know-how owned and/or contributed to the Global Alliance by MPAA, or developed or created by the MPAA or the Global Alliance during the Term of this Charter, shall remain and/or become the exclusive property of the MPAA,” the ACE agreement reads.

That being said, all Governing Board Members will also be granted “perpetual, irrevocable, non-exclusive licenses” to use the same under certain rules, even in the event they leave the ACE initiative.

Terms and extensions

Any member may withdraw from the Alliance at any point, but there will be no refunds. Additionally, any financial commitment previously made to litigation will have to be honored by the member.

The ACE agreement has an initial term of two years but Governing Board Members will meet not less than three months before it is due to expire to vote on any extension.

To be continued……

With the internal structure of ACE now revealed, all that remains is to discover the contents of the initiative’s ‘Global Anti-Piracy Action Plan’. To date, that document has proven elusive but with an operation of such magnitude, future leaks are a distinct possibility.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

NSA Spied on Early File-Sharing Networks, Including BitTorrent

Post Syndicated from Andy original https://torrentfreak.com/nsa-spied-on-early-file-sharing-networks-including-bittorrent-170914/

In the early 2000s, when peer-to-peer (P2P) file-sharing was in its infancy, the majority of users had no idea that their activities could be monitored by outsiders. The reality was very different, however.

As few as they were, all of the major networks were completely open, with most operating a ‘shared folder’ type system that allowed any network participant to see exactly what another user was sharing. Nevertheless, with little to no oversight, file-sharing at least felt like a somewhat private affair.

As user volumes began to swell, software such as KaZaA (which utilized the FastTrack network) and eDonkey2000 (eD2k network) attracted attention from record labels, who were desperate to stop the unlicensed sharing of copyrighted content. The same held true for the BitTorrent networks that arrived on the scene a couple of years later.

Through the rise of lawsuits against consumers, the general public began to learn that their activities on P2P networks were not secret and they were being watched for some, if not all, of the time by copyright holders. Little did they know, however, that a much bigger player was also keeping a watchful eye.

According to a fascinating document just released by The Intercept as part of the Edward Snowden leaks, the National Security Agency (NSA) showed a keen interest in trying to penetrate early P2P networks.

Initially published by internal NSA news site SIDToday in June 2005, the document lays out the aims of a program called FAVA – File-Sharing Analysis and Vulnerability Assessment.

“One question that naturally arises after identifying file-sharing traffic is whether or not there is anything of intelligence value in this traffic,” the NSA document begins.

“By searching our collection databases, it is clear that many targets are using popular file sharing applications; but if they are merely sharing the latest release of their favorite pop star, this traffic is of dubious value (no offense to Britney Spears intended).”

Indeed, the vast majority of users of these early networks were interested only in sharing relatively small music files, which were somewhat easy to manage given the bandwidth limitations of the day. However, the NSA still wanted to know what was happening on a broader scale, so that meant decoding their somewhat limited encryption.

“As many of the applications, such as KaZaA for example, encrypt their traffic, we first had to decrypt the traffic before we could begin to parse the messages. We have developed the capability to decrypt and decode both KaZaA and eDonkey traffic to determine which files are being shared, and what queries are being performed,” the NSA document reveals.

Most progress appears to have been made against KaZaA, with the NSA revealing the use of tools to parse out registry entries on users’ hard drives. This information gave up users’ email addresses, country codes, user names, the location of their stored files, plus a list of recent searches.

This gave the NSA the ability to look deeper into user behavior, which revealed some P2P users going beyond searches for basic run-of-the-mill multimedia content.

“[We] have discovered that our targets are using P2P systems to search for and share files which are at the very least somewhat surprising — not simply harmless music and movie files. With more widespread adoption, these tools will allow us to regularly assimilate data which previously had been passed over; giving us a more complete picture of our targets and their activities,” the document adds.

Today, more than 12 years later, with KaZaA long dead and eDonkey barely alive, the scanning of those early pirate networks might seem like a thing of the distant past. However, there’s little doubt that similar programs remain active today. Even in 2005, the FAVA program had lofty ambitions, targeting other networks and protocols including DirectConnect, Freenet, Gnutella, Gnutella2, JoltID, MSN Messenger, Windows Messenger and……BitTorrent.

“If you have a target using any of these applications or using some other application which might fall into the P2P category, please contact us,” the NSA document urges staff. “We would be more than happy to help.”

Confirming the continued interest in BitTorrent, The Intercept has published a couple of further documents which deal with the protocol directly.

The first details an NSA program called GRIMPLATE, which aimed to study how Department of Defense employees were using BitTorrent and whether that constituted a risk.

The second relates to P2P research carried out by Britain’s GCHQ spy agency. It details DIRTY RAT, a web application which gave the government “the capability to identify users sharing/downloading files of interest on the eMule (Kademlia) and BitTorrent networks.”

The SIDToday document detailing the FAVA program can be viewed here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.