Tag Archives: NAB

Backblaze’s Must See List for NAB 2019

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/what-not-to-miss-nab2019/

Collage of logos from Backblaze B2 cloud storage partners

With NAB 2019 only days away, the Backblaze team is excited to launch into the world’s largest event for creatives, and our biggest booth yet!

Must See — Backblaze Booth

This year we’ll be celebrating some of the phenomenal creative work by our customers, including American Public Television, Crisp Video, Falcons’ Digital Creative, WunderVu, and many more.

We’ll have workflow experts standing by to chat with you about your workflow frustrations, and how Backblaze B2 Cloud Storage can be the key to unlocking efficiency and solving storage challenges throughout your entire workflow: From Action! To Archive. With B2, you can focus on creating and managing content, not managing storage.

Create: Bring Your Story to Life

Stop by our booth and we can show you how you can protect your content from ingest through work-in-process by syncing seamlessly to the cloud. We can also detail how you can improve team collaboration and increase content reuse by organizing your content with one of our MAM integrations.

Distribute: Share Your Story With the World

Our experts can show you how B2 can help you scale your content library instantly and indefinitely, and avoid the hassle and expense of on-premises storage. We can demonstrate how everything in your content library can be served directly from your B2 account or through our content delivery partners like Cloudflare.

Preserve: Make Sure Your Story Lives Forever

Want to see the math behind the first cloud storage that’s more affordable than LTO? We can step through the numbers. We can also show you how B2 will keep your archived content accessible, anytime, and anywhere, through a web browser, API calls, or one of our integrated applications listed below.

Must See — Workflow Integrations You Can Count On

Our fantastic workflow partners are a critical part of your creative workflow backed by Backblaze — and there’s a lot of partner news to catch up on!

Drop by our booth to pick up a handy map to help you find Backblaze partners on the show floor including:

Backup and Archive Workflow Integrations

Archiware P5, booth SL15416
SyncBackPro, Wynn Salon — J

File Transfer Acceleration, Data Wrangling, Data Movement

FileCatalyst, booth SL12116
Hedge, booth SL14805

Asset and Collaboration Managers

axle ai, booth SL15116
Cantemo iconik, booth SL6021
Cantemo (Portal), booth SL6021
CatDV, booth SL5421
Cubix (Ortana Media Group), booth SL5922
eMAM, booth SL10224

Workflow Storage

Facilis, booth SL6321
GB Labs, booth SL5324
ProMAX, booth SL6313
Scale Logic, booth SL11109
Tiger Technology, booth SL8505
QNAP, booth SL15716
Seagate, booth SL8511
StorageDNA, booth SL11810

Must See — Backblaze Events during NAB

Monday morning we’re delivering a presentation in the Scale Logic Knowledge Zone, and Tuesday night of NAB we’re honored to help sponsor the all-new Faster Together event that replaces the long-standing Las Vegas Creative User Supermeet event.

We’ll be raffling off a Hover2 4K drone powered by AI to help you get that perfect drone shot for your next creative film! So after the NAB show wraps up on Tuesday, head over to the Rio main ballroom for a night of mingling with creatives and amazing talks by some of the top editors, colorists, and VFX artists in the industry.

ProVideoTech and Backblaze at Scale Logic Knowledge Zone
Monday April 8 at 11 AM
Scale Logic Knowledge Zone, NAB Booth SL11109
Monday of NAB, Backblaze and PVT will deliver a live presentation for NAB attendees on how to build hybrid-cloud workflows with Cantemo and Backblaze.
Scale Logic Media Management Knowledge Zone

Backblaze at The Faster Together Stage
Tuesday, April 9
Rio Las Vegas Hotel and Casino
Doors open at 4:30 PM, stage presentations begin at 7:00 PM
Reserve Tickets for the Faster Together event

If you haven’t yet, be sure to sign up and reserve your meeting time with the Backblaze team, and add us to your Map My Show NAB plan and we’ll see you there!

NAB 2019 is just a few days away. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!

The post Backblaze’s Must See List for NAB 2019 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Migrating Your Legacy Archive to Future-Ready Architecture

Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/ortana-cubix-core-media-archive/

This is one in a series of posts on professional media management leading up to NAB 2019 in Las Vegas, April 8 to 11.
–Editor

Guest blog post by James Gibson, Founder & CEO of Ortana Media Group

Businesses want to migrate away from their current archive solution for a wide range of reasons: managing risk, concerns over legacy hardware, media degradation, and format support. Many businesses also find themselves stuck with closed format solutions built on legacy middleware with escalating support costs. It is a common problem that we at Ortana have helped many clients overcome through smart and effective use of the many storage solutions available on the market today. As founder and CEO of Ortana, I want to share some of our collective experience around this topic and how we have found success for our clients.

First, we often forget how quickly the storage landscape changes. Let’s take a typical case.

It’s Christmas 2008 and a CTO has just finalised the order on their new enterprise-grade hierarchical storage management (HSM) system with an LTO-4 tape robot. Beyonce’s Single Ladies is playing on the radio, GPS on phones has just started to be rolled out, and there is this new means of deploying mobile apps called the Apple™ App Store! The system purchased is from a well established, reputable company and provides peace of mind and scalability — what more can you ask for? The CTO goes home for the festive season — job well done — and hopes Santa brings him one of the new Android phones that have just launched.

Ten years on, the world is very different, and Moore’s law tells us that the pace of technological change is only set to increase. That growing archive has remained on the same hardware, controlled by the same HSM, and has gone through one or two expensive LTO format changes. “These migrations had to happen,” the CTO concedes, as support for the older LTO formats was being dropped by the hardware supplier. Their whole content library had to be restored and archived back to the new tapes. New LTO formats also required new versions of the HSM, and whilst these often included new features — covering codec support, intelligent repacking, and reporting — the fundamentals of the system remained: closed format, restricted accessibility, and expensive. Worse still, the annual support costs are increasing whilst new feature development has ground to a halt. Sure, the archive still works, but for how much longer?

Decisions, Decisions, So Many Migration Decisions

As businesses make the painful decision to migrate their legacy archive, the choices of what, where, and how become overwhelming. The storage landscape today is a completely different picture from when closed format solutions went live. This change alone offers significant opportunities to businesses. By combining the right storage solutions with seamless architecture and with lights out orchestration driving the entire process, businesses can flourish by allowing their storage to react to the needs of the business, not constrain them. Ortana has purposefully ensured Cubix (our asset management, automation, and orchestration platform) is as storage agnostic as possible by integrating a range of on-premises and cloud-based solutions, and built an orchestration engine that is fully abstracted from this integration layer. The end result is that workflow changes can be done in seconds without affecting the storage.

screenshot of Cubix workflow
Cubix’s orchestration platform includes a Taskflow engine for creating customized workflow paths

As our example CTO would say (shaking their head no doubt whilst saying it), a company’s main priority is to not-be-here-again, and the key is to store media in an open format, not bound to any one vendor, but also accessible to the business both today and tomorrow. The cost of online cloud storage such as Backblaze B2 has now made storing content in the cloud more cost effective than LTO, and this cost is only set to fall further. This, combined with the ample internet bandwidth that has become ubiquitous, makes cloud storage an obvious primary storage target. Entirely agnostic to the format and codec of the content you are storing, aligned with MPAA best practices, and easily integrated with any on-premises or cloud-based workflow, cloud storage removes many of the issues faced by the closed-format HSMs deployed in so many facilities today. It also begins to change the dialogue over main vs DR storage, since it’s no longer based at a facility within the business.

Cloud Storage Opens Up New Capabilities

Sometimes people worry that cloud storage will be too slow. Where this is true, it is almost always due to poor cloud implementation. B2 is online, meaning that the time-to-first-byte is almost zero, whereas other cloud solutions such as Amazon Glacier are cold storage, meaning that the time-to-first-byte ranges from one to two hours at best to six to twelve hours in general. Anything that is to replace an LTO solution needs to match or beat the capacity and speed of the incumbent solution, and good workflow design can ensure that restores are done as promptly as possible and delivered directly to where the media is needed.

But what about those nasty egress costs? People can get caught off guard when this is not budgeted for correctly, or when their workflow does not make good use of simple solutions such as proxies. Regardless of whether your archive is located on LTO or in the cloud, proxies are critical to keeping accessibility up and costs and restore times down. By default, when we deploy Cubix for clients we always generate a frame accurate proxy for video content, often devalued through the use of burnt-in timecode (BITC), logos, and overlays. Generated using open source transcoders, they are incredibly cost effective to generate and are often only a fraction of the size of the source files. These proxies, which can also be stored and served directly from B2 storage, are then used throughout all our portals to allow users to search, find, and view content. This avoids the time and cost required to restore the high resolution master files. Only when the exact content required is found is a restore submitted for the full-resolution masters.
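To make the proxy idea concrete, here is a minimal sketch of generating low-bitrate, watermarked proxies with an open source transcoder, in this case Python driving FFmpeg (an assumption on our part; Cubix uses its own transcode pipeline, the paths, resolution, and overlay text below are placeholders, and drawtext requires an FFmpeg build with libfreetype):

```python
import subprocess
from pathlib import Path

SOURCE_DIR = Path("/mnt/masters")   # placeholder: where the full-resolution masters live
PROXY_DIR = Path("/mnt/proxies")    # placeholder: where the proxies are written

def make_proxy(master: Path) -> Path:
    """Create a small H.264 proxy with a burnt-in 'PROXY' overlay."""
    proxy = PROXY_DIR / (master.stem + "_proxy.mp4")
    subprocess.run([
        "ffmpeg", "-y",
        "-i", str(master),
        # Scale down to 360p and burn a simple text overlay into the picture
        "-vf", "scale=-2:360,drawtext=text='PROXY':x=10:y=10:fontsize=24:fontcolor=white",
        "-c:v", "libx264", "-preset", "fast", "-crf", "28",
        "-c:a", "aac", "-b:a", "96k",
        str(proxy),
    ], check=True)
    return proxy

if __name__ == "__main__":
    PROXY_DIR.mkdir(parents=True, exist_ok=True)
    for master in SOURCE_DIR.glob("*.mov"):
        print("Created", make_proxy(master))
```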

Multiple Copies Stored at Multiple Locations by Multiple Providers

Moving content to the cloud doesn’t remove the risk of working with a single provider, however. No matter how good or big they are, it’s always a wise idea to ensure an active disaster recovery solution is present within your workflows. This last resort copy does not need all the capabilities of the primary storage, and can even be more punitive when it comes to restore costs and times. But it should be possible to enable in moments, and be part of the orchestration engine rather than being a manual process.

To de-risk that single provider, or for workflows where 30-40% of the original content has to be regularly restored (because proxies do not meet the needs of the workflow), on-premises archive solutions can still be deployed without running into the issues discussed earlier. Firstly, LTO now offers portability benefits through LTFS, an easy-to-use open format which, critically, has its specification and implementation in the public domain. This ensures it is easily supported by many vendors and guarantees support longevity for on-premises storage. Ortana, with its Cubix platform, supports many HSMs that can write content in native LTFS format, which can then be read by any standalone drive from any vendor supporting LTFS.

Also, with 12 TB hard drives now standard in the marketplace, nearline based storage has also become a strong contender for content when combined with intelligent storage tiering to the cloud or LTO. Cubix can fully automate this process, especially when complemented by such vendors as GB Labs’ wide range of hardware solutions. This mix of cloud, nearline and LTO — being driven by an intelligent MAM and orchestration platform like Cubix to manage content in the most efficient means possible on a per workflow basis — blurs the lines between primary storage, DR, and last resort copies.

Streamlining the Migration Process

Once you have your storage mix agreed upon and in place, the fraught task becomes getting your existing library onto the new solution whilst not impacting access to the business. Some HSM vendors suggest swapping your LTO tapes by physically removing them from one library and inserting them into another. Ortana knows that libraries are often the linchpin of the organisation and any downtime has a significant negative impact that can fill media managers with dread, especially since these one-shot, one-direction migrations can easily go wrong. Moreover, when following this route, simply moving tapes does not persist any editorial metadata or address many of the objectives around making content more available. Cubix not only manages the media and the entire transformation process, but also retains the editorial metadata from the existing archive.

screenshot of Cubix search results
During the migration process, content can be indexed via AI-powered speech to text and image recognition

Given the high speeds that LTO delivers, combined with the scalability of Cubix, the largest libraries can be migrated in short timescales, whilst having zero downtime on the archive. Whilst the content is being migrated to the defined mix of storage targets, Cubix can perform several tasks on the content to further augment the metadata, including basics such as proxy and waveform generation, through to AI based image detection and speech to text. Such processes only further reduce the time spent by staff looking for content, and further refine the search capability to ensure only that content required is restored — translating directly to reduced restore times and egress costs.

A Real-World Customer Example

Many of the above concerns and considerations led a large broadcaster to Ortana for a large-scale migration project. The broadcaster produces in-house news and post production with multi-channel linear playout and video-on-demand (VoD). Their existing archive was 3 PB of media across two generations of LTO tape managed by Oracle™ DIVArchive & DIVADirector. They were concerned about on-going support for DIVA and wanted to fully migrate all tape and disk-based content to a new HSM in an expedited manner, making full use of the dedicated drive resources available.

Their primary goal was to fully migrate all editorial metadata into Cubix, including all ancillary files (subtitles, scripts, etc.), and at the same time index all media using AI-powered content discovery to reduce search times for the news, promos, and sports departments. They also wanted to replace the legacy Windows Media Video (WMV) proxies with new full HD H.264 frame-accurate proxies, and provide the business with secure, group-based access to the content. Finally, they wanted all the benefits of cloud storage, whilst keeping costs to a minimum.

With Ortana’s Cubix Core, the broadcaster was able to safely migrate their DIVArchive to two storage platforms: LTFS with a Quantum HSM system and Backblaze B2 cloud storage. Their content was indexed via AI-powered image recognition (Google Vision) and speech to text (Speechmatics) during the migration process, and the Cubix UI replaced the existing archive as the media portal for both internal and external stakeholders.

The new solution has vastly reduced the timescales for content processing across all departments, and has led to a direct reduction in staff costs. Researchers report a 50-70% reduction in time spent searching for content, and the archive shows a 40% reduction in restore requests. By having the content located in two distinct geographical locations they’ve entirely removed their business risk of having their archive with a single vendor and in a single location. Most importantly, their archived content is more active than ever and they can be sure it will stay alive for the future.

How exactly did Ortana help them do it? Join our webinar Evading Extinction: Migrating Legacy Archives on Thursday, March 28, 2019. We’ll detail all the steps we took in the process and include a live demo of Cubix. We’ll show you how straightforward and painless the archive migration can be with the right strategy, the right tools, and the right storage.

— James Gibson, Founder & CEO, Ortana Media Group

•  •  •

Backblaze will be exhibiting at NAB 2019 in Las Vegas on April 8-11, 2019. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!

The post Migrating Your Legacy Archive to Future-Ready Architecture appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A Workflow Playbook for Migrating Your Media Assets to a MAM

Post Syndicated from Skip Levens original https://www.backblaze.com/blog/workflow-playbook-migrating-your-media-assets-to-a-mam/

Asset > Metadata > Database > Media Asset Manager > Backblaze Fireball > Backblaze B2 Cloud Storage

This is one in a series of posts on professional media management leading up to NAB 2019 in Las Vegas, April 8 to 11.
–Editor

Whatever your creative venture, the byproduct of all your creative effort is assets. Whether you produce music, images, or video, as you produce more and more of these valuable assets, they tend to pile up and become difficult to manage, organize, and protect. As your creative practice evolves to meet new demands, and the scale of your business grows, you’ll often find that your current way of organizing and retrieving assets can’t keep up with the pace of your production.

For example, if you’ve been managing files by placing them in carefully named folders, getting those assets into a media asset management system will make them far easier to navigate and much easier to pull out exactly the media you need for a new project. Your team will be more efficient and you can deliver your finished content faster.

As we’ve covered before, putting your assets in a type of storage like B2 Cloud Storage ensures that they will be protected in a highly durable and highly available way that lets your entire team be productive.


With some smart planning, and a little bit of knowledge, you can be prepared to get the most of your assets as you move them into an asset management system, or when migrating from an older or less capable system into a new one.

Assets and Metadata

Before we can build some playbooks to get the most from your creative assets, let’s review a few key concepts.

Asset — a rich media file with intrinsic metadata.

An asset is simply a file that is the result of your creative operation, most often a rich media file like an image or a video. Typically, these files are captured or created in a raw state, then your creative team adds value to that raw asset by editing it together with other assets to create a finished story that, in turn, becomes another asset to manage.

Metadata — Information about a file, either embedded within the file itself or associated with the file by another system, typically a media asset management (MAM) application.

The file carries information about itself that can be understood by your laptop or workstation’s operating system. Some of these seem obvious, like the name of the file, how much storage space it occupies, when it was first created, and when it was last modified. These would all be helpful ways to try to find one particular file you are looking for among thousands just using the tools available in your OS’s file manager.

File Metadata

There’s usually another level of metadata embedded in media files that is not so obvious but potentially enormously useful: metadata embedded in the file when it’s created by a camera, film scanner, or output by a program.

An example of metadata embedded in a rich media file, as revealed by an operating system’s file manager

For example, this image taken in Backblaze’s data center a few years ago carries all kinds of interesting information. When I inspect the file in macOS’s Finder with Get Info, a wealth of information is revealed. I can now tell not only the image’s dimensions and when the image was taken, but also exactly what kind of camera took this picture and the lens settings that were used.

As you can see, this metadata could be very useful if you want to find all images taken on that day, or even images taken with that same camera, focal length, F-stop, or exposure.
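If you want to pull this kind of embedded metadata out of an image yourself, here is a minimal sketch using Python with the Pillow library (an assumption; an asset manager normally does this for you, and depending on your Pillow version some camera settings live in EXIF sub-IFDs rather than the top-level tags shown here):

```python
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path: str) -> dict:
    """Return the top-level EXIF tags embedded in an image as a {tag_name: value} dict."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    # Hypothetical file path; swap in one of your own images
    for name, value in read_exif("datacenter_photo.jpg").items():
        print(f"{name}: {value}")
```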

When a File and Folder System Can’t Keep Up

Inspecting files one at a time is useful, but a very slow way to determine if a file is the one you need for a new project. Yet many creative environments that don’t have a formal asset management system get by with an ad hoc system of file and folder structures, often kept on the same storage used for production or even on an external hard drive.

Teams quickly outgrow that system when they find that their work spills over to multiple hard drives, or takes up too much space on their production storage. Worst of all, assets kept on a single hard drive are vulnerable to disk damage, or to being accidentally copied or overwritten.

Why Your Assets Need to be Managed

To meet this challenge, creative teams have often turned to a class of application called a Media Asset Manager (MAM). A MAM automatically extracts all their assets’ inherent metadata, helps move files to protected storage, and makes them instantly available to their entire team. In a way, these media asset managers become a private media search engine where any file attribute can be a search query to instantly uncover the file they need in even the largest media asset libraries.

Beyond that, asset management systems are rapidly becoming highly effective collaboration and workflow tools. For example, tagging a series of files as Field Interviews — April 2019, or flagging an edited piece of content as HOLD — do not show customer can be very useful indeed.

The Inner Workings of a Media Asset Manager

When you add files into an asset management system, the application inspects each file, extracting every available bit of information about the file, noting the file’s location on storage, and often creating a smaller stand-in or proxy version of the file that is easier to present to users.

To keep track of this information, asset manager applications employ a database and keep information about your files in it. This way, when you’re searching for a particular set of files among your entire asset library, you can simply make a query of your asset manager’s database in an instant rather than rifling through your entire asset library storage system. The application takes the results of that database query and retrieves the files you need.
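As a simplified sketch of what is going on behind the scenes, here is the catalog idea using Python's built-in sqlite3; real asset managers use their own schemas and databases, and the field names and values below are illustrative only:

```python
import sqlite3

conn = sqlite3.connect("catalog.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS assets (
        file_name   TEXT,
        location    TEXT,     -- e.g. a cloud bucket path or local volume
        proxy_path  TEXT,     -- smaller stand-in used for browsing
        camera      TEXT,
        shot_date   TEXT,
        keywords    TEXT
    )
""")

# Registering an asset: the manager extracts this metadata at ingest time
conn.execute(
    "INSERT INTO assets VALUES (?, ?, ?, ?, ?, ?)",
    ("interview_cam2_0001.mov", "b2://my-bucket/raw/interview_cam2_0001.mov",
     "/proxies/interview_cam2_0001.mp4", "Canon C300", "2019-04-02",
     "field interviews, april 2019"),
)
conn.commit()

# Searching: a quick metadata query instead of rifling through storage
rows = conn.execute(
    "SELECT file_name, location FROM assets WHERE keywords LIKE ? AND camera = ?",
    ("%field interviews%", "Canon C300"),
).fetchall()
print(rows)
```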

The Asset Migration Playbook

Whether you need to move from a file and folder based system to a new asset manager, or have been using an older system and want to move to a new one without losing all of the metadata that you have painstakingly developed, a sound playbook for migrating your assets can help guide you.

Play 1 — Getting Assets in Files and Folders Protected Without an Asset Management System

In this scenario, your assets are in a set of files and folders, and you aren’t ready to implement your asset management system yet.

The first consideration is for the safety of the assets. Files on a single hard drive are vulnerable, so if you are not ready to choose an asset manager your first priority should be to get those files into a secure cloud storage service like Backblaze B2.

We invite you to read our post: How Backup and Archive are Different for Professional Media Workflows

Then, when you have chosen an asset management system, you can simply point the system at your cloud-based asset storage to extract the metadata of the files and populate the asset information in your asset manager.

  1. Get assets archived or moved to cloud storage
  2. Choose your asset management system
  3. Ingest assets directly from your cloud storage
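Here is a minimal sketch of step 1, assuming the b2sdk Python library and placeholder credentials, paths, and bucket names; once the files are in the bucket, your chosen asset manager can ingest directly from it:

```python
from pathlib import Path
from b2sdk.v2 import InMemoryAccountInfo, B2Api

# Placeholder credentials and names -- substitute your own
KEY_ID, APP_KEY = "your-key-id", "your-application-key"
BUCKET_NAME = "my-media-archive"
LOCAL_DIR = Path("/Volumes/ProjectsDrive")

b2_api = B2Api(InMemoryAccountInfo())
b2_api.authorize_account("production", KEY_ID, APP_KEY)
bucket = b2_api.get_bucket_by_name(BUCKET_NAME)

for path in LOCAL_DIR.rglob("*"):
    if path.is_file():
        # Preserve the folder structure as the B2 file name
        remote_name = str(path.relative_to(LOCAL_DIR))
        bucket.upload_local_file(local_file=str(path), file_name=remote_name)
        print("Uploaded", remote_name)
```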

Play 2 — Getting Assets in Files and Folders into Your Asset Management System Backed by Cloud Storage

In this scenario, you’ve chosen your asset management system, and need to get your local assets in files and folders ingested and protected in the most efficient way possible.

You’ll ingest all of your files into your asset manager from local storage, then archive them to cloud storage. Once your asset manager has been configured with your cloud storage credentials, it can automatically move a copy of local files to the cloud for you. Later, when you have confirmed that the file has been copied to the cloud, you can safely delete the local copy.

  1. Ingest assets from local storage directly into your asset manager system
  2. From within your asset manager system archive a copy of files to your cloud storage
  3. Once safely archived, the local copy can be deleted
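Before deleting a local copy in step 3, it's worth confirming the checksums match. Here is a minimal sketch of that check in Python; where the expected SHA-1 comes from (your asset manager's record, or the file details shown in your B2 account) depends on your setup, and the file name and checksum below are placeholders:

```python
import hashlib
from pathlib import Path

def sha1_of(path: Path) -> str:
    """Compute the SHA-1 of a local file in streaming fashion."""
    digest = hashlib.sha1()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def safe_to_delete(local_path: Path, archived_sha1: str) -> bool:
    """Only approve deletion when the local file matches the archived checksum."""
    return sha1_of(local_path) == archived_sha1.lower()

# Hypothetical usage: archived_sha1 comes from your asset manager's record
# or from the file's details in your B2 account.
if safe_to_delete(Path("/Volumes/ProjectsDrive/interview_cam2_0001.mov"),
                  "9f2c1e0c5a6b7d8e9f0a1b2c3d4e5f6a7b8c9d0e"):
    print("Checksums match -- safe to delete the local copy")
else:
    print("Checksum mismatch -- keep the local copy and investigate")
```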

Play 3 — Getting a Lot of Assets on Local Storage into Your Asset Management System Backed by Cloud Storage

If you have a lot of content (more than, say, 20 terabytes), you will want to use a rapid ingest service similar to Backblaze’s Fireball system. You copy the files to Fireball, Backblaze puts them directly into your asset management bucket, and the asset manager is then updated with each file’s new location in your Backblaze B2 account.

This can be a manual process, or can be done with scripting to make the process faster.

You can read about one such migration using this play here:
iconik and Backblaze — The Cloud Production Solution You’ve Always Wanted

  1. Ingest assets from local storage directly into your asset manager system
  2. Archive your local assets to Fireball (up to 70 TB at a time)
  3. Once the files have been uploaded by Backblaze, relink the new location of the cloud copy in your asset management system

You can read more about Backblaze Fireball on our website.
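The relinking in step 3 is where scripting pays off. Here is a rough sketch of the idea, assuming the b2sdk Python library to list what Fireball delivered into your bucket; relink_asset() is a purely hypothetical stand-in for whatever API or re-import mechanism your particular asset manager provides:

```python
from b2sdk.v2 import InMemoryAccountInfo, B2Api

def relink_asset(file_name: str, b2_uri: str) -> None:
    """Placeholder: update the asset record in your MAM with its new cloud location.
    Every asset manager exposes this differently (REST API, CSV re-import, etc.)."""
    print(f"Would relink {file_name} -> {b2_uri}")

b2_api = B2Api(InMemoryAccountInfo())
b2_api.authorize_account("production", "your-key-id", "your-application-key")
bucket = b2_api.get_bucket_by_name("my-media-archive")

# Walk everything the Fireball import placed in the bucket and point the
# corresponding asset records at the new cloud locations.
for file_version, _folder in bucket.ls(recursive=True):
    b2_uri = f"b2://my-media-archive/{file_version.file_name}"
    relink_asset(file_version.file_name, b2_uri)
```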

Play 4 — Moving from One Asset Manager System to a New One Without Losing Metadata

In this scenario you have an existing asset management system and need to move to a new one as efficiently as possible to not only take advantage of your new system’s features and get files protected in cloud storage, but also to do it in a way that does not impact your existing production.

Some asset management systems will allow you to export the database contents in a format that can be imported by a new system. Some older systems may not have that luxury and will require a database expert to manually extract the metadata. Either way, you can expect to need to map the fields from the old system to the fields in the new system.

Making a copy of the old database is a must. Don’t work on the primary copy, and be sure to conduct tests on small groups of files as you’re migrating from the older system to the new one. You need to ensure that the metadata is correct in the new system, paying special attention to whether the actual file locations are mapped properly. It’s wise to keep the old system up and running for a while before completely phasing it out.

  1. Export the database from the old system
  2. Import the records into the new system
  3. Ensure that the metadata is correct in the new system and file locations are working properly
  4. Make archive copies of your files to cloud storage
  5. Once the new system has been running through a few production cycles, it’s safe to power down the old system
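The field mapping behind steps 1 and 2 is usually the fiddly part. Here is a minimal sketch of the idea in Python, assuming the old system can export a CSV; the field names on both sides are made up for illustration:

```python
import csv
import json

# Hypothetical mapping from old-system column names to new-system field names
FIELD_MAP = {
    "Title": "title",
    "Clip Description": "description",
    "Tape Location": "original_location",
    "Keywords": "tags",
}

def convert(export_csv: str, import_json: str) -> None:
    """Translate an exported catalog into records the new system can import."""
    records = []
    with open(export_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            record = {new: row.get(old, "") for old, new in FIELD_MAP.items()}
            records.append(record)
    with open(import_json, "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2)

convert("old_mam_export.csv", "new_mam_import.json")
```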

Play 5 — Moving Quickly from an Asset Manager System on Local Storage to a Cloud-based System

In this variation of Play 4, you can move content to object storage with a rapid ingest service like Backblaze Fireball at the same time that you migrate to a cloud-based system. This step will benefit from scripting to create records in your new system with all of your metadata, then relink with the actual file location in your cloud storage all in one pass.

You should test that your asset management system can recognize a file already in the system without creating a duplicate copy of the file. This is done differently by each asset management system.

  1. Export the database from the old system
  2. Import the records into the new system while creating placeholder records with the metadata only
  3. Archive your local assets to Fireball (up to 70 TB at a time)
  4. Once the files have been uploaded by Backblaze, relink the cloud based location to the asset record

Wrapping Up

Every production environment is different, but we all need the same thing: to be able to find and organize our content so that we can be more productive and rest easy knowing that our content is protected.

These plays will help you take that step and be ready for any future production challenges and opportunities.

If you’d like more information about media asset manager migration, join us for our webinar on March 15, 2019:

Backblaze Webinar:  Evolving for Intelligence: MAM to MAM Migration

•  •  •

Backblaze will be exhibiting at NAB 2019 in Las Vegas on April 8-11, 2019. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!

The post A Workflow Playbook for Migrating Your Media Assets to a MAM appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

What’s the Diff: DAM vs MAM

Post Syndicated from Janet Lafleur original https://www.backblaze.com/blog/whats-the-diff-dam-vs-mam/

What's the Diff: DAM vs MAM

There’s a reason digital asset management (DAM) and media asset management (MAM) seem to be used interchangeably. Both help organizations centrally organize and manage assets —  images, graphics, documents, video, audio — so that teams can create content efficiently and securely. Both simplify managing those assets through the content life cycle, from raw source files through editing, to distribution, to archive. And, as a central repository, they enable teams to collaborate by giving team members direct access to shared assets.

A quick answer to the difference is that MAM is considered a subset of the broader DAM, with MAMs providing more video capabilities. But since most DAMs can manage videos, and MAMs vary widely in what kind of video-oriented features they offer, it’s worth diving deeper to understand these different asset management solutions.

What to Expect From Any Asset Manager

Before we focus on the differences, let’s outline the basic structure and the capabilities of any asset manager.  The best place to start is with the understanding that any given asset a team might want to work with — a video clip, a document, an image —  is usually presented by the asset manager as a single item to the user, but is actually composed of three elements: the master source file, a thumbnail or proxy that’s displayed, and metadata about the object itself. Note that in the context of asset management, metadata is more than simple file attributes (i.e. owner, date created, last modified date, size). It’s a broader set of attributes, including details about the actual content of the file. We’ll spell out more on that later. As far as capabilities, any DAM or MAM worth being called an asset manager should offer:

  • Collaboration — Members of content creation teams all should have direct access to assets in the asset management system from their own workstations.
  • Access control — Access to specific assets or groups of assets should be allowed or restricted based on the user’s rights and permission settings. This is particularly important if teams work in different departments or for different external clients.
  • Browse — Assets should be easily identifiable by more than their file name, such as thumbnails or proxies for videos, and browsable in the asset manager’s graphical interface.
  • Metadata search —  Assets should be searchable by attributes assigned to them, known as metadata. Metadata assignment capabilities should be flexible and extensible over time.
  • Preview — For larger or archived assets, a preview or quick review capability should be provided, such as playing video proxies or mouse-over zoom for thumbnails.
  • Versions — Based on permissions, team members should be able to add new versions of existing assets or add new assets so that material can be easily repurposed for future projects.
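To make the three-element structure described above concrete, here is a small sketch in Python; the field names are illustrative rather than any particular product's schema:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """How an asset manager typically models a single item it presents to users."""
    master_location: str              # where the full-resolution source file lives
    proxy_location: str               # thumbnail or low-res proxy shown in the UI
    metadata: dict = field(default_factory=dict)  # searchable attributes

clip = Asset(
    master_location="b2://my-media-archive/masters/20190118-broll-0001.mp4",
    proxy_location="b2://my-media-archive/proxies/20190118-broll-0001.mp4",
    metadata={"subject": "b-roll", "camera": "1", "shoot_date": "2019-01-18"},
)
```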

Why Metadata Matters So Much

Metadata is a critical element that distinguishes asset managers from file browsers. Without metadata, file names end up doing the heavy lifting with long names like 20190118-gbudman-broll-01-lv-0001.mp4, which strings together a shoot date, subject, camera number, clip number, and more. Structured file naming is not a bad practice, but it doesn’t scale easily to larger teams of contributors and creators. And metadata is not used only to search for assets, it can be fed into other workflow applications integrated with the asset manager for use there.

Metadata is particularly important for images and video because, unlike text-based documents, they can’t be searched for keywords. Metadata can describe in detail what’s in the image or video. For example, metadata for an image could be: male, beard, portrait, blue shirt, dark hair, fair skin, middle-aged, outdoors. And since videos are streams of images, their metadata goes one step further to describe elements at precise moments or ranges of time in the video, known as timecodes. For example, video of a football game could include metadata tags such as 00:10:30 kickoff, 00:15:37 interception, and 00:21:04 touchdown.
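In practice, timecode-based metadata like the football example above boils down to a list of tags anchored to points (or ranges) in the video, something along these lines (illustrative structure only):

```python
# Timecoded tags for a single video asset -- each entry anchors a label
# to a moment in the footage.
football_tags = [
    {"timecode": "00:10:30", "tag": "kickoff"},
    {"timecode": "00:15:37", "tag": "interception"},
    {"timecode": "00:21:04", "tag": "touchdown"},
]

# Finding every tagged moment of a given type is then a simple filter.
touchdowns = [t["timecode"] for t in football_tags if t["tag"] == "touchdown"]
print(touchdowns)  # ['00:21:04']
```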

iconik MAM displaying metadata for a BMW M635CSi

Workflow Integration and Archive Support

More robust DAMs and MAMs go beyond the basic capabilities and offer a range of advanced features that simplify or otherwise support the creation process, also known as the workflow. These can include features for editorial review, automated metadata extraction (e.g., transcription or facial recognition), multilingual support, automated transcode, and much, much more. This is where different asset management solutions diverge the most and show their customization for a particular type of workflow or industry.

Regardless of whether you need all the bells and whistles in your asset manager, as your content library grows it will need storage management features, starting with archive. Archiving completed projects and assets that are infrequently used can conserve disk space on your server by moving them off to less expensive storage, such as cloud storage or digital tape. In particular, images and video are huge storage hogs, and the higher the resolution, the more storage capacity they consume. Regular archiving can keep costs down and keep you from having to upgrade your expensive storage server every year.

Asset managers with built-in archiving make moving content into and out of an archive seamless and straightforward. For most asset managers, assets can be archived directly from the graphical interface. After archive, the thumbnails or proxies of the archived assets continue to appear as before, with a visual indication that they’re archived on secondary storage. Users can retrieve the asset as before, albeit with some time delay that depends on the archive storage and network connection chosen.

A good asset manager will offer multiple choices for archive storage, from cloud storage to LTO tape to inexpensive disk, and from different vendors.  An excellent one will let you automatically make multiple copies to different archive storage for added data protection.

What is a MAM?

With all these common characteristics, what makes a media asset manager different than other asset managers is that it’s created for video production. While DAMs can generally manage video assets, and MAMs can manage images and documents, MAMs are designed from the ground up for creating and managing video content in a video production workflow. That means metadata creation and management, application integrations, and workflow orchestration are all video-oriented.

Metadata for video starts when it’s shot, with camera data, shoot notes or basic logging captured on set.  More detailed metadata cataloging happens when the content is ingested from the camera into the MAM for post-production. Nearly all MAMs offer some type of manual logging to create timecode-based metadata. MAMs built for live broadcast events like sports provide shortcut buttons for key events, such as a face off or slap shot in a hockey game.

More advanced systems offer additional tools for automated metadata extraction. For example, some will use facial recognition to automatically identify actors or public figures.

There is also metadata related to how, where, and how many times the asset has been used and what kinds of edits have been made from the original. There’s no end to what you can describe and categorize with metadata. Defining it for a content library of any reasonable size can be a major undertaking.

MAMs Integrate Video Production Applications

Unlike the more general-purpose DAMs, MAMs will integrate tools built specifically for video production. These widely ranging integrated applications include ingest tools, video editing suites, visual effects, graphics tools, transcode, quality assurance, file transport, specific distribution systems, and much more.

Modern MAM solutions integrate cloud storage throughout the workflow, not just for archive, but also for creating content through proxy editing. In proxy editing, video editors work with a lower-resolution copy of the video stored locally, and those edits are later applied to the full-resolution version stored in the cloud when the final cut is rendered.

MAMs May be Tailored for Specific Industry Niches and Workflows

To sum up, the longer explanation for DAM vs MAM is that MAMs focus on video production, with better MAMs offering all the integrations needed for complex video workflows. And because video workflows are as varied as they are complex, MAMs often fall into specific niches within the industry: news, sports, post-production, film production, etc. The size of the organization or team matters too. To stay within their budget, a small post house may select a MAM with fewer of the advanced features that may be basic requirements for a larger multinational post-production facility.

That’s why there are so many MAMs on the market, and why choosing one can be a daunting task with a long evaluation process. And it’s why migrating from one asset manager to another is more common than you’d think. Pro tip: working with a trusted system integrator that serves your industry niche can save you a lot of heartache and money in the long run.

Finally, keep in mind that for legacy reasons, sometimes what’s marketed as a DAM will have all the video capabilities you’d expect from a MAM. So don’t let the name throw you off. Instead, look for an asset manager that fits your workflow with the features and integrated tools you need today, while also providing the flexibility you need as your business changes in the future.

Backblaze will be exhibiting at NAB 2019 in Las Vegas on April 8-11, 2019. Schedule a meeting with our cloud storage experts to learn how B2 Cloud Storage can streamline your workflow today!

The post What’s the Diff: DAM vs MAM appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

AWS Online Tech Talks – June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2018/

AWS Online Tech Talks – June 2018

Join us this month to learn about AWS services and solutions. New this month, we have a fireside chat with the GM of Amazon WorkSpaces and our 2nd episode of the “How to re:Invent” series. We’ll also cover best practices, deep dives, use cases and more! Join us and register today!

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

 

Analytics & Big Data

June 18, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Real-Time Streaming Data in Under 5 Minutes – Learn how to use Amazon Kinesis to capture, store, and analyze streaming data in real-time including IoT device data, VPC flow logs, and clickstream data.
June 20, 2018 | 11:00 AM – 11:45 AM PT – Insights For Everyone – Deploying Data across your Organization – Learn how to deploy data at scale using AWS Analytics and QuickSight’s new reader role and usage based pricing.

 

AWS re:Invent
June 13, 2018 | 05:00 PM – 05:30 PM PT – Episode 2: AWS re:Invent Breakout Content Secret Sauce – Hear from one of our own AWS content experts as we dive deep into the re:Invent content strategy and how we maintain a high bar.
Compute

June 25, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Containerized Workloads with Amazon EC2 Spot Instances – Learn how to efficiently deploy containerized workloads and easily manage clusters at any scale at a fraction of the cost with Spot Instances.

June 26, 2018 | 01:00 PM – 01:45 PM PT – Ensuring Your Windows Server Workloads Are Well-Architected – Get the benefits, best practices and tools on running your Microsoft Workloads on AWS leveraging a well-architected approach.

 

Containers
June 25, 2018 | 09:00 AM – 09:45 AM PT – Running Kubernetes on AWS – Learn about the basics of running Kubernetes on AWS, including how to set up masters, networking, and security, and how to add auto-scaling to your cluster.

 

Databases

June 18, 2018 | 01:00 PM – 01:45 PM PT – Oracle to Amazon Aurora Migration, Step by Step – Learn how to migrate your Oracle database to Amazon Aurora.
DevOps

June 20, 2018 | 09:00 AM – 09:45 AM PT – Set Up a CI/CD Pipeline for Deploying Containers Using the AWS Developer Tools – Learn how to set up a CI/CD pipeline for deploying containers using the AWS Developer Tools.

 

Enterprise & Hybrid
June 18, 2018 | 09:00 AM – 09:45 AM PT – De-risking Enterprise Migration with AWS Managed Services – Learn how enterprise customers are de-risking cloud adoption with AWS Managed Services.

June 19, 2018 | 11:00 AM – 11:45 AM PT – Launch AWS Faster using Automated Landing Zones – Learn how the AWS Landing Zone can automate the set up of best practice baselines when setting up new AWS environments.

June 21, 2018 | 11:00 AM – 11:45 AM PT – Leading Your Team Through a Cloud Transformation – Learn how you can help lead your organization through a cloud transformation.

June 21, 2018 | 01:00 PM – 01:45 PM PT – Enabling New Retail Customer Experiences with Big Data – Learn how AWS can help retailers realize actual value from their big data and deliver on differentiated retail customer experiences.

June 28, 2018 | 01:00 PM – 01:45 PM PT – Fireside Chat: End User Collaboration on AWS – Learn how End User Compute services can help you deliver access to desktops and applications anywhere, anytime, using any device.
IoT

June 27, 2018 | 11:00 AM – 11:45 AM PT – AWS IoT in the Connected Home – Learn how to use AWS IoT to build innovative Connected Home products.

 

Machine Learning

June 19, 2018 | 09:00 AM – 09:45 AM PT – Integrating Amazon SageMaker into your Enterprise – Learn how to integrate Amazon SageMaker and other AWS Services within an Enterprise environment.

June 21, 2018 | 09:00 AM – 09:45 AM PT – Building Text Analytics Applications on AWS using Amazon Comprehend – Learn how you can unlock the value of your unstructured data with NLP-based text analytics.

 

Management Tools

June 20, 2018 | 01:00 PM – 01:45 PM PT – Optimizing Application Performance and Costs with Auto Scaling – Learn how selecting the right scaling option can help optimize application performance and costs.

 

Mobile
June 25, 2018 | 11:00 AM – 11:45 AM PT – Drive User Engagement with Amazon Pinpoint – Learn how Amazon Pinpoint simplifies and streamlines effective user engagement.

 

Security, Identity & Compliance

June 26, 2018 | 09:00 AM – 09:45 AM PT – Understanding AWS Secrets Manager – Learn how AWS Secrets Manager helps you rotate and manage access to secrets centrally.
June 28, 2018 | 09:00 AM – 09:45 AM PT – Using Amazon Inspector to Discover Potential Security Issues – See how Amazon Inspector can be used to discover security issues of your instances.

 

Serverless

June 19, 2018 | 01:00 PM – 01:45 PM PT – Productionize Serverless Application Building and Deployments with AWS SAM – Learn expert tips and techniques for building and deploying serverless applications at scale with AWS SAM.

 

Storage

June 26, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway – Learn how you can reduce your on-premises infrastructure by using the AWS Storage Gateway to connect your applications to the scalable and reliable AWS storage services.
June 27, 2018 | 01:00 PM – 01:45 PM PT – Changing the Game: Extending Compute Capabilities to the Edge – Discover how to change the game for IIoT and edge analytics applications with AWS Snowball Edge plus enhanced Compute instances.
June 28, 2018 | 11:00 AM – 11:45 AM PT – Big Data and Analytics Workloads on Amazon EFS – Get best practices and deployment advice for running big data and analytics workloads on Amazon EFS.

Build your own weather station with our new guide!

Post Syndicated from Richard Hayler original https://www.raspberrypi.org/blog/build-your-own-weather-station/

One of the most common enquiries I receive at Pi Towers is “How can I get my hands on a Raspberry Pi Oracle Weather Station?” Now the answer is: “Why not build your own version using our guide?”

Build Your Own weather station kit assembled

Tadaaaa! The BYO weather station fully assembled.

Our Oracle Weather Station

In 2016 we sent out nearly 1000 Raspberry Pi Oracle Weather Station kits to schools around the world that had applied to be part of our weather station programme. The original kit included a special HAT that allows the Pi to collect weather data with a set of sensors.

The original Raspberry Pi Oracle Weather Station HAT

We designed the HAT to enable students to create their own weather stations and mount them at their schools. As part of the programme, we also provide an ever-growing range of supporting resources. We’ve seen Oracle Weather Stations in great locations with huge differences in climate, and they’ve even recorded the effects of a solar eclipse.

Our new BYO weather station guide

We only had a single batch of HATs made, and unfortunately we’ve given nearly* all the Weather Station kits away. Not only are the kits really popular, we also receive lots of questions about how to add extra sensors or how to take more precise measurements of a particular weather phenomenon. So today, to satisfy your demand for a hackable weather station, we’re launching our Build your own weather station guide!

Build Your Own Raspberry Pi weather station

Fun with meteorological experiments!

Our guide suggests the use of many of the sensors from the Oracle Weather Station kit, so you can build a station that’s as close as possible to the original. As you know, the Raspberry Pi is incredibly versatile, and we’ve made it easy to hack the design in case you want to use different sensors.

Many other tutorials for Pi-powered weather stations don’t explain how the various sensors work or how to store your data. Ours goes into more detail. It shows you how to put together a breadboard prototype, it describes how to write Python code to take readings in different ways, and it guides you through recording these readings in a database.
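To give you a flavour of what the guide covers, here is a minimal sketch of the take-a-reading-and-record-it loop using Python's built-in sqlite3; read_temperature() is a placeholder for whichever sensor library your build uses, and the guide itself walks through the real sensor code:

```python
import sqlite3
import time
import random

def read_temperature() -> float:
    """Placeholder for a real sensor reading (e.g. from a BME280)."""
    return round(random.uniform(15.0, 25.0), 2)

conn = sqlite3.connect("weather.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        taken_at    TEXT,
        temperature REAL
    )
""")

# Take a reading every 60 seconds and store it
while True:
    conn.execute(
        "INSERT INTO readings VALUES (datetime('now'), ?)",
        (read_temperature(),),
    )
    conn.commit()
    time.sleep(60)
```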

Build Your Own Raspberry Pi weather station on a breadboard

There’s also a section on how to make your station weatherproof. And in case you want to move past the breadboard stage, we also help you with that. The guide shows you how to solder together all the components, similar to the original Oracle Weather Station HAT.

Who should try this build

We think this is a great project to tackle at home, at a STEM club, Scout group, or CoderDojo, and we’re sure that many of you will be chomping at the bit to get started. Before you do, please note that we’ve designed the build to be as straightforward as possible, but it’s still fairly advanced both in terms of electronics and programming. You should read through the whole guide before purchasing any components.

Build Your Own Raspberry Pi weather station – components

The sensors and components we’re suggesting balance cost, accuracy, and ease of use. Depending on what you want to use your station for, you may wish to use different components. Similarly, the final soldered design in the guide may not be the most elegant, but we think it is achievable for someone with modest soldering experience and basic equipment.

You can build a functioning weather station without soldering with our guide, but the build will be more durable if you do solder it. If you’ve never tried soldering before, that’s OK: we have a Getting started with soldering resource plus video tutorial that will walk you through how it works step by step.

Prototyping HAT for Raspberry Pi weather station sensors

For those of you who are more experienced makers, there are plenty of different ways to put the final build together. We always like to hear about alternative builds, so please post your designs in the Weather Station forum.

Our plans for the guide

Our next step is publishing supplementary guides for adding extra functionality to your weather station. We’d love to hear which enhancements you would most like to see! Our current ideas under development include adding a webcam, making a tweeting weather station, adding a light/UV meter, and incorporating a lightning sensor. Let us know which of these is your favourite, or suggest your own amazing ideas in the comments!

*We do have a very small number of kits reserved for interesting projects or locations: a particularly cool experiment, a novel idea for how the Oracle Weather Station could be used, or places with specific weather phenomena. If you have such a project in mind, please send a brief outline to [email protected], and we’ll consider how we might be able to help you.

The post Build your own weather station with our new guide! appeared first on Raspberry Pi.

When Joe Public Becomes a Commercial Pirate, a Little Knowledge is Dangerous

Post Syndicated from Andy original https://torrentfreak.com/joe-public-becomes-commercial-pirate-little-knowledge-dangerous-180603/

Back in March and just a few hours before the Anthony Joshua v Joseph Parker fight, I got chatting with some fellow fans in the local pub. While some were intending to pay for the fight, others were going down the Kodi route.

Soon after the conversation switched to IPTV. One of the guys had a subscription and he said that his supplier would be along shortly if anyone wanted a package to watch the fight at home. Of course, I was curious to hear what he had to say since it’s not often this kind of thing is offered ‘offline’.

The guy revealed that he sold more or less exclusively on eBay and called up the page on his phone to show me. The listing made interesting reading.

In common with hundreds of similar IPTV subscription offers easily findable on eBay, the listing offered “All the sports and films you need plus VOD and main UK channels” for the sum of just under £60 per year, which is fairly cheap in the current market. With a non-committal “hmmm” I asked a bit more about the guy’s business and surprisingly he was happy to provide some details.

Like many people offering such packages, the guy was a reseller of someone else’s product. He also insisted that selling access to copyrighted content is OK because it sits in a “gray area”. It’s also easy to keep listings up on eBay, he assured me, as long as a few simple rules are adhered to. Right, this should be interesting.

First of all, sellers shouldn’t be “too obvious” he advised, noting that individual channels or channel lists shouldn’t be listed on the site. Fair enough, but then he said the most important thing of all is to have a disclaimer like his in any listing, written as follows:

“PLEASE NOTE EBAY: THIS IS NOT A DE SCRAMBLER SERVICE, I AM NOT SELLING ANY ILLEGAL CHANNELS OR CHANNEL LISTS NOR DO I REPRESENT ANY MEDIA COMPANY NOR HAVE ACCESS TO ANY OF THEIR CONTENTS. NO TRADEMARK HAS BEEN INFRINGED. DO NOT REMOVE LISTING AS IT IS IN ACCORDANCE WITH EBAY POLICIES.”

Apparently, this paragraph is crucial to keeping listings up on eBay and is the equivalent of kryptonite when it comes to deflecting copyright holders, police, and Trading Standards. Sure enough, a few seconds with Google reveals the same wording on dozens of eBay listings and those offering IPTV subscriptions on external platforms.

It is, of course, absolutely worthless but the IPTV seller insisted otherwise, noting he’d sold “thousands” of subscriptions through eBay without any problems. While a similar logic can be applied to garlic and vampires, a second disclaimer found on many other illicit IPTV subscription listings treads an even more bizarre path.

“THE PRODUCTS OFFERED CAN NOT BE USED TO DESCRAMBLE OR OTHERWISE ENABLE ACCESS TO CABLE OR SATELLITE TELEVISION PROGRAMS THAT BYPASSES PAYMENT TO THE SERVICE PROVIDER. RECEIVING SUBSCRIPTION/BASED TV AIRTIME IS ILLEGAL WITHOUT PAYING FOR IT.”

This disclaimer (which apparently no sellers displaying it have ever read) seems to have been culled from the Zgemma site, which advertises a receiving device that can technically receive pirate IPTV services but wasn’t designed for the purpose. In that context, the disclaimer makes sense, but when applied to dedicated pirate IPTV subscriptions, it’s absolutely ridiculous.

It’s unclear why so many sellers on eBay, Gumtree, Craigslist and other platforms think that these disclaimers are useful. It leads one to the likely conclusion that these aren’t hardcore pirates at all but regular people simply out to make a bit of extra cash who have received bad advice.

What is clear, however, is that selling access to thousands of otherwise subscription channels without permission from copyright owners is definitely illegal in the EU. The European Court of Justice says so (1,2) and it’s been backed up by subsequent cases in the Netherlands.

While the odds of getting criminally prosecuted or sued for reselling such a service are relatively slim, it’s worrying that in 2018 people still believe that doing so is made legal by the inclusion of a paragraph of text. It’s even more worrying that these individuals apparently have no idea of the serious consequences should they become singled out for legal action.

Even more surprisingly, TorrentFreak spoke with a handful of IPTV suppliers higher up the chain who also told us that what they are doing is legal. A couple claimed to be protected by communication intermediary laws, others didn’t want to go into details. Most stopped responding to emails on the topic. Perhaps most tellingly, none wanted to go on the record.

The big take-home here is that following some important EU rulings, knowingly linking to copyrighted content for profit is nearly always illegal in Europe and leaves people open for targeting by copyright holders and the authorities. People really should be aware of that, especially the little guy making a little extra pocket money on eBay.

Of course, people are perfectly entitled to carry on regardless and test the limits of the law when things go wrong. At this point, however, it’s probably worth noting that IPTV provider Ace Hosting recently handed over £600,000 rather than fight the Premier League (1,2) when they clearly had the money to put up a defense.

Given their effectiveness, perhaps they should’ve put up a disclaimer instead?

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

ISP Questions Impartiality of Judges in Copyright Troll Cases

Post Syndicated from Andy original https://torrentfreak.com/isp-questions-impartiality-of-judges-in-copyright-troll-cases-180602/

Following in the footsteps of similar operations around the world, two years ago the copyright trolling movement landed on Swedish shores.

The pattern was a familiar one, with trolls harvesting IP addresses from BitTorrent swarms and tracing them back to Internet service providers. Then, after presenting evidence to a judge, the trolls obtained orders that compelled ISPs to hand over their customers’ details. From there, the trolls demanded cash payments to make supposed lawsuits disappear.

It’s a controversial business model that rarely receives outside praise. Many ISPs have tried to slow down the flood but most eventually grow tired of battling to protect their customers. The same cannot be said of Swedish ISP Bahnhof.

The ISP, which is also a strong defender of privacy, has become known for fighting back against copyright trolls. Indeed, to thwart them at the very first step, the company deletes IP address logs after just 24 hours, which prevents its customers from being targeted.

Bahnhof says that the copyright business appeared “dirty and corrupt” right from the get-go, so it now operates Utpressningskollen.se, a web portal where the ISP publishes data on Swedish legal cases in which copyright owners demand customer data from ISPs through the Patent and Market Courts.

Over the past two years, Bahnhof says it has documented 76 cases, of which six are still ongoing, 11 have been waived, and a majority, 59, have been decided in favor of the applicants, mainly movie companies. Bahnhof says that when it discovered that 59 out of the 76 cases benefited one party, it felt a need to investigate.

In a detailed report compiled by Bahnhof Communicator Carolina Lindahl and sent to TF, the ISP reveals that it examined the individual decision-makers in the cases before the Courts and found five judges with “questionable impartiality.”

“One of the judges, we can call them Judge 1, has closed 12 of the cases, of which two have been waived and the other 10 have benefitted the copyright owner, mostly movie companies,” Lindahl notes.

“Judge 1 apparently has written several articles in the magazine NIR – Nordiskt Immateriellt Rättsskydd (Nordic Intellectual Property Protection) – which is mainly supported by Svenska Föreningen för Upphovsrätt, the Swedish Association for Copyright (SFU).

“SFU is a member-financed group centered around copyright that publishes articles, hands out scholarships, arranges symposiums, etc. On their website they have a public calendar where Judge 1 appears regularly.”

Bahnhof says that the financiers of the SFU are Sveriges Television AB (Sweden’s national public TV broadcaster), Filmproducenternas Rättsförening (a legally-oriented association for filmproducers), BMG Chrysalis Scandinavia (a media giant) and Fackförbundet för Film och Mediabranschen (a union for the movie and media industry).

“This means that Judge 1 is involved in a copyright association sponsored by the film and media industry, while also judging in copyright cases with the film industry as one of the parties,” the ISP says.

Bahnhof also has criticism for Judge 2, who participated as an event speaker for the Swedish Association for Copyright, and Judge 3, who has written for the SFU-supported magazine NIR. According to Lindahl, Judge 4 worked for a bureau that is partly owned by a board member of SFU, who also defended media companies in a “high-profile” Swedish piracy case.

That leaves Judge 5, who handled 10 of the copyright troll cases documented by Bahnhof, waiving one and deciding the remaining nine in favor of a movie company plaintiff.

“Judge 5 has been questioned before and even been accused of bias while judging a high-profile piracy case almost ten years ago. The accusations of bias were motivated by the judge’s membership of SFU and the Swedish Association for Intellectual Property Rights (SFIR), an association with several important individuals of the Swedish copyright community as members, who all defend, represent, or sympathize with the media industry,” Lindahl says.

Bahnhof hasn’t named any of the judges nor has it provided additional details on the “high-profile” case. However, anyone who remembers the infamous trial of ‘The Pirate Bay Four’ a decade ago might recall complaints from the defense (1,2,3) that several judges involved in the case were members of pro-copyright groups.

While there were plenty of calls to consider them biased, in May 2010 the Supreme Court ruled otherwise, a fact Bahnhof recognizes.

“Judge 5 was never sentenced for bias by the court, but regardless of the court’s decision this is still a judge who shares values and has personal connections with [the media industry], and as if that weren’t enough, the judge has induced an additional financial aspect by participating in events paid for by said party,” Lindahl writes.

“The judge has parties and interest holders in their personal network, a private engagement in the subject and a financial connection to one party – textbook characteristics of bias which would make anyone suspicious.”

The decision-makers of the Patent and Market Court and their relations.

The ISP notes that all five judges have connections to the media industry in the cases they judge, which isn’t a great starting point for returning “objective and impartial” results. In its summary, however, the ISP is scathing of the overall system, one in which court cases “almost looked rigged” and appear to be decided in favor of the movie company even before reaching court.

In general, however, Bahnhof says that the processes show a lack of individual attention, such as the court blindly accepting questionable IP address evidence supplied by infamous anti-piracy outfit MaverickEye.

“The court never bothers to control the media company’s only evidence (lists generated by MaverickMonitor, which has proven to be an unreliable software), the court documents contain several typos of varying severity, and the same standard texts are reused in several different cases,” the ISP says.

“The court documents show a lack of care and control, something that can easily be taken advantage of by individuals with shady motives. The findings and discoveries of this investigation are strengthened by the pure numbers mentioned in the beginning which clearly show how one party almost always wins.

“If this is caused by bias, cheating, partiality, bribes, political agenda, conspiracy or pure coincidence we can’t say for sure, but the fact that this process has mainly generated money for the film industry, while citizens have been robbed of their personal integrity and legal certainty, indicates what forces lie behind this machinery,” Bahnhof’s Lindahl concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Amazon SageMaker Updates – Tokyo Region, CloudFormation, Chainer, and GreenGrass ML

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/sagemaker-tokyo-summit-2018/

Today, at the AWS Summit in Tokyo we announced a number of updates and new features for Amazon SageMaker. Starting today, SageMaker is available in Asia Pacific (Tokyo)! SageMaker also now supports CloudFormation. A new machine learning framework, Chainer, is now available in the SageMaker Python SDK, in addition to MXNet and TensorFlow. Finally, support for running Chainer models on several devices was added to AWS Greengrass Machine Learning.

Amazon SageMaker Chainer Estimator


Chainer is a popular, flexible, and intuitive deep learning framework. Chainer networks work on a “Define-by-Run” scheme, where the network topology is defined dynamically via forward computation. This is in contrast to many other frameworks, which work on a “Define-and-Run” scheme where the topology of the network is defined separately from the data. A lot of developers enjoy the Chainer scheme since it allows them to write their networks with native Python constructs and tools.

Luckily, using Chainer with SageMaker is just as easy as using a TensorFlow or MXNet estimator. In fact, it might even be a bit easier, since it’s likely you can take your existing scripts and use them to train on SageMaker with very few modifications. With TensorFlow or MXNet, users have to implement a train function with a particular signature. With Chainer, your scripts can be a little more portable, since you can simply read from a few environment variables like SM_MODEL_DIR, SM_NUM_GPUS, and others. We can wrap our existing script in an if __name__ == '__main__': guard and invoke it locally or on SageMaker.


import argparse
import os

if __name__ == '__main__':

    parser = argparse.ArgumentParser()

    # hyperparameters sent by the client are passed as command-line arguments to the script.
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--batch-size', type=int, default=64)
    parser.add_argument('--learning-rate', type=float, default=0.05)

    # Data, model, and output directories
    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
    parser.add_argument('--test', type=str, default=os.environ['SM_CHANNEL_TEST'])

    args, _ = parser.parse_known_args()

    # ... load from args.train and args.test, train a model, write model to args.model_dir.

Then, we can run that script locally or use the SageMaker Python SDK to launch it on some GPU instances in SageMaker. The hyperparameters will get passed to the script as command-line arguments, and the environment variables above will be auto-populated. When we call fit, the input channels we pass will be exposed to the script through the SM_CHANNEL_* environment variables.


from sagemaker.chainer.estimator import Chainer
# Create my estimator
chainer_estimator = Chainer(
    entry_point='example.py',
    train_instance_count=1,
    train_instance_type='ml.p3.2xlarge',
    hyperparameters={'epochs': 10, 'batch-size': 64}
)
# Train my estimator
chainer_estimator.fit({'train': train_input, 'test': test_input})

# Deploy my estimator to a SageMaker Endpoint and get a Predictor
predictor = chainer_estimator.deploy(
    instance_type="ml.m4.xlarge",
    initial_instance_count=1
)

Now, instead of bringing your own Docker container for training and hosting with Chainer, you can just maintain your script. You can see the full sagemaker-chainer-containers on GitHub. One of my favorite features of the new container is built-in ChainerMN for easy multi-node distribution of your Chainer training jobs.
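If you do want to try multi-node training, a minimal sketch might look like the following. It only reuses the estimator parameters already shown above; the script name is a placeholder, and any ChainerMN-specific setup it would need internally is an assumption on my part rather than part of the official example.

from sagemaker.chainer.estimator import Chainer

# Hypothetical multi-node job: setting train_instance_count above 1 asks SageMaker
# to provision several instances, and the container's built-in ChainerMN support
# coordinates training across them. 'distributed_example.py' is a placeholder
# script that would be written to initialize a ChainerMN communicator itself.
distributed_estimator = Chainer(
    entry_point='distributed_example.py',
    train_instance_count=2,
    train_instance_type='ml.p3.2xlarge',
    hyperparameters={'epochs': 10, 'batch-size': 64}
)

distributed_estimator.fit({'train': train_input, 'test': test_input})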

There’s a lot more documentation and information available in both the README and the example notebooks.

AWS GreenGrass ML with Chainer

AWS GreenGrass ML now includes a pre-built Chainer package for all devices powered by Intel Atom, NVIDIA Jetson TX2, and Raspberry Pi. So, now GreenGrass ML provides pre-built packages for TensorFlow, Apache MXNet, and Chainer! You can train your models on SageMaker and then easily deploy them to any GreenGrass-enabled device using GreenGrass ML.

JAWS UG

I want to give a quick shout out to all of our wonderful and inspirational friends in the JAWS UG who attended the AWS Summit in Tokyo today. I’ve very much enjoyed seeing your pictures of the summit. Thanks for making Japan an amazing place for AWS developers! I can’t wait to visit again and meet with all of you.

Randall

timeShift(GrafanaBuzz, 1w) Issue 47

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/06/01/timeshiftgrafanabuzz-1w-issue-47/

Welcome to TimeShift

We cover a lot of ground this week with posts on general monitoring principles, home automation, how CERN uses open source projects in their particle acceleration work, and more. Have an article you’d like highlighted here? Get in touch.

We’re excited to be a sponsor of Monitorama PDX June 4-6. If you’re going, please be sure to say hello!

Latest Release: Grafana 5.1.3

This latest point release fixes a scrolling issue that was reported in Firefox.

New – Pay-per-Session Pricing for Amazon QuickSight, Another Region, and Lots More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-pay-per-session-pricing-for-amazon-quicksight-another-region-and-lots-more/

Amazon QuickSight is a fully managed cloud business intelligence system that gives you Fast & Easy to Use Business Analytics for Big Data. QuickSight makes business analytics available to organizations of all shapes and sizes, with the ability to access data that is stored in your Amazon Redshift data warehouse, your Amazon Relational Database Service (RDS) relational databases, flat files in S3, and (via connectors) data stored in on-premises MySQL, PostgreSQL, and SQL Server databases. QuickSight scales to accommodate tens, hundreds, or thousands of users per organization.

Today we are launching a new, session-based pricing option for QuickSight, along with additional region support and other important new features. Let’s take a look at each one:

Pay-per-Session Pricing
Our customers are making great use of QuickSight and take full advantage of the power it gives them to connect to data sources, create reports, and explore visualizations.

However, not everyone in an organization needs or wants such powerful authoring capabilities. Having access to curated data in dashboards and being able to interact with the data by drilling down, filtering, or slicing-and-dicing is more than adequate for their needs. Subscribing them to a monthly or annual plan can be seen as an unwarranted expense, so a lot of such casual users end up not having access to interactive data or BI.

In order to allow customers to provide all of their users with interactive dashboards and reports, the Enterprise Edition of Amazon QuickSight now allows Reader access to dashboards on a Pay-per-Session basis. QuickSight users are now classified as Admins, Authors, or Readers, with distinct capabilities and prices:

Authors have access to the full power of QuickSight; they can establish database connections, upload new data, create ad hoc visualizations, and publish dashboards, all for $9 per month (Standard Edition) or $18 per month (Enterprise Edition).

Readers can view dashboards, slice and dice data using drill downs, filters and on-screen controls, and download data in CSV format, all within the secure QuickSight environment. Readers pay $0.30 for 30 minutes of access, with a monthly maximum of $5 per reader.

Admins have all authoring capabilities, and can manage users and purchase SPICE capacity in the account. The QuickSight admin now has the ability to set the desired option (Author or Reader) when they invite members of their organization to use QuickSight. They can extend Reader invites to their entire user base without incurring any up-front or monthly costs, paying only for the actual usage.

To learn more, visit the QuickSight Pricing page.

A New Region
QuickSight is now available in the Asia Pacific (Tokyo) Region. The UI is in English, with a localized version in the works.

Hourly Data Refresh
Enterprise Edition SPICE data sets can now be set to refresh as frequently as every hour. In the past, each data set could be refreshed up to 5 times a day. To learn more, read Refreshing Imported Data.

Access to Data in Private VPCs
This feature was launched in preview form late last year, and is now available in production form to users of the Enterprise Edition. As I noted at the time, you can use it to implement secure, private communication with data sources that do not have public connectivity, including on-premises data in Teradata or SQL Server, accessed over an AWS Direct Connect link. To learn more, read Working with AWS VPC.

Parameters with On-Screen Controls
QuickSight dashboards can now include parameters that are set using on-screen dropdown, text box, numeric slider or date picker controls. The default value for each parameter can be set based on the user name (QuickSight calls this a dynamic default). You could, for example, set an appropriate default based on each user’s office location, department, or sales territory.

To learn more, read about Parameters in QuickSight.

URL Actions for Linked Dashboards
You can now connect your QuickSight dashboards to external applications by defining URL actions on visuals. The actions can include parameters, and become available in the Details menu for the visual.

You can use this feature to link QuickSight dashboards to third party applications (e.g. Salesforce) or to your own internal applications. Read Custom URL Actions to learn how to use this feature.

Dashboard Sharing
You can now share QuickSight dashboards with every user in an account.

Larger SPICE Tables
The per-data set limit for SPICE tables has been raised from 10 GB to 25 GB.

Upgrade to Enterprise Edition
The QuickSight administrator can now upgrade an account from Standard Edition to Enterprise Edition with a click. This enables provisioning of Readers with pay-per-session pricing, private VPC access, row-level security for dashboards and data sets, and hourly refresh of data sets. Enterprise Edition pricing applies after the upgrade.

Available Now
Everything I listed above is available now and you can start using it today!

You can try QuickSight for 60 days at no charge, and you can also attend our June 20th Webinar.

Jeff;

 

Majority of Canadians Consume Online Content Legally, Survey Finds

Post Syndicated from Andy original https://torrentfreak.com/majority-of-canadians-consume-online-content-legally-survey-finds-180531/

Back in January, a coalition of companies and organizations with ties to the entertainment industries called on local telecoms regulator CRTC to implement a national website blocking regime.

Under the banner of Fairplay Canada, members including Bell, Cineplex, Directors Guild of Canada, Maple Leaf Sports and Entertainment, Movie Theatre Association of Canada, and Rogers Media, spoke of an industry under threat from marauding pirates. But just how serious is this threat?

The results of a new survey commissioned by Innovation Science and Economic Development Canada (ISED) in collaboration with the Department of Canadian Heritage (PCH) aims to shine light on the problem by revealing the online content consumption habits of citizens in the Great White North.

While there are interesting findings for those on both sides of the site-blocking debate, the situation seems somewhat removed from the Armageddon scenario predicted by the entertainment industries.

Carried out among 3,301 Canadians aged 12 years and over, the Kantar TNS study aims to cover copyright infringement in six key content areas – music, movies, TV shows, video games, computer software, and eBooks. Attitudes and behaviors are also touched upon while measuring the effectiveness of Canada’s copyright measures.

General Digital Content Consumption

In its introduction, the report notes that 28 million Canadians used the Internet in the three-month study period to November 27, 2017. Of those, 22 million (80%) consumed digital content. Around 20 million (73%) streamed or accessed content, 16 million (59%) downloaded content, while 8 million (28%) shared content.

Music, TV shows and movies all battled for first place in the consumption ranks, with 48%, 48%, and 46% respectively.

Copyright Infringement

According to the study, the majority of Canadians do things completely by the book. An impressive 74% of media-consuming respondents said that they’d only accessed material from legal sources in the preceding three months.

The remaining 26% admitted to accessing at least one illegal file in the same period. Of those, just 5% said that all of their consumption was from illegal sources, with movies (36%), software (36%), TV shows (34%) and video games (33%) the most likely content to be consumed illegally.

Interestingly, the study found that few demographic factors – such as gender, region, rural and urban, income, employment status and language – play a role in illegal content consumption.

“We found that only age and income varied significantly between consumers who infringed by downloading or streaming/accessing content online illegally and consumers who did not consume infringing content online,” the report reads.

“More specifically, the profile of consumers who downloaded or streamed/accessed infringing content skewed slightly younger and towards individuals with household incomes of $100K+.”

Licensed services much more popular than pirate haunts

It will come as no surprise that Netflix was the most popular service with consumers, with 64% having used it in the past three months. Sites like YouTube and Facebook were a big hit too, visited by 36% and 28% of content consumers respectively.

Overall, 74% of online content consumers use licensed services for content while 42% use social networks. Under a third (31%) use a combination of peer-to-peer (BitTorrent), cyberlocker platforms, or linking sites. Stream-ripping services are used by 9% of content consumers.

“Consumers who reported downloading or streaming/accessing infringing content only are less likely to use licensed services and more likely to use peer-to-peer/cyberlocker/linking sites than other consumers of online content,” the report notes.

Attitudes towards legal consumption & infringing content

In common with similar surveys over the years, the Kantar research looked at the reasons why people consume content from various sources, both legal and otherwise.

Convenience (48%), speed (36%) and quality (34%) were the most-cited reasons for using legal sources. An interesting 33% of respondents said they use legal sites to avoid using illegal sources.

On the illicit front, 54% of those who obtained unauthorized content in the previous three months said they did so due to it being free, with 40% citing convenience and 34% mentioning speed.

Almost six out of ten (58%) said lower costs would encourage them to switch to official sources, with 47% saying they’d move if legal availability was improved.

Canada’s ‘Notice-and-Notice’ warning system

People in Canada who share content on peer-to-peer systems like BitTorrent without permission run the risk of receiving an infringement notice warning them to stop. These are sent by copyright holders via users’ ISPs and the hope is that the shock of receiving a warning will turn consumers back to the straight and narrow.

The study reveals that 10% of online content consumers over the age of 12 have received one of these notices but what kind of effect have they had?

“Respondents reported that receiving such a notice resulted in the following: increased awareness of copyright infringement (38%), taking steps to ensure password protected home networks (27%), a household discussion about copyright infringement (27%), and discontinuing illegal downloading or streaming (24%),” the report notes.

While these are all positives for the entertainment industries, Kantar reports that almost a quarter (24%) of people who receive a notice simply ignore them.

Stream-ripping

Once upon a time, people obtaining music via P2P networks was cited as the music industry’s greatest threat but, with the advent of sites like YouTube, so-called stream-ripping is the latest bogeyman.

According to the study, 11% of Internet users say they’ve used a stream-ripping service. They are most likely to be male (62%) and predominantly aged 18 to 34 (52%).

“Among Canadians who have used a service to stream-rip music or entertainment, nearly half (48%) have used stream-ripping sites, one-third have used downloader apps (38%), one-in-seven (14%) have used a stream-ripping plug-in, and one-in-ten (10%) have used stream-ripping software,” the report adds.

Set-Top Boxes and VPNs

Few general piracy studies would be complete in 2018 without touching on set-top devices and Virtual Private Networks and this report doesn’t disappoint.

More than one in five (21%) respondents aged 12+ reported using a VPN, with the main purpose of securing communications and Internet browsing (57%).

A relatively modest 36% said they use a VPN to access free content while 32% said the aim was to access geo-blocked content unavailable in Canada. Just over a quarter (27%) said that accessing content from overseas at a reasonable price was the main motivator.

One in ten (10%) of respondents reported using a set-top box, with 78% stating they use them to access paid-for content. Interestingly, only a small number say they use the devices to infringe.

“A minority use set-top boxes to access other content that is not legal or they are unsure if it is legal (16%), or to access live sports that are not legal or they are unsure if it is legal (11%),” the report notes.

“Individuals who consumed a mix of legal and illegal content online are more likely to use VPN services (42%) or TV set-top boxes (21%) than consumers who only downloaded or streamed/accessed legal content.”

Kantar says that the findings of the report will be used to help policymakers evaluate how Canada’s Copyright Act is coping with a changing market and technological developments.

“This research will provide the necessary information required to further develop copyright policy in Canada, as well as to provide a foundation to assess the effectiveness of the measures to address copyright infringement, should future analysis be undertaken,” it concludes.

The full report can be found here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Amazon Neptune Generally Available

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/amazon-neptune-generally-available/

Amazon Neptune is now Generally Available in US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland). Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets. At the core of Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latencies. Neptune supports two popular graph models, Property Graph and RDF, through Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries that efficiently navigate highly connected datasets. Neptune can be used to power everything from recommendation engines and knowledge graphs to drug discovery and network security. Neptune is fully-managed with automatic minor version upgrades, backups, encryption, and fail-over. I wrote about Neptune in detail for AWS re:Invent last year and customers have been using the preview and providing great feedback that the team has used to prepare the service for GA.
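To give a feel for what working with Neptune looks like from code, here is a minimal Gremlin sketch using the open-source gremlinpython client. The cluster endpoint, labels, and property names are placeholders, and in practice you would run this from inside the cluster’s VPC.

from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Hypothetical cluster endpoint; Neptune serves Gremlin over WebSockets on port 8182.
endpoint = 'wss://my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin'

connection = DriverRemoteConnection(endpoint, 'g')
g = Graph().traversal().withRemote(connection)

# Add a vertex, then read it back; the 'person' label and 'name' property are illustrative.
g.addV('person').property('name', 'randall').iterate()
print(g.V().hasLabel('person').values('name').toList())

connection.close()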

Now that Amazon Neptune is generally available there are a few changes from the preview:

Launching an Amazon Neptune Cluster

Launching a Neptune cluster is as easy as navigating to the AWS Management Console and clicking create cluster. Of course you can also launch with CloudFormation, the CLI, or the SDKs.
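If you would rather script the launch than click through the console, a rough boto3 sketch might look like this. The identifiers, region, and instance class are assumptions, and a real deployment would also specify subnet groups and security groups.

import boto3

# Neptune exposes an RDS-style control-plane API: create the cluster, then add an instance to it.
neptune = boto3.client('neptune', region_name='us-east-1')

neptune.create_db_cluster(
    DBClusterIdentifier='my-neptune-cluster',   # placeholder name
    Engine='neptune'
)

neptune.create_db_instance(
    DBInstanceIdentifier='my-neptune-instance-1',
    DBInstanceClass='db.r4.large',              # example instance class, assumed here
    Engine='neptune',
    DBClusterIdentifier='my-neptune-cluster'
)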

You can monitor your cluster health and the health of individual instances through Amazon CloudWatch and the console.

Additional Resources

We’ve created two repos with some additional tools and examples here. You can expect continuous development on these repos as we add additional tools and examples.

  • Amazon Neptune Tools Repo
    This repo has a useful tool for converting GraphML files into Neptune compatible CSVs for bulk loading from S3.
  • Amazon Neptune Samples Repo
    This repo has a really cool example of building a collaborative filtering recommendation engine for video game preferences.

Purpose Built Databases

There’s an industry trend where we’re moving more and more onto purpose-built databases. Developers and businesses want to access their data in the format that makes the most sense for their applications. As cloud resources make transforming large datasets easier with tools like AWS Glue, we have a lot more options than we used to for accessing our data. With tools like Amazon Redshift, Amazon Athena, Amazon Aurora, Amazon DynamoDB, and more we get to choose the best database for the job or even enable entirely new use-cases. Amazon Neptune is perfect for workloads where the data is highly connected across data rich edges.

I’m really excited about graph databases and I see a huge number of applications. Looking for ideas of cool things to build? I’d love to build a web crawler in AWS Lambda that uses Neptune as the backing store. You could further enrich it by running Amazon Comprehend or Amazon Rekognition on the text and images found and creating a search engine on top of Neptune.

As always, feel free to reach out in the comments or on Twitter to provide any feedback!

Randall

Monitoring your Amazon SNS message filtering activity with Amazon CloudWatch

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/monitoring-your-amazon-sns-message-filtering-activity-with-amazon-cloudwatch/

This post is courtesy of Otavio Ferreira, Manager, Amazon SNS, AWS Messaging.

Amazon SNS message filtering provides a set of string and numeric matching operators that allow each subscription to receive only the messages of interest. Hence, SNS message filtering can simplify your pub/sub messaging architecture by offloading the message filtering logic from your subscriber systems, as well as the message routing logic from your publisher systems.

After you set the subscription attribute that defines a filter policy, the subscribing endpoint receives only the messages that carry attributes matching this filter policy. Other messages published to the topic are filtered out for this subscription. In this way, the native integration between SNS and Amazon CloudWatch provides visibility into the number of messages delivered, as well as the number of messages filtered out.

CloudWatch metrics are captured automatically for you. To get started with SNS message filtering, see Filtering Messages with Amazon SNS.
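As a concrete illustration of the setup described above, here is a minimal boto3 sketch that attaches a filter policy to a subscription and publishes a matching message. The topic and subscription ARNs, attribute name, and values are placeholders.

import json
import boto3

sns = boto3.client('sns')

# Placeholder ARNs for an existing topic and subscription.
topic_arn = 'arn:aws:sns:us-east-1:123456789012:orders'
subscription_arn = topic_arn + ':example-subscription-id'

# Only deliver messages whose 'event_type' attribute is one of these values.
sns.set_subscription_attributes(
    SubscriptionArn=subscription_arn,
    AttributeName='FilterPolicy',
    AttributeValue=json.dumps({'event_type': ['order_placed', 'order_cancelled']})
)

# This message matches the policy and is delivered; a message without a matching
# 'event_type' attribute would be filtered out for this subscription.
sns.publish(
    TopicArn=topic_arn,
    Message='{"order_id": "1234"}',
    MessageAttributes={
        'event_type': {'DataType': 'String', 'StringValue': 'order_placed'}
    }
)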

Message Filtering Metrics

The following six CloudWatch metrics are relevant to understanding your SNS message filtering activity:

  • NumberOfMessagesPublished – Inbound traffic to SNS. This metric tracks all the messages that have been published to the topic.
  • NumberOfNotificationsDelivered – Outbound traffic from SNS. This metric tracks all the messages that have been successfully delivered to endpoints subscribed to the topic. A delivery takes place either when the incoming message attributes match a subscription filter policy, or when the subscription has no filter policy at all, which results in a catch-all behavior.
  • NumberOfNotificationsFilteredOut – This metric tracks all the messages that were filtered out because they carried attributes that didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-NoMessageAttributes – This metric tracks all the messages that were filtered out because they didn’t carry any attributes at all and, consequently, didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-InvalidAttributes – This metric keeps track of messages that were filtered out because they carried invalid or malformed attributes and, thus, didn’t match the subscription filter policy.
  • NumberOfNotificationsFailed – This last metric tracks all the messages that failed to be delivered to subscribing endpoints, regardless of whether a filter policy had been set for the endpoint. This metric is emitted after the message delivery retry policy is exhausted, and SNS stops attempting to deliver the message. At that moment, the subscribing endpoint is likely no longer reachable. For example, the subscribing SQS queue or Lambda function has been deleted by its owner. You may want to closely monitor this metric to address message delivery issues quickly.

Message filtering graphs

Through the AWS Management Console, you can compose graphs to display your SNS message filtering activity. The graph shows the number of messages published, delivered, and filtered out within the timeframe you specify (1h, 3h, 12h, 1d, 3d, 1w, or custom).

SNS message filtering for CloudWatch Metrics

To compose an SNS message filtering graph with CloudWatch:

  1. Open the CloudWatch console.
  2. Choose Metrics, SNS, All Metrics, and Topic Metrics.
  3. Select all metrics to add to the graph, such as:
    • NumberOfMessagesPublished
    • NumberOfNotificationsDelivered
    • NumberOfNotificationsFilteredOut
  4. Choose Graphed metrics.
  5. In the Statistic column, switch from Average to Sum.
  6. Title your graph with a descriptive name, such as “SNS Message Filtering”

After you have your graph set up, you may want to copy the graph link for bookmarking, emailing, or sharing with co-workers. You may also want to add your graph to a CloudWatch dashboard for easy access in the future. Both actions are available to you on the Actions menu, which is found above the graph.
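If you prefer to pull the same numbers programmatically rather than through the console steps above, a short boto3 sketch might look like this; the topic name and time window are placeholders.

import datetime
import boto3

cloudwatch = boto3.client('cloudwatch')

# Hourly sums of filtered-out notifications for a hypothetical topic over the last day.
response = cloudwatch.get_metric_statistics(
    Namespace='AWS/SNS',
    MetricName='NumberOfNotificationsFilteredOut',
    Dimensions=[{'Name': 'TopicName', 'Value': 'my-example-topic'}],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(days=1),
    EndTime=datetime.datetime.utcnow(),
    Period=3600,
    Statistics=['Sum']
)

for point in sorted(response['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Sum'])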

Summary

SNS message filtering defines how SNS topics behave in terms of message delivery. By using CloudWatch metrics, you gain visibility into the number of messages published, delivered, and filtered out. This enables you to validate the operation of filter policies and more easily troubleshoot during development phases.

SNS message filtering can be implemented easily with existing AWS SDKs by applying message and subscription attributes across all SNS supported protocols (Amazon SQS, AWS Lambda, HTTP, SMS, email, and mobile push). CloudWatch metrics for SNS message filtering are available now, in all AWS Regions.

For information about pricing, see the CloudWatch pricing page.


FCC Asks Amazon & eBay to Help Eliminate Pirate Media Box Sales

Post Syndicated from Andy original https://torrentfreak.com/fcc-asks-amazon-ebay-to-help-eliminate-pirate-media-box-sales-180530/

Over the past several years, anyone looking for a piracy-configured set-top box could do worse than search for one on Amazon or eBay.

Historically, people deploying search terms including “Kodi” or “fully-loaded” were greeted by page after page of Android-type boxes, each ready for illicit plug-and-play entertainment consumption following delivery.

Although the problem persists on both platforms, people are now much less likely to find infringing devices than they were 12 to 24 months ago. Under pressure from entertainment industry groups, both Amazon and eBay have tightened the screws on sellers of such devices. Now, however, both companies have received requests to stem sales from a completely different direction.

In a letter to eBay CEO Devin Wenig and Amazon CEO Jeff Bezos first spotted by Ars, FCC Commissioner Michael O’Rielly calls on the platforms to take action against piracy-configured boxes that fail to comply with FCC equipment authorization requirements or falsely display FCC logos, contrary to United States law.

“Disturbingly, some rogue set-top box manufacturers and distributors are exploiting the FCC’s trusted logo by fraudulently placing it on devices that have not been approved via the Commission’s equipment authorization process,” O’Rielly’s letter reads.

“Specifically, nine set-top box distributors were referred to the FCC in October for enabling the unlawful streaming of copyrighted material, seven of which displayed the FCC logo, although there was no record of such compliance.”

While O’Rielly admits that the copyright infringement aspects fall outside the jurisdiction of the FCC, he says it’s troubling that many of these devices are used to stream infringing content, “exacerbating the theft of billions of dollars in American innovation and creativity.”

As noted above, both Amazon and eBay have taken steps to reduce sales of pirate boxes on their respective platforms on copyright infringement grounds, something which is duly noted by O’Rielly. However, he points out that devices continue to be sold to members of the public who may believe that the devices are legal since they’re available for sale from legitimate companies.

“For these reasons, I am seeking your further cooperation in assisting the FCC in taking steps to eliminate the non-FCC compliant devices or devices that fraudulently bear the FCC logo,” the Commissioner writes (pdf).

“Moreover, if your company is made aware by the Commission, with supporting evidence, that a particular device is using a fraudulent FCC label or has not been appropriately certified and labeled with a valid FCC logo, I respectfully request that you commit to swiftly removing these products from your sites.”

In the event that Amazon and eBay take action under this request, O’Rielly asks both platforms to hand over information they hold on offending manufacturers, distributors, and suppliers.

Amazon was quick to respond to the FCC. In a letter published by Ars, Amazon’s Public Policy Vice President Brian Huseman assured O’Rielly that the company is not only dedicated to tackling rogue devices on copyright-infringement grounds but also when there is fraudulent use of the FCC’s logos.

Noting that Amazon is a key member of the Alliance for Creativity and Entertainment (ACE) – a group that has been taking legal action against sellers of infringing streaming devices (ISDs) and those who make infringing addons for Kodi-type systems – Huseman says that dealing with the problem is a top priority.

“Our goal is to prevent the sale of ISDs anywhere, as we seek to protect our customers from the risks posed by these devices, in addition to our interest in protecting Amazon Studios content,” Huseman writes.

“In 2017, Amazon became the first online marketplace to prohibit the sale of streaming media players that promote or facilitate piracy. To prevent the sale of these devices, we proactively scan product listings for signs of potentially infringing products, and we also invest heavily in sophisticated, automated real-time tools to review a variety of data sources and signals to identify inauthentic goods.

“These automated tools are supplemented by human reviewers that conduct manual investigations. When we suspect infringement, we take immediate action to remove suspected listings, and we also take enforcement action against sellers’ entire accounts when appropriate.”

Huseman also reveals that since implementing a proactive policy against such devices, “tens of thousands” of listings have been blocked from Amazon. In addition, the platform has been making criminal referrals to law enforcement as well as taking civil action (1,2,3) as part of ACE.

“As noted in your letter, we would also appreciate the opportunity to collaborate further with the FCC to remove non-compliant devices that improperly use the FCC logo or falsely claim FCC certification. If any FCC non-compliant devices are identified, we seek to work with you to ensure they are not offered for sale,” Huseman concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

A set of Git security releases

Post Syndicated from corbet original https://lwn.net/Articles/755935/rss

Git versions v2.17.1, v2.13.7, v2.14.4, v2.15.2 and v2.16.4 have all been released with fixes to a couple of security issues. The nastier of the two (CVE-2018-11235) enables arbitrary code execution controlled by a hostile repository. See this Microsoft blog entry for more details — after updating.

Hong Kong Customs Arrest Pirate Streaming Device Vendors

Post Syndicated from Andy original https://torrentfreak.com/hong-kong-customs-arrest-pirate-streaming-device-vendors-180529/

As Internet-capable set-top boxes pour into homes across all populated continents, authorities seem almost powerless to come up with a significant response to the growing threat.

In standard form these devices, which are often Android-based, are entirely legal. However, when configured with specialist software they become piracy powerhouses providing access to all content imaginable, often at copyright holders’ expense.

A large proportion of these devices come from Asia, China in particular, but it’s relatively rare to hear of enforcement action in that part of the world. That changed this week with an announcement from Hong Kong customs detailing a series of raids in the areas of Sham Shui Po and Wan Chai.

After conducting an in-depth investigation with the assistance of copyright holders, on May 25 and 26 Customs and Excise officers launched Operation Trojan Horse, carrying out a series of raids on four premises selling suspected piracy-configured set-top boxes.

During the operation, officers arrested seven men and one woman aged between 18 and 45. Four of them were shop owners and the other four were salespeople. Around 354 suspected ‘pirate’ boxes were seized with an estimated market value of HK$320,000 (US$40,700).

“In the past few months, the department has stepped up inspections of hotspots for TV set-top boxes,” a statement from authorities reads.

“We have discovered that some shops have sold suspected illegal set-top boxes that bypass the copyright protection measures imposed by copyright holders of pay television programs allowing people to watch pay television programs for free.”

Some of the devices seized by Hong Kong Customs

During a press conference yesterday, a representative from the Customs Copyright and Trademark Investigations (Action) Division said that in the run-up to the 2018 World Cup, measures against copyright infringement will be strengthened both online and offline.

The announcement was welcomed by the Cable and Satellite Broadcasting Association of Asia’s (CASBAA) Coalition Against Piracy, which is backed by industry heavyweights including Disney, Fox, HBO Asia, NBCUniversal, Premier League, Turner Asia-Pacific, A&E Networks, Astro, BBC Worldwide, National Basketball Association, TV5MONDE, Viacom International, and others.

“We commend the great work of Hong Kong Customs in clamping down on syndicates who profit from the sale of Illicit Streaming Devices,” said General Manager Neil Gane.

“The prevalence of ISDs in Hong Kong and across South East Asia is staggering. The criminals who sell ISDs, as well as those who operate the ISD networks and pirate websites, are profiting from the hard work of talented creators, seriously damaging the legitimate content ecosystem as well as exposing consumers to dangerous malware.”

Malware warnings are very prevalent these days but it’s not something the majority of set-top box owners have a problem with. Indeed, a study carried out by Sycamore Research found that pirates aren’t easily deterred by such warnings.

Nevertheless, there are definite risks for individuals selling devices when they’re configured for piracy.

Recent cases, particularly in the UK, have shown that hefty jail sentences can hit offenders while over in the United States (1,2,3), lawsuits filed by the Alliance for Creativity and Entertainment (ACE) have the potential to end in unfavorable rulings for multiple defendants.

Although rarely reported, offenders in Hong Kong also face stiff sentences for this kind of infringement including large fines and custodial sentences of up to four years.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Getting Rid of Your Mac? Here’s How to Securely Erase a Hard Drive or SSD

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/how-to-wipe-a-mac-hard-drive/

erasing a hard drive and a solid state drive

What do I do with a Mac that still has personal data on it? Do I take out the disk drive and smash it? Do I sweep it with a really strong magnet? Is there a difference in how I handle a hard drive (HDD) versus a solid-state drive (SSD)? Well, taking a sledgehammer or projectile weapon to your old machine is certainly one way to make the data irretrievable, and it can be enormously cathartic as long as you follow appropriate safety and disposal protocols. But there are far less destructive ways to make sure your data is gone for good. Let me introduce you to secure erasing.

Which Type of Drive Do You Have?

Before we start, you need to know whether you have an HDD or an SSD. To find out, or at least to make sure, click on the Apple menu and select “About This Mac.” Once there, select the “Storage” tab to see which type of drive is in your system.

The first example, below, shows a SATA Disk (HDD) in the system.

SATA HDD

In the next case, we see we have a Solid State SATA Drive (SSD), plus a Mac SuperDrive.

Mac storage dialog showing SSD

The third screen shot shows an SSD, as well. In this case it’s called “Flash Storage.”

Flash Storage

Make Sure You Have a Backup

Before you get started, you’ll want to make sure that any important data on your hard drive has moved somewhere else. OS X’s built-in Time Machine backup software is a good start, especially when paired with Backblaze. You can learn more about using Time Machine in our Mac Backup Guide.

With a local backup copy in hand and secure cloud storage, you know your data is always safe no matter what happens.

Once you’ve verified your data is backed up, roll up your sleeves and get to work. The key is OS X Recovery — a special part of the Mac operating system since OS X 10.7 “Lion.”

How to Wipe a Mac Hard Disk Drive (HDD)

NOTE: If you’re interested in wiping an SSD, see below.

    1. Make sure your Mac is turned off.
    2. Press the power button.
    3. Immediately hold down the command and R keys.
    4. Wait until the Apple logo appears.
    5. Select “Disk Utility” from the OS X Utilities list. Click Continue.
    6. Select the disk you’d like to erase by clicking on it in the sidebar.
    7. Click the Erase button.
    8. Click the Security Options button.
    9. The Security Options window includes a slider that enables you to determine how thoroughly you want to erase your hard drive.

There are four notches to that Security Options slider. “Fastest” is quick but insecure — data could potentially be rebuilt using a file recovery app. Moving that slider to the right introduces progressively more secure erasing. Disk Utility’s most secure level erases the information used to access the files on your disk, then writes zeroes across the disk surface seven times to help remove any trace of what was there. This setting conforms to the DoD 5220.22-M specification.

  1. Once you’ve selected the level of secure erasing you’re comfortable with, click the OK button.
  2. Click the Erase button to begin. Bear in mind that the more secure method you select, the longer it will take. The most secure methods can add hours to the process.

Once it’s done, the Mac’s hard drive will be clean as a whistle and ready for its next adventure: a fresh installation of OS X, donation to a relative or a local charity, or a trip to an e-waste facility. Of course you can still drill a hole in your disk or smash it with a sledgehammer if it makes you happy, but now you know how to wipe the data from your old computer with much less ruckus.

The above instructions apply to older Macintoshes with HDDs. What do you do if you have an SSD?

Securely Erasing SSDs, and Why Not To

Most new Macs ship with solid state drives (SSDs). Only the iMac and Mac mini ship with regular hard drives anymore, and even those are available in pure SSD variants if you want.

If your Mac comes equipped with an SSD, Apple’s Disk Utility software won’t actually let you zero the hard drive.

Wait, what?

In a tech note posted to Apple’s own online knowledgebase, Apple explains that you don’t need to securely erase your Mac’s SSD:

With an SSD drive, Secure Erase and Erasing Free Space are not available in Disk Utility. These options are not needed for an SSD drive because a standard erase makes it difficult to recover data from an SSD.

In fact, some folks will tell you not to zero out the data on an SSD, since it can cause wear and tear on the memory cells that, over time, can affect its reliability. I don’t think that’s nearly as big an issue as it used to be — SSD reliability and longevity has improved.

If “Standard Erase” doesn’t quite make you feel comfortable that your data can’t be recovered, there are a couple of options.

FileVault Keeps Your Data Safe

One way to make sure that your SSD’s data remains secure is to use FileVault. FileVault is whole-disk encryption for the Mac. With FileVault engaged, you need a password to access the information on your hard drive. Without it, that data is encrypted.

There’s one potential downside of FileVault — if you lose your password or the encryption key, you’re screwed: You’re not getting your data back any time soon. Based on my experience working at a Mac repair shop, losing a FileVault key happens more frequently than it should.

When you first set up a new Mac, you’re given the option of turning FileVault on. If you don’t do it then, you can turn on FileVault at any time by clicking on your Mac’s System Preferences, clicking on Security & Privacy, and clicking on the FileVault tab. Be warned, however, that the initial encryption process can take hours, as will decryption if you ever need to turn FileVault off.

With FileVault turned on, you can restart your Mac into its Recovery System (by restarting the Mac while holding down the command and R keys) and erase the hard drive using Disk Utility, once you’ve unlocked it (by selecting the disk, clicking the File menu, and clicking Unlock). That deletes the FileVault key, which means any data on the drive is useless.

FileVault doesn’t impact the performance of most modern Macs, though I’d suggest only using it if your Mac has an SSD, not a conventional hard disk drive.

Securely Erasing Free Space on Your SSD

If you don’t want to take Apple’s word for it, if you’re not using FileVault, or if you just want to, there is a way to securely erase free space on your SSD. It’s a little more involved but it works.

Before we get into the nitty-gritty, let me state for the record that this really isn’t necessary to do, which is why Apple’s made it so hard to do. But if you’re set on it, you’ll need to use Apple’s Terminal app. Terminal provides you with command line interface access to the OS X operating system. Terminal lives in the Utilities folder, but you can access Terminal from the Mac’s Recovery System, as well. Once your Mac has booted into the Recovery partition, click the Utilities menu and select Terminal to launch it.

From a Terminal command line, type:

diskutil secureErase freespace VALUE /Volumes/DRIVE

That tells your Mac to securely erase the free space on your SSD. You’ll need to change VALUE to a number between 0 and 4. 0 is a single-pass run of zeroes; 1 is a single-pass run of random numbers; 2 is a 7-pass erase; 3 is a 35-pass erase; and 4 is a 3-pass erase. DRIVE should be changed to the name of your hard drive. To run a 7-pass erase of your SSD drive in “JohnB-Macbook”, you would enter the following:

diskutil secureErase freespace 2 /Volumes/JohnB-Macbook

And remember, if you used a space in the name of your Mac’s hard drive, you need to insert a leading backslash before the space. For example, to run a 35-pass erase on a hard drive called “Macintosh HD” you enter the following:

diskutil secureErase freespace 3 /Volumes/Macintosh\ HD

Something to remember is that the more extensive the erase procedure, the longer it will take.

When Erasing is Not Enough — How to Destroy a Drive

If you absolutely, positively need to be sure that all the data on a drive is irretrievable, see this Scientific American article (with contributions by Gleb Budman, Backblaze CEO), How to Destroy a Hard Drive — Permanently.

The post Getting Rid of Your Mac? Here’s How to Securely Erase a Hard Drive or SSD appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.