Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/long-range_fami.html
Good article on using long-range familial searching — basically, DNA matching of distant relatives — as a police forensics tool.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/mailing_tech_su.html
I understand his frustration, but this is extreme:
When police asked Cryptopay what could have motivated Salonen to send the company a pipe bomb (or, rather, two pipe bombs, which is what investigators found when they picked apart the explosive package), the only thing the company could think of was that it had declined his request for a password change.
In August 2017, Salonen, a customer of Cryptopay, emailed their customer services team to ask for a new password. They refused, given that it was against the company’s privacy policy.
A fair point, as it’s never a good idea to send a new password in an email. A password-reset link is safer all round, although it’s not clear if Cryptopay offered this option to Salonen.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/how_dna_databas.html
If you’re an American of European descent, there’s a 60% chance you can be uniquely identified by public information in DNA databases. This is not information that you have made public; this is information your relatives have made public.
Research paper:
“Identity inference of genomic data using long-range familial searches.”
Abstract: Consumer genomics databases have reached the scale of millions of individuals. Recently, law enforcement authorities have exploited some of these databases to identify suspects via distant familial relatives. Using genomic data of 1.28 million individuals tested with consumer genomics, we investigated the power of this technique. We project that about 60% of the searches for individuals of European-descent will result in a third cousin or closer match, which can allow their identification using demographic identifiers. Moreover, the technique could implicate nearly any US-individual of European-descent in the near future. We demonstrate that the technique can also identify research participants of a public sequencing project. Based on these results, we propose a potential mitigation strategy and policy implications to human subject research.
A good news article.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/friday_squid_bl_627.html
Maybe not DNA, but biological somethings.
“Cause of Cambrian explosion — Terrestrial or Cosmic?”:
Abstract: We review the salient evidence consistent with or predicted by the Hoyle-Wickramasinghe (H-W) thesis of Cometary (Cosmic) Biology. Much of this physical and biological evidence is multifactorial. One particular focus are the recent studies which date the emergence of the complex retroviruses of vertebrate lines at or just before the Cambrian Explosion of ~500 Ma. Such viruses are known to be plausibly associated with major evolutionary genomic processes. We believe this coincidence is not fortuitous but is consistent with a key prediction of H-W theory whereby major extinction-diversification evolutionary boundaries coincide with virus-bearing cometary-bolide bombardment events. A second focus is the remarkable evolution of intelligent complexity (Cephalopods) culminating in the emergence of the Octopus. A third focus concerns the micro-organism fossil evidence contained within meteorites as well as the detection in the upper atmosphere of apparent incoming life-bearing particles from space. In our view the totality of the multifactorial data and critical analyses assembled by Fred Hoyle, Chandra Wickramasinghe and their many colleagues since the 1960s leads to a very plausible conclusion — life may have been seeded here on Earth by life-bearing comets as soon as conditions on Earth allowed it to flourish (about or just before 4.1 Billion years ago); and living organisms such as space-resistant and space-hardy bacteria, viruses, more complex eukaryotic cells, fertilised ova and seeds have been continuously delivered ever since to Earth so being one important driver of further terrestrial evolution which has resulted in considerable genetic diversity and which has led to the emergence of mankind.
This is almost certainly not true.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/kidnapping_frau.html
Fake kidnapping fraud:
“Most commonly we have unsolicited calls to potential victims in Australia, purporting to represent the people in authority in China and suggesting to intending victims here they have been involved in some sort of offence in China or elsewhere, for which they’re being held responsible,” Commander McLean said.
The scammers threaten the students with deportation from Australia or some kind of criminal punishment.
The victims are then coerced into providing their identification details or money to get out of the supposed trouble they’re in.
Commander McLean said there are also cases where the student is told they have to hide in a hotel room, provide compromising photos of themselves and cut off all contact.
This simulates a kidnapping.
“So having tricked the victims in Australia into providing the photographs, and money and documents and other things, they then present the information back to the unknowing families in China to suggest that their children who are abroad are in trouble,” Commander McLean said.
“So quite circular in a sense…very skilled, very cunning.”
Post Syndicated from corbet original https://lwn.net/Articles/755238/rss
During KubeCon + CloudNativeCon Europe 2018, Justin Cormack and Nassim Eddequiouaq presented a proposal to simplify the setting of security parameters for containerized applications.
Containers depend on a large set of intricate security primitives that can have weird interactions. Because they are so hard to use, people often just turn the whole thing off. The goal of the proposal is to make those controls easier to understand and use; it is partly inspired by mobile apps on iOS and Android platforms, an idea that trickled back into Microsoft and Apple desktops. The time seems ripe to improve the field of container security, which is in desperate need of simpler controls.
Post Syndicated from corbet original https://lwn.net/Articles/754443/rss
“Security is hard” is a tautology, especially in the fast-moving world of container orchestration. We have previously covered various aspects of Linux container security through, for example, the Clear Containers implementation or the broader question of Kubernetes and security, but those are mostly concerned with container isolation; they do not address the question of trusting a container’s contents. What is a container running? Who built it and when? Even assuming we have good programmers and solid isolation layers, propagating that good code around a Kubernetes cluster and making strong assertions on the integrity of that supply chain is far from trivial. The 2018 KubeCon + CloudNativeCon Europe event featured some projects that could eventually solve that problem.
Post Syndicated from corbet original https://lwn.net/Articles/754433/rss
At KubeCon + CloudNativeCon Europe 2018, several talks explored the topic of container isolation and security. The last year saw the release of Kata Containers which, combined with the CRI-O project, provided strong isolation guarantees for containers using a hypervisor. During the conference, Google released its own hypervisor called gVisor, adding yet another possible solution for this problem. Those new developments prompted the community to work on integrating the concept of “secure containers” (or “sandboxed containers”) deeper into Kubernetes. This work is now coming to fruition; it prompts us to look again at how Kubernetes tries to keep the bad guys from wreaking havoc once they break into a container.
Post Syndicated from corbet original https://lwn.net/Articles/754153/rss
Technologies like containers, clusters, and Kubernetes offer the prospect of rapidly scaling the available computing resources to match variable demands placed on the system. Actually implementing that scaling can be a challenge, though. During KubeCon + CloudNativeCon Europe 2018, Frederic Branczyk from CoreOS (now part of Red Hat) held a packed session to introduce a standard and officially recommended way to scale workloads automatically in Kubernetes clusters.
Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-fleet-manage-thousands-of-on-demand-and-spot-instances-with-one-request/
EC2 Spot Fleets are really cool. You can launch a fleet of Spot Instances that spans EC2 instance types and Availability Zones without having to write custom code to discover capacity or monitor prices. You can set the target capacity (the size of the fleet) in units that are meaningful to your application and have Spot Fleet create and then maintain the fleet on your behalf. Our customers are creating Spot Fleets of all sizes. For example, one financial service customer runs Monte Carlo simulations across 10 different EC2 instance types. They routinely make requests for hundreds of thousands of vCPUs and count on Spot Fleet to give them access to massive amounts of capacity at the best possible price.
EC2 Fleet
Today we are extending and generalizing the set-it-and-forget-it model that we pioneered in Spot Fleet with EC2 Fleet, a new building block that gives you the ability to create fleets that are composed of a combination of EC2 On-Demand, Reserved, and Spot Instances with a single API call. You tell us what you need, capacity and instance-wise, and we’ll handle all the heavy lifting. We will launch, manage, monitor and scale instances as needed, without the need for scaffolding code.
You can specify the capacity of your fleet in terms of instances, vCPUs, or application-oriented units, and also indicate how much of the capacity should be fulfilled by Spot Instances. The application-oriented units allow you to specify the relative power of each EC2 instance type in a way that directly maps to the needs of your application. The latter two capacity specification options (vCPUs and application-oriented units) are known as weights.
I think you’ll find a number of ways this feature makes managing a fleet of instances easier, and believe that you will also find the team’s near-term feature roadmap of interest (more on that in a bit).
Using EC2 Fleet
There are a number of ways that you can use this feature, whether you’re running a stateless web service, a big data cluster or a continuous integration pipeline. Today I’m going to describe how you can use EC2 Fleet for genomic processing, but this is similar to workloads like risk analysis, log processing or image rendering. Modern DNA sequencers can produce multiple terabytes of raw data each day; to process that data into meaningful information in a timely fashion, you need lots of processing power. I’ll be showing you how to deploy a “grid” of worker nodes that can quickly crunch through secondary analysis tasks in parallel.
Projects in genomics can use the elasticity EC2 provides to experiment and try out new pipelines on hundreds or even thousands of servers. With EC2 you can access as many cores as you need and only pay for what you use. Prior to today, you would need to use the RunInstances
API or an Auto Scaling group for the On-Demand & Reserved Instance portion of your grid. To get the best price performance you’d also create and manage a Spot Fleet or multiple Spot Auto Scaling groups with different instance types if you wanted to add Spot Instances to turbo-boost your secondary analysis. Finally, to automate scaling decisions across multiple APIs and Auto Scaling groups you would need to write Lambda functions that periodically assess your grid’s progress & backlog, as well as current Spot prices – modifying your Auto Scaling Groups and Spot Fleets accordingly.
You can now replace all of this with a single EC2 Fleet, analyzing genomes at scale for as little as $1 per analysis. In my grid, each step in the pipeline requires 1 vCPU and 4 GiB of memory, a perfect match for M4 and M5 instances with 4 GiB of memory per vCPU. I will create a fleet using M4 and M5 instances with weights that correspond to the number of vCPUs on each instance:
This is expressed in a template that looks like this:
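The template itself isn’t reproduced here, so the following is a hedged sketch in EC2 CreateFleet request syntax; the launch template ID is a placeholder, and the weights of 64 and 96 match the vCPU counts of m4.16xlarge and m5.24xlarge:

```json
{
  "LaunchTemplateConfigs": [
    {
      "LaunchTemplateSpecification": {
        "LaunchTemplateId": "lt-0123456789abcdef0",
        "Version": "1"
      },
      "Overrides": [
        { "InstanceType": "m4.16xlarge", "WeightedCapacity": 64 },
        { "InstanceType": "m5.24xlarge", "WeightedCapacity": 96 }
      ]
    }
  ]
}
```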
By default, EC2 Fleet will select the most cost effective combination of instance types and Availability Zones (both specified in the template) using the current prices for the Spot Instances and public prices for the On-Demand Instances (if you specify instances for which you have matching RIs, your discounts will apply). The default mode takes weights into account to get the instances that have the lowest price per unit. So for my grid, fleet will find the instance that offers the lowest price per vCPU.
Now I can request capacity in terms of vCPUs, knowing EC2 Fleet will select the lowest cost option using only the instance types I’ve defined as acceptable. Also, I can specify how many vCPUs I want to launch using On-Demand or Reserved Instance capacity and how many vCPUs should be launched using Spot Instance capacity:
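Again as a sketch, using the CreateFleet field names; the numbers correspond to the request described next:

```json
{
  "TargetCapacitySpecification": {
    "TotalTargetCapacity": 2880,
    "OnDemandTargetCapacity": 960,
    "SpotTargetCapacity": 1920,
    "DefaultTargetCapacityType": "spot"
  }
}
```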
The above means that I want a total of 2880 vCPUs, with 960 vCPUs fulfilled using On-Demand and 1920 using Spot. The On-Demand price per vCPU is lower for m5.24xlarge than the On-Demand price per vCPU for m4.16xlarge, so EC2 Fleet will launch 10 m5.24xlarge instances to fulfill 960 vCPUs. Based on current Spot pricing (again, on a per-vCPU basis), EC2 Fleet will choose to launch 30 m4.16xlarge instances or 20 m5.24xlarges, delivering 1920 vCPUs either way.
Putting it all together, I have a single file (fl1.json) that describes my fleet:
I can launch my fleet with a single command:
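The exact invocation isn’t reproduced here; a sketch using the AWS CLI, which accepts the JSON file directly:

```
$ aws ec2 create-fleet --cli-input-json file://fl1.json
```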
My entire fleet was created within seconds, built using 10 m5.24xlarge On-Demand Instances and 30 m4.16xlarge Spot Instances, since the current Spot price was 1.5¢ per vCPU for m4.16xlarge and 1.6¢ per vCPU for m5.24xlarge.
Now let’s imagine my grid has crunched through its backlog and no longer needs the additional Spot Instances. I can then modify the size of my fleet by changing the target capacity in my fleet specification, like this:
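A sketch of that call; $FLEET_ID stands in for the fleet ID returned when the fleet was created:

```
$ aws ec2 modify-fleet --fleet-id $FLEET_ID \
    --target-capacity-specification TotalTargetCapacity=960
```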
Since 960 was equal to the amount of On-Demand vCPUs I had requested, when I describe my fleet I will see all of my capacity being delivered using On-Demand capacity:
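A sketch of the describe call, with the same placeholder fleet ID:

```
$ aws ec2 describe-fleets --fleet-ids $FLEET_ID
```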
When I no longer need my fleet I can delete it and terminate the instances in it like this:
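A sketch, again with a placeholder fleet ID; the --terminate-instances flag tells EC2 to terminate the fleet’s instances as part of the deletion:

```
$ aws ec2 delete-fleets --fleet-ids $FLEET_ID --terminate-instances
```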
Earlier I described how RI discounts apply when EC2 Fleet launches instances for which you have matching RIs, so you might be wondering how else RI customers benefit from EC2 Fleet. Let’s say that I own regional RIs for M4 instances. In my EC2 Fleet I would remove m5.24xlarge and specify m4.10xlarge and m4.16xlarge. Then when EC2 Fleet creates the grid, it will quickly find M4 capacity across the sizes and AZs I’ve specified, and my RI discounts apply automatically to this usage.
In the Works
We plan to connect EC2 Fleet and EC2 Auto Scaling groups. This will let you create a single fleet that mixes instance types and Spot, Reserved and On-Demand Instances, while also taking advantage of EC2 Auto Scaling features such as health checks and lifecycle hooks. This integration will also bring EC2 Fleet functionality to services such as Amazon ECS, Amazon EKS, and AWS Batch that build on and make use of EC2 Auto Scaling for fleet management.
Available Now
You can create and make use of EC2 Fleets today in all public AWS Regions!
— Jeff;
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/oblivious_dns.html
Interesting idea:
…we present Oblivious DNS (ODNS), which is a new design of the DNS ecosystem that allows current DNS servers to remain unchanged and increases privacy for data in motion and at rest. In the ODNS system, both the client is modified with a local resolver, and there is a new authoritative name server for .odns. To prevent an eavesdropper from learning information, the DNS query must be encrypted; the client generates a request for www.foo.com, generates a session key k, encrypts the requested domain, and appends the TLD domain .odns, resulting in {www.foo.com}k.odns. The client forwards this, with the session key encrypted under the .odns authoritative server’s public key ({k}PK) in the “Additional Information” record of the DNS query to the recursive resolver, which then forwards it to the authoritative name server for .odns. The authoritative server decrypts the session key with his private key, and then subsequently decrypts the requested domain with the session key. The authoritative server then forwards the DNS request to the appropriate name server, acting as a recursive resolver. While the name servers see incoming DNS requests, they do not know which clients they are coming from; additionally, an eavesdropper cannot connect a client with her corresponding DNS queries.
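To make the client-side step concrete, here is a minimal Python sketch of the query construction described above. It is an illustration, not the authors’ code: Fernet stands in for whatever symmetric cipher ODNS actually uses, the base32 label encoding is an assumption (DNS labels must be ASCII), and the asymmetric encryption of the session key is left as a comment.

```python
import base64
from cryptography.fernet import Fernet  # stand-in symmetric cipher; the paper does not mandate one

def make_odns_query(domain: str):
    k = Fernet.generate_key()                        # session key k
    ciphertext = Fernet(k).encrypt(domain.encode())  # {www.foo.com}k
    # DNS labels must be ASCII, so the ciphertext has to be encoded somehow;
    # base32 is an assumption here. Real labels are capped at 63 bytes, so a
    # production client would need to split or shorten this.
    label = base64.b32encode(ciphertext).decode().rstrip("=").lower()
    qname = label + ".odns"                          # {www.foo.com}k.odns
    # The session key itself travels as {k}PK, encrypted under the .odns
    # authoritative server's public key, in the query's Additional Information
    # record; that asymmetric step is elided here.
    return qname, k

qname, k = make_odns_query("www.foo.com")
print(qname)
```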
News article.
Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/backblaze-b2-drops-download-price-in-half/
Backblaze is pleased to announce that, effective immediately, we are reducing the price of Backblaze B2 Cloud Storage downloads by 50%. This means that B2 download pricing drops from $0.02 to $0.01 per GB. As always, the first gigabyte of data downloaded each day remains free.
If some of this sounds familiar, that’s because a little under a year ago, we dropped our download price from $0.05 to $0.02. While that move solidified our position as the affordability leader in the high performance cloud storage space, we continue to innovate on our platform and are excited to provide this additional value to our customers.
This price reduction applies immediately to all existing and new customers. In keeping with Backblaze’s overall approach to providing services, there are no tiers or minimums. It’s automatic and it starts today.
Because it makes cloud storage more useful for more people.
Since our founding in 2007, Backblaze’s mission has been to make storing data astonishingly easy and affordable. We have a well-documented, relentless pursuit of lowering storage costs — it starts with our storage pods and runs through everything we do. Today, we have over 500 petabytes of customer data stored. B2’s storage pricing, already 1⁄4 that of Amazon’s S3, has certainly helped us get there. Today’s pricing reduction puts our download pricing at 1⁄5 that of S3. The “affordable” part of our story is well established.
I’d like to take a moment to discuss the “easy” part. Our industry has historically done a poor job of putting ourselves in our customers’ shoes. When customers are faced with the decision of where to put their data, price is certainly a factor. But it’s not just the price of storage that customers must consider. There’s a cost to download your data. The business need for providers to charge for this is reasonable — downloading data requires bandwidth, and bandwidth costs money. We discussed that in a prior post on the Cost of Cloud Storage.
But there’s a difference between the costs of bandwidth and what the industry is charging today. There’s a joke that some of the storage clouds are competing to become “Hotel California” — you can check out anytime you want, but your data can never leave.1 Services that make it expensive to restore data or place time lag impediments to data access are reducing the usefulness of your data. Customers should not have to wonder if they can afford to access their own data.
Many businesses have not yet been able to back up their data to the cloud because of the costs. Many of those companies are forced to continue backing up to tape. That tape is an inefficient means for data storage is clear. Solution providers like StarWind VTL specialize in helping businesses move off of antiquated tape libraries. However, as Max Kolomyeytsev, Director of Product Management at StarWind points out, “When replacing LTO with StarWind VTL and cloud storage our customers had only one concern left: the possible cost of data retrieval. Backblaze just wiped this concern out of the way by lowering that cost to just one penny per gig.”
Customers that have already adopted the cloud often are forced to make difficult tradeoffs between data they want to access and the cost associated with that access. Surrendering the use of your own data defeats many of the benefits that “the cloud” brings in the first place. Because of B2’s download price, Ian Wagner, a Senior Developer at Sermon Audio, is able to lower his costs and expand his product offering. “When we decided to use Backblaze B2 as our cloud storage service, their download pricing at the time enabled us to offer our broadcasters unlimited audio uploads so they can upload past decades of preaching to our extensive library for streaming and downloading. With Backblaze cutting the bandwidth prices 50% to just one penny a gigabyte, we are excited about offering much higher quality video.”
Many organizations use third party applications or devices to help manage their workflows. Those applications are the hub for customers getting their data to where it needs to go. Leaders in verticals like Media Asset Management, Server & NAS Backup, and Enterprise Storage have already chosen to integrate with B2.
For Paul Tian, founder of Ready NAS and CEO of Morro Data, reasonable download pricing also helps his company better serve its customers. “With Backblaze lowering their download price to an amazing one penny a gigabyte, our CloudNAS is even a better fit for photographers, videographers and business owners who need to have their files at their fingertips, with an easy, reliable, low cost way to use Backblaze for unlimited primary storage and active archive.”
If you use an application that hasn’t yet integrated with B2, please ask your provider to add B2 Cloud Storage and mention the application in the comments below.
Not only is Backblaze B2 storage 1⁄4 the price of Amazon S3, Google Cloud, or Azure, but our download pricing is now 1⁄5 their price as well.
| Pricing Tier | Backblaze B2 | Amazon S3 | Microsoft Azure | Google Cloud |
|---|---|---|---|---|
| First 1 TB | $0.01 | $0.09 | $0.09 | $0.12 |
| Next 9 TB | $0.01 | $0.09 | $0.09 | $0.11 |
| Next 40 TB | $0.01 | $0.085 | $0.09 | $0.08 |
| Next 100 TB | $0.01 | $0.07 | $0.07 | $0.08 |
| Next 350 TB+ | $0.01 | $0.05 | $0.05 | $0.08 |
Using the chart above, let’s compute a few examples of download costs…
| Data | Backblaze B2 | Amazon S3 | Microsoft Azure | Google Cloud |
|---|---|---|---|---|
| 1 terabyte | $10 | $90 | $90 | $120 |
| 10 terabytes | $100 | $900 | $900 | $1,200 |
| 50 terabytes | $500 | $4,300 | $4,500 | $4,310 |
| 500 terabytes | $5,000 | $28,800 | $29,000 | $40,310 |
Not only is Backblaze B2 pricing dramatically lower, it’s also simple — one price for any amount of data downloaded to anywhere. In comparison, to compute the cost of downloading 500 TB of data with S3 you start with the following formula: (($0.09 * 10) + ($0.085 * 40) + ($0.07 * 100) + ($0.05 * 350)) * 1,000. Want to see this comparison for the amount of data you manage? Use our cloud storage calculator.
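To check the math, here is a small Python sketch of that tiered calculation; the tier sizes and prices are read straight from the S3 column of the chart above, and 1 TB is treated as 1,000 GB, as in the formula.

```python
# Tiered download pricing, using the Amazon S3 column from the chart above.
# Each tier is (tier_size_in_TB, price_per_GB); 1 TB = 1,000 GB here.
S3_TIERS = [(1, 0.09), (9, 0.09), (40, 0.085), (100, 0.07), (350, 0.05)]

def download_cost(total_tb, tiers):
    cost, remaining = 0.0, total_tb
    for size_tb, price_per_gb in tiers:
        used = min(remaining, size_tb)   # amount billed at this tier's rate
        cost += used * 1000 * price_per_gb
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(download_cost(500, S3_TIERS))  # 28800.0, matching the $28,800 in the table
```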
Halving the price of downloads is a crazy move — the kind of crazy our customers will be excited about. When using our Transmit 5 app on the Mac to upload their data to B2 Cloud Storage, our users can sleep soundly knowing they’ll be getting a truly affordable price when they need to restore that data. Cool beans, Backblaze. — Cabel Sasser, Co-Founder, Panic
As the cloud storage industry grows, customers are increasingly concerned with getting locked in to one vendor. No business wants to be fully dependent on one vendor for anything. In addition, customers want multiple copies of their data to mitigate against a vendor outage or other issues.
Many vendors offer the ability for customers to replicate data across “regions.” This enables customers to store data in two physical locations of the customer’s choosing. Of course, customers pay for storing both copies of the data and for the data transfer between regions.
At 1¢ per GB, transferring data out of Backblaze is more affordable than transferring data between most other vendor regions. For example, if a customer is storing data in Amazon S3’s Northern California region (US West) and wants to replicate data to S3 in Northern Virginia (US East), she will pay 2¢ per GB to simply move the data.
However, if that same customer wanted to replicate data from Backblaze B2 to S3 in Northern Virginia, she would pay 1¢ per GB to move the data. She can achieve her replication strategy while also mitigating against vendor risk — all while cutting the bandwidth bill by 50%. Of course, this is also before factoring the savings on her storage bill as B2 storage is 1⁄4 of the price of S3.
Simple. We just changed our pricing table and updated our website.
The longer answer is that the cost of bandwidth is a function of a few factors, including how it’s being used and the volume of usage. With another year of data for B2, over a decade of experience in the cloud storage industry, and data growth exceeding 100 PB per quarter, we know we can sustainably offer this pricing to our customers; we also know how better download pricing can make our customers and partners more effective in their work. So it is an easy call to make.
Our pricing is simple. Storage is $0.005/GB/Month, Download costs are $0.01/GB. There are no tiers or minimums and you can get started any time you wish.
Our desire is to provide a great service at a fair price. We’re proud to be the affordability leader in the Cloud Storage space and hope you’ll give us the opportunity to show you what B2 Cloud Storage can enable for you.
Enjoy the service and I’d love to hear what this price reduction does for you in the comments below…or, if you are attending NAB this year, come by to visit and tell us in person!
1 For those readers who don’t get the Eagles reference there, please click here…I promise you won’t regret the next 7 minutes of your life.
The post Backblaze Cuts B2 Download Price In Half appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.
Post Syndicated from jake original https://lwn.net/Articles/748106/rss
Should we host in the cloud or on our own servers? This question was at the center of Dmytro Dyachuk’s talk, given during KubeCon + CloudNativeCon last November. While many services simply launch in the cloud without the organizations behind them considering other options, large content-hosting services have actually moved back to their own data centers: Dropbox migrated in 2016 and Instagram in 2014. Because such transitions can be expensive and risky, understanding the economics of hosting is a critical part of launching a new service. Actual hosting costs are often misunderstood, or secret, so it is sometimes difficult to get the numbers right. In this article, we’ll use Dyachuk’s talk to try to answer the “million dollar question”: “buy or rent?”
Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/digital-media-management/
This post summarizes the responses we received to our November 28 post asking our readers how they handle the challenge of digital asset management (DAM). You can read the previous posts in this series below:
This past November, we published a blog post entitled What’s the Best Solution for Managing Digital Photos and Videos? We asked our readers to tell us how they’re currently backing up their digital media assets and what their ideal system might be. We posed these questions:
We were thrilled to receive a large number of responses from readers. What was clear from the responses is that there is no consensus on solutions for either amateur or professional, and that users had many ideas for how digital media management could be improved to meet their needs.
We asked our readers to contribute to this dialog for a number of reasons. As a cloud backup and cloud storage service provider, we want to understand how our users are working with digital media so we know how to improve our services. Also, we want to participate in the digital media community, and hope that sharing the challenges our readers are facing and the solutions they are using will make a contribution to that community.
While a few readers told us they had settled on a system that worked for them, most said that they were still looking for a better solution. Many expressed frustration with the growing volume of digital photo and video data, which only gets larger with the increasing resolution of still and video cameras. Amateurs are making do with a number of consumer services, while professionals employ a wide range of commercial, open source, or jury-rigged solutions for managing data and maintaining its integrity.
I’ve summarized the responses we received in three sections: 1) what readers are doing today, 2) common wishes they have for improvements, and 3) concerns expressed by a number of respondents.
We heard from a wide range of smartphone users, DSLR and other format photographers, and digital video creators. Speed of operation, the ability to share files with collaborators and clients, and product feature sets were frequently cited as reasons for selecting their particular solution. Also of great importance was protecting the integrity of media through the entire capture, transfer, editing, and backup workflow.
Avid Media Composer
Adobe still rules for many users for photo editing. Some expressed interest in alternatives from Phase One, Skylum (formerly Macphun), ON1, and DxO.
Adobe Lightroom
Luminar 2018 DAM preview
While some of our respondents are casual or serious amateur digital media users, others make a living from digital photography and videography. A number of our readers report having hundreds of thousands of files and many terabytes of data — even approaching one petabyte of data for one professional who responded. Whether amateur or professional, all shared the desire to preserve their digital media assets for the future. Consequently, they want to be able to attach metadata quickly and easily, and search for and retrieve files from wherever they are stored when necessary.
Photo Mechanic 5
Our readers came through with numerous suggestions for how digital media management could be improved. There were a number of common themes centered around bigger and better storage, faster broadband or other ways to get data into the cloud, managing metadata, and ensuring integrity of their data.
Over and over again our readers expressed similar concerns about the state of digital asset management.
As a cloud backup and storage provider, your contributions were of great interest to us. A number of readers made suggestions for how we can improve or augment our services to increase the options for digital media management. We listened and are considering your comments. They will be included in our discussions and planning for possible future services and offerings from Backblaze. We thank everyone for your contributions.
Digital media management
Were you surprised by any of the responses? Do you have something further to contribute? This is by no means the end of our exploration of how to better serve media professionals, so let’s keep the lines of communication open.
Bring it on in the comments!
The post Our Readers Respond on the Best Solution for Managing Digital Photos and Videos appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.
Post Syndicated from corbet original https://lwn.net/Articles/744721/rss
2017 was a big year for the Prometheus project, as it published its 2.0 release in November. The new release ships numerous bug fixes, new features, and, notably, a new storage engine that brings major performance improvements. This comes at the cost of incompatible changes to the storage and configuration-file formats. An overview of Prometheus and its new release was presented to the Kubernetes community in a talk held during KubeCon + CloudNativeCon. This article covers what changed in this new release and what is brewing next in the Prometheus community; it is a companion to this article, which provided a general introduction to monitoring with Prometheus.
Post Syndicated from jake original https://lwn.net/Articles/741841/rss
The Docker (now Moby) project has done a lot to popularize containers in recent years. Along the way, though, it has generated concerns about its concentration of functionality into a single, monolithic system under the control of a single daemon running with root privileges: dockerd. Those concerns were reflected in a talk by Dan Walsh, head of the container team at Red Hat, at KubeCon + CloudNativeCon. Walsh spoke about the work the container team is doing to replace Docker with a set of smaller, interoperable components. His rallying cry is “no big fat daemons” as he finds them to be contrary to the venerated Unix philosophy.
Post Syndicated from jake original https://lwn.net/Articles/741897/rss
As we briefly mentioned in our overview article about KubeCon + CloudNativeCon, there are multiple container “runtimes”, which are programs that can create and execute containers that are typically fetched from online images. That space is slowly reaching maturity both in terms of standards and implementation: Docker’s containerd 1.0 was released during KubeCon, CRI-O 1.0 was released a few months ago, and rkt is also still in the game. With all of those runtimes, it may be a confusing time for those looking at deploying their own container-based system or Kubernetes cluster from scratch. This article will try to explain what container runtimes are, what they do, how they compare with each other, and how to choose the right one. It also provides a primer on container specifications and standards.
Post Syndicated from jake original https://lwn.net/Articles/741301/rss
The Cloud Native Computing Foundation (CNCF) held its conference, KubeCon + CloudNativeCon, in December 2017. There were 4000 attendees at this gathering in Austin, Texas, more than all the previous KubeCons combined, which shows the rapid growth of the community building around the tool that was announced by Google in 2014. Large corporations are also taking a larger part in the community, with major players in the industry joining the CNCF, which is a project of the Linux Foundation. The CNCF now features three of the largest cloud hosting businesses (Amazon, Google, and Microsoft), but also emerging companies from Asia like Baidu and Alibaba.
Post Syndicated from Eevee original https://eev.ee/blog/2017/12/05/game-night-1-lisa-lisa-moop/
For the last few weeks, glip (my partner) and I have spent a couple hours most nights playing indie games together. We started out intending to play a short list of games that had been recommended to glip, but this turns out to be a nice way to wind down, so we’ve been keeping it up and clicking on whatever looks interesting in the itch app.
Most of the games are small and made by one or two people, so they tend to be pretty tightly scoped and focus on a few particular kinds of details. I’ve found myself having brain thoughts about all that, so I thought I’d write some of them down.
I also know that some people (cough) tend not to play games they’ve never heard of, even if they want something new to play. If that’s you, feel free to play some of these, now that you’ve heard of them!
Also, I’m still figuring the format out here, so let me know if this is interesting or if you hope I never do it again!
First up:
These are impressions, not reviews. I try to avoid major/ending spoilers, but big plot points do tend to leave impressions.
(cw: basically everything??)
Lisa: The Painful is true to its name. I hesitate to describe it as fun, exactly, but I’m glad we played it.
Everything about the game is dark. It’s a (somewhat loose) sequel to another game called Lisa, whose titular character ultimately commits suicide; her body hanging from a noose is the title screen for this game.
Ah, but don’t worry, it gets worse. This game takes place in a post-apocalyptic wasteland, where every female human — women, children, babies — is dead. You play as Brad (Lisa’s brother), who has discovered the lone exception: a baby girl he names Buddy and raises like a daughter. Now, Buddy has been kidnapped, and you have to go rescue her, presumably from being raped.
Ah, but don’t worry, it gets worse.
I’ve had a hard time putting my thoughts in order here, because so much of what stuck with me is the way the game entangles the plot with the mechanics.
I love that kind of thing, but it’s so hard to do well. I can’t really explain why, but I feel like most attempts to do it fall flat — they have a glimmer of an idea, but they don’t integrate it well enough, or they don’t run nearly as far as they could have. I often get the same feeling as, say, a hyped-up big moral choice that turns out to be picking “yes” or “no” from a menu. The idea is there, but the execution is so flimsy that it leaves no impact on me at all.
An obvious recent success here is Undertale, where the entire story is about violence and whether you choose to engage or avoid it (and whether you can do that). If you choose to eschew violence, not only does the game become more difficult, it arguably becomes a different game entirely. Granted, the contrast is lost if you (like me) tried to play as a pacifist from the very beginning. I do feel that you could go further with the idea than Undertale, but Undertale itself doesn’t feel incomplete.
Christ, I’m not even talking about the right game any more.
Okay, so: this game is a “classic” RPG, by which I mean, it was made with RPG Maker. (It’s kinda funny that RPG Maker was designed to emulate a very popular battle style, and now the only games that use that style are… made with RPG Maker.) The main loop, on the surface, is standard RPG fare: you walk around various places, talk to people, solve puzzles, recruit party members, and get into turn-based fights.
Now, Brad is addicted to a drug called Joy. He will regularly go into withdrawal, which manifests in the game as a status effect that cuts his stats (even his max HP!) dramatically.
It is really, really, incredibly inconvenient. And therein lies the genius here. The game could have simply told me that Brad is an addict, and I don’t think I would’ve cared too much. An addiction to a fantasy drug in a wasteland doesn’t mean anything to me, especially about this tiny sprite man I just met, so I would’ve filed this away as a sterile fact and forgotten about it. By making his addiction affect me, I’m now invested in it. I wish Brad weren’t addicted, even if only because it’s annoying. I found a party member once who turned out to have the same addiction, and I felt dread just from seeing the icon for the status effect. I’ve been looped into the events of this story through the medium I use to interact with it: the game.
It’s a really good use of games as a medium. Even before I’m invested in the characters, I’m invested in what’s happening to them, because it impacts the game!
Incidentally, you can get Joy as an item, which will temporarily cure your withdrawal… but you mostly find it by looting the corpses of grotesque mutant flesh horrors you encounter. I don’t think the game would have the player abruptly mutate out of nowhere, but I wasn’t about to find out, either. We never took any.
Virtually every staple of the RPG genre has been played with in some way to tie it into the theme/setting. I love it, and I think it works so well precisely because it plays with expectations of how RPGs usually work.
Most obviously, the game is a sidescroller, not top-down. You can’t jump freely, but you can hop onto one-tile-high boxes and climb ropes. You can also drop off ledges… but your entire party will take fall damage, which gets rapidly more severe the further you fall.
This wouldn’t be too much of a problem, except that healing is hard to come by for most of the game. Several hub areas have campfires you can sleep next to to restore all your health and MP, but when you wake up, something will have happened to you. Maybe just a weird cutscene, or maybe one of your party members has decided to leave permanently.
Okay, so use healing items instead? Good luck; money is also hard to come by, and honestly so are shops, and many of the healing items are woefully underpowered.
Grind for money? Good luck there, too! While the game has plenty of battles, virtually every enemy is a unique overworld human who only appears once, and then is dead, because you killed him. Only a handful of places have unlimited random encounters, and grinding is not especially pleasant.
The “best” way to get a reliable heal is to savescum — save the game, sleep by the campfire, and reload if you don’t like what you wake up to.
In a similar vein, there’s a part of the game where you’re forced to play Russian Roulette. You choose a party member; he and an opponent will take turns shooting themselves in the head until someone finds a loaded chamber. If your party member loses, he is dead. And you have to keep playing until you win three times, so there’s no upper limit on how many people you might lose. I couldn’t find any way to influence who won, so I just had to savescum for a good half hour until I made it through with minimal losses.
It was maddening, but also a really good idea. Games don’t often incorporate the existence of saves into the gameplay, and when they do, they usually break the fourth wall and get all meta about it. Saves are never acknowledged in-universe here (aside from the existence of save points), but surely these parts of the game were designed knowing that the best way through them is by reloading. It’s rarely done, it can easily feel unfair, and it drove me up the wall — but it was certainly painful, as intended, and I kinda love that.
(Naturally, I’m told there’s a hard mode, where you can only use each save point once.)
The game also drives home the finality of death much better than most. It’s not hard to overlook the death of a redshirt, a character with a bit part who simply doesn’t appear any more. This game permanently kills your party members. Russian Roulette isn’t even the only way you can lose them! Multiple cutscenes force you to choose between losing a life or some other drastic consequence. (Even better, you can try to fight the person forcing this choice on you, and he will decimate you.) As the game progresses, you start to encounter enemies who can simply one-shot murder your party members.
It’s such a great angle. Just like with Brad’s withdrawal, you don’t want to avoid their deaths because it’d be emotional — there are dozens of party members you can recruit (though we only found a fraction of them), and most of them you only know a paragraph about — but because it would inconvenience you personally. Chances are, you have your strongest dudes in your party at any given time, so losing one of them sucks. And with few random encounters, you can’t just grind someone else up to an appropriate level; it feels like there’s a finite amount of XP in the game, and if someone high-level dies, you’ve lost all the XP that went into them.
The battles themselves are fairly straightforward. You can attack normally or use a special move that costs MP. SP? Some kind of points.
Two things in particular stand out. One I mentioned above: the vast majority of the encounters are one-time affairs against distinct named NPCs, who you then never see again, because they are dead, because you killed them.
The other is the somewhat unusual set of status effects. The staples like poison and sleep are here, but don’t show up all that often; more frequent are statuses like weird, drunk, stink, or cool. If you do take Joy (which also cures depression), you become joyed for a short time.
The game plays with these in a few neat ways, besides just Brad’s withdrawal. Some party members have a status like stink or cool permanently. Some battles are against people who don’t want to fight at all — and so they’ll spend most of the battle crying, purely for flavor impact. Seeing that for the first time hit me pretty hard; until then we’d only seen crying as a mechanical side effect of having sand kicked in one’s face.
The game does drag on a bit. I think we poured 10 in-game hours into it, which doesn’t count time spent reloading. It doesn’t help that you walk not super fast.
My biggest problem was with getting my bearings; I’m sure we spent a lot of that time wandering around accomplishing nothing. Most of the world is focused around one of a few hub areas, and once you’ve completed one hub, you can move onto the next one. That’s fine. Trouble is, you can go any of a dozen different directions from each hub, and most of those directions will lead you to very similar-looking hills built out of the same tiny handful of tiles. The connections between places are mostly cave entrances, which also largely look the same. Combine that with needing to backtrack for puzzle or progression reasons, and it’s incredibly difficult to keep track of where you’ve been, what you’ve done, and where you need to go next.
I don’t know that the game is wrong here; the aesthetic and world layout are fantastic at conveying a desolate wasteland. I wouldn’t even be surprised if the navigation were deliberately designed this way. (On the other hand, assuming every annoyance in a despair-ridden game is deliberate might be giving it too much credit.) But damn it’s still frustrating.
I felt a little lost in the battle system, too. Towards the end of the game, Brad in particular had over a dozen skills he could use, but I still couldn’t confidently tell you which were the strongest. New skills sometimes appear in the middle of the list or cost less than previous skills, and the game doesn’t outright tell you how much damage any of them do. I know this is the “classic RPG” style, and I don’t think it was hugely inconvenient, but it feels weird to barely know how my own skills work. I think this puts me off getting into new RPGs, just generally; there’s a whole new set of things I have to learn about, and games in this style often won’t just tell me anything, so there’s this whole separate meta-puzzle to figure out before I can play the actual game effectively.
Also, the sound could use a little bit of… mastering? Some music and sound effects are significantly louder and screechier than others. Painful, you could say.
The world is full of side characters with their own stuff going on, which is also something I love seeing in games; too often, the whole world feels like an obstacle course specifically designed for you.
Also, many of those characters are, well, not great people. Really, most of the game is kinda fucked up. Consider: the weird status effect is most commonly inflicted by the “Grope” skill. It makes you feel weird, you see. Oh, and the currency is porn magazines.
And then there are the gangs, the various spins on sex clubs, the forceful drug kingpins, and the overall violence that permeates everything (you stumble upon an alarming number of corpses). The game neither condones nor condemns any of this; it simply offers some ideas of how people might behave at the end of the world. It’s certainly the grittiest interpretation I’ve seen.
I don’t usually like post-apocalypses, because they try to have these very hopeful stories, but then at the end the world is still a blighted hellscape so what was the point of any of that? I like this game much better for being a blighted hellscape throughout. The story is worth following to see where it goes, not just because you expect everything wrapped up neatly at the end.
…I realize I’ve made this game sound monumentally depressing throughout, but it manages to pack in a lot of funny moments as well, from the subtle to the overt. In retrospect, it’s actually really good at balancing the mood so it doesn’t get too depressing. If nothing else, it’s hilarious to watch this gruff, solemn, battle-scarred, middle-aged man pedal around on a kid’s bike he found.
An obvious theme of the game is despair, but the more I think about it, the more I wonder if ambiguity is a theme as well. It certainly fits the confusing geography.
Even the premise is a little ambiguous. Is/was Olathe a city, a country, a whole planet? Did the apocalypse affect only Olathe, or the whole world? Does it matter in an RPG, where the only world that exists is the one mapped out within the game?
Towards the end of the game, you catch up with Buddy, but she rejects you, apparently resentful that you kept her hidden away for her entire life. Brad presses on anyway, insisting on protecting her.
At that point I wasn’t sure I was still on Brad’s side. But he’s not wrong, either. Is he? Maybe it depends on how old Buddy is — but the game never tells us. Her sprite is a bit smaller than the men’s, but it’s hard to gauge much from small exaggerated sprites, and she might just be shorter. In the beginning of the game, she was doing kid-like drawings, but we don’t know how much time passed after that. Everyone seems to take for granted that she’s capable of bearing children, and she talks like an adult. So is she old enough to be making this decision, or young enough for parent figure Brad to overrule her? What is the appropriate age of agency, anyway, when you’re the last girl/woman left more than a decade after the end of the world?
Can you repopulate a species with only one woman, anyway?
Well, that went on a bit longer than I intended. This game has a lot of small touches that stood out to me, and they all wove together very well.
Should you play it? I have absolutely no idea.
FINAL SCORE: 1 out of 6 chambers
Surprise! There’s a third game to round out this trilogy.
Lisa: The Joyful is much shorter, maybe three hours long — enough to be played in a night rather than over the better part of a week.
This one picks up immediately after the end of Painful, with you now playing as Buddy. It takes a drastic turn early on: Buddy decides that, rather than hide from the world, she must conquer it. She sets out to murder all the big bosses and become queen.
The battle system has been inherited from the previous game, but battles are much more straightforward this time around. You can’t recruit any party members; for much of the game, it’s just you and a sword.
There is a catch! Of course.
The catch is that you do not have enough health to survive most boss battles without healing. With no party members, you cannot heal via skills. I don’t think you could buy healing items anywhere, either. You have a few when the game begins, but once you run out, that’s it.
Except… you also have… some Joy. Which restores you to full health and also makes you crit with every hit. And drops off of several enemies.
We didn’t even recognize Joy as a healing item at first, since we never used it in Painful; its description simply says that it makes you feel nothing, and we’d assumed the whole point of it was to stave off withdrawal, which Buddy doesn’t experience. Luckily, the game provided a hint in the form of an NPC who offers to switch on easy mode:
What’s that? Bad guys too tough? Not enough jerky? You don’t want to take Joy!? Say no more, you’ve come to the right place!
So the game is aware that it’s unfairly difficult, and it’s deliberately forcing you to take Joy, and it is in fact entirely constructed around this concept. I guess the title is a pretty good hint, too.
I don’t feel quite as strongly about Joyful as I do about Painful. (Admittedly, I was really tired and starting to doze off towards the end of Joyful.) Once you get that the gimmick is to force you to use Joy, the game basically reduces to a moderate-difficulty boss rush. Other than that, the only thing that stood out to me mechanically was that Buddy learns a skill where she lifts her shirt to inflict flustered as a status effect — kind of a lingering echo of how outrageous the previous game could be.
You do get a healthy serving of plot, which is nice and ties a few things together. I wouldn’t say it exactly wraps up the story, but it doesn’t feel like it’s missing anything either; it’s exactly as murky as you’d expect.
I think it’s worth playing Joyful if you’ve played Painful. It just didn’t have the same impact on me. It probably doesn’t help that I don’t like Buddy as a person. She seems cold, violent, and cruel. Appropriate for the world and a product of her environment, I suppose.
FINAL SCORE: 300 Mags
Finally, as something of a palate cleanser, we have MOOP: a delightful and charming little inventory game.
I don’t think “inventory game” is a real genre, but I mean the kind of game where you go around collecting items and using them in the right place. Puzzle-driven, but with “puzzles” that can largely be solved by simply trying everything everywhere. I’d put a lot of point and click adventures in the same category, despite having a radically different interface. Is that fair? Yes, because it’s my blog.
MOOP was almost certainly also made in RPG Maker, but it breaks the mold in a very different way by not being an RPG. There are no battles whatsoever, only interactions on the overworld; you progress solely via dialogue and puzzle-solving. Examining something gives you a short menu of verbs — use, talk, get — reminiscent of interactive fiction, or perhaps the graphical “adventure” games that took inspiration from interactive fiction. (God, “adventure game” is the worst phrase. Every game is an adventure! It doesn’t mean anything!)
Everything about the game is extremely chill. I love the monochrome aesthetic combined with a large screen resolution; it feels like I’m peeking into an alternate universe where the Game Boy got bigger but never gained color. I played halfway through the game before realizing that the protagonist (Moop) doesn’t have a walk animation; they simply slide around. Somehow, it works.
The puzzles are a little clever, yet low-pressure; the world is small enough that you can examine everything again if you get stuck, and there’s no way to lose or be set back. The music is lovely, too. It just feels good to wander around in a world that manages to make sepia look very pretty.
The story manages to pack a lot into a very short time. It’s… gosh, I don’t know. It has a very distinct texture to it that I’m not sure I’ve seen before. The plot weaves through several major events that each have very different moods, and it moves very quickly — but it’s well-written and doesn’t feel rushed or disjoint. It’s lighthearted, but takes itself seriously enough for me to get invested. It’s fucking witchcraft.
I think there was even a non-binary character! Just kinda nonchalantly in there. Awesome.
What a happy, charming game. Play if you would like to be happy and charmed.
FINAL SCORE: 1 waxing moon
Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/managing-digital-photos-and-videos/
NAS + CLOUD GIVEAWAY FROM MORRO DATA AND BACKBLAZE
Backblaze and Morro Data have teamed up to offer a hardware and software package giveaway that combines the best of NAS and the cloud for managing your photos and videos. You’ll find information about how to enter this promotion at the end of this post.
Whether you’re a serious amateur photographer, an Instagram fanatic, or a professional videographer, you’ve encountered the challenge of accessing, organizing, and storing your growing collection of digital photos and videos. The problems are similar for amateurs and professionals alike, varying chiefly in scale and cost, and the choices for addressing them increase in number and complexity every day.
In this post we’ll cover the basics of managing digital photos and videos and try to define the goals for a good digital asset management (DAM) system. There’s a lot to cover, and we can’t get to all of it in one post, so we’ll return to this topic in future posts.
To start off, what is digital asset management (DAM)? In his book, The DAM Book: Digital Asset Management for Photographers, author Peter Krogh describes DAM as a term that refers to your entire digital photography ecosystem and how you work with it. It comprises the choices you make about every component of your digital photography practice.
Anyone considering how to manage their digital assets will need to think through a number of questions: how to get media off the camera, where to store it, how to catalog and find it later, and which file formats to trust. We take up each of these below.
Tell us what you’re using for digital media management
Earlier this week we published a post entitled What’s the Best Solution for Managing Digital Photos and Videos? in which we asked our readers to tell us how they manage their media files and what they would like to have in an ideal system. We’ll write a post after the first of the year based on the replies we receive. We encourage you to visit this week’s post and contribute your comments to the conversation.
Whether you have hundreds, thousands, or millions of digital media files, you’re going to need a plan for managing them. Let’s start with what a good digital media management plan should look like.
Photographers and videographers differ in aspects of their workflow, and amateurs and professionals have different needs and options, but some common elements turn up in almost every digital media workflow: getting media out of the camera, moving it to working storage, editing, cataloging, and archiving.
These days, most of our digital media devices have multiple options for getting the digital media out of the camera. Those options can include Wi-Fi, direct cable connection, or one of a number of types and makes of memory cards. If your digital media device of choice is a smartphone, then you’re used to syncing your recent photos with your computer or a cloud service. If you sync with Apple Photos/iCloud or Google Photos, then one of those services may fulfill just about all your needs for managing your digital media.
If you’re a serious amateur or professional, your solution is more complex. You likely transfer your media from the camera to a computer or storage device (perhaps waiting to erase the memory cards until you’re sure you’ve safely got multiple copies of your files). The computer might already contain your image or video editing tools, or you might use it as a device to get your media back to your home or studio.
If you’ve got a fast internet connection, you might transfer your files to the cloud for safekeeping, to send them to a co-worker so she can start working on them, or to give your client a preview of what you’ve got. The cloud is also useful if you need the media to be accessible from different locations or on various devices.
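If your cloud of choice is Backblaze B2, that transfer step can be scripted. Below is a minimal sketch using the b2sdk Python library; the key values, bucket name, and file paths are placeholders, not anything from a real account.

```python
# A minimal sketch of uploading a finished file to Backblaze B2 using
# the b2sdk library (pip install b2sdk). All credentials, bucket names,
# and paths here are placeholders.
from b2sdk.v2 import B2Api, InMemoryAccountInfo

b2_api = B2Api(InMemoryAccountInfo())
b2_api.authorize_account("production", "YOUR_KEY_ID", "YOUR_APPLICATION_KEY")

bucket = b2_api.get_bucket_by_name("my-photo-archive")  # hypothetical bucket
bucket.upload_local_file(
    local_file="card-dump/IMG_0042.CR2",      # file copied off the camera
    file_name="raw/2018/12/01/IMG_0042.CR2",  # name within the bucket
)
```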
If you’ve been working for a while, you might have data stored in older formats such as CD, DVD, DVD-RAM, Zip, Jaz, or others. Besides the inevitable degradation that occurs with older media, just finding a device to read the data can be a challenge, and it doesn’t get any easier as time passes. If you have data in older formats that you wish to save, transfer and preserve that data as soon as possible.
Let’s address the different types of storage devices and approaches.
Direct-attached storage (DAS) includes any drive that is internal to your computer, connected via the host bus adapter (HBA) using a common bus protocol such as ATA, SATA, or SCSI, as well as drives connected externally through, for example, USB or Thunderbolt.
Solid-state drives (SSD) are popular these days for their speed and reliability. In a system with different types of drives, it’s best to put your OS, applications, and video files on the fastest drive (typically the SSD), and use the slower drives when speed is not as critical.
A DAS device is directly accessible only from the host to which the DAS is attached, and only when the host is turned on, as the DAS incorporates no networking hardware or environment. Data on DAS can be shared on a network through capabilities provided by the operating system used on the host.
DAS can include a single drive attached via a single cable, multiple drives attached in a series, or multiple drives combined into a virtual unit by hardware and software, an example of which is RAID (Redundant Array of Inexpensive [or Independent] Disks). Storage virtualization such as RAID combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both.
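To make the striping and mirroring ideas concrete, here is a toy Python sketch. It is nothing like a real RAID implementation, which operates on blocks at the device level, but it shows the two basic moves: RAID 0 spreads chunks across drives for speed, while RAID 1 duplicates everything for redundancy.

```python
# Toy illustration of RAID concepts (not a real implementation).
def stripe(data: bytes, drives: int, chunk: int = 4):
    """RAID 0 style: alternate fixed-size chunks across drives (speed, no redundancy)."""
    out = [bytearray() for _ in range(drives)]
    for i in range(0, len(data), chunk):
        out[(i // chunk) % drives] += data[i:i + chunk]
    return [bytes(b) for b in out]

def mirror(data: bytes, drives: int):
    """RAID 1 style: an identical copy on every drive (redundancy, no capacity gain)."""
    return [bytes(data) for _ in range(drives)]

print(stripe(b"ABCDEFGHIJKL", 2))  # [b'ABCDIJKL', b'EFGH']
print(mirror(b"ABCD", 2))          # [b'ABCD', b'ABCD']
```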
A popular option these days is the use of network-attached storage (NAS) for storing working data, backing up data, and sharing data with co-workers. Compared to general-purpose servers, NAS can offer several advantages, including faster data access, easier administration, and simple configuration through a web interface.
Users can choose from a wide range of NAS vendors and storage approaches, including Morro Data, QNAP, Synology, Drobo, and many more.
NAS uses file-based protocols such as NFS (popular on UNIX systems), SMB/CIFS (Server Message Block/Common Internet File System used with MS Windows systems), AFP (used with Apple Macintosh computers), or NCP (used with OES and Novell NetWare). Multiple protocols are often supported by a single NAS device. NAS devices frequently include RAID or similar capability, providing virtualized storage and often performance improvements.
NAS devices are popular for digital media files due to their large capacities, data protection capabilities, speed, expansion options through adding more and bigger drives, and the ability to share files on a local office or home network or more widely on the internet. NAS devices often include the capability to back up the data on the NAS to another NAS or to the cloud, making them a great hub for a digital media management system.
The cloud is becoming increasingly attractive as a component of a digital asset management system due to a number of inherent advantages: offsite safekeeping of your files, easy sharing with co-workers and clients, and access from different locations and devices.
Anyone working with digital media will tell you that the biggest challenge with the cloud is the sheer amount of data that must be transferred, especially if you already have a large library of media on local drives that you want to move there. Internet access speeds are getting faster, but not fast enough for users like Drew Geraci (known for his incredible time-lapse photography and other work, including the opening to Netflix’s House of Cards), who told me he can create one terabyte of data in just five minutes when using nine 8K cameras simultaneously.
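To put that in perspective, here is the back-of-the-envelope arithmetic for moving one terabyte at a few common uplink speeds:

```python
# One decimal terabyte is 8e12 bits; divide by link speed to get seconds.
bits = 1e12 * 8

for label, bps in [("25 Mb/s", 25e6), ("100 Mb/s", 100e6),
                   ("1 Gb/s", 1e9), ("10 Gb/s", 1e10)]:
    print(f"{label:>9}: {bits / bps / 3600:6.1f} hours")

# Output:
#   25 Mb/s:   88.9 hours
#  100 Mb/s:   22.2 hours
#    1 Gb/s:    2.2 hours
#   10 Gb/s:    0.2 hours
```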
While we wait for everyone to get 10 Gb/s broadband, there are other options, such as Backblaze’s Fireball, which enables B2 Cloud Storage users to copy up to 40TB of data to a drive and send it directly to Backblaze.
There are technologies available that can accelerate internet TCP/IP speeds and enable faster data transfers to and from cloud storage such as Backblaze B2. We’ll be writing about these technologies in a future post.
A recent entry into the storage space is Morro Data and their CloudNAS solution. Files are stored in the cloud, cached locally on a CloudNAS device as needed, and synced globally among the other CloudNAS systems in a given organization. To the user, all files appear in one catalog, whether they are stored locally or in the cloud. Another advantage is that uploads to the cloud happen behind the scenes as time and network permit. A file stays on the local device until it is safely stored in the B2 Cloud; after that, it may be removed from the CloudNAS cache, depending on how often it is accessed. There are more details on the CloudNAS solution in our A New Twist on Data Backup: CloudNAS blog post. (See below for how to enter our Backblaze/Morro Data giveaway.)
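Conceptually, that caching behavior looks something like the sketch below. This is our own illustration of the general pattern, not Morro Data’s actual logic: writes land locally, a background job uploads them, and eviction only ever touches files that are already safe in the cloud.

```python
# Conceptual sketch of a cache-then-evict pattern (not CloudNAS internals).
import time

class CacheDrive:
    """Toy model: files land locally, sync uploads them, eviction frees space."""
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.files = {}  # name -> {"size", "last_access", "uploaded"}

    def write(self, name: str, size: int):
        self.files[name] = {"size": size, "last_access": time.time(),
                            "uploaded": False}
        self.used += size
        self._evict_if_needed()

    def mark_uploaded(self, name: str):
        # Called by the background sync job once the file is safely in the cloud.
        self.files[name]["uploaded"] = True

    def _evict_if_needed(self):
        # Drop least-recently-accessed files, but never one not yet uploaded.
        evictable = sorted((f["last_access"], n)
                           for n, f in self.files.items() if f["uploaded"])
        while self.used > self.capacity and evictable:
            _, name = evictable.pop(0)
            self.used -= self.files.pop(name)["size"]
```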
A key component of any DAM system is the ability to find files when you need them. You’ll want the ability to catalog all of your digital media, assign keywords and metadata that make sense for the way you work, and have that catalog available and searchable even when the digital files themselves are located on various drives, in the cloud, or even disconnected from your current system.
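As a bare-bones illustration of the idea, a catalog can be as simple as a small database mapping each file to its current location and keywords. The sketch below, using Python’s built-in SQLite, is our own toy example rather than how any particular DAM product works; the paths and keywords are made up.

```python
# Minimal catalog sketch: search your library even when the media is offline.
import sqlite3

db = sqlite3.connect("catalog.db")
db.execute("""CREATE TABLE IF NOT EXISTS media (
    path     TEXT PRIMARY KEY,  -- name within whatever store holds the file
    location TEXT,              -- e.g. 'local', 'nas', or 'b2'
    keywords TEXT               -- comma-separated, e.g. 'wedding,2018,raw'
)""")

db.execute("INSERT OR REPLACE INTO media VALUES (?, ?, ?)",
           ("raw/2018/12/IMG_0042.CR2", "b2", "wedding,2018,raw"))
db.commit()

# Find every wedding shot, wherever it is currently stored.
for path, location in db.execute(
        "SELECT path, location FROM media WHERE keywords LIKE ?", ("%wedding%",)):
    print(path, "->", location)
```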
Adobe’s Lightroom is a popular application for cataloging and managing image workflow. Lightroom can handle an enormous number of files, and has a flexible catalog that can be stored locally and used to search for files that have been archived to different storage devices. Users debate whether one master catalog or multiple catalogs is the better way to work in Lightroom. In any case, it’s critical that you back up your DAM catalogs as diligently as you back up your digital media.
The latest version of Lightroom, Lightroom CC (distinguished from Lightroom Classic CC), is coupled with Adobe’s Creative Cloud service. In addition to the subscription plan for Lightroom and other Adobe applications, you’ll need to choose and pay a subscription fee for however much storage you wish to use in Adobe’s Creative Cloud. You don’t get a choice of other cloud vendors.
Another popular option for image editing is Phase One’s Capture One, with Phase One Media Pro SE for cataloging and management. Macphun’s Luminar is available for both Macintosh and Windows. Macphun has announced that it will launch a digital asset management component for Luminar in 2018 that will compete with Adobe’s offering for a complete digital image workflow.
Peter Krogh’s book, The DAM Book: Digital Asset Management for Photographers, and his other books on using Lightroom for DAM, outline an approach for creating a folder hierarchy, assigning keywords and metadata, and using collections to manage your photos. You can view a YouTube video on his recommendations at Get Your DAM Workflow Under Control with Peter Krogh.
Any media management system needs to include or work seamlessly with the editing and enhancement tools you use for photos or videos. We’ve already talked about some cataloging solutions that include image editing as well. Some of the mainstream photo apps, such as Google Photos and Apple Photos, include rudimentary to mid-level editing tools. It’s up to the more capable applications to deliver the power needed for real photo or video editing, e.g., Adobe Photoshop, Adobe Lightroom, Macphun’s Luminar, and Phase One Capture One for photography, and Adobe Premiere, Apple Final Cut Pro, or Avid Media Composer (among others) for video editing.
Images come out of your camera in a variety of formats. Camera makers have their proprietary raw file formats (CR2 from Canon and NEF from Nikon, for example), and Adobe has an openly documented standard for digital images called DNG (Digital Negative), which is used in Lightroom and in products from other vendors as well.
Whichever format you choose, be aware that you are betting it will still be supported years down the road, when you return to your files with whatever photo/video editing setup you are using then. So always think of the future and favor the format most likely to be supported by future applications.
There are myriad aspects to a digital asset management system, and as we said at the outset, many choices to make. We hope you’ll take us up on our request to tell us what you’re using to manage your photos and videos and what an ideal system for you would look like. We want to make Backblaze Backup and B2 Cloud Storage more useful to our customers, and your input will help us do that.
In the meantime, why not enter the Backblaze + Morro Data promotion described below? You could win!
ENTER TO WIN A DREAM DIGITAL MEDIA COMBO
Morro Data and Backblaze Team Up to Deliver the Dream Digital Media Backup Solution
Visit Dream Photo Backup to learn about this combination of NAS, software, and the cloud that provides a complete solution for managing, archiving, and accessing your digital media files. You’ll have the opportunity to win Morro Data’s CacheDrive G40 (with 1TB of HDD cache), an annual subscription to CloudNAS Basic Global File Services, and $100 of Backblaze B2 Cloud Storage. The total value of this package is greater than $700. Enter at Dream Photo Backup.