We’re usually averse to buzzwords at HackSpace magazine, but not this month: in issue 7, we’re taking a deep dive into the Internet of Things.
Internet of Things (IoT)
To many people, IoT is a shady term used by companies to sell you something you already own, but this time with WiFi; to us, it’s a way to make our builds smarter, more useful, and more connected. In HackSpace magazine #7, you can join us on a tour of the boards that power IoT projects, marvel at the ways in which other makers are using IoT, and get started with your first IoT project!
Awesome projects
DIY retro computing: this issue, we’re taking our collective hat off to Spencer Owen. He stuck his home-brew computer on Tindie thinking he might make a bit of beer money — now he’s paying the mortgage with his making skills and inviting others to build modules for his machine. And if that tickles your fancy, why not take a crack at our Z80 tutorial? Get out your breadboard, assemble your jumper wires, and prepare to build a real-life computer!
Shameless patriotism: combine LEGO, Arduino, and the car of choice for 1960s gold bullion thieves, and you’ve got yourself a groovy weekend project. We proudly present to you one man’s epic quest to add LED lights (controllable via a smartphone!) to his daughter’s LEGO Mini Cooper.
Makerspaces
Patriotism intensifies: for the last 200-odd years, the Black Country has been a hotbed of making. Urban Hax, based in Walsall, is the latest makerspace to show off its riches in the coveted Space of the Month pages. Every space has its own way of doing things, but not every space has a portrait of Rob Halford on the wall. All hail!
Diversity: advice on diversity often boils down to ‘Be nice to people’, which might feel more vague than actionable. This is where we come in to help: it is truly worth making the effort to give people of all backgrounds access to your makerspace, so we take a look at why it’s nice to be nice, and at the ways in which one makerspace has put niceness into practice — with great results.
And there’s more!
We also show you how to easily calculate the size and radius of laser-cut gears, use a bank of LEDs to etch PCBs in your own mini factory, and use chemistry to mess with your lunch menu.
All this plus much, much more waits for you in HackSpace magazine issue 7!
Get your copy of HackSpace magazine
If you like the sound of that, you can find HackSpace magazine in WHSmith, Tesco, Sainsbury’s, and independent newsagents in the UK. If you live in the US, check out your local Barnes & Noble, Fry’s, or Micro Center next week. We’re also shipping to stores in Australia, Hong Kong, Canada, Singapore, Belgium, and Brazil, so be sure to ask your local newsagent whether they’ll be getting HackSpace magazine.
Last year’s haul sank 15% to 53,000 tons, according to the JF Zengyoren national federation of fishing cooperatives. The squid catch has fallen by half in just two years. The previous low was plumbed in 2016.
Lighter catches have been blamed on changing sea temperatures, which impede the spawning and growth of the squid. Critics have also pointed to overfishing by North Korean and Chinese fishing boats.
Wholesale prices of flying squid have climbed as a result. Last year’s average price per kilogram came to 564 yen, a roughly 80% increase from two years earlier, according to JF Zengyoren.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
“It’s great to eat hotaruika around when the seasons change, which is when people tend to get sick,” said Ryoji Tanaka, an executive at the Toyama prefectural federation of fishing cooperatives. “In addition to popular cooking methods, such as boiling them in salted water, you can also add them to pasta or pizza.”
Now there is a new addition: eating hotaruika raw as sashimi. However, due to reports that parasites have been found in their internal organs, the Health, Labor and Welfare Ministry recommends eating the squid after its internal organs have been removed, or after it has been frozen for at least four days at minus 30 C or lower.
The data center keeps growing: with well over 500 petabytes of data under management, we needed more systems administrators to help us keep track of all the systems as our operation expands. Our latest systems administrator is Billy! Let’s learn a bit more about him, shall we?
What is your Backblaze Title? Sr. Systems Administrator
Where are you originally from? Boston, MA
What attracted you to Backblaze? I’ve read the hard drive articles that were published and was excited to be a part of the company that took the time to do that kind of analysis and share it with the world.
What do you expect to learn while being at Backblaze? I expect that I’ll learn about the problems that arise from a larger scale operation and how to solve them. I’m very curious to find out what they are.
Where else have you worked? I’ve worked for the MIT Math Dept, Google, a social network owned by AOL called Bebo, Evernote, a contractor recommendation site owned by The Home Depot called RedBeacon, and a few others that weren’t as interesting.
Where did you go to school? I started college at The Cooper Union, discovered that Electrical Engineering wasn’t my thing, then graduated from the Computer Science program at Northeastern.
What’s your dream job? Is couch potato a job? I like to solve puzzles and play with toys, which is why I really enjoy being a sysadmin. My dream job is to do pretty much what I do now, but not have to participate in on-call.
Favorite place you’ve traveled? We did a 2 week tour through Europe on our honeymoon. I’d go back to any place there.
Favorite hobby? Reading and listening to music. I spent a stupid amount of money on a stereo, so I make sure it gets plenty of use. I spent much less money on my library card, but I try to utilize it quite a bit as well.
Of what achievement are you most proud? I designed and built a set of shelves for the closet in my kids’ room. Built with hand tools. The only electricity I used was the lights to see what I was doing.
Star Trek or Star Wars? Star Trek: The Next Generation
Coke or Pepsi? Coke!
Favorite food? Pesto. Usually on angel hair, but it also works well on bread, or steak, or a spoon.
Why do you like certain things? I like things that are a little outside the norm, like musical covers and mashups, or things that look like one thing but are really something else. Secret compartments are also fun.
Anything else you’d like to tell us? I’m full of anecdotes and lines from songs and movies and tv shows.
Pesto is delicious! Welcome to the systems administrator team, Billy; we’ll keep the fridge stocked with Coke for you!
Interesting article by Major General Hao Yeli, Chinese People’s Liberation Army (ret.), a senior advisor at the China International Institute for Strategic Society, Vice President of China Institute for Innovation and Development Strategy, and the Chair of the Guanchao Cyber Forum.
Against the background of globalization and the internet era, the emerging cyber sovereignty concept calls for breaking through the limitations of physical space and avoiding misunderstandings based on perceptions of binary opposition. Reinforcing a cyberspace community with a common destiny, it reconciles the tension between exclusivity and transferability, leading to a comprehensive perspective. China insists on its cyber sovereignty; meanwhile, it transfers segments of its cyber sovereignty reasonably. China rightly attaches importance to its national security; meanwhile, it promotes international cooperation and open development.
China has never been opposed to multi-party governance when appropriate, but rejects the denial of government’s proper role and responsibilities with respect to major issues. The multilateral and multiparty models are complementary rather than exclusive. Governments and multi-stakeholders can play different leading roles at the different levels of cyberspace.
In the internet era, the law of the jungle should give way to solidarity and shared responsibilities. Restricted connections should give way to openness and sharing. Intolerance should be replaced by understanding. And unilateral values should yield to respect for differences while recognizing the importance of diversity.
Deputy Attorney General Rosenstein has given talks where he proposes that tech companies decrease their communications and device security for the benefit of the FBI. In a recent talk, his idea is that tech companies just save a copy of the plaintext:
Law enforcement can also partner with private industry to address a problem we call “Going Dark.” Technology increasingly frustrates traditional law enforcement efforts to collect evidence needed to protect public safety and solve crime. For example, many instant-messaging services now encrypt messages by default. They prevent the police from reading those messages, even if an impartial judge approves their interception.
The problem is especially critical because electronic evidence is necessary for both the investigation of a cyber incident and the prosecution of the perpetrator. If we cannot access data even with lawful process, we are unable to do our job. Our ability to secure systems and prosecute criminals depends on our ability to gather evidence.
I encourage you to carefully consider your company’s interests and how you can work cooperatively with us. Although encryption can help secure your data, it may also prevent law enforcement agencies from protecting your data.
Encryption serves a valuable purpose. It is a foundational element of data security and essential to safeguarding data against cyber-attacks. It is critical to the growth and flourishing of the digital economy, and we support it. I support strong and responsible encryption.
I simply maintain that companies should retain the capability to provide the government unencrypted copies of communications and data stored on devices, when a court orders them to do so.
Responsible encryption is effective secure encryption, coupled with access capabilities. We know encryption can include safeguards. For example, there are systems that include central management of security keys and operating system updates; scanning of content, like your e-mails, for advertising purposes; simulcast of messages to multiple destinations at once; and key recovery when a user forgets the password to decrypt a laptop. No one calls any of those functions a “backdoor.” In fact, those very capabilities are marketed and sought out.
I do not believe that the government should mandate a specific means of ensuring access. The government does not need to micromanage the engineering.
The question is whether to require a particular goal: When a court issues a search warrant or wiretap order to collect evidence of crime, the company should be able to help. The government does not need to hold the key.
Rosenstein is right that many services like Gmail naturally keep plaintext in the cloud. This is something we pointed out in our 2016 paper: “Don’t Panic.” But forcing companies to build an alternate means to access the plaintext that the user can’t control is an enormous vulnerability.
Last month, the DHS announced that it was able to remotely hack a Boeing 757:
“We got the airplane on Sept. 19, 2016. Two days later, I was successful in accomplishing a remote, non-cooperative, penetration,” said Robert Hickey, aviation program manager within the Cyber Security Division of the DHS Science and Technology (S&T) Directorate.
“[Which] means I didn’t have anybody touching the airplane, I didn’t have an insider threat. I stood off using typical stuff that could get through security and we were able to establish a presence on the systems of the aircraft.” Hickey said the details of the hack and the work his team are doing are classified, but said they accessed the aircraft’s systems through radio frequency communications, adding that, based on the RF configuration of most aircraft, “you can come to grips pretty quickly where we went” on the aircraft.
My colleague Sandy Carter delivered the Enterprise Innovation State of the Union last week at AWS re:Invent. She wrote the guest post below to recap the announcements that she made from the stage.
“I want my company to innovate, but I am not convinced we can execute successfully.” Far too many times I have heard this fear expressed by senior executives that I have met at different points in my career. In fact, a recent study published by PricewaterhouseCoopers found that while 93% of executives depend on innovation to drive growth, more than half are challenged to take innovative ideas to market quickly in a scalable way.
Many customers are struggling with how to drive enterprise innovation, so I was thrilled to share the stage at AWS re:Invent this past week with several senior executives who have successfully broken this mold to drive amazing enterprise innovation. In particular, I want to thank Parag Karnik from Johnson & Johnson, Bill Rothe from Hess Corporation, Dave Williams from Just Eat, and Olga Lagunova from Pitney Bowes for sharing their stories of innovation, creativity, and solid execution.
Among the many new announcements from AWS this past week, I am particularly excited about the following newly-launched AWS products and programs that I announced at re:Invent to drive new innovations by our enterprise customers:
AI: New Deep Learning Amazon Machine Image (AMI) on EC2 Windows
As I shared at re:Invent, customers such as Infor are already successfully leveraging artificial intelligence tools on AWS to deliver tailored, industry-specific applications to their customers. We want to help more of our Windows developers get started quickly and easily with AI, leveraging machine learning tools with popular deep learning frameworks, such as Apache MXNet, TensorFlow, and Caffe2. To enable this, I announced at re:Invent that AWS now offers a new Deep Learning AMI for Microsoft Windows. The AMI is tailored to facilitate large-scale training of deep learning models, and enables quick and easy setup of Windows Server-based compute resources for machine learning applications.
IoT: Visualize and Analyze SQL and IoT Data
Forecasts show as many as 31 billion IoT devices by 2020. AWS wants every Windows customer to take advantage of the data available from their devices. Pitney Bowes, for example, now has more than 130,000 IoT devices streaming data to AWS. Using machine learning, Pitney Bowes enriches and analyzes data to enhance their customer experience, improve efficiencies, and create new data products. AWS IoT Analytics can now be leveraged to run analytics on IoT data and get insights that help you make better and more accurate decisions for IoT applications and machine learning use cases. AWS IoT Analytics can automatically enrich IoT device data with contextual metadata such as your SQL Server transactional data.
New Capabilities for .NET Developers on AWS
In addition to all of the enhancements we’ve introduced to deliver a first-class experience to Windows developers on AWS, we announced that we are including .NET Core 2.0 support in AWS Lambda and AWS CodeBuild, which will be available for broader use early next year. .NET Core 2.0 packs a number of new features, such as Razor Pages, better compatibility with the .NET Framework, more than double the number of APIs compared to previous versions, and much more. With this announcement, you will be able to take advantage of all the latest .NET Core features on Lambda and CodeBuild for building modern serverless and DevOps-centric solutions.
License optimization for BYOL
AWS provides a wide variety of instance types and families to best meet your workload needs. If you are using software licensed by the number of vCPUs, you want the ability to fine-tune the vCPU count to optimize license spend. I announced the upcoming ability to optimize CPUs for EC2, giving you greater control over your EC2 instances on two fronts:
You can specify a custom number of vCPUs when launching new instances to save on vCPU-based licensing costs (for example, SQL Server licensing spend).
You can disable Hyper-Threading Technology for workloads that perform well with single-threaded CPUs, like some high-performance computing (HPC) applications.
Using these capabilities, customers who bring their own license (BYOL) will be able to optimize their license usage and save on the license costs.
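The savings described above come down to simple arithmetic: an EC2 vCPU corresponds to a hardware thread, so halving the threads per core halves the licensed vCPU count. The sketch below illustrates this, assuming a hypothetical per-vCPU license price (the function names and the $1,000 figure are illustrative, not AWS pricing):

```python
def vcpu_count(core_count: int, threads_per_core: int) -> int:
    """A vCPU corresponds to one hardware thread: cores x threads per core."""
    return core_count * threads_per_core

def annual_license_cost(core_count: int, threads_per_core: int,
                        cost_per_vcpu: float) -> float:
    """Estimated yearly spend for software licensed per vCPU."""
    return vcpu_count(core_count, threads_per_core) * cost_per_vcpu

# A 16-core instance with Hyper-Threading exposes 32 vCPUs; launching it
# with one thread per core halves the count that licensing sees.
full = annual_license_cost(16, 2, cost_per_vcpu=1000.0)
reduced = annual_license_cost(16, 1, cost_per_vcpu=1000.0)
```

For single-threaded HPC workloads, the `threads_per_core=1` case also reflects the Hyper-Threading-disabled configuration described above.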
Server Migration Service for Hyper-V Virtual Machines
As Bill Rothe from Hess Corporation shared at re:Invent, Hess has successfully migrated a wide range of workloads to the cloud, including SQL Server, SharePoint, SAP HANA, and many others. To further support enterprise migrations like these, AWS Server Migration Service (SMS) now supports Hyper-V virtual machine (VM) migration. AWS Server Migration Service will enable you to more easily coordinate large-scale server migrations from on-premises Hyper-V environments to AWS. It allows you to automate, schedule, and track incremental replications of live server volumes. The replicated volumes are encrypted in transit and saved as a new Amazon Machine Image (AMI), which can be launched as an EC2 instance on AWS.
Microsoft Premier Support for AWS End-Customers
I was pleased to announce that Microsoft and AWS have developed new areas of support integration to help ensure a great customer experience. Microsoft Premier Support is on board to help AWS assist end customers. AWS Support engineers can escalate directly to Microsoft Support on behalf of AWS customers running Microsoft workloads.
Best Practice Tools: HIPAA Compliance and Digital Innovation Workshop
In November, we updated our HIPAA-focused white paper, outlining how you can use AWS to create HIPAA-compliant applications. In the first quarter of next year, we will publish a HIPAA Implementation Guide that expands on our HIPAA Quick Start to enable you to follow strict security, compliance, and risk management controls for common healthcare use cases. I was also pleased to award a Digital Innovation Workshop to one of our customers in my re:Invent session, and look forward to seeing more customers take advantage of this workshop.
AWS: The Continuous Innovation Cloud
A common thread we see across customers is that continuous innovation from AWS enables their ongoing reinvention. Continuous innovation means that you are always getting a newer, better offering every single day. Sometimes it is in the form of brand new services and capabilities, and sometimes it is happening invisibly, under the covers where your environment just keeps getting better. I invite you to learn more about how you can accelerate your innovation journey with recently launched AWS services and AWS best practices. If you are migrating Windows workloads, speak with your AWS sales representative or an AWS Microsoft Workloads Competency Partner to learn how you can leverage our re:Think for Windows program for credits to start your migration.
NAS + CLOUD GIVEAWAY FROM MORRO DATA AND BACKBLAZE
Backblaze and Morro Data have teamed up to offer a hardware and software package giveaway that combines the best of NAS and the cloud for managing your photos and videos. You’ll find information about how to enter this promotion at the end of this post.
Whether you’re a serious amateur photographer, an Instagram fanatic, or a professional videographer, you’ve encountered the challenge of accessing, organizing, and storing your growing collection of digital photos and videos. The problems are similar for both amateur and professional — they vary chiefly in scale and cost — and the choices for addressing this challenge increase in number and complexity every day.
In this post we’ll be talking about the basics of managing digital photos and videos and trying to define the goals for a good digital asset management system (DAM). There’s a lot to cover, and we can’t get to all of it in one post. We will write more on this topic in future posts.
To start off, what is digital asset management (DAM)? In his book, The DAM Book: Digital Asset Management for Photographers, author Peter Krogh describes DAM as a term that refers to your entire digital photography ecosystem and how you work with it. It comprises the choices you make about every component of your digital photography practice.
Anyone considering how to manage their digital assets will need to consider the following questions:
How do I like to work, and need to work if I have clients, partners, or others with whom I need to cooperate?
What are the software and hardware options I need to consider to set up an efficient system that suits my needs?
How do DAS (direct-attached storage), NAS (network-attached storage), the cloud, and other storage solutions fit into a working system?
Is there a difference between how and where I back up and archive my files?
How do I find media files in my collection?
How do I handle a digital archive that just keeps growing and growing?
How do I make sure that the methods and system I choose won’t lock me into a closed-end, proprietary system?
Tell us what you’re using for digital media management
Earlier this week we published a post entitled What’s the Best Solution for Managing Digital Photos and Videos? in which we asked our readers to tell us how they manage their media files and what they would like to have in an ideal system. We’ll write a post after the first of the year based on the replies we receive. We encourage you to visit this week’s post and contribute your comments to the conversation.
Getting Started with Digital Asset Management
Whether you have hundreds, thousands, or millions of digital media files, you’re going to need a plan on how to manage them. Let’s start with the goals for what a good digital media management plan should look like.
Goals of a Good Digital Media Management System
1) Don’t lose your files
At the very least, your system should preserve files you wish to keep for future use. A good system will be reliable, support maintaining multiple copies of your data, and will integrate well with your data backup strategy. You should analyze each step of how you handle your cameras, memory cards, disks, and other storage media to understand the points at which your data is most vulnerable and how to minimize the possibility of data loss.
2) Find media when you need it
Your system should enable you to find files when you need them.
3) Work economically
You want a system that meets your budget and doesn’t waste your time.
4) Edit or Enhance the images or video
You’ll want the ability to make changes, change formats, and repurpose your media for different uses.
5) Share media in ways you choose
A good system will help you share your files with clients, friends, and family, giving you choices of different media, formats, and control over access and privacy.
6) Doesn’t lock your media into a proprietary system
Your system shouldn’t lock you into file formats, proprietary protocols, or make it difficult or impossible to get your media out of a particular vendor’s environment. You want a system that uses common and open formats and protocols to maintain the compatibility of your media with as yet unknown hardware and software you might want to use in the future.
Media Storage Options
Photographers and videographers differ in aspects of their workflow, and amateurs and professionals have different needs and options, but there are some common elements that are typically found in a digital media workflow:
Data is collected in a digital camera
Data is copied from the camera to a computer, a transport device, or a storage device
Data is brought into a computer system where original files are typically backed up and copies made for editing and enhancement (depending on type of system)
Data files are organized into folders, and metadata added or edited to aid in record keeping and finding files in the future
Files are edited and enhanced, with backups made during the process
File formats might be changed manually or automatically depending on system
Versions are created for client review, sharing, posting, publishing, or other uses
File versions are archived either manually or automatically
Files await possible future retrieval and use
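The first few steps of that workflow, copying files off a card, organizing them into folders, and making sure nothing was corrupted in transit, can be sketched in a few lines. This is an illustrative script, not a recommendation of any particular tool; the folder layout and function names are made up for the example:

```python
import hashlib
import shutil
from datetime import datetime
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum used to verify that a copy matches its original."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def ingest(card_dir: Path, library_dir: Path) -> list:
    """Copy each file from a memory card into a dated folder and verify it."""
    dest_dir = library_dir / datetime.now().strftime("%Y-%m-%d")
    dest_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in sorted(card_dir.iterdir()):
        if not src.is_file():
            continue
        dest = dest_dir / src.name
        shutil.copy2(src, dest)  # preserves timestamps as well as contents
        if sha256_of(dest) != sha256_of(src):
            raise IOError(f"copy of {src.name} failed verification")
        copied.append(dest)
    return copied
```

Verifying checksums before erasing the memory card is the point at which "don't lose your files" is won or lost, which is why the sketch raises rather than continuing on a mismatch.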
These days, most of our digital media devices have multiple options for getting the digital media out of the camera. Those options can include Wi-Fi, direct cable connection, or one of a number of types and makes of memory cards. If your digital media device of choice is a smartphone, then you’re used to syncing your recent photos with your computer or a cloud service. If you sync with Apple Photos/iCloud or Google Photos, then one of those services may fulfill just about all your needs for managing your digital media.
If you’re a serious amateur or professional, your solution is more complex. You likely transfer your media from the camera to a computer or storage device (perhaps waiting to erase the memory cards until you’re sure you’ve safely got multiple copies of your files). The computer might already contain your image or video editing tools, or you might use it as a device to get your media back to your home or studio.
If you’ve got a fast internet connection, you might transfer your files to the cloud for safekeeping, to send them to a co-worker so she can start working on them, or to give your client a preview of what you’ve got. The cloud is also useful if you need the media to be accessible from different locations or on various devices.
If you’ve been working for a while, you might have data stored on some older media such as CD, DVD, DVD-RAM, Zip, Jaz, or other formats. Besides the inevitable degradation that occurs with older media, just finding a device to read the data can be a challenge, and it doesn’t get any easier as time passes. If you have data in older formats that you wish to save, you should transfer and preserve that data as soon as possible.
Let’s address the different types of storage devices and approaches.
Direct-attached Storage (DAS)
DAS includes any type of drive that is internal to your computer and connected via the host bus adapter (HBA) using a common bus protocol such as ATA, SATA, or SCSI; or externally connected to the computer through, for example, USB or Thunderbolt.
Solid-state drives (SSD) are popular these days for their speed and reliability. In a system with different types of drives, it’s best to put your OS, applications, and video files on the fastest drive (typically the SSD), and use the slower drives when speed is not as critical.
A DAS device is directly accessible only from the host to which the DAS is attached, and only when the host is turned on, as the DAS incorporates no networking hardware or environment. Data on DAS can be shared on a network through capabilities provided by the operating system used on the host.
DAS can include a single drive attached via a single cable, multiple drives attached in a series, or multiple drives combined into a virtual unit by hardware and software, an example of which is RAID (Redundant Array of Inexpensive [or Independent] Disks). Storage virtualization such as RAID combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both.
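The trade-off between capacity and redundancy across the common RAID levels can be summarized in a few lines. This is a rough sketch of the textbook figures only; real arrays reserve some space for metadata, and the function name is illustrative:

```python
def raid_usable_tb(level: str, drives: int, size_tb: float) -> float:
    """Approximate usable capacity for n identical drives of size_tb each."""
    if level == "RAID0":   # striping: full capacity, no redundancy
        return drives * size_tb
    if level == "RAID1":   # mirroring: one drive's worth of usable space
        return size_tb
    if level == "RAID5":   # single parity: survives one drive failure
        return (drives - 1) * size_tb
    if level == "RAID6":   # double parity: survives two drive failures
        return (drives - 2) * size_tb
    if level == "RAID10":  # mirrored stripes: half the raw capacity
        return drives * size_tb / 2
    raise ValueError(f"unknown RAID level: {level}")

# Four 4 TB drives in RAID5: 12 TB usable, tolerating one failed drive.
```

The point of the sketch is that redundancy is never free: every level trades some raw capacity (or write performance) for the ability to survive drive failures.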
Network-attached Storage (NAS)
A popular option these days is the use of network-attached storage (NAS) for storing working data, backing up data, and sharing data with co-workers. Compared to general purpose servers, NAS can offer several advantages, including faster data access, easier administration, and simple configuration through a web interface.
Users have the choice of a wide number of NAS vendors and storage approaches from vendors such as Morro Data, QNAP, Synology, Drobo, and many more.
NAS uses file-based protocols such as NFS (popular on UNIX systems), SMB/CIFS (Server Message Block/Common Internet File System used with MS Windows systems), AFP (used with Apple Macintosh computers), or NCP (used with OES and Novell NetWare). Multiple protocols are often supported by a single NAS device. NAS devices frequently include RAID or similar capability, providing virtualized storage and often performance improvements.
NAS devices are popular for digital media files due to their large capacities, data protection capabilities, speed, expansion options through adding more and bigger drives, and the ability to share files on a local office or home network or more widely on the internet. NAS devices often include the capability to back up the data on the NAS to another NAS or to the cloud, making them a great hub for a digital media management system.
The Cloud
The cloud is becoming increasingly attractive as a component of a digital asset management system due to a number of inherent advantages:
Cloud data centers employ redundant technologies to protect the integrity of the stored data
Data stored in the cloud can be shared, if desired
Cloud storage is limitless, as opposed to DAS and most NAS implementations
Cloud storage can be accessed through a wide range of interfaces, and APIs (Application Programming Interfaces), making cloud storage extremely flexible
Cloud storage supports an extensive ecosystem of add-on hardware, software, and applications to enhance your DAM. Backblaze’s B2 Cloud Storage, for example, has a long list of integrations with media-oriented partners such as Axle video, Cantemo, Cubix, and others
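The API-driven flexibility mentioned above usually means uploading large media files in checksummed parts; Backblaze B2's large-file upload, for instance, takes a SHA-1 checksum with each part so the server can verify it on arrival. The sketch below shows only the client-side splitting and hashing, with an illustrative part size (real APIs impose their own minimums):

```python
import hashlib

PART_SIZE = 5 * 1024 * 1024  # illustrative part size for the sketch

def split_into_parts(data: bytes, part_size: int = PART_SIZE):
    """Yield (part_number, part_bytes, sha1_hex) tuples for a chunked upload.

    Each part carries its own checksum so the receiving service can
    verify it independently and the client can retry just the parts
    that fail, rather than the whole file.
    """
    for i in range(0, len(data), part_size):
        part = data[i:i + part_size]
        yield (i // part_size + 1, part, hashlib.sha1(part).hexdigest())
```

Per-part retry is the practical payoff: on a flaky connection, a failed 100 GB upload resumes from the last good part instead of starting over.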
Anyone working with digital media will tell you that the biggest challenge with the cloud is the large amount of data that must be transferred to the cloud, especially if someone already has a large library of media that exists on drives that they want to put into the cloud. Internet access speeds are getting faster, but not fast enough for users like Drew Geraci (known for his incredible time lapse photography and other work, including the opening to Netflix’s House of Cards), who told me he can create one terabyte of data in just five minutes when using nine 8K cameras simultaneously.
While we wait for everyone to get 10 Gb/s broadband transfer speeds, there are other options, such as Backblaze’s Fireball, which enables B2 Cloud Storage users to copy up to 40TB of data to a drive and send it directly to Backblaze.
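The choice between uploading and shipping a drive is back-of-the-envelope arithmetic. A small sketch, assuming the link runs flat out (real-world throughput is usually lower, which only strengthens the case for shipping):

```python
def upload_days(size_tb: float, link_mbps: float) -> float:
    """Days of continuous transfer for size_tb terabytes at link_mbps Mbit/s."""
    bits = size_tb * 1e12 * 8          # decimal terabytes to bits
    seconds = bits / (link_mbps * 1e6)  # megabits/s to bits/s
    return seconds / 86400

# A full 40 TB Fireball's worth of data over a 100 Mbit/s uplink works
# out to roughly 37 days of uninterrupted transfer.
```

At gigabit speeds the same transfer still takes the better part of four days, which is why bulk-seeding services exist at all.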
There are technologies available that can accelerate internet TCP/IP speeds and enable faster data transfers to and from cloud storage such as Backblaze B2. We’ll be writing about these technologies in a future post.
CloudNAS
A recent entry into the storage space is Morro Data and their CloudNAS solution. Files are stored in the cloud, cached locally on a CloudNAS device as needed, and synced globally among the other CloudNAS systems in a given organization. To the user, all of their files are listed in one catalog, but they could be stored locally or in the cloud. Another advantage is that uploads to the cloud are done behind the scenes as time and network permit. A file stays local until it is safely stored in the B2 Cloud; after that, it may be removed from the CloudNAS device’s cache, depending on how often it is accessed. There are more details on the CloudNAS solution in our A New Twist on Data Backup: CloudNAS blog post. (See below for how to enter our Backblaze/Morro Data giveaway.)
Cataloging and Searching Your Media
A key component of any DAM system is the ability to find files when you need them. You’ll want the ability to catalog all of your digital media, assign keywords and metadata that make sense for the way you work, and have that catalog available and searchable even when the digital files themselves are located on various drives, in the cloud, or even disconnected from your current system.
Adobe’s Lightroom is a popular application for cataloging and managing image workflow. Lightroom can handle an enormous number of files, and has a flexible catalog that can be stored locally and used to search for files that have been archived to different storage devices. Users debate whether one master catalog or multiple catalogs are the best way to work in Lightroom. In any case, it’s critical that you back up your DAM catalogs as diligently as you back up your digital media.
The latest version of Lightroom, Lightroom CC (as distinguished from Lightroom CC Classic), is coupled with Adobe’s Creative Cloud service. In addition to the subscription plan for Lightroom and other Adobe Suite applications, you’ll pay a subscription fee based on how much storage you wish to use in Adobe’s Creative Cloud. You don’t get a choice of other cloud vendors.
Another popular option is Phase One’s Capture One for image editing, paired with Phase One Media Pro SE for cataloging and management. Macphun’s Luminar is available for both Macintosh and Windows. Macphun has announced that it will launch a digital asset manager component for Luminar in 2018 that will compete with Adobe’s offering for a complete digital image workflow.
Any media management system needs to include, or work seamlessly with, the editing and enhancement tools you use for photos or videos. We’ve already talked about some cataloging solutions that include image editing as well. Some of the mainstream photo apps, such as Google Photos and Apple Photos, include rudimentary to mid-level editing tools. It’s up to the more capable applications to deliver the power needed for real photo or video editing, e.g. Adobe Photoshop, Adobe Lightroom, Macphun’s Luminar, and Phase One Capture One for photography, and Adobe Premiere, Apple Final Cut Pro, or Avid Media Composer (among others) for video editing.
Ensuring Future Compatibility for Your Media
Images come out of your camera in a variety of formats. Camera makers have their proprietary raw file formats (CR2 from Canon, NEF from Nikon, for example), while Adobe maintains an openly documented standard for digital images called DNG (Digital Negative) that is used in Lightroom and in products from other vendors as well.
Whichever you choose, be aware that you are betting that the format will still be supported years down the road, when you return to your files and want to open them with whatever your future photo/video editing setup turns out to be. So always think of the future, and favor the format most likely to remain supported in future applications.
There are myriad aspects to a digital asset management system, and as we said at the outset, many choices to make. We hope you’ll take us up on our request to tell us what you’re using to manage your photos and videos and what an ideal system for you would look like. We want to make Backblaze Backup and B2 Cloud Storage more useful to our customers, and your input will help us do that.
In the meantime, why not enter the Backblaze + Morro Data promotion described below? You could win!
ENTER TO WIN A DREAM DIGITAL MEDIA COMBO
Morro Data and Backblaze Team Up to Deliver the Dream Digital Media Backup Solution
Visit Dream Photo Backup to learn about this combination of NAS, software, and the cloud that provides a complete solution for managing, archiving, and accessing your digital media files. You’ll have the opportunity to win Morro Data’s CacheDrive G40 (with 1TB of HDD cache), an annual subscription to CloudNAS Basic Global File Services, and $100 of Backblaze B2 Cloud Storage. The total value of this package is greater than $700. Enter at Dream Photo Backup.
Amazon SES now adds DMARC verdicts to incoming emails, and publishes aggregate DMARC reports to domain owners. These two new features will help combat email spoofing and phishing, making the email ecosystem a safer and more secure place.
What is DMARC?
DMARC stands for Domain-based Message Authentication, Reporting, and Conformance. The DMARC standard was designed to prevent malicious actors from sending messages that appear to be from legitimate senders. Domain owners can tell email receivers how to handle unauthenticated messages that appear to be from their domains. The DMARC standard also specifies certain reports that email senders and receivers send to each other. The cooperative nature of this reporting process helps improve the email authentication infrastructure.
How does Amazon SES Implement DMARC?
When you receive an email message through Amazon SES, the headers of that message will include a DMARC policy verdict alongside the DKIM and SPF verdicts (both of which are already present). This additional information helps you verify the authenticity of all email messages you receive.
Messages you receive through Amazon SES will contain one of the following DMARC verdicts:
PASS – The message passed DMARC authentication.
FAIL – The message failed DMARC authentication.
GRAY – The sending domain does not have a DMARC policy.
PROCESSING_FAILED – An issue occurred that prevented Amazon SES from providing a DMARC verdict.
If the DMARC verdict is FAIL, Amazon SES will also provide information about the sending domain’s DMARC settings. In this situation, you will see one of the following verdicts:
NONE – The owner of the sending domain requests that no specific action be taken on messages that fail DMARC authentication.
QUARANTINE – The owner of the sending domain requests that messages that fail DMARC authentication be treated by receivers as suspicious.
REJECT – The owner of the sending domain requests that messages that fail DMARC authentication be rejected.
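A receipt rule that invokes a Lambda function can act on these verdicts. The sketch below shows the general shape of such a handler; the `dmarcVerdict` and `dmarcPolicy` fields and their location in the event are as documented for SES receipt events, but the disposition logic here is a simplified illustration, not the Developer Guide’s sample function:

```python
def lambda_handler(event, context):
    """Inspect the DMARC verdict on an incoming SES message.

    SES receipt events carry the verdict under Records[0].ses.receipt.
    """
    receipt = event["Records"][0]["ses"]["receipt"]
    verdict = receipt["dmarcVerdict"]["status"]

    if verdict == "FAIL":
        # dmarcPolicy is only present on failures; honor the sender's policy.
        policy = receipt.get("dmarcPolicy", {}).get("status", "none")
        if policy == "reject":
            # Stop processing the message, per the sending domain's request.
            return {"disposition": "STOP_RULE_SET"}
    return {"disposition": "CONTINUE"}
```

A real deployment would typically also bounce or quarantine the message rather than silently stop the rule set.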
In addition to publishing the DMARC verdict on each incoming message, Amazon SES now sends DMARC aggregate reports to domain owners. These reports help domain owners identify systemic authentication failures, and avoid potential domain spoofing attacks.
Note: Domain owners only receive aggregate information about emails that do not pass DMARC authentication. These reports, known as RUA reports, only include information about the IP addresses that send unauthenticated emails to you. These reports do not include information about legitimate email senders.
How do I configure DMARC?
As is the case with SPF and DKIM, domain owners must publish their DMARC policies as DNS records for their domains. For more information about setting up DMARC, see Complying with DMARC Using Amazon SES in the Amazon SES Developer Guide.
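For reference, a DMARC policy is simply a TXT record published at the `_dmarc` subdomain of your domain. A minimal example (the domain and reporting address are placeholders):

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Here `p=` requests the receiver action on failing messages (none, quarantine, or reject, corresponding to the policy verdicts listed earlier), and `rua=` is the address to which aggregate reports are sent.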
DMARC reporting is now available in the following AWS Regions: US West (Oregon), US East (N. Virginia), and EU (Ireland). You can find more information about the dmarcVerdict and dmarcPolicy objects in the Amazon SES Developer Guide. The Developer Guide also includes a sample Lambda function that you can use to bounce incoming emails that fail DMARC authentication.
Fascinating article about two psychologists who are studying interrogation techniques.
Now, two British researchers are quietly revolutionising the study and practice of interrogation. Earlier this year, in a meeting room at the University of Liverpool, I watched a video of the Diola interview alongside Laurence Alison, the university’s chair of forensic psychology, and Emily Alison, a professional counsellor. My permission to view the tape was negotiated with the counter-terrorist police, who are understandably wary of allowing outsiders access to such material. Details of the interview have been changed to protect the identity of the officers involved, though the quotes are verbatim.
The Alisons, husband and wife, have done something no scholars of interrogation have been able to do before. Working in close cooperation with the police, who allowed them access to more than 1,000 hours of tapes, they have observed and analysed hundreds of real-world interviews with terrorists suspected of serious crimes. No researcher in the world has ever laid hands on such a haul of data before. Based on this research, they have constructed the world’s first empirically grounded and comprehensive model of interrogation tactics.
The Alisons’ findings are changing the way law enforcement and security agencies approach the delicate and vital task of gathering human intelligence. “I get very little, if any, pushback from practitioners when I present the Alisons’ work,” said Kleinman, who now teaches interrogation tactics to military and police officers. “Even those who don’t have a clue about the scientific method, it just resonates with them.” The Alisons have done more than strengthen the hand of advocates of non-coercive interviewing: they have provided an unprecedentedly authoritative account of what works and what does not, rooted in a profound understanding of human relations. That they have been able to do so is testament to a joint preoccupation with police interviews that stretches back more than 20 years.
Purism and KDE are working together to adapt Plasma Mobile to Purism’s Librem 5 smartphone. “The shared vision of freedom, openness and personal control for end users has brought KDE and Purism together in a common venture. Both organisations agree that cooperating will help bring a truly free and open source smartphone to the market. KDE and Purism will work together to make this happen.”
Andrew “bunnie” Huang and Edward Snowden have designed a hardware device that attaches to an iPhone and monitors it for malicious surveillance activities, even in instances where the phone’s operating system has been compromised. They call it an Introspection Engine, and their use model is a journalist who is concerned about government surveillance:
Our introspection engine is designed with the following goals in mind:
Completely open source and user-inspectable (“You don’t have to trust us”)
Introspection operations are performed by an execution domain completely separated from the phone’s CPU (“don’t rely on those with impaired judgment to fairly judge their state”)
Proper operation of introspection system can be field-verified (guard against “evil maid” attacks and hardware failures)
Difficult to trigger a false positive (users ignore or disable security alerts when there are too many positives)
Difficult to induce a false negative, even with signed firmware updates (“don’t trust the system vendor” — state-level adversaries with full cooperation of system vendors should not be able to craft signed firmware updates that spoof or bypass the introspection engine)
As much as possible, the introspection system should be passive and difficult to detect by the phone’s operating system (prevent black-listing/targeting of users based on introspection engine signatures)
Simple, intuitive user interface requiring no specialized knowledge to interpret or operate (avoid user error leading to false negatives; “journalists shouldn’t have to be cryptographers to be safe”)
Final solution should be usable on a daily basis, with minimal impact on workflow (avoid forcing field reporters into the choice between their personal security and being an effective journalist)
This looks like fantastic work, and they have a working prototype.
Of course, this does nothing to stop all the legitimate surveillance that happens over a cell phone: location tracking, records of who you talk to, and so on.
Under the law, internet companies would have the same obligations telephone companies do to help law enforcement agencies, Prime Minister Malcolm Turnbull said. Law enforcement agencies would need warrants to access the communications.
“We’ve got a real problem in that the law enforcement agencies are increasingly unable to find out what terrorists and drug traffickers and pedophile rings are up to because of the very high levels of encryption,” Turnbull told reporters.
“Where we can compel it, we will, but we will need the cooperation from the tech companies,” he added.
Never mind that the law 1) would not achieve the desired results because all the smart “terrorists and drug traffickers and pedophile rings” will simply use a third-party encryption app, and 2) would make everyone else in Australia less secure. But that’s all ground I’ve covered before.
Asked whether the laws of mathematics behind encryption would trump any new legislation, Mr Turnbull said: “The laws of Australia prevail in Australia, I can assure you of that.
“The laws of mathematics are very commendable but the only law that applies in Australia is the law of Australia.”
Next Turnbull is going to try to legislate that pi = 3.2.
Last week, the Department of Justice released 18 new FISC opinions related to Section 702 as part of an EFF FOIA lawsuit. (Of course, they don’t mention EFF or the lawsuit. They make it sound as if it was their idea.)
There’s probably a lot in these opinions. In one Kafkaesque ruling, a defendant was denied access to the previous court rulings that were used by the court to decide against it:
…in 2014, the Foreign Intelligence Surveillance Court (FISC) rejected a service provider’s request to obtain other FISC opinions that government attorneys had cited and relied on in court filings seeking to compel the provider’s cooperation.
[…]
The provider’s request came up amid legal briefing by both it and the DOJ concerning its challenge to a 702 order. After the DOJ cited two earlier FISC opinions that were not public at the time — one from 2014 and another from 2008 — the provider asked the court for access to those rulings.
The provider argued that without being able to review the previous FISC rulings, it could not fully understand the court’s earlier decisions, much less effectively respond to DOJ’s argument. The provider also argued that because attorneys with Top Secret security clearances represented it, they could review the rulings without posing a risk to national security.
The court disagreed in several respects. It found that the court’s rules and Section 702 prohibited the documents’ release. It also rejected the provider’s claim that the Constitution’s Due Process Clause entitled it to the documents.
This kind of government secrecy is toxic to democracy. National security is important, but we will not survive if we become a country of secret court orders based on secret interpretations of secret law.
Abstract: Apple’s 2016 fight against a court order commanding it to help the FBI unlock the iPhone of one of the San Bernardino terrorists exemplifies how central the question of regulating government surveillance has become in American politics and law. But scholarly attempts to answer this question have suffered from a serious omission: scholars have ignored how government surveillance is checked by “surveillance intermediaries,” the companies like Apple, Google, and Facebook that dominate digital communications and data storage, and on whose cooperation government surveillance relies. This Article fills this gap in the scholarly literature, providing the first comprehensive analysis of how surveillance intermediaries constrain the surveillance executive. In so doing, it enhances our conceptual understanding of, and thus our ability to improve, the institutional design of government surveillance.
Surveillance intermediaries have the financial and ideological incentives to resist government requests for user data. Their techniques of resistance are: proceduralism and litigiousness that reject voluntary cooperation in favor of minimal compliance and aggressive litigation; technological unilateralism that designs products and services to make surveillance harder; and policy mobilization that rallies legislative and public opinion to limit surveillance. Surveillance intermediaries also enhance the “surveillance separation of powers”; they make the surveillance executive more subject to inter-branch constraints from Congress and the courts, and to intra-branch constraints from foreign-relations and economics agencies as well as the surveillance executive’s own surveillance-limiting components.
The normative implications of this descriptive account are important and cross-cutting. Surveillance intermediaries can both improve and worsen the “surveillance frontier”: the set of tradeoffs between public safety, privacy, and economic growth from which we choose surveillance policy. And while intermediaries enhance surveillance self-government when they mobilize public opinion and strengthen the surveillance separation of powers, they undermine it when their unilateral technological changes prevent the government from exercising its lawful surveillance authorities.
A new storage engine, MyRocks, that gives you high compression of your data without sacrificing speed. It has been developed in cooperation with Facebook and MariaDB to allow you to handle more data with less resources.
flashback, a feature that can roll back instances/databases/tables to an old snapshot. The version in MariaDB 10.2 supports DML only. In MariaDB 10.3 we will also allow rollback over DDL (like DROP TABLE).
NO PAD collations, which means that trailing spaces are significant in comparisons.
InnoDB is now the default storage engine. Until MariaDB 10.1, MariaDB used the XtraDB storage engine by default. XtraDB in 10.2 is not up to date with the latest features of InnoDB and cannot be used. The main reason for this change is that most of the important features of XtraDB are nowadays implemented in InnoDB. As the MariaDB team is doing a lot more InnoDB development than ever before, we can no longer manage updating two almost identical engines. The InnoDB version in MariaDB contains the best features of both MySQL InnoDB and XtraDB, and a lot more. As InnoDB’s on-disk format is identical to XtraDB’s, this will not cause any problems when upgrading to MariaDB 10.2.
There are a lot of other new features, performance enhancements and variables in MariaDB 10.2 for you to explore!
I am happy to see that a lot of the new features have come from the MariaDB community! (Note to self: this list doesn’t include all contributors to MariaDB 10.2 and needs to be updated.)
Thanks a lot to everyone that has contributed to MariaDB!
You’re right that the world is terrible, but this isn’t really a contributing factor to it. There’s a few reasons why. The first is that there’s really not any indication that the CIA and MI5 ever turned this into an actual deployable exploit. The development reports[1] describe a project that still didn’t know what would happen to their exploit over firmware updates and a “fake off” mode that left a lit LED which wouldn’t be there if the TV were actually off, so there’s a potential for failed updates and people noticing that there’s something wrong. It’s certainly possible that development continued and it was turned into a polished and usable exploit, but it really just comes across as a bunch of nerds wanting to show off a neat demo.
But let’s say it did get to the stage of being deployable – there’s still not a great deal to worry about. No remote infection mechanism is described, so they’d need to do it locally. If someone is in a position to reflash your TV without you noticing, they’re also in a position to, uh, just leave an internet connected microphone of their own. So how would they infect you remotely? TVs don’t actually consume a huge amount of untrusted content from arbitrary sources[2], so that’s much harder than it sounds and probably not worth it because:
YOU ARE CARRYING AN INTERNET CONNECTED MICROPHONE THAT CONSUMES VAST QUANTITIES OF UNTRUSTED CONTENT FROM ARBITRARY SOURCES
Seriously your phone is like eleven billion times easier to infect than your TV is and you carry it everywhere. If the CIA want to spy on you, they’ll do it via your phone. If you’re paranoid enough to take the battery out of your phone before certain conversations, don’t have those conversations in front of a TV with a microphone in it. But, uh, it’s actually worse than that.
These days audio hardware usually consists of a very generic codec containing a bunch of digital→analogue converters, some analogue→digital converters and a bunch of io pins that can basically be wired up in arbitrary ways. Hardcoding the roles of these pins makes board layout more annoying and some people want more inputs than outputs and some people vice versa, so it’s not uncommon for it to be possible to reconfigure an input as an output or vice versa. From software.
Anyone who’s ever plugged a microphone into a speaker jack probably knows where I’m going with this. An attacker can “turn off” your TV, reconfigure the internal speaker output as an input and listen to you on your “microphoneless” TV. Have a nice day, and stop telling people that putting glue in their laptop microphone is any use unless you’re telling them to disconnect the internal speakers as well.
If you’re in a situation where you have to worry about an intelligence agency monitoring you, your TV is the least of your concerns – any device with speakers is just as bad. So what about Alexa? The summary here is, again, it’s probably easier and more practical to just break your phone – it’s probably near you whenever you’re using an Echo anyway, and they also get to record you the rest of the time. The Echo platform is very restricted in terms of where it gets data[3], so it’d be incredibly hard to compromise without Amazon’s cooperation. Amazon’s not going to give their cooperation unless someone turns up with a warrant, and then we’re back to you already being screwed enough that you should have got rid of all your electronics way earlier in this process. There are reasons to be worried about always listening devices, but intelligence agencies monitoring you shouldn’t generally be one of them.
tl;dr: The CIA probably isn’t listening to you through your TV, and if they are then you’re almost certainly going to have a bad time anyway.
[1] Which I have obviously not read [2] I look forward to the first person demonstrating code execution through malformed MPEG over terrestrial broadcast TV [3] You’d need a vulnerability in its compressed audio codecs, and you’d need to convince the target to install a skill that played content from your servers