Security updates have been issued by CentOS (emacs), Debian (apache2, gdk-pixbuf, and pyjwt), Fedora (autotrace, converseen, dmtx-utils, drawtiming, emacs, gtatool, imageinfo, ImageMagick, inkscape, jasper, k3d, kxstitch, libwpd, mingw-libzip, perl-Image-SubImageFind, pfstools, php-pecl-imagick, psiconv, q, rawtherapee, ripright, rss-glx, rubygem-rmagick, synfig, synfigstudio, techne, vdr-scraper2vdr, vips, and WindowMaker), Oracle (emacs and kernel), Red Hat (emacs and kernel), Scientific Linux (emacs), SUSE (emacs), and Ubuntu (apache2).
Post Syndicated from Andy Klein original https://www.backblaze.com/blog/media-archive-solutions/
Does this sound familiar?
You spent almost all day Saturday archiving your latest video project onto two 8 TB external hard drives: four months’ worth of work that had to move off your local storage system to make room for the next project. You diligently label each newly minted archive drive with the project name and stack the drives neatly in your closet; there must be 50 drives in there by now. The final step is to add the file list from each drive to the catalog you keep in a shared spreadsheet so your employees and contractors can find content from previous projects. In reality, that kind of search rarely happens, because rummaging through the closet for the correct archive drive is time consuming, and on more than one occasion the drive has failed.
Are you thinking that maybe it is time to upgrade your media archive solution?
Media Archiving Solutions
There is no shortage of media archiving solutions, and you’ve looked at everything from tape drive systems to SAN and NAS systems. Some are expensive, some are complex, and some are both. Here are a few things you’ve decided your media archive solution should do:
- Fit into your company’s workflow
- Make the archive more accessible and usable
- Protect your archive off site
Archiving Your Media Content with Archiware P5 Archive
One proven solution is Archiware P5 Archive, which is part of the P5 software suite that provides data management at every step of your data’s life cycle. The P5 suite works with all kinds of data, but it has become well known for how well it archives and backs up media files, e.g. video and photos.
P5 Archive lets you easily archive data from your primary storage system to less expensive storage such as external disk, tape, and the cloud. Once the data is archived, you can use P5 to quickly locate your data in the archive by searching with keywords or previews. For example:
- Search assets by keyword — Besides the default search parameters, you can add custom metadata to individualize your data storage. You can include categories such as time of day, lens, or film location, and thus locate and re-use your data more quickly and effectively.
- Search assets with previews — P5 Archive has a direct integration with FFmpeg and ImageMagick and can create low resolution previews and proxies for all common video and audio formats.
With these capabilities, P5 can function as a bare bones asset management solution managing video and image archives. When you are ready to move forward with a more robust media asset management (MAM) system, P5 has integrations to leading providers including axle Video, Cat DV, and Cantemo Portal.
P5 Archive also includes the ability to customize the end-user experience so that users can access data (archived or live) based on their user profile.
Your Media Archive can be an Asset
P5 Archive lets you move or migrate data to disk, tape, and the cloud. With Archiware P5 version 5.5, you can back up and archive your media files to Backblaze B2 Cloud Storage. With B2, your archived files are readily available for retrieval via P5. Combined with the P5 preview and keyword search capabilities, this lets you locate and retrieve archived video clips, images, and files in minutes, versus the hours or days it can take with external disk or tape.
Getting Rid of Your Closet Full of Hard Drives
Even if you move forward with P5, you still have your current closet full of archived data. To help with that, Backblaze provides the B2 Fireball data transfer service, which allows you to transfer up to 40 TB of data per trip from your location to your B2 Cloud Storage account. In this case, you’ll have to transfer the data from each external drive in your closet to a server or NAS device. Once there, the collected data is transferred to the Fireball and the Fireball is returned to Backblaze, where the data is extracted and placed in your B2 account.
As noted, each Fireball holds up to 40 terabytes of data — the equivalent of ten 4 TB external drives — so it will take three round trips to transfer the 100 TB of archived data in your closet. Of course, you can choose to transfer only some of the data; how much depends on how badly you want it protected off site and how valuable ready access to it on B2 from anywhere would be to you.
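The trip count is just ceiling arithmetic on the numbers above (100 TB of archives, 40 TB per Fireball):

```python
import math

FIREBALL_CAPACITY_TB = 40   # data moved per B2 Fireball round trip
ARCHIVE_TOTAL_TB = 100      # estimated total archived in the closet

# Each round trip moves at most one Fireball's worth of data, so the
# number of trips is the ceiling of total size over capacity.
trips = math.ceil(ARCHIVE_TOTAL_TB / FIREBALL_CAPACITY_TB)
print(trips)  # 3
```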
If your media archive is a pile of hard drives or an aging tape library, the combination of Archiware P5 and Backblaze B2 provides a practical, affordable way to move your media archive library to the cloud. This will let you improve access to your archived data, reduce the management of your local storage system, protect your valuable assets offsite, and best of all you’ll be able to use your closet to store old computer monitors and pristine user manuals like everyone else.
Security updates have been issued by Debian (connman, faad2, gnupg, imagemagick, libdbd-mysql-perl, mercurial, and php5), openSUSE (postgresql93 and samba and resource-agents), Oracle (poppler), Scientific Linux (poppler), SUSE (firefox and php7), and Ubuntu (pyjwt).
Security updates have been issued by Debian (augeas, connman, fontforge, freeradius, git, mariadb-10.1, openjdk-7, php5, qemu, qemu-kvm, and tenshi), Fedora (augeas, libsndfile, thunderbird, and xen), Gentoo (AutoTrace and jbig2dec), Mageia (dbus, flash-player-plugin, groovy, groovy18, heimdal, kernel-linus, kmail(kdepimlibs4), libice, libmodplug, miniupnpc, and postgresql9.3/4/6), openSUSE (freeradius-server, gnome-shell, ImageMagick, and openvswitch), and SUSE (java-1_8_0-ibm, libzypp, and postgresql94).
Security updates have been issued by CentOS (git), Debian (firefox-esr and mariadb-10.0), Gentoo (bind and tnef), Mageia (kauth, kdelibs4, poppler, subversion, and vim), openSUSE (fossil, git, libheimdal, libxml2, minicom, nodejs4, nodejs6, openjpeg2, openldap2, potrace, subversion, and taglib), Oracle (git and kernel), Red Hat (git, groovy, httpd24-httpd, and mercurial), Scientific Linux (git), and SUSE (freeradius-server, ImageMagick, and subversion).
Post Syndicated from Christie Gifrin original https://aws.amazon.com/blogs/compute/messaging-fanout-pattern-for-serverless-architectures-using-amazon-sns/
Sam Dengler, Amazon Web Services Solutions Architect
Serverless architectures allow solution builders to focus on solving challenges particular to their business, without assuming the overhead of managing infrastructure in AWS. AWS Lambda is a service that lets you run code without provisioning or managing servers.
When using Lambda in a serverless architecture, the goal should be to design tightly focused functions that do one thing and do it well. When these functions are composed to accomplish larger goals in microservice architectures, the complexity shifts from the internal components to the external communication between components. It’s all too easy to accidentally back into an architecture that is rigid to change because components are too knowledgeable of each other via the communication paths between them.
Solution builders can address this architectural challenge by using messaging patterns, resulting in loosely coupled communication between highly cohesive components to manage complexity in serverless architectures. As introduced in the recent Building Scalable Applications and Microservices: Adding Messaging to Your Toolbox post, a common approach when one component wishes to deliver the same message to multiple receivers is to use the fanout publish/subscribe messaging pattern.
The fanout pattern for message communication can be implemented in code. However, depending on your requirements, alternative solutions exist to offload this undifferentiated responsibility from the application. Amazon SNS is a fully managed pub/sub messaging service that lets you fan out messages to large numbers of recipients.
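Stripped of the AWS specifics, fanout is just one publisher handing the same message to every registered subscriber. A minimal sketch of the pattern (the `Topic` class and subscriber names here are illustrative, not part of any SDK):

```python
class Topic:
    """Toy pub/sub topic: fans one message out to all subscribers."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        # Every subscriber receives the same message independently;
        # the publisher knows nothing about who is listening.
        return [callback(message) for callback in self.subscribers]

topic = Topic()
topic.subscribe(lambda msg: 'media_info saw %s' % msg)
topic.subscribe(lambda msg: 'transcode_audio saw %s' % msg)
print(topic.publish('s3-event'))
# ['media_info saw s3-event', 'transcode_audio saw s3-event']
```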
In this post, I review a serverless architecture from PlayOn! Sports as a case study for migration of fanout functionality from application code to SNS.
PlayOn! Sports serverless video processing platform
PlayOn! Sports is one of the nation’s leading high school sports media companies. They operate a comprehensive technology platform, enabling high-quality, low-cost productions of live sports events for the NFHS High School Sports Network.
At the 2014 AWS re:Invent conference, Lambda was announced. The PlayOn! Sports technology team recognized the parallels between the serverless demos, which featured image processing with ImageMagick, and their own video processing with ffmpeg.
At the time, PlayOn! Sports was broadcasting live video with adaptive bit rates, which requires transcoding the video stream to multiple quality levels for consumption on desktop, mobile, and connected devices. This is not unusual for an internet media company. However, with over 50,000 live broadcasts produced in 2014, the traditional media and entertainment technological approaches and pricing models would not work.
After some consultation with the Lambda team to validate support for custom binary execution, PlayOn! Sports moved forward with the development of a new serverless video processing platform according to the architecture diagram below.
In the architecture, a laptop in the field captures the video from a camera source and divides it into small fragments, according to the HLS protocol. The fragments are published to Amazon S3, through an Amazon CloudFront distribution for accelerated upload. When the file has been written to S3, it triggers a Lambda function to initiate the video segment processing.
Video transcoding fanout implementation in Lambda
Given Lambda’s integration growth across AWS, it’s easy to forget that it did not include managed integration with SNS when it was announced in November 2014.
PlayOn! Sports was actively experimenting with approaches to video processing to address quality control, audience growth, and cost constraints. Lambda was a great tool for rapid innovation. A goal of the architecture design was the ability to add and remove video processing alternatives to the workflow, using the fanout pattern to identify optimal solutions. Below is example code from the initial implementation:
import json
import logging

import boto3

logger = logging.getLogger('boto3')
logger.setLevel(logging.INFO)

client = boto3.client('lambda')
fanout_functions = ['media_info', 'transcode_audio']

def lambda_handler(event, context):
    logger.info(json.dumps(event))
    logger.info('fanout_functions: %s', fanout_functions)
    for fanout_function in fanout_functions:
        logger.info('invoke: %s', fanout_function)
        response = client.invoke(
            FunctionName=fanout_function,
            InvocationType='Event',
            Payload=json.dumps(event)
        )
        logger.info('response: %s', response)
    return 'done'
Each Lambda function is invoked asynchronously, injecting the same S3 event that triggered the original Lambda function. For example, the media_info Lambda function could be scaffolded similar to the following code snippet:
import json
import logging

import boto3

logger = logging.getLogger('boto3')
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info(json.dumps(event))

    # MediaInfo Processing
    # see: https://aws.amazon.com/blogs/compute/extracting-video-metadata-using-lambda-and-mediainfo/

    return 'done'
Refactoring fanout implementation using SNS
The PlayOn! Sports development team was familiar with SNS, but had not used it previously to support system-to-system messaging patterns. After the announcement of SNS triggering of Lambda functions, the PlayOn! Sports team planned to migrate to the new feature to offload the overhead of managing the fanout Lambda function.
When invoking a Lambda function, SNS wraps the original event with SNSEvent. The Lambda function can be refactored by adding a function to parse the S3 event from SNSEvent, as seen in the following code:
import json
import logging

import boto3

logger = logging.getLogger('boto3')
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    s3_event = parse_event(event)
    logger.info(json.dumps(s3_event))

    # MediaInfo Processing
    # see: https://aws.amazon.com/blogs/compute/extracting-video-metadata-using-lambda-and-mediainfo/

    return 'done'

def parse_event(event):
    # SNS wraps the original S3 event in a single-record envelope;
    # unwrap the message body when the event came from SNS.
    record = event['Records'][0]
    if 'EventSource' in record and record['EventSource'] == 'aws:sns':
        return json.loads(record['Sns']['Message'])
    return event
This Lambda function modification can be authored, tested, and deployed before enabling the SNS integration to verify that the existing Lambda fanout execution path continues to operate as before. The Lambda function invocation can now be transferred from the fanout Lambda function to SNS without disruption to S3 processing.
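That verification can even be done locally: feed the handler’s parsing logic both a bare S3 event and an SNS-wrapped one and confirm the same payload comes out. A self-contained sketch (parse_event is restated here, and the event bodies are simplified stand-ins, so the snippet runs on its own):

```python
import json

def parse_event(event):
    # SNS delivers a one-record envelope; unwrap the original
    # S3 event from the message body when present.
    record = event['Records'][0]
    if record.get('EventSource') == 'aws:sns':
        return json.loads(record['Sns']['Message'])
    return event

s3_event = {'Records': [{'eventSource': 'aws:s3',
                         's3': {'object': {'key': 'uploads/video.ts'}}}]}
sns_event = {'Records': [{'EventSource': 'aws:sns',
                          'Sns': {'Message': json.dumps(s3_event)}}]}

# Both execution paths yield the same S3 event.
print(parse_event(s3_event) == s3_event)   # True
print(parse_event(sns_event) == s3_event)  # True
```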
As the diagram below shows, the resulting architecture is similar to the original. The exception is that objects written to S3 now trigger a message to be published to an SNS topic. This sends the S3 event to multiple Lambda functions to be processed independently.
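In boto3 terms, the S3-to-SNS wiring amounts to a bucket notification configuration that routes object-created events to the topic. A sketch under assumed placeholder names (the bucket name and topic ARN below are invented; the sample CloudFormation stack in the next section provisions the real thing):

```python
import json

# Placeholder ARN; substitute your own topic.
TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:s3-fanout-topic'

notification_configuration = {
    'TopicConfigurations': [
        {
            'TopicArn': TOPIC_ARN,
            # Fire on any object-created event in the bucket.
            'Events': ['s3:ObjectCreated:*'],
        }
    ]
}

print(json.dumps(notification_configuration, indent=2))

# Applied with (requires a topic policy that allows S3 to publish):
#   boto3.client('s3').put_bucket_notification_configuration(
#       Bucket='my-upload-bucket',
#       NotificationConfiguration=notification_configuration)
```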
Sample architecture deployment using AWS CloudFormation
AWS CloudFormation gives developers and system administrators an easy way to create and manage a collection of related AWS resources. CloudFormation provisions and updates resources in an orderly and predictable fashion. To launch the CloudFormation stack for the sample fanout architecture, choose the following button:
Follow these steps to complete the architecture deployment:
- If necessary, sign into the console for your account when prompted.
- On the Select Template page, choose Next.
- Under Parameters, for S3BucketName, enter a globally unique name.
- For SnsTopicName, enter a region-unique name.
- Choose Next, Next.
- Select the checkbox for I acknowledge that AWS CloudFormation might create IAM resources, and choose Create.
After the stack has completed creation, you can test both paths of execution by uploading files to the S3 bucket that you created. Uploading a file to the “/uploads/lambda/” directory in S3 triggers the Lambda fanout function. Uploading a file to the “/uploads/sns/” directory in S3 triggers the SNS fanout execution path. You can verify execution by monitoring the Lambda function outputs in CloudWatch Logs.
In this post, I reviewed the fanout messaging pattern and options for its inclusion in a serverless architecture using Lambda application code and SNS. Using the PlayOn! Sports serverless video processing pipeline use case, I demonstrated how easy it is to refactor an existing application to use the SNS fanout approach.
I also provided a sample architecture in CloudFormation that you can run in your own account. Try it out and expand the sample architecture by adding other Lambda functions to the SNS topic, to demonstrate the flexibility of the fanout messaging pattern!
You can get started with SNS using the AWS Management Console, or the SDK of your choice. For more information about how to use SNS fanout messaging, see the following resources:
- Amazon SNS Developer Guide
- AWS Compute Blog: Fanout S3 Event Notifications to Multiple Endpoints
If you have questions or suggestions, please comment below.
Post Syndicated from Eevee original https://eev.ee/blog/2017/08/09/growing-up-alongside-tech/
IndustrialRobot asks… or, uh, asked last month:
industrialrobot: How has your views on tech changed as you’ve got older?
This is so open-ended that it’s actually stumped me for a solid month. I’ve had a surprisingly hard time figuring out where to even start.
It’s not that my views of tech have changed too much — it’s that they’ve changed very gradually. Teasing out and explaining any one particular change is tricky when it happened invisibly over the course of 10+ years.
I think a better framework for this is to consider how my relationship to tech has changed. It’s gone through three pretty distinct phases, each of which has strongly colored how I feel and talk about technology.
In which I start from nothing.
Nothing is an interesting starting point. You only really get to start there once.
Learning something on my own as a kid was something of a magical experience, in a way that I don’t think I could replicate as an adult. I liked computers; I liked toying with computers; so I did that.
I don’t know how universal this is, but when I was a kid, I couldn’t even conceive of how incredible things were made. Buildings? Cars? Paintings? Operating systems? Where does any of that come from? Obviously someone made them, but it’s not the sort of philosophical point I lingered on when I was 10, so in the back of my head they basically just appeared fully-formed from the æther.
That meant that when I started trying out programming, I had no aspirations. I couldn’t imagine how far I would go, because all the examples of how far I would go were completely disconnected from any idea of human achievement. I started out with BASIC on a toy computer; how could I possibly envision a connection between that and something like a mainstream video game? Every new thing felt like a new form of magic, so I couldn’t conceive that I was even in the same ballpark as whatever process produced real software. (Even seeing the source code for GORILLAS.BAS, it didn’t quite click. I didn’t think to try reading any of it until years after I’d first encountered the game.)
This isn’t to say I didn’t have goals. I invented goals constantly, as I’ve always done; as soon as I learned about a new thing, I’d imagine some ways to use it, then try to build them. I produced a lot of little weird goofy toys, some of which entertained my tiny friend group for a couple days, some of which never saw the light of day. But none of it felt like steps along the way to some mountain peak of mastery, because I didn’t realize the mountain peak was even a place that could be gone to. It was pure, unadulterated (!) playing.
I contrast this to my art career, which started only a couple years ago. I was already in my late 20s, so I’d already spent decades seeing a very broad spectrum of art: everything from quick sketches up to painted masterpieces. And I’d seen the people who create that art, sometimes seen them create it in real-time. I’m even in a relationship with one of them! And of course I’d already had the experience of advancing through tech stuff and discovering first-hand that even the most amazing software is still just code someone wrote.
So from the very beginning, from the moment I touched pencil to paper, I knew the possibilities. I knew that the goddamn Sistine Chapel was something I could learn to do, if I were willing to put enough time in — and I knew that I’m not, so I’d have to settle somewhere a ways before that. I knew that I’d have to put an awful lot of work in before I’d be producing anything very impressive.
I did it anyway (though perhaps waited longer than necessary to start), but those aren’t things I can un-know, and so I can never truly explore art from a place of pure ignorance. On the other hand, I’ve probably learned to draw much more quickly and efficiently than if I’d done it as a kid, precisely because I know those things. Now I can decide I want to do something far beyond my current abilities, then go figure out how to do it. When I was just playing, that kind of ambition was impossible.
So, I played.
How did this affect my views on tech? Well, I didn’t… have any. Learning by playing tends to teach you things in an outward sprawl without many abrupt jumps to new areas, so you don’t tend to run up against conflicting information. The whole point of opinions is that they’re your own resolution to a conflict; without conflict, I can’t meaningfully say I had any opinions. I just accepted whatever I encountered at face value, because I didn’t even know enough to suspect there could be alternatives yet.
That started to seriously change around, I suppose, the end of high school and beginning of college. I was becoming aware of this whole “open source” concept. I took classes that used languages I wouldn’t otherwise have given a second thought. (One of them was Python!) I started to contribute to other people’s projects. Eventually I even got a job, where I had to work with other people. It probably also helped that I’d had to maintain my own old code a few times.
Now I was faced with conflicting subjective ideas, and I had to form opinions about them! And so I did. With gusto. Over time, I developed an idea of what was Right based on experience I’d accrued. And then I set out to always do things Right.
That’s served me decently well with some individual problems, but it also led me to inflict a lot of unnecessary pain on myself. Several endeavors languished for no other reason than my dissatisfaction with the architecture, long before the basic functionality was done. I started a number of “pure” projects around this time, generic tools like imaging libraries that I had no direct need for. I built them for the sake of them, I guess because I felt like I was improving some niche… but of course I never finished any. It was always in areas I didn’t know that well in the first place, which is a fine way to learn if you have a specific concrete goal in mind — but it turns out that building a generic library for editing images means you have to know everything about images. Perhaps that ambition went a little haywire.
I’ve said before that this sort of (self-inflicted!) work was unfulfilling, in part because the best outcome would be that a few distant programmers’ lives are slightly easier. I do still think that, but I think there’s a deeper point here too.
In forgetting how to play, I’d stopped putting any of myself in most of the work I was doing. Yes, building an imaging library is kind of a slog that someone has to do, but… I assume the people who work on software like PIL and ImageMagick are actually interested in it. The few domains I tried to enter and revolutionize weren’t passions of mine; I just happened to walk through the neighborhood one day and decided I could obviously do it better.
Not coincidentally, this was the same era of my life that led me to write stuff like that PHP post, which you may notice I am conspicuously not even linking to. I don’t think I would write anything like it nowadays. I could see myself approaching the same subject, but purely from the point of view of language design, with more contrasts and tradeoffs and less going for volume. I certainly wouldn’t lead off with inflammatory puffery like “PHP is a community of amateurs”.
I think I’ve mellowed out a good bit in the last few years.
It turns out that being Right is much less important than being Not Wrong — i.e., rather than trying to make something perfect that can be adapted to any future case, just avoid as many pitfalls as possible. Code that does something useful has much more practical value than unfinished code with some pristine architecture.
Nowhere is this more apparent than in game development, where all code is doomed to be crap and the best you can hope for is to stem the tide. But there’s also a fixed goal that’s completely unrelated to how the code looks: does the game work, and is it fun to play? Yes? Ship the damn thing and forget about it.
Games are also nice because it’s very easy to pour my own feelings into them and evoke feelings in the people who play them. They’re mine, something with my fingerprints on them — even the games I’ve built with glip have plenty of my own hallmarks, little touches I added on a whim or attention to specific details that I care about.
Maybe a better example is the Doom map parser I started writing. It sounds like a “pure” problem again, except that I actually know an awful lot about the subject already! I also cleverly (accidentally) released some useful results of the work I’ve done thus far — like statistics about Doom II maps and a few screenshots of flipped stock maps — even though I don’t think the parser itself is far enough along to release yet. The tool has served a purpose, one with my fingerprints on it, even without being released publicly. That keeps it fresh in my mind as something interesting I’d like to keep working on, eventually. (When I run into an architecture question, I step back for a while, or I do other work in the hopes that the solution will reveal itself.)
I also made two simple Pokémon ROM hacks this year, despite knowing nothing about Game Boy internals or assembly when I started. I just decided I wanted to do an open-ended thing beyond my reach, and I went to do it, not worrying about cleanliness and willing to accept a bumpy ride to get there. I played, but in a more experienced way, invoking the stuff I know (and the people I’ve met!) to help me get a running start in completely unfamiliar territory.
This feels like a really fine distinction that I’m not sure I’m doing justice. I don’t know if I could’ve appreciated it three or four years ago. But I missed making toys, and I’m glad I’m doing it again.
In short, I forgot how to have fun with programming for a little while, and I’ve finally started to figure it out again. And that’s far more important than whether you use PHP or not.
Security updates have been issued by Debian (freerdp and ghostscript), Fedora (freerdp, jackson-databind, moodle, remmina, and runc), Red Hat (authconfig, devtoolset-4-jackson-databind, gnutls, libreoffice, NetworkManager and libnl3, pki-core, rh-eclipse46-jackson-databind, samba, and tcpdump), and Ubuntu (apache2, bash, imagemagick, openjdk-8, and rabbitmq-server).
Security updates have been issued by Debian (catdoc, gsoap, and libtasn1-3), Fedora (GraphicsMagick, java-1.8.0-openjdk, krb5, librsvg2, nodejs, phpldapadmin, rubygem-rack-cors, and yara), Mageia (irssi), openSUSE (rubygem-puppet), Red Hat (kernel), Slackware (tcpdump), and Ubuntu (imagemagick, linux, linux-raspi2, linux-snapdragon, linux-lts-xenial, mysql-5.5, samba, and xorg-server, xorg-server-hwe-16.04, xorg-server-lts-xenial).
Security updates have been issued by Arch Linux (c-ares, freeradius, gvim, lib32-libtiff, libtiff, pcre, rkhunter, and vim), Debian (apache2, evince, imagemagick, unattended-upgrades, and vim), Fedora (openldap, php, and poppler), Oracle (freeradius), SUSE (evince and systemd, dracut), and Ubuntu (apport, icu, and libtasn1-3).
Security updates have been issued by Arch Linux (kernel, linux-zen, and tcpreplay), Debian (drupal7, exim4, expat, imagemagick, and smb4k), Fedora (chromium, firefox, glibc, kernel, openvpn, and wireshark), Mageia (mercurial and roundcubemail), openSUSE (kernel, libmicrohttpd, libqt5-qtbase, libqt5-qtdeclarative, openvpn, and python-tablib), Scientific Linux (sudo), and SUSE (firefox).
Post Syndicated from Eevee original https://eev.ee/blog/2017/06/17/digital-painter-rundown/
You should totally write about drawing/image manipulation programs! (Inspired by https://eev.ee/blog/2015/05/31/text-editor-rundown/)
This is a little trickier than a text editor comparison — while most text editors are cross-platform, quite a few digital art programs are not. So I’m effectively unable to even try a decent chunk of the offerings. I’m also still a relatively new artist, and image editors are much harder to briefly compare than text editors…
Right, now that your expectations have been suitably lowered:
I do all of my digital art in Krita. It’s pretty alright.
Okay so Krita grew out of Calligra, which used to be KOffice, which was an office suite designed for KDE (a Linux desktop environment). I bring this up because KDE has a certain… reputation. With KDE, there are at least three completely different ways to do anything, each of those ways has ludicrous amounts of customization and settings, and somehow it still can’t do what you want.
Krita inherits this aesthetic by attempting to do literally everything. It has 17 different brush engines, more than 70 layer blending modes, seven color picker dockers, and an ungodly number of colorspaces. It’s clearly intended primarily for drawing, but it also supports animation and vector layers and a pretty decent spread of raster editing tools. I just right now discovered that it has Photoshop-like “layer styles” (e.g. drop shadow), after a year and a half of using it.
In fairness, Krita handles all of this stuff well enough, and (apparently!) it manages to stay out of your way when you’re not using those features. In less fairness, they managed to break erasing with a Wacom tablet pen for three months?
I don’t want to rag on it too hard; it’s an impressive piece of work, and I enjoy using it! The emotion it evokes isn’t so much frustration as… mystified bewilderment.
I once filed a ticket suggesting the addition of a brush size palette — a panel showing a grid of fixed brush sizes that makes it easy to switch between known sizes with a tablet pen (and increases the chances that you’ll be able to get a brush back to the right size again). It’s a prominent feature of Paint Tool SAI and Clip Studio Paint, and while I’ve never used either of those myself, I’ve seen a good few artists swear by it.
The developer response was that I could emulate the behavior by creating brush presets. But that’s flat-out wrong: getting the same effect would require creating a ton of brush presets for every brush I have, plus giving them all distinct icons so the size is obvious at a glance. Even then, it would be much more tedious to use and fill my presets with junk.
And that sort of response is what’s so mysterious to me. I’ve never even been able to use this feature myself, but a year of amateur painting with Krita has convinced me that it would be pretty useful. But a developer didn’t see the use and suggested an incredibly tedious alternative that only half-solves the problem and creates new ones. Meanwhile, of the 28 existing dockable panels, a quarter of them are different ways to choose colors.
What is Krita trying to be, then? What does Krita think it is? Who precisely is the target audience? I have no idea.
Anyway, I enjoy drawing in Krita well enough. It ships with a respectable set of brushes, and there are plenty more floating around. It has canvas rotation, canvas mirroring, perspective guide tools, and other art goodies. It doesn’t colordrop on right click by default, which is arguably a grave sin (it shows a customizable radial menu instead), but that’s easy to rebind. It understands having a background color beneath a bottom transparent layer, which is very nice. You can also toggle any brush between painting and erasing with the press of a button, and that turns out to be very useful.
It doesn’t support infinite canvases, though it does offer a one-click button to extend the canvas in a given direction. I’ve never used it (and didn’t even know what it did until just now), but would totally use an infinite canvas.
I haven’t used the animation support too much, but it’s pretty nice to have. Granted, the only other animation software I’ve used is Aseprite, so I don’t have many points of reference here. It’s a relatively new addition, too, so I assume it’ll improve over time.
The one annoyance I remember with animation was really an interaction with a larger annoyance, which is: working with selections kind of sucks. You can’t drag a selection around with the selection tool; you have to switch to the move tool. That would be fine if you could at least drag the selection ring around with the selection tool, but you can’t do that either; dragging just creates a new selection.
If you want to copy a selection, you have to explicitly copy it to the clipboard and paste it, which creates a new layer. Ctrl-drag with the move tool doesn’t work. So then you have to merge that layer down, which I think is where the problem with animation comes in: a new layer is non-animated by default, meaning it effectively appears in every frame, so simply merging it down will merge it onto every single frame of the layer below. And you won’t even notice until you switch frames or play back the animation. Not ideal.
This is another thing that makes me wonder about Krita’s sense of identity. It has a lot of fancy general-purpose raster editing features that even GIMP is still struggling to implement, like high color depth support and non-destructive filters, yet something as basic as working with selections is clumsy. (In fairness, GIMP is a bit clumsy here too, but it has a consistent notion of “floating selection” that’s easy enough to work with.)
I don’t know how well Krita would work as a general-purpose raster editor; I’ve never tried to use it that way. I can’t think of anything obvious that’s missing. The only real gotcha is that some things you might expect to be tools, like smudge or clone, are just types of brush in Krita.
Ah, GIMP — open source’s answer to Photoshop.
It’s very obviously intended for raster editing, and I’m pretty familiar with it after half a lifetime of only using Linux. I even wrote a little Scheme script for it ages ago to automate some simple edits to a couple hundred files, back before I was aware of ImageMagick. I don’t know what to say about it, specifically; it’s fairly powerful and does a wide variety of things.
In fact I’d say it’s almost frustratingly intended for raster editing. I used GIMP in my first attempts at digital painting, before I’d heard of Krita. It was okay, but so much of it felt clunky and awkward. Painting is split between a pencil tool, a paintbrush tool, and an airbrush tool; I don’t really know why. The default brushes are largely uninteresting. Instead of brush presets, there are tool presets that can be saved for any tool; it’s a neat idea, but doesn’t feel like a real substitute for brush presets.
Much of the same functionality as Krita is there, but it’s all somehow more clunky. I’m sure it’s possible to fiddle with the interface to get something friendlier for painting, but I never really figured out how.
And then there’s the surprising stuff that’s missing. There’s no canvas rotation, for example. There’s only one type of brush, and it just stamps the same pattern along a path. I don’t think it’s possible to smear or blend or pick up color while painting. The only way to change the brush size is via the very sensitive slider on the tool options panel, which I remember being a little annoying with a tablet pen. Also, you have to specifically enable tablet support? It’s not difficult or anything, but I have no idea why the default is to ignore tablet pressure and treat it like a regular mouse cursor.
As I mentioned above, there’s also no support for high color depth or non-destructive editing, which is honestly a little embarrassing. Those are the major things Serious Professionals™ have been asking for, for ages, and GIMP has been trying to provide them, but it’s taking a very long time. The first signs of GEGL, a new library intended to provide these features, appeared in GIMP 2.6… in 2008. The last major release was in 2012. GIMP has been working on this new plumbing for almost as long as Krita’s entire development history. (To be fair, Krita has also raised almost €90,000 from three Kickstarters to fund its development; I don’t know that GIMP is funded at all.)
I don’t know what’s up with GIMP nowadays. It’s still under active development, but the exact status and roadmap are a little unclear. I still use it for some general-purpose editing, but I don’t see any reason to use it to draw.
I do know that canvas rotation will be in the next release, and there was some experimentation with embedding MyPaint’s brush engine (though when I tried it it was basically unusable), so maybe GIMP is interested in wooing artists? I guess we’ll see.
Ah, MyPaint. I gave it a try once. Once.
It’s a shame, really. It sounds pretty great: specifically built for drawing, has very powerful brushes, supports an infinite canvas, supports canvas rotation, has a simple UI that gets out of your way. Perfect.
Or so it seems. But in MyPaint’s eagerness to shed unnecessary raster editing tools, it forgot a few of the more useful ones. Like selections.
MyPaint has no notion of a selection, nor of copy/paste. If you want to move a head to align better to a body, for example, the sanctioned approach is to duplicate the layer, erase the head from the old layer, erase everything but the head from the new layer, then move the new layer.
I can’t find anything that resembles HSL adjustment, either. I guess the workaround for that is to create H/S/L layers and floodfill them with different colors until you get what you want.
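For what it’s worth, the per-pixel math behind an HSL adjustment is pretty simple; here’s a sketch using Python’s stdlib colorsys module (adjust_hsl is a hypothetical helper for illustration, not anything MyPaint or any of these programs actually provides):

```python
import colorsys

def adjust_hsl(rgb, dh=0.0, ds=0.0, dl=0.0):
    """Shift hue/saturation/lightness of one RGB pixel (0-255 channels).

    dh is a fraction of a full hue rotation; ds and dl are shifts
    clamped into [0, 1].  An HSL dialog would apply this to every
    pixel in the layer.
    """
    r, g, b = (c / 255.0 for c in rgb)
    # colorsys uses HLS ordering, not HSL
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    h = (h + dh) % 1.0
    s = min(1.0, max(0.0, s + ds))
    l = min(1.0, max(0.0, l + dl))
    r, g, b = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(c * 255) for c in (r, g, b))
```

Rotating pure red a third of the way around the hue wheel gives pure green, for example — the kind of global recoloring that’s a two-second dialog in Krita or GIMP.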
I can’t work seriously without these basic editing tools. I could see myself doodling in MyPaint, but Krita works just as well for doodling as for serious painting, so I’ve never gone back to it.
Drawpile is the modern equivalent to OpenCanvas, I suppose? It lets multiple people draw on the same canvas simultaneously. (I would not recommend it as a general-purpose raster editor.)
It’s a little clunky in places — I sometimes have bugs where keyboard focus gets stuck in the chat, or my tablet cursor becomes invisible — but the collaborative part works surprisingly well. It’s not a brush powerhouse or anything, and I don’t think it allows textured brushes, but it supports tablet pressure and canvas rotation and locked alpha and selections and whatnot.
I’ve used it a couple times, and it’s worked well enough that… well, other people made pretty decent drawings with it? I’m not sure I’ve managed yet. And I wouldn’t use it single-player. Still, it’s fun.
Aseprite is for pixel art, so it doesn’t really belong here at all. But it’s very good at that, and I like it a lot.
I can’t name any other serious contender that exists for Linux.
I’m dimly aware of a thing called “Photo Shop” that’s more intended for photos but functions as a passable painter. More artists seem to swear by Paint Tool SAI and Clip Studio Paint. Also there’s Paint.NET, but I have no idea how well it’s actually suited for painting.
And that’s it! That’s all I’ve got. Krita for drawing, GIMP for editing, Drawpile for collaborative doodling.