After the last post comparing Darktable and Lightroom in terms of noise reduction I went back and did more testing. I'm not sure why I never tried setting the HSV lightness mask to the wavelets mode before, but I gave it a try and managed what I consider better results for this shot. Look for yourself below.
Wavelets vs. Non-local Means
Settings for wavelets on left, NLM on right
Just for comparison here is the Lightroom 5 output.
Adjusting the transparency of the HSV lightness mask down a bit helped with the ugly pattern and sharpened things up too.
This just goes to show that it pays to try new things. Having this many options available is a bit of a double-edged sword. If one chooses to really get in there and tweak things, Darktable can produce exemplary images. However, this comes with a steeper learning curve and a hit in editing time. Lightroom and ACR in general do a lot behind the scenes, whereas I'm finding Darktable to be more of a choose-your-own-adventure deal. There's no such thing as a free lunch.
Quite a few of my shots are done at ISO 1600 and above, so noise reduction in post processing is something I pay attention to. I've been using Darktable over the last year and just wanted to do a quick comparison to Lightroom 5's NR features. No, I haven't bought Lightroom 6 yet; I barely use it as is, so sinking more money into it seems like a poor decision.
First, comparing a couple of well lit shots at ISO 1000 from the X100s.
Image with no noise reduction
Darktable profiled noise reduction, settings in image
Lightroom 5 noise reduction, settings in image
There isn't a lot of noise at ISO 1000, so either program does an OK job. Darktable seems to preserve more detail but at the cost of some muddy colors and artifacts in places. Lightroom's output is smoother. I'm not sure whether this is the noise reduction or just differences in RAW processing and camera profiles; the screen captures exaggerate it some as well.
Next, some rather dark photos from the Dark Sky Observatory, X100s at ISO 3200. Darktable's profiled noise reduction takes a fair bit more tweaking to get satisfactory results. I've taken the approach advised by Robert Hutton and done two passes of the module. The first is set to wavelets mode, blended uniformly in HSV color to remove the color noise, which Darktable handles quite well. Where it runs into problems (as with the other images) is luminance noise.
ISO 3200 no noise reduction
Darktable color only noise reduction
Darktable color and luminance noise reduction
Lightroom noise reduction
After comparing the darker shots you can see where Lightroom pulls out in front with luminance smoothing. More detail is lost but the image is generally better IMO. Darktable's profiled denoise on an HSV lightness layer produces very blocky results; it kind of reminds me of Noise Ninja's output from 2007 or so. It's especially noticeable inside the dome, where there's a very patchy pattern in the red light. If you crank up the strength much more the pattern becomes very pronounced. Darktable does do a great job on the color noise removal, however. All in all, the results aren't terrible.
Now, in all fairness, Lightroom has been around about twice as long as Darktable and costs a whole lot more. The fact that you can get good results out of a free open source program is pretty incredible. Lightroom was pretty terrible at high ISO NR until version 3 too. Likewise I suspect future versions of Darktable will improve tremendously. But if you're doing a whole lot of shooting at higher ISOs a standalone noise reduction plugin might be a better bet. On OS X it's pretty easy to export the RAW to something like Dfine and bring it back into Darktable if you're that picky.
Apparently there was a special on TV about music being heard by the Apollo 10 astronauts while coming around the far side of the moon. Conspiracy theorists had a field day with this, as have the usual array of Facebook pages that poorly understand science and engineering. As is the case with most things, the explanation for the noise is a bit more mundane than aliens.
For the uninitiated, NASA has posted an MP3 of the recording. There is a lot of noise from equipment, and probably some of the whine is due to power inverters, but at around the 2:40 mark you can faintly hear some noise. Personally I wouldn't call it music, but whatever. You can hear the lunar module pilot Gene Cernan commenting on it.
If you ask me it sounds like pretty standard feedback or interference from local electronics. The command and lunar modules used VHF radios for communication, which are pretty susceptible to interference from a wide range of appliances, inverters, lights, etc. Put your WiFi router near a leaky old microwave and see how far the range gets cut for a demonstration of this kind of interference. Apollo 11 reported the same noise, but once the LM touched down Collins reported the woo-woo noises ceased. To me that sounds like some sort of grounding issue or feedback, not aliens. The sheer amount of noise in that recording alone would lead me to think it was just some leaky electronics.
As for the classification of these recordings lending credence to the alien theory: it was the middle of the Cold War; everything outside of school lunch menus was classified.
Turns out space is a pretty noisy place in the radio part of the spectrum. The gas giant planets have very active magnetic fields and are very loud radio sources. Cassini has provided us with "sounds" from Jupiter and Saturn. They're kind of creepy too:
So, it's not aliens or a government cover-up. With most people these days using digital methods of telecommunication that are relatively noise free, these sorts of things may seem strange. However, noise and interference are common to analog radio signals. Anyone else remember static on the TV, or radio stations fading in and out with certain weather conditions? It's the same sort of stuff here.
I've been putting off writing this because I kept changing how I wanted to approach the end of this little series. The longer I waited the more comfortable I became with the tools and techniques, and the more my workflow changed. While I plan on hitting the high points here, this is still an ever-evolving process. I'm making progress though, and I find myself jumping back into the Adobe-sphere less and less.
GIMP has come a long way in the last few years. It actually beat Photoshop to the content-aware tools by a year or so with Resynthesizer, which works great; I've been making good use of it. However, GIMP has lagged behind elsewhere, most notably in handling higher bit depth files. That changed recently in GIMP 2.9.2, but that's still the development branch and the feature won't be in the main release until version 2.10. Luckily a couple of developers managed to package it for the popular Debian-based distributions through a PPA. There are also Mac and Windows builds of the GIMP development version out there that work well too.
Before GIMP 2.9.2 I really only used it for files that were bound for the web. Being limited to 8-bit files kept me away from it for heavy-duty print-bound stuff. To be honest it probably doesn't make that much difference, but it's still nice to have all of the data from my camera's RAW files available. I usually save out final edits as 16-bit TIFFs (remember, XCFs are like PSDs, not really a "final product" type of format) and export JPEGs from Darktable.
Some photographers like to swap the GIMP keyboard shortcuts for Photoshop-esque ones. I tend to keep the defaults as I don't spend a lot of time in GIMP (or Photoshop for that matter) and I'm used to platform hopping and remembering different key bindings. YMMV. I do put GIMP into single window mode (View -> Single Window Mode) to make my screen a little less cluttered. Also make sure GIMP is set to use your monitor's color profile (Edit -> Preferences -> Color Management).
Make sure you get the plugins installed. The McGIMP version comes with them automatically, and on Debian/Ubuntu/Mint you'll need to install the gimp-plugin-registry package. The PPA with GIMP 2.9 has that too. This way you get the Resynthesizer plugin, which is ultra useful. Just select an area you want to nuke, go to Filters -> Enhance -> Heal Selection and let it do its thing. It seems to work at least as well as Photoshop CS6's version.
GIMP gets the job done for my pixel editing needs. I'm not a graphic designer type so I really only use Photoshop for removing unwanted elements from a frame, touch ups and some very basic compositing. That type of work certainly doesn't need the latest version of CC (nor its recurring costs or QA problems). I suspect most photographers really only use about 5-10% of Photoshop's capability too.
Perhaps the most irritating thing about the experience is Darktable's inability to "round trip" like Lightroom or Capture One can. Unfortunately this is due to technical issues. It would be great if you could specify a list of external editors and have Darktable automatically create a TIFF file with the edits applied and send it to GIMP. As it stands now you have to export through the lighttable view, open the result in GIMP and re-import the file into Darktable. If you have to touch up a whole group of images in GIMP that can really slow things down.
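In the meantime, the export half of that round trip can at least be scripted. darktable-cli ships with Darktable and takes an input image, an optional XMP sidecar and an output file. Here's a rough sketch that just builds the two commands for the manual round trip; the gimp editor name and file naming are my own assumptions, and nothing runs unless darktable-cli is actually installed.

```python
import shutil
import subprocess
from pathlib import Path

def build_roundtrip_cmds(raw_path, editor="gimp"):
    """Build the two commands for a manual Darktable -> GIMP round trip.

    Assumes darktable-cli's documented invocation:
        darktable-cli <input raw> <xmp sidecar> <output file>
    The editor name is just an assumption; swap in whatever you use.
    """
    raw = Path(raw_path)
    xmp = raw.parent / (raw.name + ".xmp")   # Darktable-style sidecar name
    tiff = raw.with_suffix(".tif")           # 16-bit TIFF for pixel editing
    export = ["darktable-cli", str(raw), str(xmp), str(tiff)]
    edit = [editor, str(tiff)]
    return export, edit

if __name__ == "__main__":
    export_cmd, edit_cmd = build_roundtrip_cmds("LGH_0001.NEF")
    print(" ".join(export_cmd))
    print(" ".join(edit_cmd))
    # Only actually run things if the tools exist on this machine.
    if shutil.which("darktable-cli"):
        subprocess.run(export_cmd, check=True)
```

You'd still have to re-import the TIFF afterward, but for a batch of images this beats clicking through the export dialog each time.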
GIMP is available for pretty much any platform out there, including Windows. It's pretty handy to have around. Just make sure you grab the 2.9 version for the 16 and 32-bit image support. It's also a little faster in my testing. Don't be scared by the warning messages about it being a development version.
So far it's been pretty straightforward to have a photography workflow that works in Linux. Most of this software works on OS X too. I actually use Darktable and GIMP on my MacBook Pro regularly. So even if you're not comfortable with installing Linux you can still have a more open workflow.
Today I get to put on my physicist hat and talk about a major discovery. Admittedly the aforementioned hat is a little dusty, but I'm just excited I get to use those fancy physics degrees of mine for a minute. There may be some of my old professors reading this; to them I profusely apologize and really hope I didn't screw the explanation up too much.
Anyway, around 100 years ago there was this fellow named Einstein who had funny ideas about gravity and things that moved sufficiently fast. In 1915 Einstein published his General Theory of Relativity, which dealt with gravity; basically it said that spacetime is deformable and what we measure as gravity is due to mass distorting that structure. Because of something called Lorentz invariance, general relativity put a speed limit on the propagation of information, and that included gravity. This broke with Newtonian physics, as Newton had assumed the information from a gravitational interaction was transmitted instantaneously. Out of this fell the prediction of gravity waves. It seemed to make sense conceptually too (at least to me) given the physical construct of gravity that general relativity was proposing.
Hypotheses are nice, but as Wernher von Braun once said, "one test result is worth one thousand expert opinions." Fast forward around 100 years. Turns out Albert was right about a lot of things. We've measured the deflection of starlight around the sun during a total eclipse, observed time dilation as predicted by special relativity and explained the precession of the perihelion of Mercury with his theories. Over the years relativity has held up to scrutiny, so it's earned the right to be called a theory. Theory has a different denotation for scientists.
This week the team at LIGO announced they had successfully detected gravity waves from colliding black holes. Now we can add gravity waves to the list of things Einstein was right about. Great. So what does that mean? Well, first off, this is a direct observation of binary black holes. Previously we had relied on observations of the space around black holes. The math said singularities should exist, but because we can't observe or measure anything directly from them (anything within the event horizon would have to break the speed of light, an Einsteinian no-no, to reach us) we've relied on watching what they do to the stuff around them.
Visualization of gravity waves produced by black holes spiraling into each other.
However, gravity waves give us a back door of sorts into direct observations. When two massive bodies spiral into each other like that they produce changes in the curvature of spacetime that propagate outward in a wavelike fashion. Put another way, the very thing everything is resting in is distorted in a rhythmic pattern, and that pattern travels at the speed of light in every direction from the event. LIGO measures this with a pair of observatories, each with an L-shaped detector whose arms are 4 km long. The fine folks at LIGO fire a laser down each arm, reflect it back and look at the interference pattern. When a spacetime distortion happens, the pattern between the two beams changes and the wave can be measured. This is a gross simplification, but you should get the idea. There are two observatories so they can corroborate their results, as repeatability and verification are important in science.
Gravity waves as detected by LIGO
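For a sense of scale, the arithmetic behind the measurement is simple: strain h is the fractional length change, so the arm stretches by dL = h * L. A quick back-of-envelope, using rough public figures rather than exact LIGO numbers:

```python
# Back-of-envelope: how much does a 4 km LIGO arm actually change length?
# The strain value here is a rough public figure, not an exact LIGO number.
arm_length_m = 4_000.0       # each LIGO arm is 4 km long
strain = 1e-21               # approximate peak strain, h = dL / L

delta_L = strain * arm_length_m              # change in arm length
proton_diameter_m = 1.7e-15                  # rough proton size, for scale

print(f"arm length change: {delta_L:.1e} m")
print(f"relative to a proton diameter: {delta_L / proton_diameter_m:.1e}")
```

That works out to about 4e-18 meters, a few thousandths of a proton's diameter, which is why it takes laser interferometry to see it at all.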
That leads us to important part number two. This discovery pretty much opens up a whole new realm of observational astronomy and science. Now you can have radio telescopes, optical telescopes and gravity telescopes to take measurements of distant objects. I imagine we'll be able to use gravity wave measurements to figure out more accurate masses for these objects, the energies involved in these collisions and so on. This discovery is on par with Galileo's first telescope. Indeed, LIGO is to gravity astronomy what that glass-filled tube is to optical astronomy. Better instruments will allow for better data down the line and further testing of Einstein's theories.
Just like electromagnetic waves, there is an entire spectrum of gravity waves to investigate. The masses and energies involved dictate the frequency and amplitude of the waves, along with the size of the detector needed to see them. Some gravity waves will take laser interferometers bigger than we can build here on Earth to detect. The closest analog is Herschel's discovery of infrared, which showed that the spectrum extended beyond the visible range. Pretty much everything we know about the universe comes from observations of the EM spectrum or particles interacting with detectors here on Earth. We now have another physical quantity to measure. It's like only being able to describe the make and model of a car and now suddenly you can also describe its color. Until the LIGO team's discovery we were basically blind to a fundamental physical interaction.
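To see why detector size matters, here's a rough sketch using the standard innermost-stable-circular-orbit estimate for the wave frequency near merger. The formula and constants are textbook values; the 65 solar mass figure is roughly the total mass of the LIGO event, and the million solar mass case is the kind of supermassive binary only a space-sized detector could catch.

```python
import math

# Rough scaling of gravitational-wave frequency with total source mass,
# using the ISCO (innermost stable circular orbit) approximation.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

def f_gw_isco(total_mass_kg):
    """Approximate GW frequency near merger: f = c^3 / (6^1.5 * pi * G * M)."""
    return c**3 / (6**1.5 * math.pi * G * total_mass_kg)

for m in (65, 1e6):    # stellar-mass vs. supermassive black hole binaries
    print(f"{m:>10} M_sun -> ~{f_gw_isco(m * M_sun):.3g} Hz")
```

The 65 solar mass case lands around 70 Hz, right in LIGO's audio-frequency band; the heavier the binary, the lower the frequency, and the bigger the interferometer you need.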
Does this mean that Einstein had some sort of ultimate theory? Nope. In fact the discovery of gravity waves will allow us to further stretch and test his math. He was really good at math, by the way; that thing about him failing it is an urban legend to make you feel better about your own inability. Newton was thought to have a universal theory of gravitation until it failed to explain some phenomena. Today we have problems reconciling Einstein's relativity with some principles in quantum mechanics, so it's not without its problems. This is conjecture, but it's probable that relativity is a subset of another set of physical laws we're not yet aware of. Newton's theory of gravity actually falls out of the math of relativity, so there is precedent for such an idea. That is, Newton's laws of gravity are applicable to specific scenarios whereas Einstein's are more general. So it stands to reason that there may be an even more general set of theories we don't know about yet. I don't want to bash Newton too much. We still use his laws to put probes around other planets and the guy basically invented calculus on a dare (eat it, Leibniz fans).
Practical, everyday applications? Probably nothing for the foreseeable future. Don't expect Star Trek warp drive, Mass Effect biotics or other science fiction to leap out of this either. The key word there is fiction. Of course, applications of this discovery could emerge as technology advances, but I wouldn't hold my breath. I'm sure Einstein didn't envision the digital camera when he explained the photoelectric effect (what he won the Nobel Prize for, not relativity) and now practically the whole world has one at its disposal. In my opinion discovery shouldn't have to put food on the table or justify itself to be worthwhile. It's sad that we live in a world where simple curiosity isn't rewarded, but I digress. A greater understanding of the universe is a reward in itself. Well, back to my day gig of making sure college students can still get to Facebook.
It's another Saturday afternoon. Your hands are sore; everything smells of motor oil, axle grease or ATF. The car is getting close to its third decade, you've just started your fourth, and this three-hour repair is on day number two. A mix of distrust and pride prevents you from simply taking it to a shop. You may have an understanding spouse, but your friends think you've lost your mind. Questions have arisen about the financial wisdom of maintaining this vehicle, or perhaps even some safety concerns.
This car, a simple means of conveyance to most, has been with you a while. Around a decade ago you began this relationship. Back then you had more ideas than money, and things looked different. Retirement was something grandparents did. $300 for rent was a challenge and $300,000 for a house was unthinkable. The car took you to work and class every day. It saw you and your friends through a few trips. Eventually it drove you to a couple of graduations or to a new job, and your friends went their own ways. It took you and your now wife on your first date; later it drove you to your wedding. Dutifully the car moved your possessions from apartment to apartment during those first few years of marriage. Later it slogged through cold winter days to move you into a new house.
Throughout all this change this machine remained ever constant. It's no longer just transportation but a memorial of sorts, a symbol of what you once were and where you've come from.
It's not only the memories; it's a connection to the way things used to be. Bluetooth? CVT? Navigation system? Nope. The 1990s weren't that long ago, but things have changed. Even basic cars now are more about infotainment than driving. Digital gauges and touch screens demand more attention than the task at hand. Huge pillars, with backup cameras to compensate for the lost visibility, are now standard. A truly small, light car with a clutch pedal and a sense of danger is as foreign a concept now as self-driving cars were then. As Americans we claim to love driving, but the evidence says otherwise.
No matter. You're different now. Progress happened. Perhaps you're a little fatter and a mortgage takes the place of that great next thing. Excitement about your potential future has been displaced somewhat by the reality of a mundane career and the realization that today is that "one day" you spent so long thinking about. Not to complain too much; you have it better than most. You have your health and the means to provide. Still, something is disconcerting about the whole situation. But not when you sit in that machine. After falling into the well-worn bucket seat it's twelve years ago again. You were going to be something back then. Perhaps you still are, but this feels different. Turn the ignition and suddenly everything makes sense. The drive to do more than just get by returns.
So you lean back and start thinking about replacing those struts you've been putting off.
Cognitive dissonance is a crazy thing. On one hand I am definitely committed to this whole photography thing. I enjoy walking around with my camera looking for nothing in particular to shoot. There are times when someone is either wearing something that catches my eye or doing something interesting and I'll stop and ask for a photo. Usually I don't do candid "surprise photo" stuff in public. I think that whole in-your-face street photography style has been done to death. There are times when I'll end up with people in the shot just because they're involved with the larger scene, and on rare occasions I have grabbed a compelling shot without asking because the moment was just too good to pass up. When a person just happens to be in the scene they're usually small enough to be unrecognizable, as in the example below. Most folks are good sports about it, and the people who seem to think every guy with a camera is some sort of Jared Fogle are few and far between.
In the United States the law is pretty clear on this. According to the ACLU you have the right to photograph anything that is in plain view when on public property. This includes people, buildings and officials, though there have been some laws passed in certain locales about not photographing police officers. This doesn't mean you can get a long lens, stand on the sidewalk and try to shoot into someone's window, and on private property the property owner sets the rules. You also can't break other laws while shooting, like trespassing, obstructing traffic or vandalism. Nor can you take photos of someone on public property where there is a reasonable expectation of privacy, restrooms and changing rooms for example, even if you can see into them from the outside for whatever reason. On private property the owner or people working for them cannot confiscate your camera; they can only tell you to stop or ask you to leave. Well, they can't take it from you legally, but if they're threatening you with a firearm or a good beating I wouldn't argue. You can't get your camera back if you're dead. Just go get the real police if that happens.
Again, these are US laws so please don't use them for justification of public photography anywhere else. Know the laws of the places you intend to visit.
So we have a precedent for public photography being OK. Largely that has been a good thing as it lets the press do their job and allows for free expression, which was the original reasoning behind the legal justification as I understand it. Both of these are important for a world like ours to function correctly and I wholly believe in these principles.
However, most of this was put into place long before the digital age. These days a random snapshot can end up being viewed by people halfway across the world thanks to things like Flickr, Facebook, 500px, etc. There are also face detection algorithms, which a few of these services use to automatically tag someone, along with apps that automatically upload your photos to various cloud providers. This makes for a more interesting set of problems. A photo snapped of a stranger may end up in front of all kinds of eyeballs, unlike in the past, whether deliberately or not. For most people this probably isn't much of a problem. A good part of us first-worlders live our lives in public on the internet, so a random photo of us walking down a street next to 5-6 Facebook albums of drunk party photos isn't that big of a deal. But there are times when someone may not want their photo plastered all over the internet. The common response to this is "if you're doing nothing wrong you've got nothing to hide," but it's really not that simple. There are times when people relocate to escape bad situations or other life circumstances. This doesn't mean they're up to something illegal; they could just be trying to remain hidden from an abusive or threatening person. Having their photo online indicating their whereabouts may be less than optimal. These situations, while probably rare, are just a couple of examples of why someone may not want their photo taken and posted online. Some folks may just want to live their lives off the internet for whatever reason, and that's OK too. Several decades ago, during the heyday of film photography, people didn't really have to worry about things like this. A photo generally didn't travel very far.
I believe that people should have a right to decide where their data is stored and how much of it is public facing. I tend to eschew many cloud services for this reason, only use Facebook/Google/etc. for the most inane social interactions, disable location services on my devices, use trust-no-one encryption when possible and so on. Remember kids, there is no cloud, it's just other people's computers.
So I can see two sides to this argument. In a world where a photo can be seen by thousands of people in a few seconds where does the individual right to privacy end? Should a private citizen be able to ask a photographer not to take a photo or to delete it? Would that potentially be abused by public figures who may be up to no good? What about the media's ability to do its job? What about art? Do the current rules work well enough?
Personally, I'm more or less OK with how things work now. There are some fringe scenarios where everything may not work out, but for the most part "out in public" and "put on the Internet" are close enough to the same thing. That's probably a good way to treat what you do online too. Just because you've put some disclaimer (that I'm laughing at right now) about how private and confidential your email is doesn't mean there isn't a relay out there with it stored. Use GPG, fool. Same goes for Facebook Messenger, etc. But I digress.
Moreover, the press and people documenting public officials are protected somewhat by these rules. This is probably one of the most important things to keep in mind. Sure, you might not like that random guy taking photos on the street, but there are others who depend on these rights, and ultimately we all do to some extent. Still, it is good to sit and think about these things every once in a while. Things keep changing these days.
Note: Riley Brandt has released some video lessons on using Darktable and other open source software for photo development. I highly recommend checking them out.
Here we are getting into the good part. I've become quite fond of Darktable's RAW developer. The tools are solid and in some areas it even outstrips Lightroom's develop module.
The first thing I noticed is that Darktable doesn't come with a whole lot of modules enabled out of the box. That's easy enough to remedy: simply expand the area labeled "more modules" and enable the modules you'd like to use. If you click a module's name a few more times it will also be added to your favorites list. I recommend enabling color correction, profiled denoise and monochrome, as those have been some of my favorites thus far.
Darktable groups modules together in a few logical bins. These groups are called Active, Favorites, Basic, Tone, Color, Correction and Effects, in order from left to right just below the histogram. Active is handy for disabling adjustments without digging through the other groups. The Favorites group contains all of the modules you've stuck in there, analogous to web browser bookmarks. The rest are sort of self-explanatory. Things like contrast and exposure are found in the Basic group, tone curve and levels in the Tone group, color correction and input color profile in the Color group, lens corrections in the Correction group, etc. It's pretty logical once you learn your way around the interface, but it does take some getting used to if you're coming from Lightroom.
I'm not a huge special effects/post processing type of photographer, so I usually hit a few select modules and export my finished edit. Darktable does store edits in XMP files, but not every other software package supports the same adjustments in the same way, so it's nice to have a 16-bit TIFF of your final. My first stops are usually input color profile and base curve. Darktable supports camera ICC profiles but not DCP profiles. This posed an issue for me early on, as I had been using a ColorChecker Passport and the X-Rite software, which only generates DCP profiles. That's fine if you're using Lightroom or Camera Raw but not much else. Apparently Argyll CMS supports the ColorChecker Passport with the correct template (which I downloaded); however, I could never get it to recognize the test pattern. Not all was lost: I managed to get a working ICC profile from dcp2icc after using X-Rite's software on my Mac to create the DCP profile.
Base curve is my next stop; I usually choose the preset that goes along with my camera model. A base curve takes the RAW data and essentially tunes it to look good on your monitor. This is similar to the image treatments your camera applies to JPEGs when you set it to Camera Standard or Camera Landscape. Darktable has a few built-in base curves for each manufacturer, and you can create your own as well based on JPEGs from your camera.
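The idea is easy to sketch in code. A real base curve is a hand-tuned spline per camera model; a plain gamma curve is the simplest stand-in, but it shows why un-curved linear RAW data looks dark and flat:

```python
def base_curve(x, gamma=1.0 / 2.2):
    """Map a linear sensor value in [0, 1] toward a display-ready tone.

    A real Darktable base curve is a per-camera spline; a gamma curve is
    just the simplest stand-in for the same idea.
    """
    return max(0.0, min(1.0, x)) ** gamma

# Midtones get lifted noticeably, which is why raw linear data looks dark.
for linear in (0.05, 0.18, 0.5, 1.0):
    print(f"{linear:4.2f} -> {base_curve(linear):.2f}")
```

An 18% gray patch in linear data ends up around 0.46 after the curve, which is much closer to how a midtone should look on screen.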
If the image needs some exposure tweaks I'll hit up the exposure module. This module works about the same way as every other exposure adjustment I've ever seen in a RAW developer. Darktable's masking capability is quite good, so it's easy to just paint in adjustments over parts of the image. If no adjustments are needed I'll move on to levels, tone curve, contrast, saturation and local contrast. It's nice to have levels and curves in a RAW developer, and when I do end up back in Lightroom for whatever reason I find myself missing them. Local contrast is the closest thing I've found to the clarity adjustment in Lightroom and it works OK, although it seems to highlight noise more than accentuate details. I've been primarily using the tone curve tool to adjust highlights and shadows instead of the shadows and highlights module, as it seems to give better results. That's just my finding; you may like the other module better.
Speaking of noise, be sure to take a look at the profiled denoise module. It's a bit complicated looking at first, but after some trial and error I've found some settings that seem to work fairly well. The wavelets mode seems to work better on color noise and non-local means seems to work better on luminance noise. Thankfully, Darktable can apply the same module multiple times in multiple modes. Generally I'll duplicate the profiled denoise module and set one instance to wavelets, strength 1, blended uniformly in color mode. The other I'll set to non-local means, strength 0.5, blended uniformly in lightness or HSV lightness mode. Those settings have worked well for me, but you may want to tweak things.
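To illustrate why the two passes are split that way, here's a toy sketch. The box blur below is nothing like Darktable's wavelet or non-local means math; it just shows how separating chroma from luma lets you smooth color noise hard while only lightly touching the detail:

```python
# Toy two-pass denoise on one scanline of (R, G, B) pixels: blur the color
# (chroma) aggressively and the brightness (luma) gently. Purely a sketch
# of the idea behind the two Darktable passes, not their actual algorithms.

def box_blur(values, radius):
    """Average each value with its neighbors, clamped at the edges."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def denoise_scanline(pixels, chroma_radius=3, luma_radius=1):
    # Split each pixel into luma (brightness) and chroma (color offsets).
    luma = [(r + g + b) / 3 for r, g, b in pixels]
    chroma = [(r - y, g - y, b - y) for (r, g, b), y in zip(pixels, luma)]
    # Pass 1: blur chroma aggressively; color blotches hide well.
    cr = box_blur([c[0] for c in chroma], chroma_radius)
    cg = box_blur([c[1] for c in chroma], chroma_radius)
    cb = box_blur([c[2] for c in chroma], chroma_radius)
    # Pass 2: blur luma only lightly; this is where the detail lives.
    y2 = box_blur(luma, luma_radius)
    return [(y + r, y + g, y + b) for y, r, g, b in zip(y2, cr, cg, cb)]
```

Run it on a scanline with alternating color speckles and the speckles average away while the brightness structure mostly survives, which is the whole point of the split.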
On a side note, I'd also recommend looking at the color correction module. It's more of what I'd call a special effect, as it's most similar to split toning in Lightroom, but it's very useful for color grading your images. Much more customizable too.
That's about it for a quick pass through the RAW developer. This isn't intended to be a full tutorial, just a quick breeze through some of the steps I use to edit my RAWs. Next up I'll look at the GIMP and a few specialty programs out there for photo editing in the Linux world.
Note: I have been relatively busy of late and constantly revising my workflow, and thus these posts; hence the break between updates. I'm still working on getting it dialed in. With Adobe's recent move towards punishing perpetual license holders I imagine alternatives will be getting some more attention.
Now we're ready to do some actual photo work. The first step in any photographic workflow is importing, organizing and tagging. Personally, I prefer to keep my photos organized on the filesystem in lieu of using the album or collection features of any particular software. This is for a couple of reasons, but the main one is to remain independent of whatever development package I'm using. If Aperture and iPhoto have taught us anything, it's that these things are moving targets and it's probably not wise to bolt yourself down to any particular application. Most operating systems and desktop environments support metadata in some fashion, so in a lot of ways the organizing side of these applications is a bit redundant now. Mostly Lightroom, Aperture, Darktable, etc. just add a GUI that's better suited to managing photos than the file manager built into your OS.
The next little bit is going to be pretty OS independent. It's also my personal way of doing things. This might not work for you or it might not make any sense. That's fine. I organize my photos into directories based upon a few criteria. Mainly I separate them out by what type of job they were, whether they were part of a series or a project, or just random day to day snapshots. For example, I'll have a Photos directory with sub-directories called 000_Projects, 001_Screenshots, 002_Clients, 003_Models_and_Tests, 004_Photos_by_Year. Inside of those directories I'd have directories named after the client, job, date, or other criteria. From there I just copy the files over like any other document. After all, RAWs and JPEGs aren't any different from other types of files.
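If you want to set up something similar, the top level of that layout is a one-liner to create. A minimal sketch, assuming a `Photos` base directory under your home folder; the category names are just the ones from my scheme above, so swap in your own.

```python
from pathlib import Path

# Hypothetical base directory; adjust to wherever you keep photos.
base = Path.home() / "Photos"

# Top-level categories from the scheme described above.
categories = [
    "000_Projects",
    "001_Screenshots",
    "002_Clients",
    "003_Models_and_Tests",
    "004_Photos_by_Year",
]

for name in categories:
    # parents=True creates Photos itself if needed; exist_ok makes
    # this safe to re-run without clobbering anything.
    (base / name).mkdir(parents=True, exist_ok=True)
```

Client- or date-named subdirectories then get created by hand (or by the same trick) as jobs come in.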
Some people like to rename files off the camera; I don't do that. I've worked with other photographers on jobs or as a second shooter, so I have my camera doing its own custom naming with my initials (e.g. LGH_XXXX.NEF). I generally don't use something like Photo Mechanic or Rapid Photo Downloader to cull or backup my files either, as I'm pretty set in my current workflow. However, I do recommend giving Rapid Photo Downloader a shot if that's your thing. My shooting style is more deliberate than spray and pray, so I generally don't have many photos to throw out when I get back to my desktop. However, I've used Lightroom for this purpose and continue to use Darktable to cull. Again, this may not be ideal if you generate a ton of images per assignment/vacation/outing/whatever. For whatever reason I like to shoot like I still have a roll of film in my camera and I don't fill up cards.
My next step involves tagging the images and applying metadata. In the case of RAW files Darktable writes this information out into an XMP sidecar file. Lightroom will do the same thing, but you have to turn that option on. I prefer this to storing the metadata in a monolithic library as it's more portable. Darktable has pretty rudimentary metadata editing support but it gets the job done. The presentation could be a little more polished in my opinion, and support for a few other fields added. Hopefully more refinements come in future versions. The lighttable module is probably the weakest point of the software right now, but it's still very usable and highly customizable.
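The nice thing about sidecars is that they're just RDF/XML, so any tool can read them. Here's a stripped-down illustration of pulling the creator field out of one; a real sidecar from Darktable carries far more (edit history, ratings, tags) in the same XML structure, and the name is obviously a placeholder.

```python
import xml.etree.ElementTree as ET

# A minimal XMP fragment; real Darktable sidecars are much larger
# but use the same RDF structure and namespaces.
sidecar = """<?xml version="1.0"?>
<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description xmlns:dc="http://purl.org/dc/elements/1.1/">
   <dc:creator><rdf:Seq><rdf:li>Your Name</rdf:li></rdf:Seq></dc:creator>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>"""

ns = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "dc": "http://purl.org/dc/elements/1.1/",
}
root = ET.fromstring(sidecar)
creator = root.find(".//dc:creator/rdf:Seq/rdf:li", ns).text
print(creator)
```

Because the file is plain text like this, the metadata survives even if the application that wrote it goes the way of Aperture.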
Darktable has a few presets for metadata. I started by using one of the Creative Commons options, filled in my name as the creator, and imported from there. In the metadata panel you can customize the defaults and create your own presets if one of the defaults doesn't cover you. Unfortunately Darktable does not seem to support the full IPTC Creator fields as of the writing of this post. At least I haven't found anywhere to edit things like the address, phone number and website fields. Instead, I simply put my contact info in the tags, including my website. Not so much for preventing infringement as for directing people to my site from images they find on places like 500px and Flickr that display the tags. Other than that I don't go crazy with tags. It's definitely one of those less-is-more things. I limit it to 10-15, usually including the subject, name of the client, location, etc. All of this works just like that other commercial product everyone else uses.
Hopefully that was helpful and not too rambling. I keep updating and revising this workflow as I go so it's taken a while to get this all written down. Next up I'll dive into RAW development. That will be more about Darktable and not necessarily specific to Linux as well.
Perhaps the most important part of any graphic workflow is color management. A colorimeter should be up there with a camera and a lens on your list of equipment to buy. Without one there's just no way to ensure reproducibility of your photos. Colorimeters are relatively cheap nowadays and even the basic ones are more than enough for discerning photographers. If you don't have one I recommend going and picking up the ColorMunki or ColorMunki Display right now. These are great inexpensive colorimeters that work well and last many years. My ColorMunki Display works fine with Xubuntu 14.04 and I imagine most USB colorimeters would.
Once you have the hardware you'll need the software. On Ubuntu based distributions this is very easy. Somewhere between falling off a log and screwing in a lightbulb. You can either use the built-in display calibration tool in regular Ubuntu, or install dispCalGUI via the Software Center or a web download for Kubuntu or Xubuntu. If you are using one of the Ubuntu derivatives, as I am, you'll need to install whatever bridge your desktop environment needs, plus colord if it isn't there by default. For Xubuntu and XFCE that's xiccd. Again, not terribly hard. This is so the resulting profiles can be loaded by XFCE/Unity/KDE/etc. In standard Ubuntu you won't even have to do that much if you use the built-in tools, however I recommend trying out dispCalGUI anyway as it supports more features of the colorimeter's hardware, such as ambient light detection.
As far as I can tell there are no calibration reminders in dispCalGUI like in the X-Rite software. I may have missed it somewhere though. I just put a reminder in my calendar to overcome that problem.
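If you'd rather script the reminder than put it in a calendar, the arithmetic is trivial. A sketch assuming a 30-day recalibration interval, which is just a common rule of thumb; the dates here are examples, so plug in your own last calibration date.

```python
from datetime import date, timedelta

# Hypothetical interval; adjust to how picky your work is.
RECAL_INTERVAL = timedelta(days=30)

last_calibrated = date(2015, 6, 1)  # example date
next_due = last_calibrated + RECAL_INTERVAL

if date.today() >= next_due:
    print("Time to recalibrate the display.")
else:
    print(f"Next calibration due {next_due.isoformat()}")
```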
All in all display calibration is pretty simple in Linux. If you can do it on Windows or OS X you can do it on a modern Linux distribution. The whole process is about as hand-holdy as it gets.
Next up I'll go over organizing, tagging and metadata handling.