Fedora GNOME RAW Thumbnails

I've been trying to give GNOME on my desktop a fair shake the last few months as I like the way it works on my XPS 13. I've run into a few problems here and there, but lacking RAW image previews in the file manager is quite the oversight in 2021. Dolphin and even Thunar do it by default in KDE Plasma and XFCE respectively. I usually run Fedora with the KDE Plasma environment on my desktop machine, but the last time I tried GNOME 3 a number of years ago I seem to recall RAW thumbnails existing. Perhaps not, but here's how you fix it:

GNOME RAW Thumbnails

Fixing this oversight is not too terrible but does require some fiddling. First you'll need the ufraw package. Install it through the GUI Software center or through dnf:

dnf install ufraw

It'll bring a couple of dependencies with it but not too much bloat.

Next you'll need to create a file at /usr/share/thumbnailers/ufraw.thumbnailer and add the following to it:

[Thumbnailer Entry]
Exec=/usr/bin/ufraw-batch --embedded-image --out-type=png --size=%s %u --overwrite --silent --output=%o
MimeType=image/x-3fr;image/x-adobe-dng;image/x-arw;image/x-bay;image/x-canon-cr2;image/x-canon-crw;image/x-cap;image/x-cr2;image/x-crw;image/x-dcr;image/x-dcraw;image/x-dcs;image/x-dng;image/x-drf;image/x-eip;image/x-erf;image/x-fff;image/x-fuji-raf;image/x-iiq;image/x-k25;image/x-kdc;image/x-mef;image/x-minolta-mrw;image/x-mos;image/x-mrw;image/x-nef;image/x-nikon-nef;image/x-nrw;image/x-olympus-orf;image/x-orf;image/x-panasonic-raw;image/x-pef;image/x-pentax-pef;image/x-ptx;image/x-pxn;image/x-r3d;image/x-raf;image/x-raw;image/x-rw2;image/x-rwl;image/x-rwz;image/x-sigma-x3f;image/x-sony-arw;image/x-sony-sr2;image/x-sony-srf;image/x-sr2;image/x-srf;image/x-x3f;image/x-panasonic-raw2;
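You can sanity-check the converter by hand before fighting with the thumbnailer. This is just the Exec line above with the placeholders filled in; DSCF0001.RAF stands in for any RAW file you have lying around and 256 takes the place of the %s size parameter:

ufraw-batch --embedded-image --out-type=png --size=256 DSCF0001.RAF --overwrite --silent --output=test.png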

Close out of Nautilus and clear the thumbnail cache, then open a directory with RAW images. You should now see RAW thumbnails as above. Hooray!
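From a terminal that whole dance looks something like this (nautilus -q asks the file manager to quit cleanly):

nautilus -q
rm -rf ~/.cache/thumbnails/*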

Granted, I think Nautilus is one of GNOME's weak points, especially compared to what's available in other Linux desktop environments. Nautilus seems neglected and at times it even makes me miss the notoriously bare-bones Mac OS Finder. GNOME excels in a number of areas, which makes the mediocrity of the file manager stick out. Split windows like Dolphin and more view mode options instead of just "icons" and "list" would be a good start.

RAW thumbnailing really should be in the base install of any desktop distribution using GNOME. It's in literally everything else (except Windows 10 apparently) and has been for years. If you're running GNOME you're probably looking for a modern desktop experience rather than a lean and slim install anyway, so a few more megabytes of software and a configuration file won't make much difference.

Peak Linux Desktop

Linux Desktop

We are in the peak Linux desktop era and it might be downhill for a while. When I say peak Linux desktop era I mean you can pick up nearly any machine off the shelf, install Fedora, Ubuntu, Mint, Debian or Manjaro as they are and get straight to work without fiddling with the computer itself. There are some corner cases where things are more difficult, but even nVidia PRIME works well these days. In my opinion the Linux desktop is basically at parity with the proprietary options in terms of "just works." You can boot the machine up, do the initial create-your-account-and-sign-into-your-cloud-stuff routine and boom, you're done, just like any other desktop OS. The image of Linux being about messing with the computer is really outdated. It's a good productivity tool, as much so as Windows or macOS in my opinion. Yes, it's different and takes some adjustment to the new workflow, just as if one were to switch from Windows to Mac or vice-versa, but it's not inherently broken. No, you don't have to touch the CLI for anything on the major desktop environments if you don't want to.

A lot of this is due to monumental efforts on the part of volunteers but also companies like Red Hat and Canonical. However there are two major players who have contributed huge amounts of code, time and funding to the Linux ecosystem that people seem to forget about: AMD and Intel. Intel is a major part of the reason why modern USB standards work on Linux and AMD has documented and open sourced their graphics drivers. Intel and AMD pay developers to work on this, upstream code into the kernel and have generally been decent community members. Even with all their flaws we owe them some thanks for Linux hardware support being where it is along with the fleet of volunteers maintaining things at organizations like freedesktop.org.

Late last year Apple released ARM-based Macs. Microsoft has just announced a translation layer to allow X64 code to run on ARM on Windows and has shipped ARM machines for years. Many OEMs have been shipping ARM Chromebooks for a while too. The Apple announcement came with their usual showmanship and is going to make the rest of the world take notice. Like everything else Apple does, the rest of the industry will be tripping over themselves to follow suit. I wouldn't want to be Intel or AMD right now or be holding their stock. ARM is likely the future for most devices outside of enthusiast desktops or legacy applications. I think that even enthusiast desktop platforms will switch over, but that's just my opinion. Yes, I realize Chromebooks are technically Linux, as is Android. This is more about the FreeDesktop type of future. Chromebooks and smartphones are still very locked down devices and do not support the full range of what the Linux desktop today can.

This brings us back to peak Linux desktop. Linux runs on armhf and arm64 just fine. The problem is the ARM ecosystem is a mess of cobbled-together proprietary things. Even if you aren't dealing with a locked bootloader, power management, boot processes, IO and graphics vary tremendously from one ARM platform to another, unlike X86 and X64 where UEFI, ACPI and other well documented standards are implemented across nearly the entire range. There are very few ARM vendors working on open sourcing and upstreaming support for their platforms into the Linux kernel either. Even the beloved Raspberry Pi relies on out-of-tree patches for full support. Most of the other consumer-facing ARM boards out there for Linux rely on reverse engineering in some part or another, even if just for the Mali graphics.

There is hope: a standard called ARM ServerReady mandates UEFI and ACPI for compliance, which solves the boot process and power management problem. Despite the data-center-centric name this standard works fine on desktops and laptops as well. Microsoft's Surface products used ARM ServerReady, as does the ARM based Lenovo Yoga. Tyan and Gigabyte have been selling a range of ARM servers that fully support it as well. Indeed you can grab an arm64 Debian installer and slap it right on the Tyan and Gigabyte machines. They work great as long as you don't need graphics.

Alas, this only solves part of the problem as you're still dealing with a lack of driver support. Mali graphics have a reverse engineered driver that had been mostly accepted into the kernel the last time I checked, but like most reverse engineered things it's usable but not fully featured. A very similar feeling to the Nouveau driver for nVidia cards. Most ARM options for Linux on the desktop or mobile right now are low performance patchwork things like this.

What we need is an ARM vendor to step up like Intel and AMD have and work hard on upstreaming support into the Linux kernel for all their hardware, and for ServerReady to become the default ARM platform. Huawei is probably the closest on this, as the Chinese tech sector is ditching western software firms in favor of Deepin and Ubuntu Linux. I think a lot of the FUD about Huawei hardware is just US government sabre rattling; I've seen no proof of it, and honestly these days it's pick your backdoor if you're using anything from a Five Eyes country anyway. Until someone starts working hard on upstreaming support and ServerReady becomes the de facto standard, Linux on ARM is going to be a mess and will revert back to hobbyist-only territory for the desktop. I've been using Linux on and off since the late '90s and early '00s. I remember the good-old-bad-old days before Intel and AMD got on board and I don't want to go back. That's where the "4 hours and 6 kernel recompiles to get your network card to connect" meme came from. I've got other things to get done now and cannot spend that kind of time minding my machine.

Don't get me wrong, I don't think the Linux desktop is going anywhere. But I think the mid-future is more like the Raspberry Pi or PineBook Pro and less like a ThinkPad or XPS with Fedora, Mint or Ubuntu on it. It will be a tool for people to create one-off projects on, not the robust desktop we have today. There may be an open, performant, upstreamed and widely available ARM platform for Linux in the future, but I think in the meantime we're in for a decade of pain. Again, I could be wrong; that happened once before in the '80s.

CentOS, Fedora and Trust

Last week Red Hat made an earth-shaking announcement and ended CentOS 8 almost nine years ahead of schedule. Understandably this caused quite the stir in the Linux community. CentOS started off life as a community rebuild of RHEL, as Red Hat released their source to comply with the GPL. It existed independent of Red Hat until 2014 and was quite popular. There were several community rebuilds of RHEL back then but CentOS was the survivor; Scientific Linux, the other major player, folded up shop last year. Even if you paid for RHEL you probably used CentOS in testing or development environments. Many RHEL admins trained and learned on CentOS too. I remember when Red Hat acquired the CentOS project in 2014 there was a lot of hand wringing over what this meant, as CentOS cut directly into Red Hat's bottom line. These community worries came back up again in 2018 when IBM acquired Red Hat. Although it took six years, it seems many of the doubters were proven correct last week.

I'm not a huge RHEL or CentOS user myself. We use RHEL and CentOS in the office in some places where we're required to, but for the most part my infrastructure pieces are Debian. Among many other things I like the governance and structure of the Debian project much more, along with the "universal operating system" approach they take. I've run Debian as a desktop as well and it works just fine in that situation too; it has become easier with backports, and with testing actually becoming usable and getting more timely security patches. Debian stable is probably my favorite server operating system to work with. So the direct impacts of CentOS being taken out back and shot are minimal on my day-to-day life. However, since 2016 or so I've been using Fedora as my desktop operating system. I'm not a fan of Ubuntu for a few reasons, and Fedora had what I thought were pretty sane defaults, handled the HiDPI screen on my laptop better and flat out looked the best. I mostly ran the KDE Plasma spin at the time but over the last 8-12 months I've even adopted GNOME on my laptop. Fedora is the project that CentOS and RHEL are ultimately derived from, and Red Hat/IBM has a controlling stake in the Fedora Project. I'm also a pretty big Ansible user and have been since before they were bought out by Red Hat.

This is where I become concerned. Two of the major projects I use on a daily basis, Fedora and Ansible, are owned and controlled by a company that just threw part of the Linux community under the bus. Yeah, I understand that they have a profit center to protect and CentOS was directly impacting that. Part of it was certainly bad communication beforehand that CentOS was a "best effort" product and not really guaranteed, but cutting CentOS 8 off in 2021 when many people were counting on it until 2029 is brutal, no two ways about it.

Right now Fedora is likely safe, as is the free version of Ansible. Still, I can't say I'm entirely comfortable with Red Hat having such a large stake in Fedora now. Fedora makes a big deal about being community driven, but Red Hat has a controlling stake in the project and puts up a lot of infrastructure for it. On paper Fedora is independent but in practice it's entirely reliant on Red Hat, and Red Hat is a substantial driving force in the project. What does this mean going forward? Will Red Hat or IBM start wanting telemetry or some other dubious thing implemented in Fedora? I'm not saying they will, but I think a lot of free software users and fans would feel a lot better with a more independent Fedora. Even if it means they need to have NPR or PBS style pledge drives every year to pay for infrastructure, I think it would be preferable to having so much reliance on Red Hat and IBM. I'd gladly chuck some money at a Fedora Foundation if it meant they could tell IBM to pound sand when the community thought something from corporate was a bad move.

Red Hat likes to crow a lot about open source and community, but this move has burned a lot of bridges. In the short term I suspect they'll net a few more RHEL subscriptions from those who can afford to convert over. But the memory of the community is long and this move may burn them in the long run; trust isn't something that's earned easily, and most of us are here because we don't trust closed software vendors like Apple and Microsoft. Right now I've switched back to Debian on my laptop and I may move my workstation back to it in the coming weeks, but we will see. For now it's still on Fedora 33.

Debian isn't perfect but it's at least free from a lot of corporate meddling and is a more truly community-driven project. I mean, they even have a package on anarchism in their distro.

I don't mean this as a knock against the fine folks at the Fedora Project. It's a great piece of software and I still say it's the best "get it installed and get to work" type of Linux out there. I still highly recommend it to anyone looking to start out in Linux; Debian isn't nearly as easy to get started with. Fedora also doesn't shove proprietary package managers in behind your back like Ubuntu does. I really care about the direction of the Fedora Project and enjoy it as a whole. I just don't trust the organization paying their power bills at the moment.

ArgyllCMS Display Calibration

Most photo and graphics people are probably familiar with DisplayCal as a monitor calibration tool. It's more thorough than the software that ships with X-Rite devices, even if you're using one of their supported platforms like macOS or Windows. Unfortunately DisplayCal's GUI relies on Python 2, which is end-of-life and being removed from a number of Linux distributions. Fortunately the GUI is just a front end for ArgyllCMS, and it is quite easy to calibrate a monitor with just Argyll installed. The command is as follows:

ArgyllCMS Calibration

dispcal -d 1 -v -P 0.5,0.5,3.0 -o 9300_Internal_Display

The options are as follows:

-d 1: the number of your display; if you have multiple displays this will be 1, 2, 3, etc. Running dispcal with no options will show which displays get which number. If you have just one display, use -d 1.

-v: verbose output

-P: screen position and scale, useful for HiDPI displays; the arguments are X-position, Y-position, scale. Here I have it in the middle of the screen (0.5,0.5) at 3.0x scale.

-o: Output file to save the profile to.

If you don't have a HiDPI display, also called a Retina display by Apple, the -P option might not be needed. I use it because without scaling the color square dispcal creates is too small to cover the colorimeter. The software gives you the option to make some display adjustments before calibrating. If you're on a laptop or another display without RGB adjustments you can just press 7 to skip straight to calibration and let dispcal do its thing.

Once the ICC profile is created you can import it into your desktop environment's color correction tool and apply it to the correct display.
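If your desktop environment doesn't have a color management tool, you can also load the profile with Argyll's own dispwin. A minimal sketch, assuming the run above produced 9300_Internal_Display.icc (depending on your Argyll version the extension may be .icm); the -I flag installs the profile for the chosen display:

dispwin -d 1 -I 9300_Internal_Display.icc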

Micca OriGain AD250

I recently upgraded my desktop audio. For about the last fifteen years I used a 5.1 setup with a Logitech Z-5500, which has been OK. Nowadays I do more listening to music than anything else, where stereo is more important than surround, not to mention the reclaimed desk real estate. I have a set of Sony bookshelf speakers lying around in storage, so I went looking for a desktop amp. My requirements were an optical input with USB or analog inputs for secondary devices. Bluetooth wasn't a requirement as I can use PulseAudio on Linux to turn my desktop or laptop into a Bluetooth receiver, but it would have been nice to have built in. I'm also a stickler for physical buttons and switches for switching inputs.

AD250

The Micca OriGain AD250 was still available from the usual online places even after stocks of other amps dried up due to COVID related importation problems. My other options were a couple of models of SMSL amps, but I honestly like the looks of the Micca better. It puts out 50W per channel, which is more than enough for a desktop setup, and has very simple forward-facing controls.

AD250

AD250 Back Panel

My desktop machines have had optical out for years and I always use it. It's a nice way to bypass the onboard sound card, which is usually mediocre at best and may have proprietary features or codecs that do not work fully in Linux. Optical is becoming far less common on laptops though, so I wanted something with either USB or analog in for the times when I need it. Plus I still have an iPod or two laying around.

The AD250 is a compact and good looking box. I like having a physical volume control, and the switches for changing inputs are great to use. The power brick is a bit large and you'll want to relocate it off the desk.

Overall it sounds good and the output is well suited for a desktop type scenario. I wouldn't expect this to fill a large room.

The major downside has been the popping when using the optical input. It's like a power-on pop, and I thought it had to do with the power saving functionality on modern motherboards powering down the LED on the optical output. It's pretty easy to disable this feature in Linux:

/etc/modprobe.d/snd_hda_intel.conf
options snd_hda_intel power_save=0 power_save_controller=N
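If you'd rather test the effect without rebooting or reloading the module, the same knob is exposed at runtime through sysfs when the snd_hda_intel driver is loaded:

echo 0 | sudo tee /sys/module/snd_hda_intel/parameters/power_save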

That should handle it on most onboard sound cards. It still happens, just less frequently, and it may be either a power supply issue on the Micca or the circuit design around the amp. The fact that it doesn't do it on the analog input makes me think it's the Micca's power supply, or perhaps the amp lacks a delay circuit. There's a slight chance it's my Sony SS-B1000 speakers as they are 8 Ohm and that's right at the upper end of what the AD250 will drive. I've tried numerous optical cables and no dice, still pops. I've noticed it some on the USB input as well, just not as bad. Unless I can get the popping solved that may end up being a deal breaker for my use. Maybe a revision two of this amp will fix the optical port. I've read some comments elsewhere on the internet that the optical input noise is not isolated to just my amp, so it seems to be either a power supply issue, filter issue or some other design problem with the AD250.

It's most annoying when you're scrolling through a web page with many auto-playing media sources; for whatever reason, even if you have the audio muted, each video clip that starts up causes the amp to wake up and pop.

Overall I like the look, feel, feature set and user interface of the amp, but for what's a $100 desktop amplifier I find the popping annoying. From what I came across in some internet searches the popping is not a problem unique to my copy of the amp either, so there seems to be some quality control, engineering or component issue with these. I do use a splitter on my desktop's optical output as I also send it out to an OriGen G2 for my headphones, and neither of these devices seems to like the active splitter I have. Once the machine powers on they will just blast static at you if the optical input is still selected. I switched to a passive splitter and they behave much better. For now I plan on hanging on to the AD250 and seeing if I can figure out what the deal is with the popping. I have a 2015 MacBook Pro with an optical output as well that I may drag out of the pile to see if it pops with that machine. It could still be some driver or power save feature on my desktop causing it. The old Logitech system didn't have these problems with either machine.

For now I can say if you're using the USB input or the analog input this is a great little desktop amp with an awesome user interface. Just flip the switch to your input and turn the knob for volume. No remote, no digital display to break, just simple, classic and solid, which suits my tastes quite well. It drives reasonably sized bookshelf or desktop speakers just fine and sounds very detailed I think. If you're a basshead it's probably not the amp for you. The optical input is still suspect in my opinion, but chances are most people aren't using it in 2020 so that may not be a deal breaker for you. Since I'm using the USB input on my docked laptop and the optical on my desktop, I really need both digital inputs though.

AMDGPU-PRO OpenCL on Fedora and Debian

Since the open source AMD GPU Linux drivers are now quite good, I swapped the GTX 970 from my old machine for a Vega 56 in the new Threadripper build. Unfortunately the kernel and mesa drivers do not support OpenCL. I tried ROCm for a while, but only one build of version 1.2 would work with Fedora 30 and it would sometimes cause kernel panics when in use with Darktable, not to mention some strange image artifacts when using certain modules.

AMD does have a proprietary closed source driver for Linux, but it only supports a very small set of distributions and the checks in the installer are very strict, so there's no just running the installer script or installing the RPMs or DEBs. Plus there's the fact that I just need the OpenCL portions of the driver and not the display portions. You can continue to use the open source kernel drivers for display, OpenGL and Vulkan, which is preferable as they outperform the proprietary AMD drivers. Just to reiterate: this is not necessary for 3D acceleration and gaming. The AMDGPU-PRO proprietary drivers are needed for OpenCL and compute only. If you're just looking to play games or run Steam, the open source mesa implementation shipping in up-to-date distributions these days is more than good enough.

Fortunately only a few packages from the proprietary AMD driver are needed for OpenCL to work, and it is completely decoupled from the display driver. Start off by downloading the amdgpu-pro package for the latest version of Ubuntu. This will work for Debian or Fedora as you're just going to be extracting the .deb packages anyway. You can do the same thing with RPMs and rpm2cpio but it's a bit more troublesome; dpkg is available on Fedora anyway so it's no big deal.

At any rate you'll need the following packages:

libdrm-amdgpu-amdgpu1
libdrm-amdgpu-common
opencl-amdgpu-pro-icd
opencl-orca-amdgpu-pro-icd
libopencl1-amdgpu-pro

The version numbers will vary depending on when you read this and download your package. First extract the tarball:

tar vxf amdgpu-pro-19.20-812932-ubuntu-18.04.tar.xz

Extract the DEB packages you need into a separate directory:

dpkg-deb -x libdrm-amdgpu-amdgpu1_2.4.97-812932_amd64.deb opencl_root
dpkg-deb -x libdrm-amdgpu-common_1.0.0-812932_all.deb opencl_root
dpkg-deb -x opencl-amdgpu-pro-icd_19.20-812932_amd64.deb opencl_root
dpkg-deb -x opencl-orca-amdgpu-pro-icd_19.20-812932_amd64.deb opencl_root
dpkg-deb -x libopencl1-amdgpu-pro_19.20-812932_amd64.deb opencl_root

Change directory to where you extracted those files:

cd opencl_root

Copy the necessary files into place:

sudo cp etc/OpenCL/vendors/* /etc/OpenCL/vendors/
sudo cp -R opt/amdgpu* /opt/.

Now the dynamic linker needs to be updated so it knows where the libraries are located. In /etc/ld.so.conf.d/ create two files and put the following lines in them:

/etc/ld.so.conf.d/amdgpu-pro.conf:

/opt/amdgpu-pro/lib/x86_64-linux-gnu/

/etc/ld.so.conf.d/amdgpu.conf:

/opt/amdgpu/lib/x86_64-linux-gnu/

Then run ldconfig:

sudo ldconfig

Test and see if it works with clinfo -l; if it's working you'll see something like this:

clinfo -l 
...
Platform #0: AMD Accelerated Parallel Processing
 `-- Device #0: gfx900
Platform #1: Clover
 `-- Device #0: Radeon RX Vega (VEGA10, DRM 3.30.0, 5.1.20-300.fc30.x86_64, LLVM 8.0.0)
 ...

Darktable should also allow OpenCL now; you may need to delete the pre-compiled OpenCL kernels in ~/.cache/darktable if you were using ROCm before.
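If you do need to clear them out, the compiled kernels live under darktable's cache directory; the exact file names vary by device and driver, so have a look first before deleting (the cached_kernels_* pattern here is my recollection of the naming):

ls ~/.cache/darktable/
rm -rf ~/.cache/darktable/cached_kernels_*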

Darktable OpenCL

While this isn't the perfect option in terms of free software it's still preferable to nVidia's driver in my opinion. Hopefully ROCm will become more stable and will be packaged by Debian and Fedora in the near future, which seems likely given AMD has blessed ROCm as their future compute solution.

Linux Desktop Update

It's been a few years since I became a full-time Linux user for my photo and media workflow. As we're now in the sixth month of 2019 I thought it'd be a good time to do a quick update and report on how things are going. When I decided to move off OS X for my photography and video work in 2015 the landscape was quite different. I was a seasoned Linux user and admin at the time but had kept Macs around for access to Adobe and Apple programs. But things change; the libre software options were getting better and I was mostly tired of giving Adobe money, and to a lesser extent Apple. Keep in mind this was before Apple released the unreliable, throttling, terrible-keyboard-having and consumable current generation MacBook Pro. If I wasn't set on switching before then, those things alone would have sealed the deal. Before then I was less hostile toward Apple's product line.

Fedora KDE Desktop

I still have a 2015 MacBook Pro in the house but I mostly use it for two things: my Canon Pro 100 printer and Epson V600 scanner. I have a now-ancient copy of Photoshop CS6 and Lightroom 5.7 on it that hasn't seen any use in a very long time either. My primary machines these days are an AMD Threadripper desktop and a Dell Precision laptop; both have only Fedora Linux installed on them. No dual booting, no Windows 10 virtual machines, no "cheating" if you could call it that. Honestly, I don't think I could be all that useful in a Windows environment these days.

Just laying my cards here on the table from the beginning so no one sees a Mac in my presence and thinks I'm making this all up.

Now then, in the intervening years the Linux desktop has improved dramatically. KDE Plasma has become my go-to desktop environment for two main reasons. First of all, when I picked up the Precision I needed something that did HiDPI scaling reasonably well. This forced me off XFCE and got me comparing the GNOME3 and KDE Plasma 5 desktop environments. I know a lot of people like i3 Gaps or MATE or whatever, but IMO there are a lot of good reasons to stick with the major players if you're looking to just get things done. Plus if I was going to show normal people how "full featured" the Linux desktop had become, some sort of super tweaky desktop made for posting screenshots to /g/ or Reddit was out of the question. My wife should be able to pick up my machine and use it without much fuss. Plasma and GNOME look and act like modern desktops so that's the route I went. I ultimately decided on KDE Plasma 5 since it supported fractional scaling and my Precision looks best at 1.5-1.6x. GNOME3 on the other hand has a stinky foot for a mascot and could only do full integer scaling at the time. I think they've added fractional scaling as a test feature in one of the latest releases however. Not to mention the difference in resource usage between the two. I really wanted to like GNOME3 since they did something actually brave and different with the desktop interface, but it was lacking too many features, broke a lot and ate CPU and RAM like crazy. Plasma 5 on the other hand runs on everything from my super old Latitude E4200 with 3GB of RAM to my Threadripper with 64GB.

The KDE folks have really been hitting it out of the park the last few years with their releases of the Plasma desktop, as long as you don't need any accessibility features; GNOME3 is still the only desktop doing serious work on that front. The integrated applications for KDE are a different story and tend to vary in quality. Gwenview has been great and Okular is the best PDF application out there today IMO. Konsole has become my favorite terminal emulator as well. However KMail is an unusable train wreck. I imagine the resources put into it aren't the greatest as most Linux users are probably going to use Thunderbird, mutt, alpine or a webmail interface.

On the distribution side of things I've mostly stuck to Fedora and Debian. I've been a long time Debian user as it covers a lot of use cases very well, is extremely stable and I'm rather fond of their governance structure. Plus it's dead simple to move from one release to the next. Fedora has made some significant strides in this direction lately and I like their "cutting edge adjacent" strategy in terms of software versions. The last time I used Fedora for anything was the Fedora Core 4 through about the Fedora 8 days. After that it went through some shaky periods in terms of stability and usability, but nowadays I'd say it's easier to get up and going with than Ubuntu. Fedora's KDE spin is quite nice as well, despite Fedora being known as "the GNOME distro." My go-to machines for photo and video work are all running Fedora right now and I have a couple of desktops and laptops on Debian; all of my infrastructure stuff runs Debian too. IMO you can't really go wrong with either, and if you want to get started with Linux I'd try Fedora, especially if you're a first-time Linux user. I'm not a fan of derivative distributions and I've always found some of Ubuntu's choices to be strange; just skip those and go straight for Debian Stable IMO.

Darktable 2.6

On the less day-to-day bread and butter desktop stuff and more the image workflow side, I'm continually impressed with Darktable. I don't miss Lightroom in the slightest. I started moving over to Fuji about the same time as I moved to Darktable and it has handled the RAW files nicely. From what I understand, until recently Lightroom struggled with the RAF format and making effective use of X-Trans; Iridient Developer seems to be more popular with Fuji users on Mac and Windows, but I find Darktable's RAF conversion to be quite fantastic. Indeed others seem to agree with my sentiment. Even if you're stuck using a Mac or Windows machine I'd suggest trying out Darktable for Fuji RAW conversion. For the rest of the digital asset management workflow Darktable has been more than adequate. It does have more of a learning curve than Lightroom but it also allows for more under-the-hood exploration with modules like equalizer. I tend to be pretty self-reliant on the file organizing front on my disks, though, and have heard others coming from things like Aperture and iPhoto saying Darktable doesn't do enough there. The developers have stated time and again they aren't writing a file manager and there are other, better solutions out there for it. Personally I think it's fine, but if you're used to just dumping your files at a library management program and letting it handle the files on disk it will be an adjustment. Honestly I don't like that approach as it locks your organizational structure into that piece of software and I'd rather just have the files available to me in a directory structure to move about as I see fit.

The GIMP has changed some since I first moved over, but in general it's slower moving than most software development cycles these days. There seem to be two camps of GIMP users: it's adequate or it's a piece of junk. Where you land largely seems to depend on what you're trying to do, and in my opinion photographers are better served by the GIMP than designers. Most people I see complaining about it lacking features are designers and print production types. Which is fair; GIMP does lack CMYK mode among other things and there is an adjustment to be made coming from Adobe land. For my needs it still gets the job done, especially now that 16-bit and 32-bit images are supported in 2.10. Before then I was running the unstable testing branch as 2.8 only supported 8-bit images. If you're only working with JPEGs this isn't a big deal; it only matters when you're working with TIFFs and RAW file derivatives. If all you need is minor retouching there's no reason to use Photoshop, unless you need a specific feature or work with others in an Adobe-centric environment. But for most of us just working for ourselves out here, GIMP seems to get the job done.

Kdenlive

Video editing is something that has changed massively since 2015. When I first moved my media production efforts to Linux there wasn't a good option for a non-linear editor, so I dual booted and used Sony Vegas or just got out the MacBook Pro for iMovie, Final Cut or QuickTime. That is no longer the case. Kdenlive has made massive strides and recently did a huge refactoring release that squished a lot of long-standing odd behaviors, but I've been using it since 2017 without many complaints. If you're used to old-school iMovie and Final Cut then Kdenlive is pretty easy to move around in. I've heard some folks say it's similar to Adobe Premiere as well.

I am still glad I made the complete switchover; even before this I was fine with Linux as a desktop operating system, but now my comfort zone has moved with it and I'm no longer at the whims of a couple of rather controlling companies for the creative part of my life. I really don't like feeding anti-competitive, monopolistic, end-user-hostile companies trying to squeeze blood from a stone with monthly subscriptions or engineered-to-fail, overly delicate status symbols.

I've also come to despise the term "industry standard" as it seems to just be a good excuse not to move outside the box you were taught to stay in. "You get what you pay for" is another terrible motto that seems to be used by these corporate types to put down libre software alternatives on a regular basis. But I'm off on a tangent at this point; I'd suggest trying some of these tools. There are even more applications I have not covered here, like Krita and RawTherapee, both of which I've been using lately as well. It really is about the best time it's ever been to jettison the likes of Adobe, Apple, Google or Microsoft, depending on what exactly your need is and what products you wish to avoid. For most of us working in creative photo and video fields by ourselves or for ourselves, it's really just about overcoming the inertia that Adobe has in this space.

Even professionally it's more than possible, but like switching camera systems there is an initial time cost and you have to decide if the benefits are worth it. In my opinion the freedom is worth the trouble and my work hasn't suffered for it. Just don't let others saying "oh, that's not for anyone doing serious work" discourage you. I just don't think that being on a corporate leash is the only way to get things done.

IBM Model F AT

IBM Model F AT and Microsoft Optical Trackball

Probably could use a little cleaning, but this is the ultimate buckling spring board in my opinion and is finally in my possession. It uses a capacitive PCB for a sensing assembly instead of a membrane like the Model M's buckling spring mechanism. Not that the Model M is bad, this is just a lot smoother, more tactile and louder. The only mild irritation is the placement of ESC since I'm a vi user.

My apologies for the condition of the desktop itself. It's a fifteen year old big box store particle board desk and the veneer is really letting go. A future project will likely be building a new top for it or outright building a whole new desk.

Debian Mirror on Libre Computer ROC-RK3328-CC

ROC-RK3328-CC

In the age of 100Mbps+ fiber internet connections it's hard to imagine why you'd need to mirror your OS packages locally. There are a couple of reasons; most of them come down to being a good neighbor to the larger package mirrors, and in my opinion it never hurts to have a locally accessible backup of the entire package tree in case things get weird and you lose internet connectivity for a while. Mostly I do it because between physical hosts and virtual machines I have quite a few Debian boxes floating around on different releases. Pointing them all at a local machine and just having the mirror grab the files saves the host on the other end a lot of bandwidth.

Small single board computers are great for this in theory. They take up little space and power, but until recently that has come at the cost of performance and storage. I've never been terribly impressed with the Raspberry Pi and its limited connectivity has made it less than ideal for this sort of task. Enter the Libre Computer ROC-RK3328-CC board. It shares the same form factor as the RPi B type boards (so it works with those cases) but has more RAM, a faster CPU, actual gigabit Ethernet, an eMMC slot and a USB 3.0 port. It also uses less power than the RPi3B+. I went with the 4GB edition because why not, but this could easily be done on the 2GB or 1GB version of the board if you want to save a little money. The killer features for this task are the gigabit Ethernet and the USB 3.0 port; you don't need 4GB of RAM to run a couple of cron scripts and nginx.

First you'll need an OS for your SBC. I'm currently running a self-made Armbian image but Libre Computer has Debian-based images as well. My Renegade board runs off a 32GB eMMC but it can be a bit tricky to flash one of those. The MicroSD slot should provide good enough performance to run the OS if you want to go that route for simplicity's sake. For my mirror I'm grabbing Jessie (oldstable), Stretch (stable), Buster (testing) and Sid (unstable) in AMD64 and i386 flavors using debmirror. This takes up around 600GB as of November 2018 so a 1TB hard drive or SSD should do the job. Personally I just went with a dual bay 3.5" USB 3 enclosure and a couple of 2TB drives in BTRFS RAID1.

The Debian distribution that is packaged for the ROC-RK3328-CC board has an empty fstab by default. After plugging in your USB drive, formatting it (if needed) and finding the UUID (ls -lah /dev/disk/by-uuid), add an entry like so:

UUID=your UUID here /mnt/usb   btrfs    nofail  0       2

The nofail option allows the board to continue booting if the drive is not present for some reason. Btrfs is optional, ext4 or xfs will work fine.
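Then create the mount point and check that the fstab entry actually mounts:

mkdir -p /mnt/usb
mount /mnt/usb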

Best practices dictate setting up a separate user for the mirror:

groupadd mirror 
useradd -d /var/mirror -g mirror mirror
mkdir /mnt/usb/mirror
chown -R mirror:mirror /mnt/usb/mirror
ln -s /mnt/usb/mirror /var/mirror 

Next add some packages:

apt-get install nginx ed screen xz-utils debmirror debian-keyring

Debmirror needs access to the GPG keys to verify the source of the packages, so you'll need to import them into the mirror account:

su - mirror
gpg --no-default-keyring --keyring trustedkeys.gpg --import /usr/share/keyrings/debian-archive-keyring.gpg

Every once in a while the keys will need updating (particularly when a new version of Debian is released), updating the keys is the same as the initial installation:

gpg --no-default-keyring --keyring trustedkeys.gpg --import /usr/share/keyrings/debian-archive-keyring.gpg 

Next up we need to script out the mirror update process. After some tinkering and searching mine ended up looking like this:

#!/bin/sh
FTP=ftp.us.debian.org
DEST=/mnt/usb/mirror/debian
VERSIONS=jessie,stretch,testing,sid
ARCH=amd64,i386
debmirror ${DEST} --host=${FTP} --root=/debian --dist=${VERSIONS} --section=main,contrib,non-free,main/debian-installer --i18n --arch=${ARCH} --passive --cleanup $VERBOSE

and I saved it to /usr/local/bin/mirror.sh
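Also make it executable so cron and su can actually run it:

chmod +x /usr/local/bin/mirror.sh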

It's now time to do the initial sync. This can take a while, so run it in a screen session:

screen su mirror -c "/usr/local/bin/mirror.sh"

After that's finished (probably several hours later), we'll make the mirror accessible via nginx. In my case I had to uncomment the following line in /etc/nginx/nginx.conf:

server_names_hash_bucket_size  64;

as for some reason it was having a cow over the length of my hostname.

I created a vhost file at /etc/nginx/sites-available/000-yourhost.yourdomain.com:

server {
    listen 80;

    server_name wren.buttonhost.net www.wren.buttonhost.net;

    access_log /var/log/nginx/wren.buttonhost.net-access.log;
    error_log /var/log/nginx/wren.buttonhost.net-error.log;

    location / {
        root /var/mirror/;
        autoindex on;
    }
}

I have the DNS assigned on my DNS server/gateway but you'll need to figure out how to deal with that. Just make sure yourhost.yourdomain.com points at this machine's IP.

Then link this file in the /etc/nginx/sites-enabled directory:

cd /etc/nginx/sites-enabled
ln -s ../sites-available/000-yourhost.yourdomain.com 000-yourhost.yourdomain.com.cfg

My machine is called wren.buttonhost.net, please change the name in the configuration file to your own machine's vhost name!

and restart nginx:

systemctl restart nginx
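If nginx refuses to restart, nginx -t will parse the configuration and point out the offending line:

sudo nginx -t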

Afterwards you should be able to navigate to that host in a web browser and see your mirror.

Debian Mirror in Firefox

The last step is to automate the update process via cron, here in /etc/cron.d/debmirror:

# sync Debian mirrors three times a week
30 5 * * 1,3,5 mirror /usr/local/bin/mirror.sh

You now have a Debian mirror that you can fit in your pocket and run off a lithium battery pack if needed. Why would you need that? I don't know, why wouldn't you? Outside of the parts about the fstab, what packages to install and symlinking the /var/mirror directory to the USB drive, the rest of this should work on any Debian machine regardless of architecture.

The 2019 Trackball Revival

Trackball and Mouse

In an attempt to head off some elbow tendon pain I started looking into a different mouse for the office. The suspect mouse is the Dell laser model that came with my work machine a few years ago. At home I use a much more ergonomic Logitech G400 that doesn't seem to give me issues. The Dell has a very extreme angle on the front so it's a very unnatural fit for my hand. Maybe it works for people with smaller hands? It doesn't help that the DPI settings seem to be messed up and it will only swap between turtle-in-molasses and rabbit-on-meth. It's pretty hard to use in either of those modes. While I'm guilty of keyboard snobbery I'm generally OK with whatever mouse I can find that is comfortable. The Microsoft D66 and Logitech MX518/G400 are usually my go-tos as they fit my hand well, are relatively inexpensive, don't require any third party software and are basically everywhere. I guess it's time that I started getting into other relatively obscure input devices.

I haven't used a trackball since the late '90s. Back then I didn't care for them; scroll wheels were becoming big, more and more pieces of software were utilizing them, and there really weren't any trackballs on the market with a scroll wheel, which was a real sore point in early FPS games. Optical mice were also starting to come on the scene and were popular among gaming enthusiasts. We had a Logitech Marble back then, which is still available new BTW. It's a fine trackball but lacking the scroll wheel was a letdown at the time.

Nowadays scroll wheels are available on the few trackballs still on the market in the US (they're apparently still big in Japan, and that's not a joke). There aren't many available as they've fallen out of favor with most people. Logitech makes a couple and there are some sellers online that sell Japanese import Elecoms. I went with a Logitech M570 as the MX Ergo was quite expensive, plus I'm generally not a fan of non-user-replaceable batteries and the rubberized coating the MX Ergo has. While I had used a Logitech Marble years ago I'd never used a thumb ball type trackball, period, so this was a first. So far I can say the M570 is a pleasure to use. It's only been a little over a week but I'm already comfortable with it. As far as gaming goes I've only managed to try a little Team Fortress 2 with it and I can see a trackball being a huge advantage in FPS games once you adjust. To be honest I'm not sure why these things aren't more popular with gamers. The M570 is a tad small in the width department so my fingers tend to fall off the right hand side of the device, but otherwise it fits my hand well. I really don't care for the wireless part, especially since it's not Bluetooth and requires a small USB receiver, but it at least runs on a single AA that's user replaceable. Wireless trackballs don't make a lot of sense if you ask me. They don't move and aren't something you're going to use across the room, but it's what the kids like so whatever.

I think the key to preventing RSI is to stop the repetitive part of it, so I'm not ditching the mouse completely. Changing position and devices will help in that direction quite a bit. I may be on eBay looking at some older models of trackballs too as most have been relegated to the dust bin of history, although some models can be quite pricey as they have a bit of a cult following. Elecom gets high marks on the new market but they can be pricey as they have to be imported. Really, if you want to try one out I'd say give the M570 or a Trackman Marble a shake. You won't be out but about $20-25 or so, and if you like it there are other higher-end options out there. As an added bonus ne'er-do-wells won't really be able to mess with your machine, and I've already baffled a few people when I had to work in an open office setting this week ...
