Does Adobe need to go back to the drawing board and rebuild Lightroom from scratch?

Lightroom completely dominates the realm of Digital Asset Management (DAM) – a jack-of-all-trades solution that fits the mold of most photographic workflows, although the bitter pills to swallow are its sluggish performance and the monthly subscription. Is it time for Adobe to start again?

Digital asset management is something that we all do as photographers – even if it is as simple as copying JPEGs straight off your SD card and dumping them into the “Pictures” folder, or ingesting raw files into pre-tagged folders synced for access anywhere via the cloud. The care you take will depend on what you want to achieve and who you are delivering the images to.

Of course, this has not always been the case. The product that eventually became Lightroom – codenamed Shadowland – began development in 1999, with the first public beta arriving in 2006, before version 1.0 shipped in 2007. Rolled into that first release was technology from PixMantec’s RawShooter, which enabled Lightroom to demosaic raw files in addition to handling the more traditional JPEG and TIFF. In a single stroke, it was positioned alongside Photoshop – leaving the heavy lifting of layer-based editing there – while specifically targeting the photographic workflow of image ingestion, raw conversion, and non-destructive editing.

The paradigm was to offer the digital equivalent of the darkroom, allowing you to develop your images into a final product. Perhaps this is where Lightroom really shines and exposes Adobe’s real focus: the creative process and, ultimately, the output. Lightroom was intended to deliver to print and digital media, which is why the Book, Slideshow, Print, and Web modules have such prominence. The crowning glory is arguably the image catalog, which is incredibly important for maintaining a coherent digital archive – again extending the analog metaphor into the creative process. It is styled as a contact sheet for a reason, allowing you to tag the photos you want to develop. Of course, the catalog remains a big buy-in: if you have a library of 100,000 photographs, it is a big disincentive to move to another product.

This process served film photographers for decades and was molded to fit the digital arena. When it comes to workflow, Lightroom hit the sweet spot and has refined the photo-development experience with each iteration. All of this is non-destructive, leaving the original digital negative untouched.

So What’s the Problem?

The dramatic expansion of digital photography – from the 1988 Fuji DS-1P through the rise of the DSLR market – created the niche that Lightroom was intended to fill. The DS-1P could shoot up to ten 0.4MP images on its 2MB memory card, while Nikon’s D1 shot 2.7MP images to a CF card. Early digital images were mainly JPEGs, but DSLRs made the raw format more common. More problematic, raw files were vendor specific, so if you shot with different brands you needed a range of manufacturers’ software products to import them. Photojournalists exemplified the problem, as many photographers needed to move images from potentially different cameras to the broadsheet.

What has changed since the birth of digital asset management is the photographer’s shift to shooting more images than ever before, using high-resolution sensors that create large files. This “wealth of imagery” is creating a data headache that affects all aspects of the photographic workflow, the most important of which is the sheer size of the data collection. There was always an upfront cost associated with image processing when shooting film: you paid for film, development, and then printing. There was a cost at each step, before you carefully indexed and archived your negatives. Digital was trumpeted as an almost “no cost” solution; you already had a computer and simply threw those little JPEGs into another directory. However, cameras like Fuji’s GFX100 create 100MB+ files, so you need large media cards, an ultra-fast connection to your PC, copious storage, and a substantial backup solution. If you are a wedding photographer, shooting 2,000 images at a single event is common, which creates a significant data-processing headache. That is 200GB of data for one wedding that needs to be ingested, culled, processed, distributed, and backed up. It costs a considerable amount to establish that entire processing chain.
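The arithmetic behind that 200GB figure can be sketched in a few lines. This is a back-of-the-envelope estimate; the figures (2,000 frames, ~100MB per raw file, three total copies for backup) are illustrative assumptions from the scenario above, not measurements.

```python
def shoot_storage_gb(frames: int, mb_per_frame: float, copies: int = 1) -> float:
    """Total storage in decimal GB for a shoot, including any backup copies."""
    return frames * mb_per_frame * copies / 1000


# One wedding: 2,000 frames at ~100 MB each, straight off the cards.
primary = shoot_storage_gb(2000, 100)

# A common backup rule keeps three copies in total (working set + two backups).
with_backups = shoot_storage_gb(2000, 100, copies=3)

print(f"{primary:.0f} GB per wedding, {with_backups:.0f} GB with backups")
```

Multiply that by a season of weddings and the pressure on ingestion, storage, and backup infrastructure becomes obvious.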

The problem remains one of asset management, but now with the added burden of data management. It’s not so much that Lightroom can’t manage your photos – it can – but rather how quickly it can do so.

Rapid Asset Management

As a result of the greater number of large image files, we are now seeing pressure on the software that manages those photographic assets; there was no compelling need for high-performance processing when files were small, but this has become an obvious bottleneck. Speed is even more important in time-critical photography such as sports and news, where you may need to upload your imagery literally seconds after capture. Rapid asset management is acutely needed in these domains, but all areas of photography will benefit from ever-faster culling and cataloging of imagery. I would then separate processing into two areas: that requiring simple batch-driven editing, and that requiring more sophisticated manual processing. Integrating the benefits of the former into the culling process is important, while the latter can more easily be performed externally (for example, in Photoshop). Tethered capture is probably a special case.

Rapid asset management is a relatively new concern, as Lightroom has largely sat on its own, with bespoke products (such as BatchPhoto and Photo Mechanic) targeting fast processing of images outside an image catalog. Competition has come in the form of image-processing applications (such as Photoshop and Affinity Photo) or photo-oriented ingestion and demosaicing products. The latter work in the same vein as Lightroom, focusing on raw conversion and broad global edits rather than the layer-based model of Photoshop, although this has changed over the years with adjustment brushes, control points, and, more recently, adjustment layers. That said, integrated cataloging is a late arrival in many products (such as Luminar, PhotoLab, and Capture One), yet it may be the single most important feature for a photographer – the one that can lock you into a product.

I know that one of the first things I do after a wedding is cull and tag. It can be a soul-destroying, monotonous task, but when it comes to delivering the final product, it makes all the difference. It is important to throw out the images you don’t want, tag the keepers, and mark anything worth returning to. Then I want to code them based on which media stream they will be distributed to. Lightroom is satisfactory for this process, but the import routine is not flexible enough to cull at the same time. I find myself either importing everything and then culling and tagging, or culling first and returning to tag later. That either wastes time importing all the images I don’t want, or means handling everything twice while struggling with Lightroom’s pedestrian speed. Add to this the “Adobe subscription tax”, and I wonder whether to lump it or leave. Other products are starting to gain traction in the market, with ACDSee Photo Studio, DxO PhotoLab, Skylum Luminar, and Capture One all contenders.

It has been 20 years since Lightroom first hit the drawing board, and photography has changed in that time, in no small part due to Lightroom’s large share of the market. Software performance, at least for handling the sheer quantity of imagery, needs to be front and center, and Lightroom is not known for its sprightly interface. The change required runs deeper, however: I want to speed up my workflow, and adding more develop tools to the interface is not top of the list. I would like to see the greatest effort put into culling and tagging imagery as rapidly as possible, along with brisk performance across the board. Does Adobe need to start again with Lightroom?

Lead image courtesy of Hamiltonjack via Pixabay, used under Creative Commons.
