Editorial

AI-Generated Metadata Isn't Working: Here's How DAM Vendors Might Fix It

The current artificial intelligence (AI) image recognition tools used in DAM systems are comparable to a pair of eyes disconnected from the brain. Their limitations, and most vendors' unwillingness to do much more than plug into the data they provide so they can tick the "AI box" on the features list, exhibit an all-too-common lack of lateral, joined-up thinking. I suspect, however, that if DAM vendors were not preoccupied with numerous other pressing tasks, they would better understand this flaw and be able to devise solutions to address it.

I've recently been researching methods to improve the results of the automated metadata generated by AI image recognition tools for digital assets. I have written before about the issues and limitations with these tools, the most notable being that they fail to acknowledge that metadata is contextual data about digital assets.

I believe we could improve the results from these tools through a more psychological or (dare I say) philosophical approach. In this article, I will explain my hypothesis and propose a possible technique for implementing it. Further, I will explain why current DAM vendors are ill-equipped to get the most from AI and machine learning (ML), and what they need to do to reposition themselves.

Issues With Current AI and ML Image Recognition Tools

The current AI tools used in DAM systems are essentially industrialized pixel-scanners that depend on statistical probability and pattern matching to generate keyword suggestions.

When DAM vendors demo AI modules, they generally follow the same pattern: a single, usually very generic image is uploaded and the resulting keywords initially seem relevant. When the technology is applied to the kind of real-world digital assets that populate a typical corporate DAM, the results become less satisfactory. Either large numbers of assets have the same keywords, diluting their relevance, or they are so vague and generic that they're of little value.

My experience indicates that within six months (or less) most enterprise DAM users will ask for AI features to be disabled, or for the AI-generated metadata to be excluded from regular searches.

A significant yet frequently overlooked factor is that AI image recognition tools are designed for mass-market use cases, which do not represent most DAM vendors' clients. Evidence for this can be seen in the markets targeted by some image recognition tool vendors, such as wedding photographers. The DAM software market is not viewed as a high priority because it is complex, fragmented and currently offers limited revenue growth.

In other words, AI tools are not designed for the job most DAM systems have to do.

Related Article: The Uncomfortable Truth About DAM

Images Need Context

Most DAM users aren't searching for generic assets that rely on non-contextual metadata. They care less that an image is of a tall building against a blue sky, and more about whether it's their organization’s headquarters or not. The physical characteristics are subsidiary to the relevance of the subject. This context is the defining factor as to whether an asset's metadata will help or hinder you in finding it.

So where do we find contextual relevance?

Attempting to identify context solely from the pixels in an image is very difficult unless someone has already done so for you (and across the broad range of contexts that encapsulate all of your current interests). Think of the many images you've seen in the last week, from smartphones, computers, televisions, books, magazines and billboards. How likely is it that anyone will have already identified the context of every one of them for you?

That some image recognition software vendors have needed to introduce case-specific modules in order to identify increasingly diverse subjects is evidence that the current model is unsustainable. So where can we find the contextual hints that do assist in defining the subjective relevance of an image? In the remainder of this article I will describe a method which I believe has a higher likelihood of success.

Moving From Subconscious to Conscious Data

Let’s start by looking at how humans process data to see if we can apply any insights to DAM systems.

We subconsciously remember everything that has ever happened to us. This is comparable to raw data in the digital realm. What governs whether or not we consciously remember something is our belief systems. As I discussed in a previous CMSWire article, Metadata = Context = Meaning = Value. In DAM terms, therefore, our belief systems are like metadata models.

To illustrate this, consider what happens if you meet someone for the first time and they tell you their name. Even though subconsciously you have committed it to memory, you may not be able to recall their name at a later date. If you continue to encounter the person and their significance to your social or work circles increases, the chances of your recalling it also increase. Similarly, if they possess an attribute you find valuable (you find them attractive, they can help you achieve a current goal, or they exhibit traits outside your normal range of experience), the likelihood of remembering their name increases again.

This is remarkably similar to what happens when DAM users fail to locate relevant digital assets from searches. If a memory has nothing that contextualizes it against other memories, you will not consciously recall it. Likewise, if your DAM lacks a metadata model linking items of data to digital assets, you won't find those assets either. For DAM systems, therefore, metadata can alternatively be described as "conscious data."
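To make this concrete, here is a minimal sketch (hypothetical asset IDs, keywords and field names, not any real DAM schema) of why keyword-only AI metadata dilutes relevance while a contextual metadata link disambiguates: two visually identical images receive identical generic keywords, and only the asset explicitly linked to an organizational subject is findable as such.

```python
# Hypothetical example: two assets with identical AI-generated keywords.
# Only contextual metadata (a link to an organizational subject) makes
# one of them findable as "our headquarters".

assets = {
    "img_001": {"keywords": ["building", "sky", "architecture"],
                "context": {"subject": "ACME headquarters"}},
    "img_002": {"keywords": ["building", "sky", "architecture"],
                "context": {}},
}

def search_keywords(query):
    """Keyword-only search: matches every asset tagged with the term."""
    return [aid for aid, a in assets.items() if query in a["keywords"]]

def search_context(query):
    """Contextual search: matches only assets linked to the subject."""
    return [aid for aid, a in assets.items()
            if query.lower() in a["context"].get("subject", "").lower()]

print(search_keywords("building"))     # both assets match: diluted relevance
print(search_context("headquarters"))  # only the contextually linked asset
```

A keyword search for "building" returns both assets, illustrating the dilution problem described above; the contextual search returns only the asset whose metadata model links it to the organization.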

Related Article: Using AI For Metadata Creation

The Implication of Subconscious and Conscious Data

By focusing on data and metadata (subconscious and conscious data), we increase our chances of successfully automating the creation of useful metadata in DAM with machine learning. If we could digitally record all of a skilled digital asset manager’s interactions with a DAM solution, we would have a lot more raw data to use to recreate aspects of their behavior.

More Data = More Potential Value

The more data we collect from user interactions with DAM systems, the greater the chance of obtaining insights. Unlike the human subconscious (which automatically stores everything), data within an IT system is lost unless it is explicitly saved. As such, if there is no mechanism to retain this wealth of information, any potential to analyze it also disappears.

In recent times, tech leviathans such as Facebook and Google have applied this concept with shocking effectiveness. By capturing every user interaction to the smallest degree — irrespective of whether it has any immediately apparent value — they are now in an unrivaled position of power due to the magnitude and richness of the data repositories they control. Their ability to do this is one of the reasons why user volume is considered a key factor when valuing technology businesses. The greater the number of users, the bigger the opportunity to capture the data they generate and re-contextualize it using metadata, thus increasing its value.

Transactions: The Atomic Unit of Value in DAM Systems

Any DAM system which aspires to be used as a credible source of AI/ML insights must have an audit trail which records all user interactions with it and the digital assets it holds. It should also be possible to play back this audit trail frame by frame, like a video recording.

Every transaction in the audit trail contains the raw data to view and model what users are doing within the DAM system. This is the equivalent of the human subconscious: the more granular this transactional log is, the more accurate the insights. 
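As a sketch of what such a granular, replayable audit trail might look like (all class and field names here are illustrative assumptions, not any vendor's actual schema), each user interaction can be captured as an immutable, timestamped transaction in an append-only log that can later be played back in order:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Iterator, Optional

@dataclass(frozen=True)
class Transaction:
    """One atomic user interaction with the DAM system."""
    user: str
    action: str                # e.g. "search", "download", "tag"
    asset_id: Optional[str]    # None for actions not tied to one asset
    detail: dict
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class AuditTrail:
    """Append-only log of transactions, replayable 'frame by frame'."""
    def __init__(self):
        self._log: list = []

    def record(self, tx: Transaction) -> None:
        self._log.append(tx)

    def replay(self) -> Iterator[Transaction]:
        # Yield transactions in the exact order they occurred.
        yield from self._log

trail = AuditTrail()
trail.record(Transaction("alice", "search", None, {"query": "headquarters"}))
trail.record(Transaction("alice", "download", "img_001", {}))

for tx in trail.replay():
    print(tx.user, tx.action, tx.asset_id)
```

The `frozen=True` flag makes each recorded transaction immutable once written, which is the property that makes the log a trustworthy raw-data source for later ML analysis.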

Related Article: Closing the DAM Expectation Gap

DAM Vendors Currently Play a Minor Role in the AI/ML Story

Although DAM vendors frequently mention AI in their marketing, they rely on third-party components to deliver the limited form of it on offer. This is due to a number of factors: most are reluctant to get involved with custom development work (for justifiable commercial reasons), and their core architectures typically lack the abstract, elegant simplicity required to support the kind of data model described above.

Very few DAM platforms are as scalable as they need to be. There is an unwillingness to subsidize such development efforts, and there are limited human resources to do the work comprehensively. When it comes to AI and ML, DAM vendors are currently little more than resellers of third-party technologies.

How DAM Vendors Can Write Themselves Back Into The Plot

A couple of approaches can help solve the above issues.

The first is for vendors to rearchitect their platforms so that everything, even the standard user interface, is routed through the DAM system's API as a transaction: what you might call "Transactional API First." Each API action should be composed of one or more transactions stored in the audit trail, which can itself be analyzed and mined for data.
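A minimal sketch of the "Transactional API First" idea, under the assumption of a hypothetical `DamApi` with invented method names (no vendor's real API is implied): a decorator routes every API call, including those the vendor's own UI would make, through a single choke point that records it as a transaction before executing it.

```python
import functools

AUDIT_LOG = []  # stand-in for a persistent, append-only audit trail

def transactional(action):
    """Record every call to the wrapped API method as a transaction."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # args[0] is the API instance; the rest are the call's inputs.
            AUDIT_LOG.append({"action": action,
                              "args": args[1:], "kwargs": kwargs})
            return fn(*args, **kwargs)
        return wrapper
    return decorator

class DamApi:
    """All functionality -- even the standard UI -- calls these methods."""
    @transactional("upload")
    def upload(self, asset_id, filename):
        return f"stored {asset_id}"

    @transactional("search")
    def search(self, query):
        return []

api = DamApi()
api.upload("img_001", "hq.jpg")
api.search("headquarters")
print([tx["action"] for tx in AUDIT_LOG])  # every call left a transaction
```

Because no code path bypasses the decorated methods, the audit trail captures the full transactional record the article argues is the prerequisite for credible AI/ML insights.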

The second is for vendors to focus on developing their partner networks. I mentioned earlier that vendors dislike custom development work for a number of good reasons. For the foreseeable future, however, getting anything useful out of AI and ML will necessitate the kind of bespoke implementation model that was fairly commonplace in the software industry 20 or 30 years ago.

DAM vendors need to realize their role is to provide a transactional foundation on which third parties can add value. This may mean forgoing working with some new trending technologies as well as giving up some revenue to a partner. The upside, however, is being far more embedded in the digital asset supply chains of their customers, with all the implications for long-term sustainable revenue growth that this affords.

Related Article: DAM Innovation Is Finally on the Horizon: From Value Chains to Blockchains

The Automated Metadata Conundrum Continues

The automated metadata conundrum is far wider in scope and complexity than many DAM experts are prepared to acknowledge. The industry's track record of dealing with what is arguably the biggest obstacle to DAM adoption is fraught with hyperbole, underestimation of the problem's complexity and over-optimistic assessments of the value of vendors' contributions.

With that said, if DAM vendors are prepared to see the bigger picture and understand the digital asset supply chain context into which their solutions are being deployed, we might just see some decent progress yet. You can read more on how AI and ML might bring value to DAM.

About the author

Ralph Windsor

Ralph Windsor is Project Director for Digital Asset Management Consultants, Daydream and also a contributing editor for DAM News. He has worked in the IT industry since 1995 as a software developer, project manager and consultant.

About CMSWire

For nearly two decades CMSWire, produced by Simpler Media Group, has been the world's leading community of customer experience professionals.

Today the CMSWire community consists of over 5 million influential customer experience, digital experience and customer service leaders, the majority of whom are based in North America and employed by medium to large organizations. Our sister community, Reworked gathers the world's leading employee experience and digital workplace professionals.
