All too often, businesses expect to plug in a digital asset management (DAM) system and then — voila — it will begin managing all of the organization’s assets.
The misconception can be forgiven. After all, “management” is in the name, so plug-n-play and you’re good to go, right?
Falling into this trap contributes to an expensive expectations gap that will undermine the success of any DAM initiative and require costly measures to mitigate the damage. To avoid unpleasant budgetary surprises and prevent your shiny new software from becoming an unnavigable dumping ground, DAM champions must proactively manage stakeholder and user expectations from the beginning of a DAM implementation project — well before any discussion of specific DAM technology.
This isn’t limited to DAM. People love throwing software at information problems with little to no consideration of the people, process and information architecture required to help users find and reuse assets effectively.
Defining Your Minimum Viable Product
So what exactly is required to enable finding assets after ingesting them into a DAM system? Is there a magic wand involved? Does it really require a full-time, dedicated administrator or cataloger?
The short answer is no and yes, respectively.
And yes, here’s where I bring in library and information science. Basic information management principles need to be understood, standardized, configured, maintained and governed in concert with your DAM tool in order to be able to find and reuse your digital assets — which is the name of the ROI game, after all.
To achieve maximum ROI from your DAM system, organizations need to recognize that a minimum viable product includes more than just procuring and deploying the system.
Your minimum viable product also needs to include the minimum viable information architecture and resources to support findability and reuse, along with curation and governance of digital assets throughout their continuous lifecycle (spoiler alert: that lifecycle isn’t linear).
The Key to Finding Your Assets
The people who configure a DAM system must understand how information retrieval works on a basic level, and what is required to achieve relevant search results.
The short version (if you’re looking for a more in-depth study, have at it): a user enters query terms into the system, the system compares those terms against its indices, then returns matches to the user in the form of search results.
Simple, right? Not so much. Finding exactly the asset a user is looking for only happens when users’ search terms match the terms used to describe those assets.
How is a user to know what terms have been used? Humans don’t speak the same language as computers. And presenting an index of terms for users to pick from is not exactly a Google-like experience. This is the essence of the information retrieval conundrum.
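To make the conundrum concrete, here’s a minimal sketch (in Python, purely illustrative and far simpler than any real DAM search engine, with made-up asset IDs and terms) of how query terms are matched against indexed terms:

```python
# Minimal, illustrative sketch of query-to-index matching.
# Real DAM search engines add ranking, stemming, synonyms and much more.
from collections import defaultdict

# Hypothetical asset records, each described with a few metadata terms
assets = {
    "IMG_001": ["winter", "campaign", "hero", "banner"],
    "IMG_002": ["summer", "campaign", "social"],
    "VID_010": ["product", "launch", "teaser"],
}

# Build an inverted index: term -> set of asset IDs described by that term
index = defaultdict(set)
for asset_id, terms in assets.items():
    for term in terms:
        index[term.lower()].add(asset_id)

def search(query):
    """Return asset IDs whose indexed terms match every query term."""
    query_terms = query.lower().split()
    hits = [index.get(term, set()) for term in query_terms]
    return set.intersection(*hits) if hits else set()

print(search("campaign banner"))  # {'IMG_001'}
print(search("holiday banner"))   # set() -- nothing was ever described as 'holiday'
```

A search for “holiday” comes back empty even though a perfectly relevant winter asset exists, simply because nobody described it with that term.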
Secrets of the Trade
The time-tested solution is to describe assets in a standardized way, represented within a database record in the form of … metadata. Because rich media cannot yet describe its own pixels as well as humans can, rich descriptive metadata is still gold. The DAM system then indexes those golden metadata terms, and matches them to user queries.
For a search query to return relevant results, asset records must contain indexed terms that match the query. This is where the rubber hits the road: if your asset records have little to no relevant indexed metadata, you’re out of luck.
Even if you do have some metadata, if the fields you need to search are not configured to be indexed, you will never surface those assets through a query, since DAM systems do not necessarily index every custom metadata field you may have created.
And even if a field is indexed, how it is indexed is equally important. Do you need an exact term match? Are phrases indexed, or just individual terms? It absolutely makes a difference in whether assets will be returned in your search results.
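To illustrate how much the configuration matters, here’s a hypothetical example (not any specific DAM product’s settings) of the same query behaving differently across three fields with different indexing rules:

```python
# Hypothetical per-field indexing configuration; illustrative only.
record = {
    "title": "Winter Campaign Hero Banner",
    "keywords": "hero banner",
    "internal_notes": "approved by legal",
}

field_config = {
    "title":          {"indexed": True,  "mode": "tokenized"},  # split into terms
    "keywords":       {"indexed": True,  "mode": "exact"},      # whole value only
    "internal_notes": {"indexed": False, "mode": None},         # never searchable
}

def field_matches(field, query):
    """Does a single-term query match this field under its indexing rules?"""
    cfg = field_config[field]
    value = record[field].lower()
    if not cfg["indexed"]:
        return False                          # un-indexed: invisible to search
    if cfg["mode"] == "exact":
        return query.lower() == value         # must match the whole stored value
    return query.lower() in value.split()     # matches any individual term

print(field_matches("title", "banner"))          # True  (tokenized field)
print(field_matches("keywords", "banner"))       # False (exact match required)
print(field_matches("internal_notes", "legal"))  # False (field not indexed)
```

The metadata is present in all three cases; only the indexing configuration determines whether the asset ever shows up.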
As those who have been involved in enterprise search initiatives know, someone needs to adapt and fine tune DAM search engines in order to return relevant results. Depending on your DAM system, this may require an information professional who understands the nuances of aggregation and disambiguation (which is beyond the scope of this article, and likely to elicit yawns).
Unlocking the Value of Your Assets
So how do you structure a DAM system’s information architecture and add the metadata needed to avoid creating a DAM black hole?
After performing a content inventory, interviewing your users, and analyzing your current and future state metadata needs and workflows, it’s time to design your metadata schema. The key is to understand that designing an effective schema involves more than just picking the “right” fields — you also need to provide guidance and governance around the data that users actually enter into those fields.
And that’s often the missing piece of the findability equation.
Enter standards.
Essentially, there are four types of metadata standards, three of which are directly relevant for designing metadata schemas within DAM systems (a simplified sketch follows the list):
- Data structure standards: the database fields, or “schema,” that define which elements are used to describe your assets. This is where NISO’s Understanding Metadata comes in handy, as it categorizes metadata in a way that considers all the different types of information you may need to capture about your assets.
- Value encoding schemes: these are the actual values or terms used within controlled fields (a.k.a. “picklists”), where users can only choose from a predetermined set of values. There are many established vocabularies from which to draw inspiration when designing your own.
- Data content standards: the cataloging guidelines for the format and syntax of the values populated within your metadata fields (plurality, abbreviations, lower case only, etc.). Understanding how your DAM system handles data within fields and then entering data accordingly — and consistently — is critical to findability. Although libraries, museums and archives have extensive, complex guidelines to describe their assets, most private organizations can just skim some of these to understand what types of formatting and syntax are important for findability, and get away with two to three pages of streamlined guidance for users.
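Here’s a deliberately simplified sketch of how the three standards might come together in one schema definition; every field name, vocabulary and rule below is hypothetical:

```python
# Hypothetical schema combining the three types of standards; illustrative only.
schema = {
    # Data structure standard: which elements describe an asset
    "fields": ["title", "asset_type", "region", "usage_rights", "expiry_date"],

    # Value encoding schemes: controlled picklists for selected fields
    "controlled_vocabularies": {
        "asset_type": ["photograph", "illustration", "video", "logo"],
        "region": ["north america", "emea", "apac", "latam"],
    },

    # Data content standards: format and syntax rules for free-text fields
    "content_rules": {
        "title": "Sentence case, no abbreviations, singular nouns",
        "expiry_date": "ISO 8601 (YYYY-MM-DD)",
    },
}

def validate(record):
    """Return simple rule violations for a candidate asset record."""
    errors = []
    for field, allowed in schema["controlled_vocabularies"].items():
        if record.get(field) not in allowed:
            errors.append(f"{field}: '{record.get(field)}' is not an allowed value")
    return errors

print(validate({"asset_type": "photo", "region": "emea"}))
# ["asset_type: 'photo' is not an allowed value"]
```

Even this toy validation step shows how controlled values and content rules catch the inconsistencies (“photo” versus “photograph”) that quietly erode findability.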
In addition to the aspects above, folder structures and file naming standards also serve as important metadata, and can sometimes be harvested via scripts to make the asset ingestion/migration process faster and less painful.
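As a hypothetical example, if files follow a naming convention such as brand_campaign_assettype_date, a small ingestion script could pre-populate metadata from the path and file name (the convention and fields below are invented for illustration):

```python
# Hypothetical example of harvesting starter metadata from a file path that
# follows a <brand>_<campaign>_<assettype>_<YYYYMMDD>.<ext> naming convention.
from pathlib import Path

def harvest_metadata(file_path):
    """Derive starter metadata from a conventionally named file path."""
    path = Path(file_path)
    parts = path.stem.split("_")      # e.g. acme_winter2024_banner_20240115
    metadata = {
        "source_folder": path.parent.name,
        "file_format": path.suffix.lstrip("."),
    }
    if len(parts) == 4:               # only trust names that fit the convention
        metadata.update({
            "brand": parts[0],
            "campaign": parts[1],
            "asset_type": parts[2],
            "creation_date": parts[3],
        })
    return metadata

print(harvest_metadata("campaigns/2024/acme_winter2024_banner_20240115.png"))
# {'source_folder': '2024', 'file_format': 'png', 'brand': 'acme', ...}
```

Harvested values like these should still be reviewed against your cataloging guidelines, but they give catalogers a head start instead of a blank form.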
Curation is Not Just for Museums
Managing digital collections effectively requires active, continuous curation throughout their lifecycle. This includes more than just the obvious:
- Assets: are the appropriate assets being ingested? Are asset relationships still valid? Are assets rendered inaccessible when rights expire? Has anyone deduped lately? Are assets archived after a specific number of years? And when was the last time those policies were updated?
- Metadata: after structuring your data using schemas, vocabularies and cataloging guidelines, don’t throw away all that potential value by neglecting to govern your metadata. Although some metadata elements such as “Date of Creation” or “Product ID” are (or should be!) static, preservation or workflow data may continue to evolve, which requires continuous updating and maintenance. And if some metadata fields never get populated, that should trigger an evaluation of their relevance or possible ambiguity.
- Controlled vocabularies: You went to the trouble of creating a list of preferred and variant terms to describe your assets — well done. But remember: terms and hierarchies are living entities and also need to be curated. Companies merge, acronyms come and go, rights expire and new products are developed all the time. A controlled vocabulary populated with an abundance of obsolete or irrelevant terms wastes time, invites more cataloging errors, complicates training and frustrates users who have to scroll through old terms every time they need to fill in a metadata field. Multiply that for every controlled metadata field, and you have a serious usability problem that may develop into a user adoption issue.
- Search: what terms are users entering that should be considered for addition to your controlled vocabularies? Are all of your facets serving as helpful filters, or are some of them just taking up valuable screen real estate? (A simple search-log sketch follows this list.)
- Collections: are manually curated collections still relevant to users? Do they contain frequently used assets? Are odd assets popping up within automated collections?
- Users: have users moved to different departments or brands, skewing your usage statistics?
- Access: do former employees or agencies still have access to your organization’s intellectual property?
- Documentation: Documentation often sits on a virtual shelf, only to be updated when someone is forced to do it for training purposes. The value of current, accessible documentation becomes obvious when users keep asking the same questions over and over, taking up valuable staff time and negating the whole point of self-service.
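As promised in the search item above, here’s a simple, hypothetical sketch of mining search logs for frequent query terms that are missing from a controlled vocabulary, which makes them candidates for new preferred or variant terms:

```python
# Illustrative only: the log entries and vocabulary below are invented.
from collections import Counter

search_log = ["holiday banner", "xmas banner", "winter hero", "holiday social"]
controlled_vocabulary = {"winter", "banner", "hero", "social", "campaign"}

# Count query terms that the vocabulary does not yet recognize
unrecognized = Counter(
    term
    for query in search_log
    for term in query.lower().split()
    if term not in controlled_vocabulary
)

print(unrecognized.most_common(3))  # [('holiday', 2), ('xmas', 1)]
```

Terms that users keep typing but the vocabulary doesn’t know about are exactly the gaps worth reviewing with your catalogers.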
Even from this abbreviated list, it’s easy to see why a dedicated DAM Administrator is required to manage digital asset collections effectively. Ongoing training, periodic audits and a culture of accountability will also go far in maintaining the integrity of your valuable data.
Great Expectations
Plugging in your DAM system after an expensive implementation, only to find it doesn’t just “play,” is a horrible time to discover that a minimum viable product also requires a minimum viable information architecture, as well as a team to curate and govern your DAM system.
As many have said before me, information doesn’t manage itself. The DAM expectations gap is avoidable with the right education for the right people, at the right time. And as a bonus, the time you save by creating a DAM program at the start can be used to integrate your new system with the rest of your content platforms and multiply your ROI. Which may in fact result in exceeding those DAM expectations.