Are you maximizing the value of your digital asset management (DAM) system? Understanding the complexities of DAM lifecycles and building efficient workflows can be a daunting task. DAM vendors often do a very good job of getting the DAM up and running, but they rarely have a deep understanding of your assets and business needs.

If you plan on implementing a new DAM, spend the extra time to gather requirements. Not only will this help you design an efficient DAM workflow, it will also provide the information you need to make better decisions when selecting a DAM solution.

If you have an existing DAM solution, understanding how end users want and need to work with the assets will provide clues on how to improve your DAM workflow. There are almost always opportunities to streamline workflows and improve efficiency, but this can only be done if you have a comprehensive understanding of your assets and your users' needs.

A DAM lifecycle consists of five major processes: Create, Ingest, Manage, Distribute and Archive.
The previous post, Improve DAM Workflows Without Breaking the Budget, examined opportunities in the first two processes: Create and Ingest. This post examines the last three: Manage, Distribute and Archive. Each process consists of complex workflows that offer opportunities to improve efficiency.

Manage

Once the assets have been ingested, the power of the DAM software is used to “automagically” enhance, manipulate, index and control access to the assets. These mechanisms are hidden from the DAM user, but they allow users to quickly and easily find and repurpose assets. Other mechanisms provide:

  • Security through granular access control permissions on assets
  • Transformation of assets to produce thumbnails, previews and other desired file formats
  • Version controls to preserve the original assets
  • Control of digital asset metadata classification and standardized taxonomy
  • Audit trail of asset usage
  • Reporting
  • Integration with other systems through API endpoints

Leveraging these internal DAM mechanisms can often be done through configuration. The default or initial configuration may not take advantage of the DAM’s full capabilities or may not be optimized for your needs. As business requirements change or evolve, so should the workflows, to ensure they remain efficient and streamlined for the users.

Having consistent, reliable metadata provides many opportunities for building efficiency, such as automatically routing assets to the appropriate editor and notifying teams when new content has arrived for editing or review. For example, a photo with a Category of “S” for sports could automatically be routed to the sports desk for editing. This type of workflow is easily configured and eliminates the need to scan through content not intended for that team.
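
A minimal sketch of that kind of routing rule is shown below; the category codes, queue names and notification step are illustrative placeholders rather than features of any particular DAM.

```python
# Sketch of metadata-driven routing: category codes map to teams, and new
# assets are placed in the matching team's work queue. Field and queue names
# are hypothetical.

ROUTING_RULES = {
    "S": "sports-desk",          # sports photos go to the sports desk
    "E": "entertainment-desk",
    "N": "news-desk",
}

def notify_team(queue: str, asset: dict) -> None:
    # Placeholder: a real workflow would send an email, chat message,
    # or in-DAM notification here.
    print(f"New asset {asset['id']} routed to {queue} for review")

def route_asset(asset: dict) -> str:
    """Return the work queue an ingested asset should be routed to."""
    category = asset.get("category", "").upper()
    queue = ROUTING_RULES.get(category, "general-desk")  # fall back to a default queue
    notify_team(queue, asset)
    return queue

route_asset({"id": "IMG-4412", "category": "S"})  # -> "sports-desk"
```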

Providing drop-down lists of standardized taxonomy terms allows editors to select the correct terms, which minimizes typing and thus typos. It also ensures that the metadata remains consistent and matches the organization’s standards.

An interesting exercise is to export the metadata from the DAM. There are often opportunities to clean up typos by identifying commonly misspelled or unapproved terms. It is easy to identify these issues and normalize the data by updating the database with the correct terms. This is also a good exercise to perform on other database fields such as date fields.
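
As a rough illustration, a clean-up pass over an exported CSV might look like the sketch below; the column name and the mapping of unapproved terms are assumptions made for the example.

```python
# Sketch of normalizing exported metadata: misspelled or unapproved terms are
# mapped to approved taxonomy terms. The "keywords" column and term mapping
# are illustrative, not from a real DAM export.
import csv

APPROVED_TERMS = {
    "bikeing": "Cycling",
    "biking": "Cycling",
    "cycling": "Cycling",
    "footbal": "Football",
}

def normalize_keywords(raw: str) -> str:
    terms = [t.strip() for t in raw.split(";") if t.strip()]
    cleaned = {APPROVED_TERMS.get(t.lower(), t) for t in terms}
    return "; ".join(sorted(cleaned))

with open("dam_export.csv", newline="", encoding="utf-8") as src, \
     open("dam_normalized.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row["keywords"] = normalize_keywords(row["keywords"])
        writer.writerow(row)
```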

In some ways, this is simple housekeeping, but far too often overlooked, especially when data has been ingested from previous systems that did not provide strong governance around metadata. If your company is in the process of replacing an existing DAM, conducting an extensive content inventory will provide the opportunity to identify metadata issues. These issues can be resolved and normalized as part of the extract, transform and load (ETL) process as assets are ingested into the new DAM.

Search Tools

Search is one of the most powerful components of a DAM solution. Modern DAM systems use high-end, full-featured search engines such as Apache Lucene/Solr, Autonomy, Microsoft FAST and others. These search tools have many features that are often overlooked (see the chart below) or configured suboptimally for your specific needs. This provides an opportunity to improve the user experience by helping users quickly and easily find and repurpose assets.

The default installation of your DAM search will only produce modest results. A deep understanding of your assets and how they are used is needed to optimize the search tools. Most of these modifications can be made through configuration changes to the search tool.

Here are a few common features offered by most DAM search tools.

  • Fielded search: Search on one or more values in specific fields or parameters.
  • Search within a search: Allows users to refine their initial search by searching within its results.
  • Synonyms: Associates alternate terms with the same meaning, such as Blue, which would also include “Azure, Cobalt, Navy, Sapphire, Cerulean or Indigo.”
  • Stemming and term expansion: Stemming reduces the words “fishing,” “fished” and “fisher” to the root word “fish.” Expansion works in the opposite direction, taking “fish” and returning “fisherman,” “fishing” and “fished.”
  • Stop words: Removes stop words (e.g., “a,” “the”) from the query so they are not part of the search. You must be able to add, modify and delete entries in the stop word list while the system is running, without re-indexing.
  • More Like This: Provides a link that returns similar assets based on related metadata.
  • Did you mean?: Uses a custom dictionary or thesaurus to suggest other possible search terms (“Did you mean:”). This is especially useful when typos occur or when searching for complex or unusual names.
  • Relevant searching: When a user picks a result, a separate area displays searches for “like” items.
  • Faceted filtering: Results must include faceted filtering options based on metadata within the search results, allowing users to quickly narrow the results.
  • Query term-based rules: Business rules for a term or terms that trigger spotlighting of search results, i.e., when a specified term is used it triggers a ranking event that moves content to the top of the results.
  • Boosting: Ability to influence the relevancy score based on business rules, e.g., published assets might have a relevancy “boost” field that increases the ranking of those assets.
  • Static ranking: Ability to place a specific result at a specific position in the search results based on a user’s search.
  • Statistical relevancy: Results are ranked based on statistical analyses such as term frequency-inverse document frequency (TF-IDF) and keyword biasing.
  • Natural biasing: Rules that bias the ranking based on natural or intuitive affinities in the content, for example how far a term appears from the beginning of the document, or favoring later dates over earlier dates and higher currency values over lower ones. These must be switchable while the system is running, without re-indexing.
  • Popularity biasing: Content in the index can be boosted in rank for a particular search term if it has been explicitly selected in past searches on that term.
  • Field weighting: Individual fields or parameters can be weighted so that matches in some fields count more toward relevancy than others (for example, in HTML data, a term found in the TITLE tag generally means more than one found in the BODY tag). Weights must be changeable while the system is running, without re-indexing.
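
If your DAM exposes an Apache Solr index, several of the features above, such as field weighting, boosting and faceting, can be exercised directly through query parameters. The following is a minimal sketch; the endpoint URL and field names are assumptions and will differ from one DAM to another.

```python
# Sketch of a Solr query that applies field weighting (qf) and a business-rule
# boost (bq) for published assets. The core URL and field names are assumed.
import requests

SOLR_URL = "http://dam-search.example.com:8983/solr/assets/select"

params = {
    "q": "water conservation",
    "defType": "edismax",                          # eDisMax parser supports weights and boosts
    "qf": "title^4 caption^2 keywords^1.5 body",   # field weighting
    "bq": "status:published^10",                   # boost published assets toward the top
    "facet": "true",
    "facet.field": ["asset_type", "category"],     # faceted filtering in the response
    "rows": 25,
}

response = requests.get(SOLR_URL, params=params, timeout=10)
for doc in response.json()["response"]["docs"]:
    print(doc["id"], doc.get("title"))
```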

Business rules can also be designed to leverage relationships between assets. For example, metadata such as a project number or assignment number can be used to automatically link related assets within the DAM search index. When a user searches for a photo on water conservation, a faceted result can be returned that includes linked videos, stories and even model releases or contracts describing asset usage. A wide range of rules could also be designed around usage restrictions or embargoes; coupled with access control lists, these would hide or show assets based on permission level or even business unit.
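
Building on the same kind of query, embargo and access-control rules can be expressed as Solr filter queries. The field names and group values in this sketch are purely illustrative.

```python
# Sketch of filter queries (fq) that hide embargoed or restricted assets.
# Field names and group values are assumptions for illustration only.
from datetime import datetime, timezone

def restriction_filters(user_groups: list[str]) -> list[str]:
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return [
        f"-embargo_until:[{now} TO *]",                       # exclude assets still under embargo
        "-usage_restriction:internal_only",                   # exclude restricted assets
        "allowed_groups:(" + " OR ".join(user_groups) + ")",  # ACL by team or business unit
    ]

params = {
    "q": "water conservation",
    "fq": restriction_filters(["photo_desk", "sports_desk"]),  # added to the query above
}
```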

If your organization receives high volumes of assets, it may prove difficult to find the right assets even when they are properly tagged. One approach to solving this is to use known data to boost your search results. Let’s look at a big event such as the Oscars. Thousands of images will be ingested in a very short period of time. Business rules could be established to surface published content first when a user performs a search.

Let’s take this concept one step further: the DAM search tools can be integrated with outside data sources such as trending data or analytics from your organization’s website. If the DAM search engine also had information on the most popular images on the website, this information could be used to boost the search results through a business rule. In this case, when a user searches, the most popular published content that matches the query appears first, followed by everything else.
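
As a rough sketch of how this could work, popularity scores from web analytics could be written back to the index as a field and then used in a boost function at query time. The endpoint, field names and scores below are assumptions, not a specific product’s API.

```python
# Sketch of feeding website analytics back into the search index as a
# "popularity" field, then using it to bias ranking. URLs, fields and scores
# are illustrative.
import requests

SOLR = "http://dam-search.example.com:8983/solr/assets"

def update_popularity(scores: dict[str, int]) -> None:
    """Atomic updates that set a popularity score on existing documents."""
    docs = [{"id": asset_id, "popularity": {"set": score}}
            for asset_id, score in scores.items()]
    requests.post(f"{SOLR}/update?commit=true", json=docs, timeout=10)

# Scores would come from your web analytics platform.
update_popularity({"IMG-4412": 950, "IMG-9981": 120})

# At query time, bias relevancy with a boost function on the new field.
params = {
    "q": "oscars red carpet",
    "defType": "edismax",
    "bf": "log(sum(popularity,1))",   # gentle boost so popularity doesn't swamp relevance
    "bq": "status:published^10",
}
requests.get(f"{SOLR}/select", params=params, timeout=10)
```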

Understanding and leveraging the capabilities of the DAM can reveal incredible opportunities to design efficient workflows.

Distribute / Syndicate

Workflows for efficiently distributing and syndicating assets provide one of the most effective ways to monetize assets and maximize your DAM’s ROI.

Distribution workflows provide a way to send assets from the DAM somewhere else, covering everything from downloading an asset locally to sending it to a CMS or even an FTP site. The process can be manual or automated based on metadata-driven business rules.

Syndication workflows include automatically sending published content to a B2B customer based on a licensing agreement. Syndication workflows are typically fully automated, driven by metadata-based business rules. For example, I created a B2B syndication workflow for a news organization that entered into a syndication agreement with a travel website. In this case, a business rule was created to automatically deliver every asset that was edited and included keywords such as travel, leisure and other industry-related terms.
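
A simplified sketch of such a rule might look like the following; the keyword list, asset fields, FTP host and credentials are placeholders.

```python
# Sketch of a metadata-driven syndication rule: edited assets tagged with
# travel-related keywords are delivered to a partner FTP site. Host,
# credentials and field names are placeholders.
from ftplib import FTP

SYNDICATION_KEYWORDS = {"travel", "leisure", "tourism", "vacation"}

def should_syndicate(asset: dict) -> bool:
    keywords = {k.lower() for k in asset.get("keywords", [])}
    return asset.get("status") == "edited" and bool(keywords & SYNDICATION_KEYWORDS)

def deliver(asset: dict, local_path: str) -> None:
    with FTP("ftp.partner.example.com") as ftp:
        ftp.login(user="dam_feed", passwd="********")
        with open(local_path, "rb") as f:
            ftp.storbinary(f"STOR {asset['filename']}", f)

asset = {"id": "IMG-7020", "status": "edited", "filename": "beach.jpg",
         "keywords": ["Travel", "Beach", "Portugal"]}
if should_syndicate(asset):
    deliver(asset, "/exports/beach.jpg")
```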

The DAM search index can be integrated with other outside data sources such as trending data or analytics from your organization’s website. This provides an opportunity to utilize semantic search information to boost or bias search results for your users. This also offers some interesting opportunities to design workflows that tightly integrate with other tools such as a Web CMS. By leveraging the power of the DAM search tool and data from the CMS, a web gallery could be created to automatically publish assets from the DAM.

To take this even further, by integrating customer data and search query trends from the website, a web page could be created that auto-publishes assets based on a specific user’s interests. In this example, a user may be interested in dogs, and you might even know the specific breed, such as a Black Lab. Additionally, you know that the user also purchased Black Lab-related items from your e-commerce site. A workflow could be created with the CMS developers to build a web page for that user that dynamically pulls the latest Black Lab stories, photos and videos.

Archive

A good DAM archive strategy involves much more than providing a safe and secure repository to protect assets from accidental loss and to provide an official record. It must also include governance policies and processes to ensure the highest quality metadata, it must be easily accessible with an intuitive user interface, and it must consider storage strategies to provide good performance for quickly finding and retrieving assets.

Governance

Any archivist will tell you that metadata is the key to quickly finding and retrieving assets from an archive. I cannot overstate the need for strong metadata policies and processes. With that said, the policies must be coupled with good processes that make it simple for users to comply while making their jobs easier.

If your organization establishes a strong metadata process that is too much work for the users, it will NOT be followed. But, if you establish processes that streamline the users’ workflow, while implementing the governance policy and providing the proper training, there will be a higher adoption rate.

Using automation to auto-populate metadata based on the user profile can reduce the amount of data that needs to be entered manually. Captioning templates are another way to auto-populate core metadata and a base caption/description. This approach provides consistency and requires the content creator to add only minimal information.
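
As a simple illustration, the profile fields and caption template below are hypothetical, but they show how a few defaults can be applied automatically before the creator adds anything.

```python
# Sketch of auto-populating core metadata from the logged-in user's profile
# and a captioning template. Profile fields and template are illustrative.
from datetime import date

USER_PROFILE = {
    "byline": "Jane Smith",
    "department": "Photo Desk",
    "default_copyright": "© Example Media",
    "location": "Denver, CO",
}

CAPTION_TEMPLATE = "{location}, {date}: {summary} (Photo: {byline})"

def apply_defaults(asset: dict, summary: str) -> dict:
    asset.setdefault("creator", USER_PROFILE["byline"])
    asset.setdefault("department", USER_PROFILE["department"])
    asset.setdefault("copyright", USER_PROFILE["default_copyright"])
    asset["caption"] = CAPTION_TEMPLATE.format(
        location=USER_PROFILE["location"],
        date=date.today().strftime("%B %d, %Y"),
        summary=summary,
        byline=USER_PROFILE["byline"],
    )
    return asset

apply_defaults({"id": "IMG-5522"}, "Volunteers plant trees along the river")
```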

Pull-down pick lists are another great way to provide consistent data entry. Another strategy to consider is automatically enhancing metadata based on manually entered description information. This is a semantic approach: understanding the meaning of known words and phrases and generating appropriate metadata from a standardized taxonomy.
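
A toy sketch of that enrichment step might look like the following; the taxonomy mapping is invented for the example.

```python
# Sketch of enriching metadata from a free-text description by matching
# phrases against a standardized taxonomy. The taxonomy here is made up.
TAXONOMY = {
    "black lab": ["Dogs", "Labrador Retriever", "Pets"],
    "wildfire": ["Natural Disasters", "Fire"],
    "drought": ["Weather", "Water Conservation"],
}

def suggest_terms(description: str) -> list[str]:
    text = description.lower()
    suggested: list[str] = []
    for phrase, terms in TAXONOMY.items():
        if phrase in text:
            suggested.extend(t for t in terms if t not in suggested)
    return suggested

suggest_terms("A black lab cools off in the river during the summer drought")
# -> ['Dogs', 'Labrador Retriever', 'Pets', 'Weather', 'Water Conservation']
```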

If you make it easier for users to do their jobs and they understand how to do it, they will. It’s a win-win scenario. Users have a faster and easier way to perform their job, and the quality and consistency of the metadata is improved, which makes it easier to find and repurpose in the future.

Usability / User Interface

Users are looking for a simple, easy-to-use and intuitive user interface. It must provide a Google-like experience: a single search box that accepts one or more search terms. Results must be displayed in an organized manner that makes it easy for the user to quickly find and use information and to refine the initial query through faceting or other means of quickly narrowing the search.

If you make your customers work to find what they’re looking for, they will most often look elsewhere for the valuable assets that are stored within the archive. I have seen users go outside their company to purchase assets that could have been repurposed if only the assets could have been quickly found. In some extreme cases, I have even seen users purchasing back their own assets from third party information providers, such as Getty.

The user interface needs to help your customers find what they are searching for. For example, if a user’s search term contains a typo, adding functionality like “Did you mean?” may help them quickly find and retrieve assets. Another example is the use of synonyms. If your customer searches for a “Blue Blouse,” your search engine should also return items that are “Azure, Cobalt, Navy, Sapphire, Cerulean or Indigo” and also “Shirt, Chemise, Top.”

Finally, include features such as cross-linking to show all related assets, even of differing content types, across multiple data repositories or archives, with the ability to include or exclude data sets and to set a default preference for frequently searched repositories.

Storage Strategies

Storage strategies need to examine a number of factors, including cost, security, performance and maintainability.

The one constant is that storage needs will continually increase. Determining your growth rate is a good exercise to predict how much storage you will need in the future, and when. The more interesting questions are what type of storage devices will need to be purchased and what assets and data should be stored on them.
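
A quick back-of-the-envelope projection helps frame that conversation; the figures below are purely illustrative.

```python
# Simple storage projection: current usage grows at a steady monthly rate.
# The starting size and growth rate are illustrative numbers only.
current_tb = 40.0        # storage in use today, in terabytes
monthly_growth = 0.04    # 4% growth per month

for year in (1, 2, 3):
    projected = current_tb * (1 + monthly_growth) ** (12 * year)
    print(f"Year {year}: ~{projected:.0f} TB")
# Year 1: ~64 TB, Year 2: ~103 TB, Year 3: ~164 TB
```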

Not all assets are created equally! Some assets in the archive may never be used again. Other assets will be used repeatedly.

Armed with information about asset usage, you can determine whether a specific asset should be moved to near-line storage, such as a massive array of idle disks (MAID) or an automated tape library, or kept on a high-performance redundant array of independent disks (RAID 10). Items that are frequently accessed need to be located on the best-performing storage; items that are rarely accessed still need to be available, but do not require the high-performance option and may also be candidates for long-term offline storage.
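
A simplified tiering rule, with invented thresholds and tier names, might look like this:

```python
# Sketch of a tiering rule driven by usage metrics: frequently used assets stay
# on fast RAID 10 storage, rarely used assets move to MAID or tape. Thresholds
# are assumptions to illustrate the idea.
from datetime import date, timedelta

def choose_tier(access_count_90d: int, last_accessed: date) -> str:
    age = date.today() - last_accessed
    if access_count_90d >= 20:
        return "raid10"            # hot: high-performance array
    if age <= timedelta(days=365):
        return "maid"              # warm: near-line, spun-down disks
    return "tape"                  # cold: offline / automated tape library

choose_tier(45, date.today())                          # -> 'raid10'
choose_tier(0, date.today() - timedelta(days=3 * 365)) # -> 'tape'
```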

A MAID has far greater storage density than a RAID and is much less expensive. A MAID reduces power consumption since the drives only run when data is requested. MAIDs have lower throughput than conventional disks and can exhibit high latency while the drives spin back up. Overall, MAIDs are very reliable, but slow.

A RAID 10 array provides high performance and improved data protection through redundancy and fault tolerance.

Let’s not forget about search indexes, which should have the fastest input/output (I/O) possible to provide the best performance for the user. Storing the search indexes on solid state drives (SSDs) provides the best possible scenario, but well-tuned RAID 10 arrays work as well. SSDs are also the most expensive option right now, although prices for this type of storage are dropping quickly.

Maintainability of these storage systems varies, but should be considered when factoring in the support costs.

Using metrics about asset usage and building automation to move assets to the appropriate storage location can help reduce costs and improve performance, which in turn improves the DAM’s ROI.

Continual Improvement

Designing efficiency doesn’t have to break the budget. Find ways to continually make incremental improvements. By streamlining workflows and improving the user experience, you will make users happier and more productive while reducing costs and producing better, more consistent metadata.