MM: Let's start off with a brief introduction as far as you two, and the nature of your business.

JT: My name is John Theodore. I'm a senior project manager with SVC, a division of Cognizant, an information technology, consulting and business process outsourcing services provider, and I am based in Los Angeles, California. We provide technology consulting services to the entertainment industry.

I’ve been with SVC for three years. I’ve worked on numerous digital asset management (DAM) projects that have run the gamut from developing forward-looking strategies and putting together solution-selection frameworks to implementing enterprise-wide DAM systems, whether greenfield builds or enhancements of existing systems.

MM: Great.

TD: I’m Tim Day and I work alongside John as a senior project manager at SVC. I’ve been working with the studios, as well as broadcasters, helping with their digital transformation initiatives. My main focus of expertise is on the video side of the house as it pertains to moving from traditional analogue processes to digital nonlinear processes. This includes all the changes in workflow, system implementation and architecture. In addition to this, I have been doing a lot of work facilitating the repurposing of content from archives to support multichannel distribution.

MM: Excellent.

When did SVC start?

JT: It was founded in 1995 by Frank Leal and Dave Copper. They were previously PricewaterhouseCoopers consultants.

The company started experiencing significant growth in 1999 and by 2005, which is when I joined, there were about 30 people. We’ve had another growth spurt over the last two and a half years or so, and we’re now at about 70 people. Of those, around 20 were actively engaged in media asset management projects last year.

SVC was acquired by Cognizant, a leading global services firm, in June 2008. Currently, SVC operates as a “Division of Cognizant” in the information, media and entertainment practice.

MM: It would be fair to characterize you guys as master practitioners in the domain of DAM — video and content.

TD: I would say that is a fair statement.

MM: That leads in perfectly to our first examination or set of questions. Could you summarize some of the key developments — high points — in DAM key trends and developments?

TD: One of the things we’ve found, and I’ll talk specifically about broadcasting and video content, is that companies are now starting to implement media asset management systems for longer-form content. This is because the storage and server capacity required for long-form content has now reached the point where it is cost effective.

A lot of SVC's clients are either doing pilot projects or, alternatively, are pretty deep in the throes of their conversion projects.

It is important to note, however, that there are still a lot of processes in which they revert to analogue due to system integration challenges. Those barriers are, however, being broken down by using open standards and changing the way that systems are implemented.

JT: I’ll address the traditional marketing collateral side — or ad-publishing-type systems.

One key development we’re seeing is enterprises trying to figure out where to put an application. “Do we put it in our DMZ (outside the corporate firewall in a ‘demilitarized zone’)?” I think one thing that was happening earlier on was that people were nervous about putting their assets out on the internet and using just the authentication that comes with a system, for example.

We’re obviously seeing a lot more people now getting much more sophisticated — putting their systems out there and feeling comfortable with the security that they’re able to provide, even though they’re housing some of their most precious assets. The reason they’re doing this is to save money by enabling collaboration in their workflows with external partners.

Another key development we’re seeing is, I’d say, really trying to figure out how to keep the connection between the assets and the metadata as the assets move along the workflow. This is really being driven by the digital downloads and digital distribution opportunities that are out there.

When you start talking about that, you also start talking about transformation engines, and how to deliver. You’ll have one asset in your system but you want to, or have to, be able to deliver that in a multitude of file formats and codecs to all of the different places where you’re distributing that asset. One of the key challenges is trying to keep the metadata connected to the asset as it moves along the workflow.
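
[Illustrative aside: a minimal Python sketch of the point JT is making, with one source asset fanned out into many delivery renditions and the descriptive metadata carried alongside each rendition rather than lost at each transformation step. The class and field names are invented for illustration, not any particular vendor's schema.]

```python
from dataclasses import dataclass

@dataclass
class Asset:
    asset_id: str
    metadata: dict            # descriptive metadata captured upstream

@dataclass
class Rendition:
    asset_id: str
    codec: str
    container: str
    metadata: dict            # the same metadata, carried with every deliverable

def package_for_delivery(asset: Asset, targets: list) -> list:
    """Create one rendition per (codec, container) target, keeping the
    source metadata attached so downstream systems never lose the link."""
    return [Rendition(asset.asset_id, codec, container, dict(asset.metadata))
            for codec, container in targets]

if __name__ == "__main__":
    master = Asset("A123", {"title": "Pilot Episode", "rights": "worldwide", "language": "en"})
    for r in package_for_delivery(master, [("h264", "mp4"), ("prores", "mov")]):
        print(r.asset_id, r.codec, r.container, r.metadata["rights"])
```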

MM: That challenge has largely been a function of the files produced in file-based video workflows. Typically they haven’t carried much metadata in the file itself, unlike photographic assets, which for years have had IPTC headers or, now, XMP metadata in the header.

TD: Yes. I would actually say it's a combination of not only immature digital formats but also the metadata wrappers that go with them. A lot of organizations are set up with a very solid approach to their content production but are not set up to efficiently share content.

In other words, a large studio may have a group that's specifically looking at feature film, a broadcasting group and now a new media group focusing on multichannel distribution. Often, their only interchange is actually a physical asset, at this point.

For example — if a group is putting a full-length show online, they often dub a tape from the master and have that tape sent to the new media group, who will digitize it for their use. Part of the breakdown is that, in addition to having to revert to analogue processes, there is no way — apart from a note in the tape jacket — to send metadata along with the content.

Obviously there are emerging technologies like AAF and MXF that are designed to support metadata wrapper standards, and some of the MPEG standards that are coming out as well, to help facilitate content interchange. In terms of the organizational aspect and the systems that are in place today, however, you are right. More often than not, there is no way they could actually read the data if it were in the file, as their systems and processes are not compatible yet.

MM: That highlights another issue. There are the technologies and the capabilities of the technology. Then you have these organizational issues. Largely, they reflect the stovepipe or siloed business operations that heretofore didn’t really collaborate in creating source material. As you said, they’d meet up at the finished film and then work backwards from that.

Could you speak a little more to some of the organizational changes that you see that DAM enables? Then, more specifically, those that are underway in either the broadcast or the entertainment industry?

JT: My initial reaction is just that there's a long way to go. There's probably a long tail in terms of how much value you can actually get out of a DAM system, versus how much people are actually getting out of the systems they have right now — from what I’ve seen.

I’m still surprised that there are still tons of silos. There are still enterprises that have four, five or six different DAM systems. They’re trying to figure out what they do. “Do we consolidate? Do we build middleware and federate?” A lot of that is what we’re actually seeing right now. “How do we get all of this stuff under one roof?”

TD: Right. Another problem that enterprises face is how to standardize their metadata tagging. As an example, an asset in System A might be a “one-sheet”; in System B it might be a “poster.” So the metadata management associated with the aggregated search is important if relevant content is to be found across silos.
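
[Illustrative aside: a small Python sketch of the vocabulary problem TD describes, where the same asset type is a "one-sheet" in one system and a "poster" in another, so aggregated search has to normalize local terms onto a shared vocabulary. The system names and mappings are hypothetical.]

```python
# Hypothetical crosswalk from each silo's local term to a canonical term.
CANONICAL_TYPES = {
    "system_a": {"one-sheet": "key_art", "trailer": "promo_video"},
    "system_b": {"poster": "key_art", "promo": "promo_video"},
}

def normalize(system: str, local_type: str) -> str:
    """Map a system-specific asset type onto the shared vocabulary,
    falling back to the raw term when no mapping exists."""
    return CANONICAL_TYPES.get(system, {}).get(local_type, local_type)

# "one-sheet" and "poster" now land on the same searchable term.
assert normalize("system_a", "one-sheet") == normalize("system_b", "poster") == "key_art"
```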

MM: Have any kind of insights emerged as far as how to do that?

TD: Yes. We’ve been researching and implementing Master Data Management (MDM) tools for our clients.

MM: These are from SAP? Or from other companies?

TD: From various organizations such as SAP, Oracle, IBM, etc. Basically, MDM solutions are allowing our customers to define the set of master data associated with particular content elements. An example of this would be “This is the master synopsis of a particular title,” or “This is the official title of a movie.” Once identified, it can be leveraged across the enterprise.
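
[Illustrative aside: a minimal Python sketch of the master-data idea TD mentions, with a single registry of "official" values (official title, master synopsis) that every downstream system looks up rather than keeping its own copy. The record structure is an assumption for illustration, not a feature of any particular MDM product.]

```python
# Hypothetical master records keyed by a title ID.
MASTER_TITLES = {
    "T001": {"official_title": "Pilot Episode",
             "master_synopsis": "A new detective arrives in town.",
             "release_year": 2008},
}

def master_field(title_id: str, field: str):
    """Single point of truth: any system asking for the official title or
    the master synopsis gets the same answer."""
    return MASTER_TITLES[title_id][field]

print(master_field("T001", "official_title"))   # Pilot Episode
```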

MM: So this MDM system or repository is more than just a database of facets. But it's also probably tagged as XML so that you can push these metadata sets or schemas into systems that require that metadata?

Is that a fair characterization?

JT: Correct. The systems output XML, but MDM solutions offer more than just XML output.

TD: Right. One example of that is multichannel distribution. In a lot of cases, you have to name the files in a certain way for the vendor that's actually receiving them. You also have to provide a specified metadata standard for them as well. That same content may be repurposed for Video-on-Demand (VOD) and Broadband (e.g. iTunes).

For VOD, they may have to use the CableLabs spec and therefore need a metadata translation system, or approach, that knows exactly how to map metadata schemas, to make sure that when assets are distributed to VOD they can be read appropriately by the downstream system. For Broadband, however, there is no single metadata specification, and as such each broadband system may have a different metadata requirement.
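
[Illustrative aside: a short Python sketch of the metadata translation TD describes, re-keying one internal record per destination. The target field names are placeholders standing in for a destination's schema; they are not the actual CableLabs ADI fields or any broadband partner's real spec.]

```python
INTERNAL = {"title": "Pilot Episode", "synopsis": "A new detective arrives in town.",
            "rating": "TV-14", "runtime_min": 44}

# Per-destination field mappings (invented names, for illustration only).
FIELD_MAPS = {
    "vod_destination":       {"title": "Title", "synopsis": "Summary_Long",
                              "rating": "Rating", "runtime_min": "Run_Time"},
    "broadband_destination": {"title": "name", "synopsis": "description",
                              "rating": "parental_rating"},
}

def translate(record: dict, destination: str) -> dict:
    """Re-key the internal metadata for one downstream system; fields the
    destination does not define are simply dropped."""
    return {dst: record[src] for src, dst in FIELD_MAPS[destination].items() if src in record}

print(translate(INTERNAL, "vod_destination"))
print(translate(INTERNAL, "broadband_destination"))
```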

MM: This gets to a conversation I had with a researcher at IBM Research Labs around treating metadata — and more specifically, metadata schemas — as just another data type within a database.

The idea then is that, rather than managing just data, I’m managing field types — the schema. Then, through smart algorithms, I can reconstitute new schemas almost as a query.

TD: Yes. You have to keep in mind, however, that with all of these legacy systems, a lot of the metadata is not necessarily DAM-centric. You may have a master title management system that part of the metadata is derived from. In addition to DAM, ERP systems would utilize the title management system for key fields. To this extent, financial systems, DAM systems and physical asset management systems all need to leverage core metadata.

There's actually a significant amount of system integration analysis that needs to happen to enable any kind of MDM solution. In addition to that, some kind of enterprise ontology or enterprise taxonomy needs to be thought about so that people can understand exactly how to label content.

MM: For those who are just coming into this conversation — could you define and perhaps compare and contrast an enterprise ontology versus a taxonomy?

TD: Basically, the taxonomy is the standard naming convention or terms that would be utilized across the organization.

MM: But it's a hierarchical set of categories by which to categorize and therefore access various content. Is that a fair characterization?

TD: Yes.

MM: So it's keywords, but they’re in a hierarchy. And they typically are mutually exclusive categories. Right?

TD: Right.

MM: And they represent conventional or common ways of classifying and sorting, and therefore searching for, objects. This usually shows up in a faceted taxonomy, where you have facets that represent distinct ways of classifying the content.

TD: Exactly.

MM: So somebody in purchasing would look at a content repository differently than somebody in Marcom — differently from somebody in sales — differently from somebody in production.

TD: Yes. Absolutely.

MM: So each one of those would be a facet that ultimately would come back to a unified way of looking at content from multiple points of view.

TD: Exactly. The goal is to enable the ability to actually do lookups to your enterprise nomenclature registry and ensure that people are actually leveraging the same metadata across systems without the need to rekey data and therefore introduce possible errors.

MM: Right. You introduced the notion of an enterprise ontology. What is that, and how would you compare and contrast it to this notion of an enterprise taxonomy, which is really a faceted taxonomy?

TD: We think of taxonomies as a way of classifying or describing assets within a particular domain. From a practical perspective, we think of it as an enterprise dictionary that enables lookups: you may know only part of a word or title, and in effect that is a facet of the metadata.

The lookup ensures that if I’m typing in somebody's name, the system suggests the actual canonical name. So everybody across the system can start to create — I don’t want to say “primary keys” — but can start to ensure that their data is actually consistent.

When we start talking about ontologies, this introduces the idea of business rules and event-driven concepts to our enterprise domain and enables machines to make sound decisions and classifications without our intervention.
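
[Illustrative aside: a toy Python sketch of the "business rules" idea TD raises, with declarative rules that let a machine classify incoming assets without human intervention. The rules, thresholds and field names are invented for illustration.]

```python
# Each rule is a predicate plus the label it assigns.
RULES = [
    (lambda a: a["kind"] == "video" and a["duration_sec"] <= 60,  "promo_spot"),
    (lambda a: a["kind"] == "video" and a["duration_sec"] > 1200, "long_form"),
    (lambda a: a["kind"] == "image" and a.get("usage") == "theatrical", "key_art"),
]

def classify(asset: dict) -> str:
    """Return the first matching label, or 'unclassified' if no rule fires."""
    for predicate, label in RULES:
        if predicate(asset):
            return label
    return "unclassified"

print(classify({"kind": "video", "duration_sec": 30}))     # promo_spot
print(classify({"kind": "video", "duration_sec": 2700}))   # long_form
```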

MM: Another way — correct me if I’m wrong… When you go to large websites, you’ll see the little breadcrumbs across. Like you’re in the “product” section and you’re in this product group or this product type and this brand. Right?

TD: Yes.

MM: That little breadcrumb that goes across the top of a webpage really constitutes an ontology for the site. Is that a fair characterization?

JT: Yes, that's fair and I would add that the breadcrumb represents the classification system or taxonomy that is a part of the overall ontology.

MM: It describes not only a state, but a particular location in an assembly of a bunch of content specific to that destination.

JT: Yes.

MM: In the context that you’re using an ontology here, if I do a sort on “Jay Leno” or a sort on “The Tonight Show,” it then brings together an inferred logical set of things that someone sorting on “Jay Leno” might want to see.

TD: Absolutely.

MM: So it's an educated guess in terms of what is inferred in a search.

TD: That's one of the things that we’ve actually been doing quite a bit of research into. Enterprise search, in its most basic form, links together an aggregated search across multiple systems and then facets the results, to ensure that you can give a navigational approach so that people don’t end up with dead links. They also get a visual cue on how many assets are related to their search.
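
[Illustrative aside: a minimal Python sketch of aggregated, faceted search as TD describes it, merging hits from several systems and counting facet values so the interface can offer navigation with counts rather than dead links. The systems and records are invented.]

```python
from collections import Counter

# Imagined result sets from two silos; in practice these come from each
# system's own search API.
RESULTS = {
    "dam_marketing": [{"title": "Pilot key art", "type": "key_art", "year": 2008}],
    "vault":         [{"title": "Pilot master", "type": "long_form", "year": 2007},
                      {"title": "Pilot promo",  "type": "promo_spot", "year": 2008}],
}

def aggregate(query_hits: dict):
    """Merge hits from all systems and count facet values for navigation."""
    merged = [dict(hit, source=system)
              for system, hits in query_hits.items() for hit in hits]
    facets = {name: Counter(h[name] for h in merged) for name in ("type", "year", "source")}
    return merged, facets

hits, facets = aggregate(RESULTS)
print(len(hits), dict(facets["type"]))   # 3 {'key_art': 1, 'long_form': 1, 'promo_spot': 1}
```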

MM: Hence, the ontology.

TD: Yes. Having the ability to map the data into an ontology is valuable.

MM: Underlying that capability, then, is both metadata in terms of taxonomy and a certain kind of many-to-many relationship that would form these sets. Is that correct?

TD: Yes. One of the other things we’ve been working on, associated with this, that gets trickier is “How do you ensure that the enterprise and system security models support that kind of search and navigation?” This is especially important when you’re going across systems.

For example — if I’m doing a search, should I be able to see the faceted results coming from the physical asset management system or the vault? Or should I only be able to see the content that's in the marketing DAM system, as opposed to the actual deep archive?

MM: If we go back to the root word, “ontology,” which — from my exposure in philosophy was the study of the nature of being — which computer science — cybernetics — kind of co-opted into the “state of a computer operation.” Not that computers have any “being” in them at all.

Nonetheless, it represents a state. Now you’re saying that these ontologies — which have logical and inferred collections of stuff — now have certain policies associated with them. Those relate back to your security or governance model.

Tim introduced the notion that these ontologies — these particular retrieved collections of “stuff” — have to go through the filter of a certain governance or security model. John, could you speak more to the mechanics of how that filter works, by introducing the notion of policies to manage the presentation of a state?

JT: I can only speak to this based on my exposure via a proof-of-concept system where we threw a faceted navigation system on top of a traditional DAM system.

The biggest stumbling block that we were coming across was exactly what Tim is talking about in terms of exposing the security policies coming from multiple systems. Okay — “We’ve got multiple systems and we have unique security policies and permissions within each one of those systems. How do we expose that through to this layer that's on top — that's assisting with the faceted navigation?” It gets complicated when you start drilling down to asset-level security across multiple systems.

TD: Well, we have started to see two approaches emerge. Either you ensure that every system that's integrated with this architecture utilizes a single sign-on approach or you use federated identity management across the systems. The former approach is to take the security policy that you would apply and match it to the search results and allow that to filter the result set.

JT: Then you expose that. Then you’re just leveraging the application that's underneath. You’re leveraging the permissions system that's within there.

TD: Correct.

The other approach, if you don’t want to enable single sign-on for your application or unify your security model, is that there are tools out there right now doing federated identity management. They allow you to know — similar to MDM — that John Doe is actually “John Doe” in System 1 and “Doe John” in System 2. Thus, you can actually create that security relationship and use it as part of the filtering process as well.
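
[Illustrative aside: a small Python sketch of the federated-identity idea TD describes, mapping one enterprise user to their local identity in each system and then using each system's own permissions to filter the aggregated result set. Identities, permissions and result records are all invented.]

```python
# The same person is known by different IDs in each system.
IDENTITY_MAP = {"jdoe": {"dam_marketing": "John Doe", "vault": "Doe, John"}}

# Per-system permissions: which asset types each local identity may see.
PERMISSIONS = {"dam_marketing": {"John Doe": {"key_art", "promo_spot"}},
               "vault":         {"Doe, John": set()}}      # no vault access

def visible(results, enterprise_user):
    """Keep only hits the user may see in the system each hit came from."""
    kept = []
    for hit in results:
        local_id = IDENTITY_MAP[enterprise_user].get(hit["source"])
        allowed = PERMISSIONS.get(hit["source"], {}).get(local_id, set())
        if hit["type"] in allowed:
            kept.append(hit)
    return kept

sample = [{"source": "dam_marketing", "type": "key_art",   "title": "Pilot key art"},
          {"source": "vault",         "type": "long_form", "title": "Pilot master"}]
print([h["title"] for h in visible(sample, "jdoe")])   # the vault hit is filtered out
```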

MM: Who are some of the companies that are providing that federated identity management?

TD: There are a few of them out there but IBM is an example that springs to mind as part of their SOA product suite. There are a few others out there as well.

MM: Excellent.

TD: That's actually a pretty good segue into some of the real meat and potatoes of where we see DAM going. Within Strategic Vision Consulting, we have what we call our Media Asset Management (MAM) practice. To that extent, we don’t see managing content as only DAM. We see it as the collection of both physical and digital assets.

So, associated with this practice is what we’ve been talking about today: having the ability to see and search across multiple systems and ensure that security is enforced across them. Then, enable specific services to the enterprise, so that those systems can harness them — whether on the digital side, for example encryption, watermarking or DRM, or on the physical side, for example work order management, vaulting, etc.

What we’ve actually started to see now is that in real enterprise media asset management — especially within the media and entertainment sector — the ability to manage physical and digital is of paramount importance. By having a service-oriented architecture to support the integration of the various services — whether it be MDM, security, watermarking, data mining or other services — the approach is starting to take root.

If you look at the logical lifecycle that our customers have been going through, they are definitely still at the infancy stage of implementing these MAM systems. They’ve had physical asset management for a long time. They now have the DAM systems that they’re implementing. They’re starting to find that they need a lot more integration with services — and obviously, a holistic view of what an asset looks like, and supporting its workflow and lifecycle, is key.

MM: That really then puts a much greater emphasis on this whole integration layer of a services architecture. What have been some key developments in terms of that content-services bus, for lack of a better term?

TD: The implementation of Service-Oriented Architecture (SOA) is something that the M&E sector is looking at quite closely. Some groups are further along than others.

MM: Would it be fair to characterize the M&E sector as an average or late adopter of SOA?

TD: I would agree with that statement. I think part of the reason for that is that if you look at it strictly from the management perspective, DAM systems are only starting to be implemented now. As such, the challenges associated with digital workflows are now starting to be felt and addressed.

MM: And the business model isn’t really about provisioning services to the web. It's about putting stuff on movie or TV screens. Right?

TD: Exactly. And to your point, the speed at which you can do that, and the speed at which studios can start a new service, whether it be something that they own, a third-party broadband service or a VOD service, is important. The speed at which they can respond to that business requirement, and the ability of the business to be agile, is becoming very important. So that's why the capability to manage media assets, both digital and physical, is really important. In effect, revenue or business opportunities may be lost by not being able to service those requests quickly enough to meet changing demand.

MM: This gets to an underlying business strategy and a supply-chain strategy for content or entertainment assets. The notion that I like to introduce or frame supply chains by really breaks down to three ideas. One is the theory of constraints — meaning, identify the weakest links and eliminate them. Transaction costs, which are the hidden hand-offs and the delays that some hand-offs entail. Then finally, how we bring strategic sourcing — competitive bidding — into how we source or buy various aspects of the content supply chain.

With that as a frame, if only stipulated, could you speak to the emergence of the digital supply chains for the entertainment or publishing industries?

JT: Yes, a couple of points on that. Just to start, obviously, XML is a key technology. When you talk supply chain, it's inextricably linked with XML. Open data transfer standards have just exploded. All of these services now — any time there's a pass-off — are typically sending the data as XML.

Having services that can output XML, bring in XML and make something out of it is crucial, and that is what people are driving toward. Those are key components of the web services that we’ve been talking about.
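
[Illustrative aside: a minimal Python sketch of the XML hand-off JT describes, with one service serializing the pass-off message and the receiving service parsing it back. The element names are invented, not a published interchange schema.]

```python
import xml.etree.ElementTree as ET

def to_xml(asset: dict) -> str:
    """Serialize a hand-off message for the receiving service."""
    root = ET.Element("asset", id=asset["id"])
    for key in ("title", "format", "destination"):
        ET.SubElement(root, key).text = str(asset[key])
    return ET.tostring(root, encoding="unicode")

def from_xml(payload: str) -> dict:
    """Parse the same message on the receiving side."""
    root = ET.fromstring(payload)
    return {"id": root.get("id"), **{child.tag: child.text for child in root}}

message = to_xml({"id": "A123", "title": "Pilot Episode", "format": "mxf", "destination": "vod"})
print(from_xml(message))
```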

I think one of the bigger things I’m looking at is that many of these are small, tightly focused workflows that are not linked together yet. We are seeing folks really trying to figure out an open way to do workflows, so that they’re seamless across enterprises. That will, hopefully, reduce your transaction costs.

The other thing there that we’ve really got to figure out is flexibility. How flexible can these things be? There's always a point of diminishing returns. At some point, you get too flexible and it's no longer valuable. You can introduce so much flexibility that people go nuts and start customizing to their hearts’ content, and then can’t work with anybody else.

There's a very fine balance that partners have to ride and collaborate along in order to get to the place where the supply chain is smooth and there are no weak links and everybody's at the same level.

TD: Looking at the entire supply chain from the video angle, I believe one of the biggest challenges in the industry in setting up supply chains is that there is no digital video standard supported across the board, whether for distribution, editing, archive mastering, mezzanine mastering or whatever it may be.

Part of the challenge is that a lot of the M&E sector utilizes vendors across the board. That may be vendors for color correction, editing, graphics, et cetera. Part of the challenge they have when setting up end-to-end digital workflows is that there are few common standards for digital file formats. There is nothing in digital files that compares, as a standard, to physical formats such as DigiBeta tape or 35 mm film. So there can be a hidden transaction cost associated with file movement and transformation as you start to look at these end-to-end workflows.

The other thing that we also see is that there is no standard message that goes along with a piece of content when it is shipped digitally. By that I mean there's no common work order process.

As an example, if I were to send a vendor a specific job, maybe it would be to make 20 dupes or 20 different versions of a file. Today there is no way I could digitally send those instructions in a standardized way; there are no vendor work-order messaging standards, and standards are really the key point when trying to set up an end-to-end pipeline. Because of this lack of standards, integration costs can rise, as each vendor's format needs to be known and supported.
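
[Illustrative aside: a short Python sketch of the missing work-order standard TD describes, where one internal job has to be re-expressed in each vendor's own format — exactly the per-vendor integration cost a messaging standard would remove. Both vendor formats are invented.]

```python
# One internal work order...
ORDER = {"asset_id": "A123", "operation": "transcode",
         "outputs": ["h264_mp4", "prores_mov"], "due": "2009-06-01"}

# ...rendered into two invented vendor formats.
def vendor_a_format(order: dict) -> dict:
    return {"JobRef": order["asset_id"], "Task": order["operation"].upper(),
            "Deliverables": ";".join(order["outputs"]), "DueDate": order["due"]}

def vendor_b_format(order: dict) -> dict:
    return {"source": order["asset_id"], "action": order["operation"],
            "profiles": order["outputs"], "deadline": order["due"]}

print(vendor_a_format(ORDER))
print(vendor_b_format(ORDER))
```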

Another challenge I’ve seen is that many organizations are relatively segmented. A lot of times, if they’re not careful, they end up buying similar products, the same products or competing products across the board when trying to set up these supply chains.

For example, group one may require a particular piece of software for file delivery, and group two, having the same requirement, will go and either buy from a different vendor or, alternatively, go to the same vendor and duplicate the licenses and infrastructure costs. That's definitely an organizational issue, but it's certainly something that occurs across the board.

MM: An issue that you brought up — John, I think — was the notion of flexibility versus openness. It seems to me that in my conversations with other folks like yourselves the workflow issue really breaks down into three interrelated areas. Each requires a unique set of functions or services.

The first area is what I’ll call collaboration. In a collaborative workspace, basically the work is ideation and discovery. Sort of iterative communication and interaction and collaboration with peers. It tends to be non-linear, circular and not necessarily predictable except that it happens.

Usually, the issue in collaboration spaces is how we manage asynchronous and synchronous communication — that is, real-time, ear-to-ear, screen-to-screen, face-to-face collaboration — as opposed to the voicemail or video voicemail equivalent.

The second area of the workflow tends to be more structured, procedural flows with decision points. Pass/no pass, rework/throw away. Those tend to be a function of routing, forms and policies or business logic, in terms of who has to see what — what the options are. That seems to be a pretty structured and well-defined data model and a set of policies.

The third kind of links these two. That's the scheduling module. Scheduling is what precedes project management. Project management tends to be a version of that workflow. Right?

Scheduling is more like, “Well, we have this many people to do ‘Y’ things, and we constantly have to reprioritize tasks and activities and workflows based on new contingencies and new things. We kind of need an air traffic control system by which to sync up planes in the air with available landing strips and debarkation gates.” Is that a fair characterization of the factory of an entertainment or broadcast firm?

JT: Yes. I think that definitely makes sense to me. There are a couple of points I’d like to make on some of those.

On the first point of collaboration…when looking at maturity, at least through my experience, the collaboration piece is the most immature. I have actually been seeing this first hand in the video game industry where distributed game development models are starting to take shape. There's a tremendous amount of artwork outsourcing that's happening right now in the video game industry that is forcing the publishers and developers to build solutions for collaboration.

MM: Not just artwork, but underlying functions of the games.

JT: Yes. But really, the predominant piece of work that's being outsourced is the creation of artwork. Obviously, there's a tremendous amount of collaboration, ideally, that's going on. Your art director is passing off the specs for the piece of art that they need, and it's being done wherever in the world, and coming back. The art director needs to look at it and sketch up on it and make comments and stuff like that. That is the meat of the collaboration.

I just haven’t seen any tools that really excel at that yet. Really, what I’m seeing in the video game industry are mostly homegrown tools at this point.

The collaboration piece, I think, is very interesting to look at, especially when you frame it up with O’Reilly's Web 2.0 framework. You’re seeing Google and a lot of the leaders in that space developing immense collaboration-type systems with truly rich interfaces and an architecture of participation. When you look at some of the DAM vendors, they’re just not there.

It's very interesting to me to see where the internet is going. The artists are on the web every day and they’re seeing this stuff, but it doesn’t come into the enterprise. It's just not there yet.

You have this conundrum. They’re looking at this thing and saying, “Well, why can’t we have that? There it is. It's being done. But I can’t do that. How come?” To me, the collaboration piece is the most interesting.

I think the structural and procedural stuff is nailed down. It's like you were saying, it's business rules. The main thing there, obviously, is just analysis. A lot of times you’ll see organizations that just underestimate the scope or the level of effort that really needs to go into the analysis in determining what those business rules are and what the decision points should be.

MM: Generally, the business end of that workflow piece is usually some sort of federated work management dashboard. That basically says, “This is me. Here are my projects. These are my due dates. I’m ahead on these, I’m behind on these.” Basically, just a logical assembly of my commitments to others to deliver various items.

TD: That's one of the things that we are starting to see on the enterprise side. As organizations start adopting SOA and integrating it with portals and dashboards, to John's point (and these are the more advanced enterprises), they have the ability to actually start to interact with the workflows, and to start to reroute the content if Key Performance Indicators (KPIs) are not met. At present, however, this type of thinking is in its infancy and as such it is a work in progress.

MM: That's a completely different can of worms. That was the dashboard. Then there's the actual work that I am doing. That's usually dealing with really large files and multiple versions at multiple states of completion, in terms of work in process. The real work is review, approval, color-matching, proofing and that stuff that is really — from an IT perspective — pretty heavy lifting. Would you agree?

TD: I would agree.

I’d say that people are definitely addressing, from a systematic basis, the archive of the finished product. It is so much easier to manage that than the work in progress. The question I get a lot is “Should I save every version of every asset ever created, just in case I might need it in the future?” There are a lot more tools out there that support the end archives, as opposed to the true work-in-progress assets and workflows.

JT: Yes. Just to add one other point, I think one thing we’re going to start seeing more and more of is making use of the data that's being collected within these DAM systems.

Yes, you’ve got your workflows and stuff like that. But there are also other events you can capture and note in the database. The vendors built a DAM system so you could manage inventories; they didn’t really start thinking about 10 years down the line. If we were collecting all of this event-driven information, we could actually start making some pretty good inferences as to how people are using this system and how they could gain more from this system.

MM: I’ve seen a couple. I can count on one hand the number of companies that have put together ROI dashboards for their asset repository.

The dashboard basically aggregates activity and user data. It's just converting user and asset activity into economic results. But it's all a more or less real-time dashboard. So anytime the person in charge of the DAM gets called into a finance meeting, he or she has data that says, “Well, this is what I brought to the corporation this year. What did you bring?”
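
[Illustrative aside: a tiny Python sketch of the ROI dashboard idea, converting counted user and asset activity into a dollar figure using an agreed per-event saving. The activity counts and unit savings are invented; the point is the shape of the calculation.]

```python
# Invented monthly activity and the agreed saving per event.
MONTHLY_ACTIVITY = {"downloads_reusing_existing_assets": 1200,
                    "physical_dupes_avoided": 300}
UNIT_SAVINGS = {"downloads_reusing_existing_assets": 250,   # e.g. a photo shoot avoided
                "physical_dupes_avoided": 50}               # e.g. a dub plus shipping

def monthly_roi(activity: dict, savings: dict) -> int:
    return sum(count * savings[metric] for metric, count in activity.items())

print(f"${monthly_roi(MONTHLY_ACTIVITY, UNIT_SAVINGS):,} saved this month")   # $315,000
```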

TD: Right. That's what we’ve started to do with some of our customers. We’ve started to show, as you move from the physical workflows to digital workflows, how the ROI can help them and how they can get their projects funded, to ensure that they could grow in the future.

MM: I’ve also found that to be essential and pivotal for consulting practices, or whoever is supporting that DAM administrator. To actually have an analytics session once a month with your customer.

We’ve found that most DAM administrators tend to either be super-librarians or super-users that came out of the production environment and are trying to bring peace back into the production environment. They’re not really statistical wonks. They’re not quantitative wonks. So they don’t really get into the fine-grained data analytics — then abstracting it up into, “What does this mean to the business?”

It's almost like there has to be an academy. An ongoing, peel-the-onion, one-analytic-concept-at-a-time process.

Would you agree with that? If so or if not, why or why not?

JT: I would definitely agree with that. When I think about the analytic stuff, my heart is always with the end user. I just feel pain for end users for some reason. I always want to make their lives better.

To me, I’m always trying to think, “Why aren’t we harnessing all the data that could be collected from all the people that are using this, and make the app better, and make it more usable?” I am always looking for tweaks or ways of leveraging as much of this information that could potentially be collected, to make this a better application.

Even just very simple things like, “Somebody that searched for this also searched for that,” or “The last five downloads were these,” where you’d click through and see this type of information being updated dynamically.
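
[Illustrative aside: a small Python sketch of the "people who searched for this also searched for that" feature JT mentions, built from nothing more than the usage events the system already collects. The session data is invented.]

```python
from collections import Counter

# Invented session logs: each list holds the terms one user searched together.
SESSIONS = [["jay leno", "tonight show"], ["jay leno", "late night"],
            ["tonight show", "jay leno"], ["late night", "conan"]]

def also_searched(term: str, sessions, top: int = 3):
    """Count which terms co-occur with `term` in the same session."""
    co = Counter()
    for session in sessions:
        if term in session:
            co.update(t for t in session if t != term)
    return [t for t, _ in co.most_common(top)]

print(also_searched("jay leno", SESSIONS))   # ['tonight show', 'late night']
```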

That's one side of it, but then there's obviously the side you were talking to, which was the economics. I think the struggle is always in coming up with what the dollar value is. “What is this asset worth?”

MM: I think it starts with the fact that most DAM administrators really don’t have a solid enough grounding in quantitative methods to make the leap. Someone — and I’m pointing to you guys — someone has to lead them along in the process.

TD: If you look at some of the projects that I’ve been on recently, that's what we’ve actually done. We have defined and tracked the metrics. Once a month, to your point, we actually have a session where we go through all of the metrics of the application and report on KPIs.

MM: We call that IT service management. Or, IT services. Right?

TD: Correct. So not only do we do IT service management, as you call it, we also do asset tracking and reporting. As an example, exports and imports by specific users and regions/customers may be key metrics to track. In addition to this, we also compare these stats with the support side, because SVC is also supporting some of the applications for our customers. We give the business sponsors metrics on how many users have been trained versus created, et cetera.

From that perspective, what they can then start to track is a real ROI, associated with not only data shipped but also user interaction and competence level.

An example would be one system where we’ve used a “CD” as the standard unit tracked for content delivery, as this was the process used before digital distribution. This helps the executives understand the translation between data shipped digitally versus physically, and therefore helps to determine the cost and potential savings of moving to a digital workflow. For instance, right now we’re shipping approximately 5,000 CDs’ worth of content a month that would traditionally have been shipped on CD. We tally that up and give them metrics on a month-by-month basis, to make sure that they understand the impact of the application on the enterprise. If you take those 5,000 CDs × $50 per CD for creation, shipping and handling, that translates to approximately $250,000 per month. This does not take into account the savings in time for international distribution.
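
[Illustrative aside: the arithmetic behind the CD-equivalent metric TD quotes, written out as a few lines of Python. The figures are the ones given in the conversation.]

```python
cd_equivalents_per_month = 5_000    # content shipped digitally, expressed as CD equivalents
cost_per_cd = 50                    # dollars: creation + shipping + handling

monthly_avoided_cost = cd_equivalents_per_month * cost_per_cd
print(f"~${monthly_avoided_cost:,} per month avoided")   # ~$250,000 per month
```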

The same approach is being used now on the video side to replace DigiBeta delivery. We tally up what it would cost to create a DigiBeta or a DVD, or to deliver by satellite. When creating digital workflows it is key to track your metrics against the legacy process to ensure an ROI can be calculated and optimized.

MM: Have you had client requirements or engagements or conversations around calculating or computing the asset value of a discrete digital asset?

TD: Not really. I’d say we’ve had projects, to the extent of what content is being shipped, and some of the impacts to it, but not necessarily how much does the content or a piece of the content cost.

The reason, at least on the project side, is that the content types are varied. For example, in broadcast, you might have a TV spot or a finished show. It is difficult to value: say you have some footage from a specific event that cost you “X,” versus content where you’ve actually paid an actor, which is going to cost you “Y” and would be considerably more. I feel it's pretty difficult to quantify the actual value of the content, but relatively easy to track it once you have a MAM workflow in place. Is that what you’re referring to?

MM: Yes. Specifically, we’ve run across a handful of firms that have capitalized the development of an asset. They’ve put that capitalization cost on the balance sheet as a new asset class.

TD: Okay.

MM: The upside of that is, it's a way to increase the book value of the firm.

Because you’ve got this heretofore unrecognized asset class. Secondly, it puts an economic framework around why we’re managing all that stuff. So it kind of puts in place, from the CFO down, a certain generally accepted accounting principle or practice for how we create and track and manage these digital things.

TD: Yes.

MM: Ultimately, that comes back to metadata and business rules. Then finally, the downside of that, of course. How do you reflect material changes in the value of this asset?

The simple way is just to simply have an amortization that says after five years it has no value. But that's the scary point for most CFOs. “Why am I putting something onto my balance sheet for which I don’t have a clear-cut way of measuring changes?” Rather than a straight-line depreciation.

TD: I don’t know about you, John, but I haven’t found anybody looking at that. It's an interesting topic because if you look at it from an M&E perspective, if a bank manages money, an M&E company manages media — that is their currency.

But you’re right in the fact that not a lot of work has been done in that area to date, to quantify the value of an asset. They all assume that it's important. As we interview various people, we’ll find some people who say, “An asset's always valuable. Never delete it.” On the other hand some groups always say, “If it's not used in five years, purge it.” So there's definitely a different perception of not only the economic value of an asset, but even the reuse of the asset.

MM: The utility value.

TD: Having said that, I think that as these systems start to be put into the enterprise, and now it's no longer just a physical approach, we can actually start to get real metrics back on how many times an asset was used and when it was last used, how many times it's been aired, all that kind of as-run information, et cetera.

That actually can start to create the case. But until such time as they get their assets and MAM systems in a way that can facilitate that, they’ve got no idea.

MM: Absolutely.

JT: I had a conversation with a DAM manager from an office products company about the way they measured the value of their system. They did a monthly dashboard with dollar signs on it.

They calculated the value of a download. Every download was worth “X.” The way they figured that was, you didn’t have to set up a photo shoot, because these were physical goods that basically needed to go into a piece of marketing collateral.

We do this, we upload this asset and our European counterpart doesn’t have to set up a photo shoot because they just download the asset. That's $250 saved — or whatever it was.

MM: Yes. I know of a catalogue firm that capitalized all of the expense associated with scanning a library of 30,000 product photos. They just did it at a par value or nominal value of $35 a scan.

GAAP says that if you can show reuse over a period greater than 18 months — that is, a year and a half plus a day — and you have captured the cost, and you’ve linked reuse to some tangible economic event… the best of which, of course, is a sale, though it could be a download… then you’re within GAAP to capitalize the development cost.
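
[Illustrative aside: the capitalization arithmetic from the catalogue example MM describes — 30,000 scans at a nominal $35 each — with a straight-line write-down over an assumed five years (the simple amortization case mentioned earlier, used here only for illustration).]

```python
scans = 30_000
nominal_value_per_scan = 35     # dollars
amortization_years = 5          # assumption for illustration

capitalized_value = scans * nominal_value_per_scan
annual_amortization = capitalized_value / amortization_years
print(f"capitalized: ${capitalized_value:,}; straight-line amortization: ${annual_amortization:,.0f}/year")
```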

I find it interesting, curious, maddening, stupefying that more companies have not yet really gotten on with capitalizing the development of this class of assets.

JT: It's interesting. One thing, when I first started getting into this, I kept on reading stories about determining the ROI. You had to prove ROI before you’d invest in one of these systems.

MM: I plead guilty.

JT: One of the last conversations I had about ROI was with somebody at a fairly big company that had recently implemented its first DAM system. I said, “Did you do any ROI analysis?” They said, “John, no. We just knew we had to have it. Our CFO didn’t even care.” Their CFO had just said, “Look, you guys, it's a utility. You have to have it. You have to turn the lights on. You have to have a DAM.”

MM: We’ve found for those faith-based business programs — not to offend anyone here… But we’ve found that for those that basically build it because it just makes intuitive good sense… inevitably someone comes back and says, “Okay. Here's my 3-year operational business plan. Here are the 5 people in the headcount that I need to onboard this year. Here's the set of workflow automation add-ons that I need to bring on.”

By the time you’ve rolled up a 3-year operational business plan for the fully functioning enterprise DAM system, you can easily… You will absolutely be $4m into it, and could easily be $8m or $12m into it.

That's the problem that companies typically run into. They’ll go buy a DAM and then they’ll find out that they didn’t have any hay to feed the cow. They didn’t really think through the content operations.

Could you speak to that?

JT: The other thing I’d add to it is that a lot of decisions are made without really looking in the long term, and realizing that it is a living system. I would also relate this same concept to massively multiplayer online games. When you launch that, it becomes a living thing. It needs care and feeding.

You just notice that some organizations don’t understand that and clearly don’t budget for it.

MM: Right.

TD: It becomes the chicken-or-the-egg approach. As an example, a customer may say, “We put this system in. We’re not getting the value out of it. Why not?”

Sometimes we find that not enough resources have been invested in it yet to realize the true value, or the change management (human process change) associated with the system has not been catered for.

MM: This falls into what I call the 90/90 rule. After you do 90 per cent of the work with doing your requirements, selecting the vendor and provisioning the system, you discover that you’ve got a second 90 per cent left to do.

JT: Exactly.

That second 90 per cent almost always is the socialization piece. The organizational change management piece and the ongoing care and feeding of the operation.

TD: Yes. There are two projects that I’m working on right now that immediately spring to mind. One of them is a system that's been around for quite a number of years. It was never really adopted at the enterprise level because of usability issues.

What ended up happening was that the group could only get lights-on funding for the system to keep it going. But to John's point, it was a living and breathing system that had content and end users who relied on it.

MM: It was their lifeline.

TD: Yes, it was in fact their content pipeline. Unfortunately, their content pipeline had a lot of leaks in it.

To that extent, one of the things that we ended up doing there was quite interesting. We did what we called a “Quick Wins” approach. That's basically to plug as many of those holes as we could and in addition to that, to train as many users as we could, so that we could turn around the perception of the application in a short time-frame. This then enabled the group to reevaluate the importance of the application and get further funding for its expansion.

MM: What would be a couple of examples of “quick wins?”

TD: Some examples were usability enhancements, making it easier for users to actually find content, and adjusting the metadata schema to ensure the accuracy of the search results. In addition to this, we also did advanced training for end users to ensure that they were educated on the options they had available to them for finding their content.

JT: Simple configuration tweaks too.

TD: The process took only 2 months’ worth of work to change a system that had been around for many years.

Out of that approach developed a very positive case study for the group. We have now grown that particular system to over 2,500 end users accessing it globally. With a little bit of concerted effort and listening to end users, you can make a lot happen.

In addition to this, there is another project I am on that's actually doing a pilot of a DAM system. The reason they’ve taken the pilot approach is, to your point earlier, that they don’t know how much it's going to cost to build it out across the enterprise or what value it will provide to the enterprise.

When they look at the thousands of hours of content that will need to be managed in the system, the pilot approach definitely works for them, because it limits the associated risk.

That's a good approach, because it enables the group to start small and not have to go through a Band-Aid process at the end, to change perception.

There are definitely different approaches to rolling out DAM and MAM systems.

MM: Doesn’t that require then that you have a technology roadmap or a services roadmap that breaks a project down into phases?

TD: Not only that, but an organizational roadmap as well. I think the organizational piece is as important, if not more important, than the actual technology roadmap.

MM: How do you characterize an organizational roadmap, in that sense?

TD: For example, what are the training plans? How do you intend to target specific business units? I’m talking about an enterprise-wide system, not necessarily talking about a small organization. How do you go about supporting that application's growth?

To my point about user perception earlier, I believe that is the most critical aspect within an asset management system. Making sure that user perception is positive is key, as it is what helps get funding in the future to fix any holes that may open up in the pipeline.

Despite what a vendor may tell you, we all know there's no one DAM product out there that does everything. Typically, you have to buy the one that most closely matches your requirements and then enhance it from there. If you never get your users’ perceptions as being positive and supporting you in that process, you’ve got a very long uphill battle ahead of you.

MM: This gets to another point that we’ve seen in a handful of companies. They launch a DAM portal or a DAM services portal or a content portal. But they launch the DAM service as a corporate brand of that company. They invoke all of the cultural norms of launching a product — whether it's Ubisoft, which makes games, or Nissan, which makes cars. They come up with a brand identity and a marketing plan and a launch process that mirrors exactly what they did in terms of their core business.

This not only paved the way for user consideration and adoption, but it made sure that the square pegs went into the square holes. Expectations were built pre-launch, and instead of thinking of it as just a service that you throw over the wall, there was the recognition that there are real customers to onboard and real satisfaction to attain.

Does that make sense?

JT: Absolutely. You’re talking about two of the best companies that know how to meet their customers’ desires, wants and demands.

MM: Yes. This also gets back to a deeper issue, and this might be a topic for another day — which I’d love to have. It's the notion that when we start looking at marketing as an operation, for the most part, marketing puts lipstick on pigs. Seth Godin has his book out there called “Meatball Sundae.”

It's the same idea — marketing is usually handed a product or a service to launch and go sell. Oftentimes, because of the rate of innovation and stuff, what they’re handed is not necessarily what the market wants to buy. Marketing is always then shoehorning — persuading — manipulating — coercing. Putting lipstick on pigs to get customers to buy something that they didn’t really want. That's a failure of the innovation process. They didn’t really complete their product.

Now what's interesting is that as marketing moves more and more into what I’ll call “brand interactions” and “customer engagement,” it's no longer just in the lipstick business. It's no longer just in the business of telling a good story, argued or told with passion. Right? But marketing now has the job of provisioning online interactive self-services to consumers that engage and ultimately want to do something.

Could you speak to this seismic shift in terms of operational requirements from just being concerned about content? Whether it's publishing that's producing a book or an entertainment studio that's producing a movie — to actually now provisioning online, interactive services for the self-directed consumer. Ultimately reinforcing the brand through a series of engagements.

If you go to Netflix, for example, you’re set up with a subscription. Now I want to have a mobile SMS reminding me that the movie I said I wanted is now available — would I like it… through a text message.

JT: Yes.

One quick point for me that I think I’m noticing more and more. Obviously when you work with DAM, a lot of times on the marketing side you’re working with creative services groups. The creative services groups traditionally have been artists. A lot of them are obviously Mac people now. Back in the day, they weren’t using computers.

MM: They’re creative.

JT: Yes. Well, not just that, they just didn’t even care for computers. Right? They were designing stuff.

MM: You can make the same case for CMOs.

JT: Yes. Now they have to become geeks. Right?

MM: Yes. Exactly.

JT: Yes. I think you’re seeing a lot of the geekier people going to agencies. You’re seeing design agencies starting to become very technical.

MM: Yes.

JT: Tim, you may be able to speak to your past experience here, from when you were designing and implementing all those websites. I think that's related to this idea. They were traditionally ad agency firms, and now all of a sudden they’re designing websites and sending off SMS messages and having to figure all that stuff out.

MM: In fact, they shifted from being a service organization to being a digital service platform with creative capabilities.

JT: Yes. It's very interesting to me. Definitely I’m seeing that happen. More often than not it seems it's happening at an agency rather than keeping that within the enterprise; it's just kind of moving out toward the partners. That's why it's related to our earlier workflow conversation, and trying to figure out how we connect these.

MM: Right.

TD: I don’t work too much on the consumer side of the business. I’m focusing a lot more on the B2B aspect. John, with the gaming side, is obviously working a lot more there.

When I look back to my previous tenure before SVC, working at a very large sports management organization, we did produce a lot of consumer websites and broadband systems. We also produced things like SMS and MMS messaging for sports and things like that.

The ability to reach out to consumers in any channel is really important. To your point, the marketing message and the marketing approach need to encompass many different channels, and facilitating that as an ad agency is definitely a challenge, because there are so many different technologies to deal with now. There are many different platforms that can present content to the consumer.

Traditional M&E companies today, if you look at broadcasters for example, really understand broadcasting. But do they understand how to get SMS or MMS messages out to consumers? Probably not. So they’re usually going to ask for help until they hit a critical volume or an ROI whereby they are making money off of it. Then they’ll try to bring it in-house to reduce their operating expense.

MM: Excellent. We’ll wrap it up here. Thanks to you both.