Abstract
This paper examines the history and current state of search marketing (SM), and presents the authors’ speculations regarding SM's future. It is intended to promote reflection and conversation. The authors believe that digital marketers can, and in fact must, interact with the emerging and powerful new communities. To do this, they must develop a variety of new marketing strategies, such as Digital Asset Optimization, interactive engagement marketing, and community-based communications.
Introduction
This paper, which examines the history and current state of search marketing (SM), also presents speculations regarding the future of SM. Rather than being a guide to action, it is a conceptual paper intended to point out how search marketing is changing and to promote reflection regarding how it will/could evolve in the future.
SM, also called Search Engine Marketing (SEM), is sometimes defined as the practice of compensating search engines in return for placement in search results.1 The authors, however, prefer the more comprehensive definition that is embraced by the Search Engine Marketing Professionals Organization (SEMPO). This definition encompasses Search Engine Optimization (or SEO), paid placement, contextual advertising, and paid inclusion.2 The primary difference between these two definitions is that the latter adds SEO, which involves actions designed to improve a website's ranking in unpaid, or organic, search results, to the mix. It is the SEMPO definition that is used in this paper.
Almost a decade ago, SEO guides provided advice regarding how to make a website ‘crawler-friendly’ and how to get quality links for better indexing. Interestingly enough, most expert advice for obtaining high rankings remains about the same today. The exciting question, however, is ‘How is SM changing and where will it go in the future?’
Search ranking fundamentals
In the internet’s formative days, a website’s ranking was almost completely based on the HTML code and words found on its web pages. In that era, about the only thing online marketers had to do was repeat a keyword tens of times on their web pages, and those pages would rise in the search rankings. Search is different today. Many, if not most, of the practices from ‘The Good Old Days’ are looked on as a form of spamming, a form that can result in penalties being imposed by the search engines — penalties that can involve a decrease in search result rank or even total removal from a search engine’s index.3
A number of years ago, Google changed the rules of search by introducing another factor set (in addition to page content) into its ranking procedure: the quantity, quality, and context of incoming links to a website. As a result of this change, linkages to a particular website became important determinants of how high that site ranks with respect to a particular keyword. Today's search engine rankings for a web page thus came to be based, at the most fundamental level, on two factors:
1. The content of that web page. The more focused and clear the content on the page, the greater the chances of ranking high for its main keywords.
2. The pattern of links to the website, that is, the quantity, quality, and context of incoming links to the site's domain name.4
Google employs a complex, proprietary algorithm based on the above two factors (albeit along with others of lesser significance) to determine which websites rank high for any given search. The company explains the process in the following manner on its corporate website:
‘The software behind our search technology conducts a series of simultaneous calculations requiring only a fraction of a second. Traditional search engines rely heavily on how often a word appears on a web page. We use more than 200 signals, including our patented PageRank™ algorithm, to examine the entire link structure of the web and determine which pages are most important. We then conduct hypertext-matching analysis to determine which pages are relevant to the specific search being conducted. By combining overall importance and query-specific relevance, we’re able to put the most relevant and reliable results first'.
• PageRank Technology: PageRank reflects our view of the importance of web pages by considering more than 500 million variables and 2 billion terms. Pages that we believe are important pages receive a higher PageRank and are more likely to appear at the top of the search results.
PageRank also considers the importance of each page that casts a vote, as votes from some pages are considered to have greater value, thus giving the linked page greater value. We have always taken a pragmatic approach to help improve search quality and create useful products, and our technology uses the collective intelligence of the web to determine a page's importance.
• Hypertext-Matching Analysis: Our search engine also analyzes page content. However, instead of simply scanning for page-based text (which can be manipulated by site publishers through meta-tags), our technology analyzes the full content of a page and factors in fonts, subdivisions and the precise location of each word. We also analyze the content of neighboring web pages to ensure the results returned are the most relevant to a user's query’.5
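The link-voting idea behind PageRank can be sketched in a few lines of code. The graph, damping factor, and iteration count below are illustrative assumptions for a toy example; Google's actual algorithm combines hundreds of signals and is proprietary.

```python
# A minimal sketch of link-based ranking in the spirit of PageRank.
# Each page's score is shared among the pages it links to, so a link
# acts as a weighted "vote" from one page to another.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, targets in links.items():
            if targets:
                # Split this page's "vote" evenly among its outgoing links.
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:
                # Dangling page: distribute its rank evenly across all pages.
                for t in pages:
                    new_rank[t] += damping * rank[page] / n
        rank = new_rank
    return rank

# A hypothetical three-page site: pages linked to more often rank higher.
links = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}
scores = pagerank(links)
print(sorted(scores, key=scores.get, reverse=True))
```

Note how "home", which receives the most incoming link weight, ends up with the highest score even though all pages start out equal.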
SM: Past and future
Google states that its mission ‘is to organize the world's information and make it universally accessible and useful’.6 In the view of many observers, however, Google's present web-crawler approach makes the achievement of this objective a ‘mission impossible’. This is not because Google is ‘faulty’. It simply stems from the logistics of the task. Although crawling was perfectly suitable for locating and indexing web pages back in the 1990s when only millions existed, today (as Google itself has acknowledged) even though it maps a trillion URLs, it has no chance of crawling and indexing them in a timely fashion. Furthermore, the trillion URLs in its index do not represent the entire web; they are just those pages, among the billion or more added to the web each day, that Google's web crawler has spidered thus far.7 Numbers such as these make one believe that there must be a ‘better way’.
Crawlers, spiders, and bots … oh my!
‘Crawler’, ‘spider’, and ‘bot’ all are terms used to refer to the software search engines employ to locate, download, and index web pages. There is only one reason that they exist: to download HTML pages to search engines’ indexing systems. Interestingly enough, however, they have remained effectively blind to non-text content, exactly the type that makes up the bulk of user-generated content (ie image, audio, video, and other non-text file types). In other words, search engines have not been able to easily handle or index the file types that make up the great majority of content generated by today's users.8 The latter fact notwithstanding, however, all sorts of plug-ins (eg Flash) that facilitate dealing with this non-textual content are still added to browsers that were never designed to receive them. In effect, this is an effort to morph the browsers into tools they were never intended to be. Thus, one perplexing result of this ‘plug-in fixation’ is that these plug-ins have made it more and more difficult for search engine spiders to locate, classify, and index the web's content.
While conventional SEO still focuses on the work of traditional search engine bots, it is beset by a number of limiting factors, including the following:
- The challenge of discerning the relevance of existing pages in the search engine's index while coping with the rapid arrival rate of new web content.
- Minimizing the overhead, or average number of retrievals, required to discover one new page.
- Bandwidth limitations: it simply would not be workable (or probably even ‘do-able’) to attempt to download the entire content of the web each day. In fact, some sites are so large that, even in a week, they cannot be spidered from beginning to end.
- An almost infinitely mushrooming number of URLs, ‘spider traps’ (web pages designed to slow down or crash web crawlers), spam, and a myriad of other issues that simply prevent any crawler from spidering the entire web.
- The continuing tension between crawling new pages and re-crawling existing ones. In an always-connected world where breaking news is of global interest, search engines must be capable of indexing and reporting new developments almost in real time to avoid frustrating end users.
As can be implied from Google's explanation of its PageRank algorithm (included earlier), the primary means through which new web pages are discovered is when they are linked to from existing indexed pages. Based on this fact, some SEO experts have contended that websites with a large number of links continue to attract more new links than those with only a few. Thus, since more of the formers’ content is indexed and linked to, they may have an edge when it comes to ranking.9
A related phenomenon occurs when a search engine is aware of certain web pages, but has not yet spidered them. For example, Google extracts billions of links from billions of pages, but one must presume that it has to establish some priority regarding which pages it spiders first. This could mean that, while Google is aware of a trillion URLs, it may not necessarily crawl them all.
Finally, what some call ‘the invisible web’ is still out there in cyberspace. It is composed of the millions (more than likely billions) of pages that are secreted away in databases and/or in password-protected storage that spiders are prevented from accessing.
To summarize then, while search engine bots are certainly much more intelligent today than they were in the ‘wild and woolly’ days of the early web, they may never be able to present a thorough and up-to-the-second representation of the web's content. With this contention in mind, the question becomes, ‘What useful purpose will web bots serve in the future?’ Perhaps it will be as a complement, or even a supplement, to other methods of information retrieval on the internet.
Google Universal Search
The introduction of Google's Universal Search supports the latter contention, and suggests that techniques beyond spidering are necessary to retrieve meaningful information from the evolving structure of the web. In a 2007 press release announcing Universal Search, Google explained:
‘Google's vision for universal search is to ultimately search across all its content sources, compare and rank all the information in real time, and deliver a single, integrated set of search results that offers users precisely what they are looking for. Beginning today, the company will incorporate information from a variety of previously separate sources — including videos, images, news, maps, books, and websites — into a single set of results. At first, universal search results may be subtle. Over time users will recognize additional types of content integrated into their search results as the company advances toward delivering a truly comprehensive search experience’. 10
Since that time, Google has blended results from blog and product searches into Universal Search.11
Nonetheless, there still exists a substantial amount of doubt that data capture based on crawler technology — which is the basis for Universal Search — can effectively serve the needs of the evolving web. As pointed out earlier, the sheer logistics associated with the exponential growth of user-generated content will inevitably defeat the crawler. As alternatives, methods such as user-generated content analysis, cross-content analysis (ie comparing the content of related websites), and community analysis (ie discerning socio-economic characteristics, interests, information needs and wants, as well as patterns of content use) must be considered in order for search engines to provide the most relevant results and richest experience to end users. Thus, the time has come to investigate new approaches for search engines to effectively and efficiently digest the world's information. Examples of new methodology might include developing new protocols for different types of search engine feeds, and/or forging special relationships between publishers of user-generated content and search engines. Change must come, and, in fact, it is coming.
Links, clicks and cliques
A frequently told story relates how eminent computer scientist Jon Kleinberg revealed a serious flaw in the process used by search engines to rank web pages based on the text on the page. Back in the 1990s, when AltaVista was the market-leading search engine, Kleinberg queried the phrase ‘search engine’ in AltaVista. To his surprise, ‘AltaVista’ was not included in the results. He then searched ‘Japanese automotive manufacturer’ and discovered that ‘Nissan’, ‘Toyota’, and ‘Honda’ were not at the top of the results. A quick examination of AltaVista's homepage revealed that the phrase ‘search engine’ was not present. Likewise, perusals of Nissan, Toyota, and Honda's homepages found that the phrase ‘Japanese automotive manufacturer’ was absent from each.
Kleinberg's discovery led him to develop his Hyperlink-Induced Topic Search (HITS) ranking algorithm, which he devised just before Brin and Page formulated Google PageRank. HITS is based on connectivity data, and ranks documents on what are known as ‘hub’ and ‘authority’ scores. Kleinberg's approach, like PageRank, employs an iterative algorithm based on the linkages.12 One can learn more about Kleinberg's research in Duncan Watts’ Six Degrees: The Science of a Connected Age.13
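Kleinberg's hub-and-authority iteration can be illustrated with a small sketch. The link graph and normalization details below are invented for illustration; the core idea is that pages linked to by good hubs become authorities, and pages linking to good authorities become hubs.

```python
# A toy version of Kleinberg's HITS algorithm: iteratively update
# authority scores (sum of hub scores of pages linking in) and hub
# scores (sum of authority scores of pages linked to), normalizing
# after each step so the values converge.

import math

def hits(links, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {p for ts in links.values() for p in ts}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # Authority score: how strongly the good hubs point at this page.
        auth = {p: sum(hub[q] for q, ts in links.items() if p in ts)
                for p in pages}
        norm = math.sqrt(sum(v * v for v in auth.values())) or 1.0
        auth = {p: v / norm for p, v in auth.items()}
        # Hub score: how strongly this page points at the good authorities.
        hub = {p: sum(auth[t] for t in links.get(p, ())) for p in pages}
        norm = math.sqrt(sum(v * v for v in hub.values())) or 1.0
        hub = {p: v / norm for p, v in hub.items()}
    return hub, auth

# Hypothetical graph echoing the example in the text: directory-style
# pages link to manufacturer homepages that never use the query phrase.
links = {
    "portal": ["nissan", "toyota", "honda"],  # links to every authority
    "review": ["toyota", "honda"],
    "nissan": [], "toyota": [], "honda": [],
}
hub, auth = hits(links)
print(max(hub, key=hub.get))  # the directory page emerges as the top hub
```

This is why Nissan, Toyota, and Honda can rank for ‘Japanese automotive manufacturer’ even though the phrase never appears on their pages: the hub pages that link to them do use it.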
In a nutshell, Kleinberg's work, as well as that of Brin and Page, contributed to an improvement in the quality of web search through the application of social network analysis to the ranking structure. Rather than using the text on a web page to judge that page's quality, the focus moved to the overall quality of the pages linking to it. It is this fact that explains why the SEO community has placed such a great emphasis on link-building. But consider this: if a link is similar to a vote (as Google puts it) from one web page author in favour of another author's web page, how do members of the web community who do not have web pages vote? In a way, the practice of counting incoming links estranges hundreds of millions, perhaps even billions, of web users simply because they do not have links with which to vote.
There is, however, a rapidly expanding polling place for these ‘estranged voters’. The collective intelligence of crowds and the ‘hints’ end users presently are sending are very significant votes, or signals, to search engines. As online bookmarking, tagging, and rating increase in popularity and expand, so too will their influence on the content of search engine results pages (SERPs). The most important of these signals originate from the data mined from the search trails of the ever-present search engine toolbar. These data supply search engines with singular insights into the identification of the most relevant (ie the most authoritative) websites.
Thus, while traditional search has been formulated around signals from the creators of web page content (text, links, and so forth), search is increasingly based around modelling the behaviour of the end user. Searchers begin with initial queries, reformulate these queries (which yield query chain data), click on results (which result in click stream data), and finally leave the search engine. Interestingly enough, however, the sites to which they originally navigate are not always their final destination pages. In fact, studies show that users regularly browse as far as five clicks away from the original search results and thus visit a variety of domains during their search for information.14
The same types of collective intelligence that previously have been gleaned from linkage data for ranking documents can also be extracted from clicks and search trails. Migrating away from the limited abilities of systems that are solely dependent on easy-to-capture-and-model queries and document content is a very significant shift in online information retrieval. Today, toolbar data allow search engines to capture relationships among queries, documents and the end user's ‘true’ search context. And while search engines always had the capability to discern the quality of a web page by end user behaviour prior to the development of toolbars (detecting thousands of clicks on the browser back-button sends a reasonably clear signal that the page is of low quality), today's toolbar-enhanced search engines have access to a very powerful combination of signals.
While search engines have always had access to query and click-through logs that provide implicit feedback from end users that can be employed to re-rank web pages, it is the post-search behaviour of these end users that supplies valuable signals regarding those web pages that are the most relevant to the user's information search. Put another way, searchers generate vast volumes of information with regard to which results of a given query they favour when they click one result in preference to others provided by SERPs. This information, in turn, can be used to provide more relevant search results. Because of this, it is becoming increasingly important for web marketers to gain insight into the post-search behaviour of their potential customers.
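The click-preference idea above can be sketched as a simple re-ranking step. The result list, the counts, and the smoothing constant are invented for illustration; real engines combine click feedback with many other signals and guard against position bias and click spam.

```python
# A minimal sketch of re-ranking one query's results by implicit click
# feedback: results that searchers consistently prefer (higher smoothed
# click-through rate) move up, regardless of their original position.

def rerank(results, impressions, clicks, prior=1.0):
    """Order results by smoothed click-through rate for a single query."""
    def ctr(url):
        # Laplace-style smoothing so results with few impressions are
        # neither condemned nor crowned by a handful of observations.
        return (clicks.get(url, 0) + prior) / (impressions.get(url, 0) + 2 * prior)
    return sorted(results, key=ctr, reverse=True)

# Hypothetical log data: b.com is clicked far more often than a.com
# despite identical exposure, so it should be promoted above it.
results = ["a.com", "b.com", "c.com"]
impressions = {"a.com": 1000, "b.com": 1000, "c.com": 10}
clicks = {"a.com": 50, "b.com": 300, "c.com": 1}
print(rerank(results, impressions, clicks))
```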
Finally, artificial neural networks (ie mathematical models that seek to emulate the human thought processes; as they acquire more and more information they learn and adapt) can be used by search engines to re-order search results so that they better reflect the web pages to which past users have actually navigated. At this point, the reader might ask, ‘Why use neural networks? Why not just record a query and then keep a record of how many times each particular search result is clicked?’ The answer resides in the elegant ability of neural networks to make reasonable predictions regarding the results for queries they have never seen before. They can do this based on the similarity of a new query to queries that have previously been made. This is significant since up to 25 per cent of all queries to search engines each day have never been seen before.15
From SEO to Digital Asset Optimization
Although the web did not change that much when a rapidly expanding number of users adopted much faster, always-on connections, their surfing habits certainly changed.16 End users no longer have to wait for a graphic to download and then stare at a static web page trying to decide whether to read its content; instead, today's user expects a much richer experience. Perhaps this helps explain why YouTube is now as popular as (according to some measures, more popular than) Google itself.17
Given the previous discussion, it is a natural progression for search engines to make use of all available sources of information prior to combining and reporting it on a SERP. It is the latter fact that will have the greatest impact on how search marketing is approached. The days when search marketing focused on nudging clients’ websites into the ‘top ten blue links’ on a SERP are behind us. Today the focus is, and tomorrow the focus will be, search engine positioning:
- Where on that first visible SERP can a marketer make the site in question more visible?
- What file types or methods must be employed to achieve this?
A significant group of online marketing experts is of the opinion that ranking reports add little to no value. This opinion has only been strengthened in the contemporary environment of Universal Search, whereby ranking reports provide sub-optimal data. A search marketing firm might tell a client that their site is in the top ten for ‘X’ number of keywords at Google, but the real question is: Does that ranking translate into actual visibility?
An example will help explain the latter point. At the end of March 2009, a search for ‘bed and breakfast new york’ yielded the Google SERP shown in Figure 1. If one momentarily ignores the three paid search results that show up at the top of the page, the reader will notice that it is the first of Google's local listings that holds the number one position for the search. All but two of the remaining organic (ie unpaid) results have been pushed off the bottom of the screen, or ‘below the fold’ as online marketers call it. Now examine the richness of the data that accompany those local results: maps, telephone numbers, links to websites and customer reviews. A ranking report for this query from an SEO firm might tell the client that its website is in the top ten organic listings, but in actuality that client would have had to rank number one or two in order to appear above the fold.
Studies have shown that a substantial proportion of end users rarely click through to the second page of results (most do not even bother to scroll); instead, users pursue query chains.18 Following an initial query, users quickly scan the portion of the SERP that is above the fold. When they do not notice a result that looks encouraging, they simply reformulate their query, thus producing the second link in their query chain. This activity can occur a number of times, increasing the length of the chain. When the search engine notices these query chains, it can employ a neural network algorithm to predict and present the searcher's actual destination page. For instance, an end user searching for ‘fly fishing’ might follow the initial search with ‘fishing flies’, and then ultimately search for ‘hand-tied fishing flies’. If the search engine repeatedly observes this chain for a great number of searchers, then it is reasonable to assume that the results for the last query (‘hand-tied fishing flies’) would also be good for the first (‘fly fishing’). Awareness of such search behaviour patterns should guide the web marketer's choice of keywords for paid search campaigns. It should also influence the design of the landing pages (ie the pages to which searchers are taken when they click on a particular ad) for various keywords.
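The query-chain idea can be made concrete with a toy model: log the chains, and for each starting query record the final query searchers most often end up with. The logged chains below are invented examples; production systems generalize to unseen queries with learned models rather than exact lookup.

```python
# A toy model of learning from query chains: if many searchers who begin
# with one query reformulate their way to another, results for the final
# query are likely good candidates for the first.

from collections import Counter, defaultdict

def build_chain_model(chains):
    """Map each starting query to the most common final query observed."""
    finals = defaultdict(Counter)
    for chain in chains:
        finals[chain[0]][chain[-1]] += 1
    return {start: c.most_common(1)[0][0] for start, c in finals.items()}

# Hypothetical logged reformulation chains, echoing the example in the text.
logged_chains = [
    ["fly fishing", "fishing flies", "hand-tied fishing flies"],
    ["fly fishing", "hand-tied fishing flies"],
    ["fly fishing", "fly fishing rods"],
]
model = build_chain_model(logged_chains)
print(model["fly fishing"])  # "hand-tied fishing flies" (the most common destination)
```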
Google's Universal Search and Google Wiki, which allows searchers to shift around results in the order of their choice, offer a number of new opportunities for increasing visibility on SERPs.19 If the e-marketer can understand these techniques, and can combine that understanding with knowledge of end user behaviour at the search engine interface, the results can be extremely powerful. The authors believe that gaining an awareness of the inter-relationship between end user activity and how search engines organize and display these new data types is the key to developing new search marketing strategies. Many refer to this process as Digital Asset Optimization (DAO), or making certain that search engines are aware of all of a website's content, including videos, animation, podcasts, message boards, maps, images, and other non-text based files.20 If done correctly, DAO offers a unique opportunity to achieve maximum visibility on SERPs in a minimum amount of time.
Collective intelligence
A very significant change in search is the movement toward information-seeking on social networking sites. The acquired knowledge possessed by friends and acquaintances supplements the vast amount of other, less verifiable information on the web. This knowledge of those in an individual's social networks can yield extremely well-qualified answers to very specific queries through a process sometimes called ‘information-seeking via a chain of trust’. The need for research into the dynamics of online communities is becoming increasingly important due to the rich signals these communities send to search engines. These signals are taking search into an entirely new era due to a number of related factors:
- Web pages are no longer purely static.
- Real-time chat on the web is ubiquitous.
- Tagging and folksonomy (the collaborative creation and management of tags to categorize content) lead to user-generated data arrangement.
- User rating and reputation systems add even more valuable information.
A significant amount of research is also directed at combining data from social networks and document reference networks like PageRank to create a dual layer of trust-enhanced (or socially enhanced) search result ranking. For example, Jon Kleinberg has shifted his research focus from the centralized index of search engines to the social structure of large online communities.21 Furthermore, and along related lines, Google is undertaking a new application that will graph social networks across the web.22 These intensive research efforts by major players on the search scene into the web's social fibre clearly indicate an evolution to a new form of information retrieval, one referred to by some as ‘networks of trust’. In effect, the dynamics of the process can be viewed as a combination of algorithm mind-meld and the wisdom of crowds. Web marketers who devise means to positively interest members of appropriate social networks in their websites and/or products could see substantial improvements in their listings on SERPs.
Although the phrase ‘social media’ may be new, the concept of user-generated content is not. Bulletin boards, discussion groups and online forums, for example, all came into being during (or before) the web's formative years. What is new, however, is the format. User-generated content was strictly text-based in the web's early days, but today it is comprised of all varieties of electronic content, including movies, images, tags, rankings, ratings, and so on. Regardless of the term used to refer to it, one cannot deny the real power of user-generated content, as millions of users around the world have found that ‘they cannot live without’ social media.
In particular, community question-answering networks are becoming a popular destination for individuals searching for reliable information.23 Interestingly enough, in some countries, such as South Korea, this type of search has become more popular than search engine results themselves. In that country, 16 million people visit the Naver search portal every day to make an average of 110 million queries. Naver users also post an average of 44,000 questions each day on Knowledge iN, an interactive question-and-answer database. These questions result in approximately 110,000 answers that range from one-sentence replies to footnoted academic essays.24 This process exemplifies the fact that the influences of social media are stronger than ever. It should be apparent that traditional search engines must take notice of this shift, but the question that must be dealt with is how to process these data and incorporate them into SERPs. And when they are incorporated in those results, web marketers need to devise means of assuring that their sites/products are viewed in a positive light by those answering the questions.
Connected marketing: Generation Y, the always-on mobile generation
Many speculate that the greatest transformation in search will be brought about by always-connected 3G mobile phones. Apple's iPhone, for instance, is already the leader on Google in terms of the number of mobile/local searches. Devices such as the iPhone, which can send and receive email on the move, surf the web, receive RSS feeds, instant message, Twitter and dialogue at social networking sites, demonstrate that this truly is the age of ‘connected marketing’ for end users who are members of networks (or communities) of trust.
In this environment, traditional marketing methods are becoming increasingly ineffective, and even counter-productive.25 The authors agree with Ahonen and Moore, who, in Communities Dominate Brands, contend that the power of the brand and the abuses by marketing have created a scale that needs to be balanced.26 They see the emergence of digitally connected communities as the counter-force that will re-establish that balance. These digitally connected communities are emerging as a formidable force in contrast to the power of big brands and traditional advertising. In effect, today's consumers are forming communities that pool their power. It is this pooled power base that is producing a dramatic transformation in the manner in which businesses interact with their customers.
Digital marketers can, and in fact must, interact with the emerging and powerful new communities. To do this, they must develop new marketing strategies (including new SM strategies) that focus on what have frequently been called ‘interactive engagement marketing’ and ‘community-based communications’ (getting the message out to social communities that include target markets). Ahonen and Moore put it this way:
‘… brands of the 21st century will need to be about engaging their audiences by being enabling, life-simplifying navigation partners; procreators of greater and more valuable experiences. Brands need to embrace the interactive age to break with traditional, ineffective interruptive marketing, using engagement marketing to change customer behaviour through involvement. Brands have to become immersive, sensorial and multi-dimensional’.27
Conclusion
This paper has discussed how search has evolved over the years, how it is evolving today, and how it might change in the future. As the various signals picked up by search engines increase, web marketers must expand their knowledge and devise new techniques to positively influence those sending the signals. For example, techniques such as DAO are already being employed to make certain that all of a firm's digital content is supplying the appropriate clues to search engine spiders. The authors hope that the discussion here will prompt the reader to reflect on how he/she might need to change his/her firm's approach to SM.
The Appendix provides a convenient summary of the primary thoughts presented in this paper. Divided into sections on Search Yesterday, Search Today, and Search Tomorrow, it presents in an abbreviated format the topics and speculations dealt with in this paper. It should serve both as a reference and as a focal point for conversations on the future of search marketing.
References
1. Elliott, S. (2006) ‘More agencies investing in marketing with a click’ (document on the World Wide Web: http://www.nytimes.com/2006/03/14/business/media/14adco.html?ex=1299992400&en=6fcd30b948dd1312&ei=5088), 14 March.
2. Sherman, C. (2007) ‘The state of search engine marketing 2006’ (document on the World Wide Web: http://searchengineland.com/070208-095009.php), 8 February.
3. Wilson, R. F. (2007) Guide to Search Engine Optimization (2007 Edition), Ralph F. Wilson, Rocklin, CA, p. 20.
4. Wilson, R. and Pettijohn, J. (2007) ‘Search engine optimization: A primer on linkage strategies’, Journal of Direct Data and Digital Marketing Practice, Vol. 8, No. 3, February.
5. Google. (2009) ‘Technology overview’ (document on the World Wide Web: http://www.google.com/corporate/tech.html), 23 February.
6. Google. (2009) ‘Company overview’ (document on the World Wide Web: http://www.google.com/corporate/index.html), 23 February.
7. Shankland, S. (2008) ‘Google reveals scope of Web-crawling task’ (document on the World Wide Web: http://news.cnet.com/8301-1023_3-9999814-93.html), 25 July.
8. Brin, S. and Page, L. (2009) ‘The anatomy of a large-scale hypertextual web search engine’ (document on the World Wide Web: http://infolab.stanford.edu/~backrub/google.html), accessed 9 March.
9. Grehan, M. (2004) ‘Filthy linking rich and getting richer’, e-Marketing News (document on the World Wide Web: http://www.keyworddriven.com/filthy-linking-rich-and-getting-richer.html), 1 October.
10. Google. (2007) ‘Google begins move to universal search’ (document on the World Wide Web: http://www.google.com/intl/en/press/pressrel/universalsearch_20070516.html), 16 May.
11. Sullivan, D. (2008) ‘Google universal search: 2008 edition’ (document on the World Wide Web: http://searchengineland.com/google-universal-search-2008-edition-1325), 30 January.
12. Wikipedia. (2009) ‘HITS algorithm’ (document on the World Wide Web: http://en.wikipedia.org/wiki/HITS_algorithm), accessed 18 March.
13. Watts, D. (2004) Six Degrees: The Science of a Connected Age, W.W. Norton & Company, New York.
14. Slawski, W. (2008) ‘Microsoft tracking search and browsing behavior to find authoritative pages’ (document on the World Wide Web: http://www.seobythesea.com/?p=1002), 28 February.
15. Ammirati, S. (2007) ‘Google's Udi Manber — Search is a hard problem’ (document on the World Wide Web: http://www.readwriteweb.com/archives/udi_manber_search_is_a_hard_problem.php), 21 June.
16. Horrigan, J. and Rainie, L. (2002) ‘The broadband difference: How online Americans’ behavior changes with high-speed Internet connections at home’ (document on the World Wide Web: http://www.pewinternet.org/∼/media//Files/Reports/2002/PIP_Broadband_Report.pdf.pdf).
17. Alexa. (2009) ‘Website traffic comparisons: Daily pageviews’ (document on the World Wide Web: http://www.alexa.com), accessed 28 March.
18. Moran, M. (2007) ‘When search results meet the eye’ (document on the World Wide Web: http://www.mikemoran.com/biznology/archives/2007/02/when_search_res.html), 20 February.
19. Graham, J. (2008) ‘Google Wiki lets you customize search results’ (document on the World Wide Web: http://blogs.usatoday.com/technologylive/2008/11/google-wiki-let.html), 21 November.
20. Digital Asset Optimization. (2008) ‘Digital asset optimization’ (document on the World Wide Web: http://www.digitalassetoptimization.net).
21. Danescu-Niculescu-Mizil, C., Kossinets, G., Kleinberg, J. and Lee, L. (2009) ‘How opinions are received by online communities: A case study on amazon.com helpfulness votes’, Proceedings of the 18th International World Wide Web Conference (document on the World Wide Web: http://www.cs.cornell.edu/home/kleinber/www09-helpfulness.pdf), 20–24 April.
22. Fitzpatrick, B. (2008) ‘URLs are people, too’ (document on the World Wide Web: http://google-code-updates.blogspot.com/2008/02/urls-are-people-too.html), 1 February.
23. Liu, X. and Bruce Croft, W. (2005) ‘Finding experts in community-based question-answering services’, Center for Intelligent Information Retrieval, Department of Computer Science, University of Massachusetts (document on the World Wide Web: http://maroo.cs.umass.edu/pub/web/getpdf.php?id=577), accessed 28 March 2009.
24. Sang-Hun, C. (2007) ‘Crowd's wisdom helps South Korean search engine beat Google and Yahoo’, New York Times (document on the World Wide Web: http://www.iht.com/articles/2007/07/04/business/naver.php), 4 July.
25. Fifteen Digital Marketing. (2007) ‘Key marketing methods for 2008’ (document on the World Wide Web: http://www.15digitalmarketing.co.uk/articles/2007/12/17/key-marketing-methods-for-2008), 17 December.
26. Ahonen, T. and Moore, A. (2005) Communities Dominate Brands, Futuretext, London, p. 210.
27. Ahonen, T. and Moore, A. (2005) Communities Dominate Brands, Futuretext, London, p. 224.
Appendix
See Table A1.
Grehan, M., Pettijohn, J. Search marketing yesterday, today, and tomorrow: Promoting the conversation. J Direct Data Digit Mark Pract 11, 100–113 (2009). https://doi.org/10.1057/dddmp.2009.24
Keywords
- search engine
- web marketing
- e-marketing
- online marketing
- optimization
- linkages