Ad creative generation using reinforced generative adversarial network

Electronic Commerce Research

Abstract

Crafting the right keywords and their ad creatives is an arduous task that requires collaboration among online marketers, creative directors, data scientists, and possibly linguists. Much of this craft is still manual and therefore does not scale, especially for large e-commerce companies with big inventories and big search campaigns. Furthermore, the craft is inherently experimental: the marketing team has to experiment with marketing messages ranging from subtle to strong, with keywords ranging from broadly relevant (to the product) to exactly relevant, with landing pages ranging from informative to transactional, and with many other test variants. Failing to experiment quickly to find what works leaves users dissatisfied and marketing budget wasted. To enable rapid experimentation, we set out to generate ad creatives automatically. We treat the generation of an ad creative from a given landing page as a text summarization problem and adopt the abstractive text summarization approach. We report the results of our empirical evaluation of generative adversarial network and reinforcement learning methods.
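As a toy illustration of the reinforcement-learning side of this approach, the sketch below trains a tiny token-sequence generator with REINFORCE against a sequence-level reward. Here the reward is simply overlap with a fixed target phrase, standing in for the discriminator score or ROUGE-style reward a real reinforced GAN summarizer would use; the vocabulary, target, and all names are invented for illustration and are not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["buy", "cheap", "shoes", "now", "online"]  # toy vocabulary
TARGET = [0, 2, 3]                                  # "buy shoes now"
SEQ_LEN = len(TARGET)

# Generator: one independent categorical distribution per output position.
logits = np.zeros((SEQ_LEN, len(VOCAB)))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sample_sequence():
    probs = softmax(logits)
    return [rng.choice(len(VOCAB), p=probs[t]) for t in range(SEQ_LEN)]

def reward(seq):
    # Sequence-level reward: fraction of positions matching the target phrase.
    return sum(s == t for s, t in zip(seq, TARGET)) / SEQ_LEN

baseline, lr = 0.0, 0.5
for step in range(2000):
    seq = sample_sequence()
    r = reward(seq)
    baseline += 0.05 * (r - baseline)       # running-mean reward baseline
    probs = softmax(logits)
    for t, tok in enumerate(seq):
        grad = -probs[t]
        grad[tok] += 1.0                    # d log p(tok) / d logits[t]
        logits[t] += lr * (r - baseline) * grad

best = [int(np.argmax(logits[t])) for t in range(SEQ_LEN)]
print(" ".join(VOCAB[i] for i in best))     # greedy decode of the learned phrase
```

Because the reward is computed on the whole generated sequence rather than per token, the same policy-gradient update works whether the reward comes from a hand-written scorer (as here), a trained discriminator, or an evaluation metric such as ROUGE.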


Notes

  1. https://github.com/yaushian/Unparalleled-Text-Summarization-using-GAN.

  2. National Center for High Performance Computing of Turkey (UHeM).

  3. https://huggingface.co/datasets/cnn_dailymail

  4. https://huggingface.co/models


Acknowledgements

The computing resources used in this work were provided by the National Center for High Performance Computing of Turkey (UHeM) under grant number 4008732020. This project was funded by Turkish National Science Foundation (Tübitak) under grant number 119E031.

Author information

Corresponding author

Correspondence to Sümeyra Terzioğlu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Terzioğlu, S., Çoğalmış, K.N. & Bulut, A. Ad creative generation using reinforced generative adversarial network. Electron Commer Res (2022). https://doi.org/10.1007/s10660-022-09564-6

