[1] LIU Y P, CHEN X Y, LIU C, et al. Delving into transferable adversarial examples and black-box attacks[J]. arXiv preprint arXiv:1611.02770, 2016.
[2] 王姿雯. Multi-condition personalized text generation based on deep learning[D]. Beijing: Beijing University of Posts and Telecommunications, 2019.
[3] HE D, LU H Q, XIA Y C, et al. Decoding with value networks for neural machine translation[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. 2017:178-186.
[4] LEI W Q, JIN X S, KAN M Y, et al. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures[C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 2018:1437-1447.
[5] PAMUNGKAS E W. Emotionally-aware chatbots: A survey[J]. arXiv preprint arXiv:1906.09774, 2019.
[6] LIAO Y, BING L D, LI P J, et al. QuaSE: Sequence editing under quantifiable guidance[C]// Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018:3855-3864.
[7] FEDUS W, GOODFELLOW I, DAI A M. MaskGAN: Better text generation via filling in the ______ [J]. arXiv preprint arXiv:1801.07736, 2018.
[8] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]// Proceedings of the 27th International Conference on Neural Information Processing Systems. 2014:2672-2680.
[9] KUSNER M J, HERNÁNDEZ-LOBATO J M. GANs for sequences of discrete elements with the Gumbel-softmax distribution[J]. arXiv preprint arXiv:1611.04051, 2016.
[10] CHE T, LI Y R, ZHANG R X, et al. Maximum-likelihood augmented discrete generative adversarial networks[J]. arXiv preprint arXiv:1702.07983, 2017.
[11] LIN K, LI D Q, HE X D, et al. Adversarial ranking for language generation[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. 2017:3158-3168.
[12] YU L T, ZHANG W N, WANG J, et al. SeqGAN: Sequence generative adversarial nets with policy gradient[C]// Proceedings of the 31st AAAI Conference on Artificial Intelligence. 2017:2852-2858.
[13] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016:770-778.
[14] DENTON E, CHINTALA S, SZLAM A, et al. Deep generative image models using a Laplacian pyramid of adversarial networks[C]// Proceedings of the 28th International Conference on Neural Information Processing Systems. 2015:1486-1494.
[15] SALAKHUTDINOV R. Learning deep generative models[J]. Annual Review of Statistics and Its Application, 2015,2:361-385.
[16] 许海明. Research on text generation technology based on deep learning[D]. Chengdu: University of Electronic Science and Technology of China, 2020.
[17] CHO K, VAN MERRIENBOER B, BAHDANAU D, et al. On the properties of neural machine translation: Encoder-decoder approaches[J]. arXiv preprint arXiv:1409.1259, 2014.
[18] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997,9(8):1735-1780.
[19] 胡懋晗. Research on text generation based on generative adversarial networks[D]. Chengdu: University of Electronic Science and Technology of China, 2020.
[20] RAMACHANDRAN P, ZOPH B, LE Q V. Swish: A self-gated activation function[J]. arXiv preprint arXiv:1710.05941, 2017.
[21] 张志远, 李媛媛. Research on a reinforced adversarial text generation method with target guidance[J]. Application Research of Computers, 2020,37(11):3343-3346.
[22] KINGMA D P, BA J. Adam: A method for stochastic optimization[J]. arXiv preprint arXiv:1412.6980, 2014.
[23] PAPINENI K, ROUKOS S, WARD T, et al. BLEU: A method for automatic evaluation of machine translation[C]// Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. 2002:311-318.