Gender and Use of AI-Generated Photographs: Illusion, Delusion and Make-Belief in the Digital Space
DOI:
https://doi.org/10.70680/k78mwh68
Keywords:
Gender, AI-generated photographs, Illusion, Delusion, Make-belief, Digital space
Abstract
This study examines the influence of gender on the use of AI-generated photographs and how such images create illusion, delusion and make-belief in the minds of people. The study is motivated by the proliferation of AI-generated images that have overtaken the social media space. It adopts a qualitative research method, drawing on an extensive review of existing literature, documents and articles from reputable journals. The relationship between AI and the digital space is complex and multifaceted: AI is used to create new digital spaces and experiences and to improve existing ones. At the same time, AI raises questions about privacy, bias, control, and even the nature of personhood, issues that will need to be addressed as the technology continues to develop. AI-generated photographs partake of both: they can be illusion or delusion, depending on how one looks at them.
License
Copyright (c) 2024 Sanskriti: Journal of Humanities

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.