We used data from the Australian Cybercrime Survey to measure the perceived frequency of artificial intelligence (AI) enabled crimes and to identify which specific technologies pose the greatest perceived risk of victimisation.
Half of respondents were worried about AI causing them harm or about becoming a victim of AI-enabled crime, and nearly one in five believed this would occur in the next 12 months. Respondents were most concerned about AI being used to track their location, AI being used to access their devices or accounts to commit other forms of cybercrime, and the use of AI to manipulate, impersonate or trick them in ways that would cause harm or embarrassment.
The perceived frequency of, and perceived risk of victimisation from, different misuses of AI technology varied by respondent age, gender and parental status. Both were also correlated with respondents' own use of AI technology.
These findings highlight priority areas for industry safeguards and public education.