Angel Sithole, Project Director: Audit and Assurance, SAICA
Within the financial ecosystem, AI tools have proved useful in completing time-consuming tasks accurately and efficiently, and with the added advantage of machine learning, skills can be transferred from human to machine. But AI tools can also be used by criminals to commit fraud against public interest entities and private companies, including accountancy firms.
It is difficult to be certain about the scale of AI-enabled fraud because fraud tends to be under-reported or even undetected. Recent generative AI tools such as ChatGPT, which can produce text and images, including human-like conversations in text or imitated voices, raise serious fraud concerns. These technologies present both opportunities and risks when they fall into the hands of those who want to scam companies. Even before AI, sophisticated IT fraud was committed by highly skilled fraudsters. We have seen publicly listed entities whose firewalls or payment application systems were breached using employees' passwords, after the fraudsters had carefully studied the organisation's internal IT controls.
There was one instance in late 2018 where an organisation's systems were compromised by an undetected virus. When an employee logged in to the IT systems the following morning and entered their particulars, the screen went blank: in the back end, someone had taken over the employee's computer. This particular employee was responsible for the payment of pension funds within the organisation. The fraudster gained access, changed the bank accounts on the payment batch processing system, and then returned the employee's access rights to the company IT systems. A once-off payment was embezzled from the company and paid into the fraudulently updated bank account.
One would then ask oneself where to start when it comes to securing the data integrity of the organisation and preventing such breaches. Some such breaches can also occur with the assistance of employees within the organisation. In another instance, an IT employee was responsible for assisting colleagues with IT-related issues while also holding sole administrative rights over a number of branches. After several years he had gained so much knowledge of the system that he began secretly diverting bank deposits from customers' accounts to himself through a fictitious trial balance account: a full Ponzi scheme running in the back end of the organisation's transaction systems. Because the users of the system trusted him, this was only picked up five years later, after he had stolen a great deal of money from that organisation.
Banks have been among the recent targets of security breaches to their applications. This happens when a fake website is created as a look-alike of the genuine page, complete with matching details, and customers are lured to use it, after which the perpetrators seize the clients' funds. Research on cybersecurity breaches suggests that one in three businesses has been affected by a cybersecurity incident. This is the kind of fraud in which AI technologies are most likely to be used, often to create convincing fake emails, documents or images for phishing campaigns. Against cybersecurity threats, the strongest form of defence is training staff to identify risks and take action to mitigate them. It also helps for organisations to perform regular security breach audits, whether by an independent auditor or by the internal audit function.
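As a simple illustration of how a look-alike web address might be flagged, the sketch below compares a candidate domain against a legitimate one using string similarity. The domain names, function name and threshold are all hypothetical, and real phishing defences are far more sophisticated, but the idea of "close but not identical" is the same:

```python
from difflib import SequenceMatcher

def looks_like(candidate: str, legitimate: str, threshold: float = 0.8) -> bool:
    """Flag a candidate domain that closely resembles a legitimate one
    without matching it exactly - a common sign of a look-alike
    phishing site."""
    if candidate == legitimate:
        return False  # the genuine domain itself is not suspicious
    similarity = SequenceMatcher(None, candidate, legitimate).ratio()
    return similarity >= threshold

# A single swapped character ("1" for "l") is enough to be caught:
print(looks_like("examp1ebank.co.za", "examplebank.co.za"))  # True
```

An unrelated domain scores well below the threshold and is ignored, which is why this kind of check is usually paired with staff training rather than relied on alone.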
Artificial intelligence can enable fraudsters to create tailored materials and messages faster and at a wider scale, distributing them across countries or continents in seconds. Some audit and accounting firms are in the process of developing AI-based tools to detect anomalies in general ledgers and improve the accuracy of audit processes. These digital tools may use machine learning, algorithms and natural language processing to sift through huge quantities of data and text, looking for patterns or behaviour that suggest fraudulent activity.
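The pattern-spotting these tools perform can be sketched in miniature. The example below is a hypothetical, deliberately simple stand-in for commercial AI audit software: it uses a basic statistical outlier test (z-scores) rather than machine learning, and the ledger figures are invented, but it shows the core idea of flagging amounts that deviate sharply from the rest:

```python
import statistics

def flag_anomalous_entries(amounts, threshold=3.0):
    """Return (index, amount) pairs whose z-score exceeds the threshold.

    A deliberately simple statistical stand-in for the large-scale
    pattern detection that commercial audit tools perform.
    """
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    if stdev == 0:
        return []  # identical amounts: nothing stands out
    return [(i, a) for i, a in enumerate(amounts)
            if abs((a - mean) / stdev) > threshold]

# Twenty routine payments and one wildly out-of-pattern amount:
ledger = [120.0, 95.5, 110.0, 101.2, 98.7, 130.0, 89.9, 115.3,
          105.0, 99.1, 122.4, 93.8, 108.6, 100.0, 97.3, 125.5,
          91.2, 117.8, 104.4, 96.0, 48750.0]
print(flag_anomalous_entries(ledger))  # [(20, 48750.0)]
```

Real ledgers, of course, hold millions of entries with dates, counterparties and narratives, which is where machine learning and natural language processing earn their keep over a simple threshold test like this one.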
It pays to be informed about new trends in technology and possible threats. SAICA has a number of webinars that are presented to members during the year which deal with technological tools and techniques, and possible risks that they can pose for an organisation. Make sure you join us and arm yourself with knowledge.