An Evaluation of Artificial Intelligence (AI) Governance Through the Simulation of Risk and Security Outcomes
By: Adewale D Ashogbon
Pages: 20 - 29
Abstract
This study examines how governance can mitigate risks and security threats in Artificial Intelligence (AI) systems using a simulation-based approach. As AI becomes increasingly prevalent across industries, threats such as ethical lapses, demographic discrimination, adversarial attacks, and inadequate regulation continue to emerge. To address these issues, the study evaluates how effectively governance practices can identify adversarial inputs, minimize bias, and implement audit controls to make AI safe and trustworthy. Python, TensorFlow, and OpenAI Gym were used to simulate a facial recognition system, which was tested under two conditions: an unregulated baseline and a setting with governance and safety controls applied. Metrics such as error rate, demographic bias, and adversarial attack success were recorded. The findings indicate that governance controls decreased error rates (from 6.0 to 4.8), demographic bias (from more than 10 to less than 3), and adversarial attack success (from 40 to less than 15). These results demonstrate the role of governance in making AI systems more robust and equitable, and show that simulation-based testing is a valuable process for estimating AI risks and justifying evidence-based governance.
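The paper's simulation code is not reproduced here. The following is a minimal, hypothetical Python sketch of how the three reported metrics (error rate, demographic bias gap, and adversarial attack success) could be computed for an ungoverned versus a governed condition; the function names, parameter values, and random model of errors are illustrative assumptions, not the authors' implementation or data.

```python
# Hypothetical sketch: simulate predictions for two conditions and compute
# the three metrics reported in the abstract. All parameter values below
# are illustrative, not the study's results.
import numpy as np

rng = np.random.default_rng(seed=0)

def evaluate(error_prob, bias_gap, attack_success_prob, n=10_000):
    """Return (error rate %, demographic bias gap %, attack success %) for one condition."""
    group = rng.integers(0, 2, size=n)                       # two demographic groups
    errors = rng.random(n) < (error_prob + bias_gap * group) # group 1 carries extra error
    attacks = rng.random(n) < attack_success_prob            # adversarial inputs that fool the model

    error_rate = errors.mean() * 100
    # demographic bias measured as the gap in error rates between the two groups
    bias = abs(errors[group == 1].mean() - errors[group == 0].mean()) * 100
    attack_rate = attacks.mean() * 100
    return error_rate, bias, attack_rate

# Illustrative assumption: governance lowers the underlying error, bias,
# and adversarial-success probabilities.
for label, params in [("ungoverned", (0.06, 0.10, 0.40)),
                      ("governed",   (0.048, 0.02, 0.12))]:
    e, b, a = evaluate(*params)
    print(f"{label:11s} error={e:.1f}%  bias_gap={b:.1f}%  attack_success={a:.1f}%")
```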





