Navigating the Future: Unleashing the Power of AI with Responsible Governance

Introduction

The AI market is expected to reach $407 billion by 2027, up from $86.9 billion in revenue in 2022, and 64% of businesses expect AI to increase productivity. While this growth is impressive, the risks of AI for corporations and their leaders are also increasing. The Center for AI Safety (CAIS) published a Statement on AI Risk: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Signatories include the CEOs of AI firms DeepMind and Anthropic and executives from Microsoft and Google. OpenAI CEO Sam Altman is also a signatory and has publicly called for AI regulation. Separately, more than 1,000 tech leaders, researchers and others, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter urging a moratorium on the development of the most powerful artificial intelligence systems. AI governance is an emerging topic for Board members and C-level executives, with key areas to consider including leaders’ lack of comfort with AI, ESG, liability, ethics, regulatory awareness and compliance, use and adoption, and data security, to name a few. A proactive governance framework for AI is needed to address the risks and achieve the expected benefits.

Risks

o  Many Directors and C-level executives aren’t comfortable with AI and may not know who is using AI in their organizations or how they are using it. This doesn’t excuse them: increasingly, AI problems are becoming board problems, and AI oversight is a board responsibility. Regulators are expanding and evolving the rules around AI, including the European Union’s draft AI Act (proposed in 2021) and New York City’s rules on automated employment decision tools (in force in 2023).

o  Zillow, a US real estate company, created an AI-driven business to estimate the value of homes and provide cash offers. It shut down the business after only 8 months, took a $304 million inventory write-down, and announced layoffs of 25% of its staff.

o  Data Privacy & Security – How personal and organizational data will be used within AI tools must comply with privacy laws.

o  Intellectual Property – Canadian law has not yet provided guidance on when intellectual property laws will apply to AI-generated content; however, we expect that formal guidance will be issued.

o  Liability – While there are challenges in determining liability from AI-caused harm, company leadership will need to anticipate the evolution of AI legal frameworks and balance that against the more immediate benefits of AI adoption.

o  ESG

o  Environmental – As AI models grow in size and complexity, so does the necessary computer processing power, which can carry a very large carbon footprint.

o  Social / Ethics – Companies that deploy AI for hiring, lending, housing, or insurance decisions need to consider ways to assess and, if necessary, remediate potential discrimination associated with those initiatives. Some AI applications have also been criticized for exacerbating income inequality, displacing large numbers of jobs, facilitating human rights abuses, and manipulating individuals’ behavior.

o  Governance – For AI programs to meet increasing regulatory requirements, as well as emerging ethical standards, the risks described above must be identified and mitigated through appropriate corporate governance, including policies, procedures, training and oversight.

o  Unintended Outcomes – Given the risks, complexity and fast evolution of AI, unintended consequences are likely and should be anticipated.

Best Practices for Boards to Ensure Responsible AI Governance

o  Monitor the evolving regulatory environment.

o  Usage: Ensure you understand who is using AI within your organization and for what purposes.

o  Establish an AI governance framework. Two examples to reference:

o  BSA Framework – Confronting Bias: BSA’s Framework to Build Trust in AI

o  NIST Framework – The AI Risk Management Framework from the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST)

o  Identify and assign accountable points of contact in both the C-suite and the Board as part of overall ESG oversight. Add AI governance to the Board skills matrix and have at least one person on the Board with this skill. In the C-suite, the CEO or COO should typically lead AI, with the CIO/CTO providing input. While the CIO/CTO may have the most technical background in AI, AI governance is broader than the CIO/CTO mandate.

o  Empower a diverse, cross-functional AI steering committee (including ethics expertise) with the power to veto AI initiatives.

o  Document and secure data sources.

o  Train people to get the best out of AI and to interpret the results.

o  Comply with privacy requirements.

o  Designate (and communicate) the stages of the AI lifecycle at which testing will be conducted.

o  Document relevant findings at the completion of each stage.

o  Implement routine auditing/monitoring.

Conclusion

AI is growing rapidly, but so are the risks. With some additional diligence, C-level executives and Board members can gain the knowledge and tools necessary to navigate the complex terrain of AI governance. By staying informed, adopting responsible practices, and embracing ethical considerations, you can leverage AI’s potential while mitigating risks and creating a positive impact for your organization. Governance nuances for specific industries and applications will continue to evolve.

The discussion on AI governance is just getting started. I would welcome your comments on this post, and feel free to reach out to me directly if you wish. Your feedback can influence the topics of my next blog.

References

- While specific articles have embedded links above, my thanks to the ICD Boardinfo service for their support in finding several of the articles.

- ChatGPT was used for parts of this blog, including the Conclusion.

- Blog photo created by Microsoft Bing Image Creator powered by DALL·E, with Prompt Engineering support from Tobias Barber.

Hashtags

Institute of Corporate Directors · Centre for the Governance of AI (GovAI) · University of Toronto – Rotman School of Management

David R Beatty C.M., O.B.E., F.ICD, CFA · Sheldon Mahabir · Ryan Resch

#corporategovernance #boardgovernance #goodgovernance #boardofdirectors #boardeffectiveness #ai #artificialintelligence #artificialintelligenceart #machinelearning #innovation #management #technology #future #bigdata

Richard Barber

Richard is a Sales, Marketing & Operations Executive with an MBA and an analytical-technical background, a combination that helps organizations create and execute growth strategy. Richard has served on three Boards since 2014 and currently serves on the Board of the Canadian Professional Sales Association, including as Board Secretary. Richard also currently serves on the Board of Scientists In School, one of Canada’s largest science education charities for children in Kindergarten through Grade 8. Previously, Richard served on the Board of a Toronto Chapter of the Boys & Girls Club, including as Vice Chair (VP). Richard’s committee experience includes Human Resources & Governance, Board Nomination, Audit & Risk, Fundraising, and two Committee Chair roles for an additional charitable organization. Richard’s executive experience includes C-Suite Sales & Marketing, General Management, P&L, and Operations roles at Computershare, Rogers, Celestica, Nortel and IBM. At Computershare, Richard led partnerships with both the Governance Professionals of Canada and the Institute of Corporate Directors.

