Author:
WTTC
Language:
English

Responsible Artificial Intelligence: Overview of AI Risks & Safety & Governance

April 2024
Innovation

Artificial Intelligence (AI) is an exciting technology that opens up many possibilities for society, businesses and the Travel & Tourism sector, but as AI systems become more advanced, it is important they remain under human control and are aligned with our ethical values. There are risks that AI could be misused, or that it may behave in unintended ways, with unintended consequences, if not properly designed and monitored. It is therefore crucial that researchers, companies and governments consider both the upsides and downsides of AI when developing and using AI, so that the world can successfully harness the huge potential of AI, while addressing valid concerns about its risks.

AI Risks

There are many potential risks of AI, often unique to each situation and use case, but below are five strategic-level AI risks that all business leaders would benefit from being aware of and understanding.

Recently there have been several high-profile media stories about the risks of AI, including a study from the investment bank Goldman Sachs into the impact of AI on the global economy. They estimated that AI and automation could replace up to 300 million jobs over the next 10 years, but also drive a 7% (or almost $7 trillion USD) increase in global GDP 1. For context, WTTC data shows that pre-pandemic the global Travel & Tourism sector accounted for nearly 300 million jobs, so this is equivalent to the loss of every single Travel & Tourism job over the next decade. Some workers’ unions have therefore expressed ‘deep worry that employment law is not keeping pace with the AI revolution’ and called for regulation on the use of AI for hiring, firing, performance reviews and setting working conditions.

Goldman Sachs goes on to explain that jobs displaced by automation have historically been offset by the creation of new jobs and the emergence of new occupations. They cite that 60% of today’s workers are employed in occupations that didn’t exist in 1940, following the many technological innovations since the Second World War. Goldman Sachs therefore proposes that AI could dramatically change the working landscape, rather than lead to mass unemployment.

However, they also note that unlike previous automation revolutions, which predominantly affected manual (so-called ‘blue-collar’) workers, such as factory workers being replaced by machines, the AI revolution would predominantly affect skilled (or ‘white-collar’) workers, with managers and professionals among the most likely to be impacted. The diagram below from the Goldman Sachs report shows that in Europe they estimate 29% of managerial jobs and 34% of professional jobs (across all industries) could be replaced by AI and automation over the next 10 years.

In early 2023 an open letter was published by the Future of Life Institute 3 calling for a pause on AI development for at least 6 months. The letter argued that the risks of AI are so great that the world needs to take more time to understand and mitigate them. The letter received considerable media attention as it was signed by over 30,000 interested parties, including Elon Musk (Owner of X, Tesla and SpaceX) and Steve Wozniak (Co-founder of Apple). One of the main concerns raised in the letter was the risk of AI becoming ‘too smart’ and taking control of our lives. While the risks of AI are noted by many, the open letter’s recommendation was not taken forward, as a global pause on all AI research and development was widely considered impractical and impossible to enforce.

A few months later, the Center for AI Safety (CAIS) also raised concerns about the existential risk of AI, with a succinct public statement that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” 4. This too was co-signed by several notable figures, including Bill Gates (Co-founder of Microsoft) and academics Geoffrey Hinton & Yoshua Bengio (who have been nicknamed the ‘Godfathers of AI’ due to their pioneering research in the field).

While many of these public statements emphasised the negative risks of AI, an open letter from the UK Chartered Institute for IT, also published in 2023 and signed by over 1,300 academics, was issued to counter the ‘AI doom narrative’ and called for governments to recognise AI as a “transformational force for good, not an existential threat to humanity” 5. The letter argued that AI will enhance every area of our lives, as long as the world gets critical decisions about its development and use right, and called for professional and technical standards for AI, supported by a robust code of conduct, international collaboration and fully resourced regulation.

Contents:

  1. Foreword
  2. AI Governance
    1. AI Risks
    2. Responsible AI
    3. AI Strategies & Regulation
    4. Global Partnership on Artificial Intelligence (GPAI)
    5. AI Industry Voluntary Governance Measures
  3. Acknowledgements

